RUDIMENTS OF μ-CALCULUS
STUDIES IN LOGIC AND THE FOUNDATIONS OF MATHEMATICS, VOLUME 146
Honorary Editor: P. Suppes
Editors: S. Abramsky (London), S. Artemov (Moscow), D.M. Gabbay (London), R.A. Shore (Ithaca), A.S. Troelstra (Amsterdam)
ELSEVIER: Amsterdam, London, New York, Oxford, Paris, Shannon, Tokyo
RUDIMENTS OF μ-CALCULUS
A. ARNOLD
c/o LaBRI, Université Bordeaux I, 351, cours de la Libération, 33405 Talence, France
D. NIWIŃSKI
Institute of Informatics, University of Warsaw, ul. Banacha 2, 02-097 Warsaw, Poland
2001
ELSEVIER SCIENCE B.V., Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands
© 2001 Elsevier Science B.V. All rights reserved.
This work is protected under copyright by Elsevier Science, and the following terms and conditions apply to its use:

Photocopying. Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use. Permissions may be sought directly from Elsevier Science Global Rights Department, PO Box 800, Oxford OX5 1DX, UK; phone: (+44) 1865 843830, fax: (+44) 1865 853333, email: [email protected]. You may also contact Global Rights directly through Elsevier's home page (http://www.elsevier.nl), by selecting 'Obtaining Permissions'. In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 750-8400, fax: (+1) (978) 750-4744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 207 631 5555; fax: (+44) 207 631 5500. Other countries may have a local reprographic rights agency for payments.

Derivative Works. Tables of contents may be reproduced for internal circulation, but permission of Elsevier Science is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

Electronic Storage or Usage. Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter. Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher. Address permissions requests to: Elsevier Global Rights Department, at the mail, fax and email addresses noted above.

Notice. No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.
First edition 2001. Library of Congress Cataloging in Publication Data: A catalog record from the Library of Congress has been applied for.
ISBN: 0-444-50620-9. ISSN: 0049-237X. The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper). Printed in The Netherlands.
To the memory of Helena Rasiowa (1917-1994),
from whom we learned the algebraic approach to logic.
Preface
The μ-calculus is basically an algebra of monotonic functions over a complete lattice, whose basic constructors are functional composition and the least and greatest fixed-point operators. In some sense, the μ-calculus naturally extends the concept of an inductive definition, very common in mathematical practice. An object defined by induction, typically a set, is obtained as a least fixed point of some monotonic operator, usually over a powerset lattice. For example, the set of theorems of a formal theory is the least fixed point of the consequence operator. For some concepts, however, the use of coinduction, i.e., of greatest fixed points, is more appropriate. For example, a maximal dense-in-itself subspace of a topological space can be defined as the greatest fixed point of the derivative operator. In the μ-calculus, both least and greatest fixed points are considered, but, more importantly, their occurrences can be nested and mutually dependent. The alternation between the least and greatest fixed-point operators is the source of the sharp expressive power of the μ-calculus and gives rise to a proper hierarchy, just as the alternation of quantifiers is the basis of the strength of first-order logic. The μ-calculus emerged from numerous works of logicians and computer scientists, and its use has become common in work on the verification of computer programs because it provides a simple way of expressing and checking their behavioural properties. In the literature, the most popular reference is perhaps the modal μ-calculus introduced by Kozen [55]; let us also mention prior works by Scott and de Bakker [88], Moschovakis [64], Emerson and Clarke [36], Park [80], and Pratt [82]. Indeed, there is a wide variety of phenomena that can be modeled in the μ-calculus, from finite automata and regular expressions, to alternating automata on infinite trees or, even more generally, infinite games with finitely presentable winning conditions.
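To make the inductive-definition view concrete, here is a small sketch of ours (not from the book): the "theorems" of a toy formal system arise as the least fixed point of a monotone consequence operator, computed by Kleene iteration on a finite powerset lattice. The universe, axioms, and inference rule are invented for illustration.

```python
def lfp(f, bottom=frozenset()):
    """Least fixed point of a monotone operator, by Kleene iteration
    from the bottom of the lattice (terminates on a finite lattice)."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# A toy "formal theory": two axioms and one inference rule (from a and b,
# derive a + b), restricted to a finite universe so the lattice is finite.
UNIVERSE = set(range(20))
AXIOMS = {1, 2}

def consequences(x):
    derived = {a + b for a in x for b in x if a + b in UNIVERSE}
    return frozenset(AXIOMS | set(x) | derived)

theorems = lfp(consequences)  # least set containing the axioms, closed under the rule
```

The chain bottom, f(bottom), f(f(bottom)), ... stabilizes because the operator is monotone and the lattice finite; here it yields exactly the numbers 1 through 19.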
From the point of view of computer science, a virtue of the μ-calculus is that it allows a static characterization of dynamic concepts. Computation, by its nature, refers to time, and its properties are naturally expressed in terms of histories. It is possible, for instance, to model nondeterministic computations by (possibly infinite) trees, and to express computation properties using temporal operators and quantification over paths. In contrast to that approach, known as temporal logic, in a fixed-point definition of a computational property, an explicit reference to computation paths is no longer needed, since a fixed point contains information about the computation paths converging to it. In this context, the least and the greatest fixed-point operators usually correspond to references to finite (e.g., reachability) or potentially infinite (e.g., safety) periods of time, respectively. To give a simple illustration, the set of origins of all infinite paths in a graph (V, E) can be presented as the greatest fixed point of the equation X = E⁻¹X (where the graph is given by a relation E ⊆ V × V, E⁻¹Y = {x ∈ V : ∃y ∈ Y, (x, y) ∈ E}, and the term "greatest" refers to the powerset lattice ℘(V)). Another interesting and important feature of the μ-calculus is the similarity between its semantic aspects and two-player games with perfect information. Indeed, such games (more specifically, infinite parity games) turn out to be inherent to the semantics of the μ-calculus. In some sense, the converse is also true, i.e., the μ-calculus constitutes a useful framework for discussing games. In particular, one can give a μ-calculus explanation of the determinacy of certain infinite games. More precisely, in Chapter 4, we derive the Memoryless Determinacy Theorem (which says that in an infinite parity game, starting in an arbitrary position, one of the players has a winning strategy which depends only on the actual position, and not on the history of the play) from the Selection Property, where the latter is a kind of normal-form result for the Boolean μ-calculus. The μ-calculus can also be considered as a natural extension of the notion of an automaton to structures more complex than words and trees. Automata are usually well tractable algorithmically due to a straightforward, rather than inductive, semantics.
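The illustration above, origins of infinite paths as the greatest fixed point of X = E⁻¹X, can be computed by iterating downward from the top of the powerset lattice ℘(V); the graph below is our own made-up example, not one from the book.

```python
def gfp(f, top):
    """Greatest fixed point by downward Kleene iteration
    (terminates on a finite lattice)."""
    x = top
    while f(x) != x:
        x = f(x)
    return x

# A finite graph (V, E): vertices 1 and 2 lie on a cycle; 4 only reaches a sink.
V = frozenset({1, 2, 3, 4})
E = {(1, 2), (2, 1), (2, 3), (4, 3)}

def pre(x):
    # E^{-1}X = {v in V : there exists w in X with (v, w) in E}
    return frozenset(v for v in V if any((v, w) in E for w in x))

origins = gfp(pre, V)  # vertices at which some infinite path starts
```

Starting from all of V, the iteration discards vertices from which every path dies out; here only the two vertices on the cycle survive.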
One may even consider automata themselves as a kind of specification language, since, for words and trees, they achieve the expressive power of monadic second-order logic (by the fundamental results of Büchi [21] and Rabin [83]). However, automata in general lack the compositionality of logical formulas, and so do not reflect the complexity of the properties specified. The μ-calculus combines the good points of both logic and automata. It offers an elegant and well-structured mathematical notation inducing nice semantical hierarchies. On the other hand, solutions to the related algorithmic problems are already implicitly present in the structure of the fixed-point expressions, as computing a least (or, dually, greatest) fixed point is one of the general paradigms of algorithms.

The aims of the book. This book presents what in our opinion constitutes the basis of the theory of the μ-calculus, considered as an algebraic system rather than a logic. We have wished to present the subject in a unified way, and in a form as general as possible. Therefore, our emphasis is on the generality of the fixed-point notation, and on the connections between the μ-calculus, games, and automata, which we also explain in an algebraic way.
This book should be accessible to graduate or advanced undergraduate students both in mathematics and computer science. We have designed this book especially for researchers and students interested in logic in computer science, computer-aided verification, and general aspects of automata theory. We have aimed at gathering in a single place the fundamental results of the theory, which are currently scattered across the literature, and often hardly accessible to interested readers. The presentation is self-contained, except for the proof of McNaughton's Determinization Theorem (see, e.g., [97]). However, we suppose that the reader is already familiar with some basic automata theory and universal algebra. References, credits, and suggestions for further reading are given at the end of each chapter. We wish to stress that our presentation is far from complete. One important omission is the issue of proof systems for the μ-calculus. For this matter, we refer the reader to the original paper by Walukiewicz [103], who established the completeness of an axiomatization proposed by Kozen [55]. Another topic not considered here is a first-order version of the μ-calculus, i.e., the fixed-point extension of first-order logic. We refer the reader to the monograph by Moschovakis [64] for general considerations, and to the monographs by Ebbinghaus and Flum [29] and by Immerman [43] for fixed-point logic over finite models. More generally, the connections between the μ-calculus and related logics of programs (see, e.g., [41]) are not considered, although they are a great motivation for the development of the μ-calculus.

Contents. In Chapter 1, we provide a background on fixed points of monotonic mappings over complete lattices. In addition to the basic definitions, we present a number of properties (typically inequalities) which can be viewed as fundamental laws (or tautologies) of the μ-calculus.
In particular, the Bekić principle and the Gauss elimination principle allow us to move between scalar and vector fixed points. A formalized language for the μ-calculus, based on the concept of fixed-point terms, is introduced in Chapter 2, together with the notion of a μ-interpretation. To stress the algebraic character of the theory, we introduce a general concept of an abstract μ-calculus, so that fixed-point terms themselves can be organized into a μ-calculus, somehow analogous to an algebra of (ordinary) terms. Then the meaning of fixed-point terms under a particular μ-interpretation is obtained by a homomorphism of μ-calculi. We will meet other examples of abstract μ-calculi later, in Chapters 5 and 7, in particular the μ-calculus of automata. Still in Chapter 2, we also show the special role played by μ-interpretations over powerset lattices, which in some sense are representative of all μ-interpretations. This leads to the Boolean μ-calculus, i.e., the calculus of monotonic mappings over the Boolean algebra {0, 1}, which we study in detail in Chapter 3. The prevalent place occupied by this calculus is somehow analogous to that of Boolean algebra in first-order logic. In particular, as we will show later in Chapters 10 and 11, most of the algorithmic problems originating from the μ-calculus (including the well-known model-checking problem) reduce to the evaluation of Boolean vector fixed-point terms. Finally, we go beyond the standard Boolean μ-calculus by considering infinite powers of {0, 1}, in order to show the aforementioned Selection Property in its full generality. The next chapter is devoted to the correspondence between the μ-calculus and games. We show that the winning sets in parity games on graphs can be defined by fixed-point terms, and conversely, the value of any fixed-point term under a powerset interpretation coincides with a winning set in some parity game induced by the term and the interpretation. As we have mentioned above, the Memoryless Determinacy Theorem follows from the Selection Property of the Boolean μ-calculus. Chapter 5 studies the connection between the μ-calculus and automata over finite and infinite words. We show that both formalisms define the same class of languages. A reader interested in this topic can read Chapter 5 without knowledge of Chapters 3 and 4. Chapter 6 introduces the concept of a powerset algebra, an idea which can be traced back to the work of Jónsson and Tarski [47, 48]. In this framework we present the modal μ-calculus of Kozen [55]. We also note a connection between the preservation of fixed-point terms and bisimulation. In Chapter 7, we establish an equivalence between the μ-calculus and automata which generalizes the correspondence already shown for automata on words in Chapter 5. To this end, we consider a very general concept of automaton, whose semantics can be given in an arbitrary powerset algebra and is defined in terms of parity games.
Our automata generalize in particular nondeterministic and alternating automata on infinite trees. Again, we stress the algebraic character of the theory by organizing the automata into an abstract μ-calculus. Then, the transformation from fixed-point terms to automata is presented as a homomorphism of μ-calculi which, while failing to be surjective, captures all automata up to semantic equivalence. Reading Chapter 7 requires knowledge of Chapters 4 and 6, but not necessarily of Chapters 3 and 5. Chapter 8 studies the problem of the hierarchy induced by the alternation of least and greatest fixed-point operators. We show that this hierarchy is indeed proper in the powerset algebra of trees. Chapter 9 is motivated by the celebrated Rabin Complementation Lemma which, in a strengthening due to Muller and Schupp, takes the form of a Simulation Theorem: an alternating automaton on trees can be simulated by a nondeterministic one. Simplification of Rabin's original proof has constituted a long-standing challenge, pursued by many authors. (A proof based on the μ-calculus was given by Emerson and Jutla [33].) In this chapter, we explain the Simulation Theorem in the framework of the μ-calculus, as a conditional elimination of the (lattice) intersection operator. Chapter 10 shows the decidability of the basic decision problems related to fixed-point terms: nonemptiness of the interpretation of a term in a fixed μ-interpretation, satisfiability, and semantic equivalence of fixed-point terms. As we have already remarked, most of the algorithmic problems of the μ-calculus can be reduced to the evaluation of vector Boolean fixed-point terms. This leads us to the last chapter, where we analyze various algorithms that have been proposed for this problem, for which no polynomial-time algorithm is known at the time we close this book. An interested reader can read Chapter 11 directly after Chapter 3.
André Arnold
Damian Niwiński¹

¹ Damian Niwiński was supported by Polish KBN grants no. 8 T11C 002 11 and 8 T11C 027 16. Both authors were supported by a "Polonium" French-Polish grant in 1998-1999.
Table of Contents
1. Complete lattices and fixed-point theorems
   1.1 Complete lattices
       1.1.1 Least upper bounds and greatest lower bounds
       1.1.2 Complete lattices
       1.1.3 Some algebraic properties of lattices
       1.1.4 Symmetry in lattices
       1.1.5 Sublattices
       1.1.6 Boolean algebras
       1.1.7 Products of lattices
       1.1.8 Functional lattices
   1.2 Fixed-point theorems
       1.2.1 Monotonic and continuous mappings
       1.2.2 Fixed points of a function
       1.2.3 Nested fixed points
       1.2.4 Duality
   1.3 Some properties of fixed points
   1.4 Fixed points on product lattices
       1.4.1 The replacement lemma
       1.4.2 Bekić principle
       1.4.3 Gauss elimination
       1.4.4 Systems of equations
   1.5 Conway identities
   1.6 Bibliographical notes and sources

2. The μ-calculi: Syntax and semantics
   2.1 μ-calculi
   2.2 Functional μ-calculi
   2.3 Fixed-point terms
       2.3.1 Syntax
       2.3.2 The μ-calculus of fixed-point terms
       2.3.3 Semantics
   2.4 Quotient μ-calculi
       2.4.1 Families of interpretations
       2.4.2 Variants of fixed-point terms
   2.5 Powerset interpretations
   2.6 Alternation-depth hierarchy
       2.6.1 Clones in a μ-calculus
       2.6.2 A hierarchy of clones
       2.6.3 The syntactic hierarchy
       2.6.4 The Emerson-Lei hierarchy
   2.7 Vectorial μ-calculi
       2.7.1 Vectorial extension of a μ-calculus
       2.7.2 Interpretations of vectorial fixed-point terms
       2.7.3 The vectorial hierarchy
       2.7.4 Vectorial fixed-point terms in normal form
   2.8 Bibliographic notes and sources

3. The Boolean μ-calculus
   3.1 Monotone Boolean functions
   3.2 Powerset interpretations and the Boolean μ-calculus
   3.3 The selection property
       3.3.1 Finite vectorial fixed-point terms
       3.3.2 Infinite vectors of fixed-point terms
       3.3.3 Infinite vectors of infinite fixed-point terms
   3.4 Bibliographic notes and sources

4. Parity games
   4.1 Games and strategies
   4.2 Positional strategies
   4.3 The μ-calculus of games
       4.3.1 Boolean terms for games
       4.3.2 Game terms
   4.4 Games for the μ-calculus
       4.4.1 Games for Boolean terms
       4.4.2 Games for powerset interpretations
   4.5 Weak parity games
   4.6 Bibliographic notes and sources

5. The μ-calculus on words
   5.1 Rational languages
       5.1.1 Preliminary definitions
       5.1.2 Rational languages
       5.1.3 Arden's lemma
       5.1.4 The μ-calculus of extended languages
   5.2 Nondeterministic automata
       5.2.1 Automata on finite words
       5.2.2 Automata on infinite words
       5.2.3 Parity automata
       5.2.4 Recognizable languages
       5.2.5 McNaughton's determinization theorem
   5.3 Terms with intersection
       5.3.1 The μ-calculus of constrained languages
       5.3.2 Duality
       5.3.3 Interpretation of terms by constrained languages
       5.3.4 Recognizable constrained languages
   5.4 Bibliographic notes and sources

6. The μ-calculus over powerset algebras
   6.1 Powerset algebras
       6.1.1 Semi-algebras
       6.1.2 Powerset algebras
       6.1.3 Logical operations
   6.2 Modal μ-calculus
   6.3 Homomorphisms, μ-homomorphisms, and bisimulations
   6.4 Bibliographic notes and sources

7. The μ-calculus vs. automata
   7.1 Automata over semi-algebras
       7.1.1 Informal description
       7.1.2 Basic definitions
       7.1.3 Hierarchy of indices
       7.1.4 Dual automata
       7.1.5 Relation to classical automata
   7.2 Automata in the μ-calculus perspective
       7.2.1 The μ-calculus of automata
       7.2.2 The interpretation as homomorphism
   7.3 Equivalences
       7.3.1 From fixed-point terms to automata
       7.3.2 From automata to fixed-point terms
       7.3.3 Conclusion
   7.4 Bibliographic notes and sources

8. Hierarchy problems
   8.1 Introduction
   8.2 The hierarchy of alternating parity tree automata
       8.2.1 Games on the binary tree
       8.2.2 Alternating automata on trees
       8.2.3 A diagonal argument
       8.2.4 The hierarchy of Mostowski indices for tree languages
       8.2.5 Universal languages
   8.3 Weak alternating automata
   8.4 Bibliographic notes and sources

9. Distributivity and normal form results
   9.1 The propositional μ-calculus
   9.2 Guarded terms
       9.2.1 Introductory example
       9.2.2 Guarded terms
   9.3 Intersection-free terms
       9.3.1 Definition
       9.3.2 A syntactic notion of intersection
   9.4 The powerset construction
   9.5 The νμ case
   9.6 The simulation theorem
       9.6.1 McNaughton's Theorem revisited
       9.6.2 The simulation theorem
       9.6.3 The Rabin complementation lemma
   9.7 Bibliographic notes and sources

10. Decision problems
    10.1 Disjunctive mappings
    10.2 Decidability of emptiness for disjunctive μ-terms
        10.2.1 Compact lattices
        10.2.2 Disjunctiveness of fixed points
        10.2.3 Emptiness of nondeterministic tree automata
    10.3 The regularity theorem
        10.3.1 Quotients of trees
        10.3.2 The regularity theorem
    10.4 Satisfiability over powerset algebras
        10.4.1 Bounded decomposition
        10.4.2 Tree models
        10.4.3 Finite models and decidability
    10.5 Bibliographic notes and sources

11. Algorithms
    11.1 Evaluation of vectorial fixed-point terms
        11.1.1 A naive algorithm
        11.1.2 Improved algorithms
    11.2 Winning positions and winning strategies
        11.2.1 Computations of winning positions
        11.2.2 Computations of winning strategies
    11.3 Bibliographic notes and sources

Bibliography

Index
1. Complete lattices and fixed-point theorems
The μ-calculus is based on the celebrated Knaster-Tarski fixed-point theorem, which states that a monotone function over a complete lattice has a least fixed point. In this chapter we review basic properties of complete lattices and show the fixed-point theorem and its variants. By duality, the Knaster-Tarski theorem also assures the existence of a greatest fixed point of a monotone function, which gives rise to definitions combining both extremal fixed points. We discuss general properties of such fixed-point definitions in Section 1.3. We then extend our concepts to vectors of functions and show the basic properties of vectorial fixed points. This concept is very useful throughout the book, although it turns out to be redundant. Indeed, we will see in Section 1.4 that the vectorial fixed points can be reduced to scalar ones by the Bekić principle and the Gauss elimination method.
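As a concrete sketch (ours, not the book's) of the Knaster-Tarski characterization, the least fixed point of a monotone f is the meet of all pre-fixed points, i.e., of all x with f(x) ≤ x; on a small powerset lattice this can be checked by brute force. The operator f below is an arbitrary made-up example.

```python
from itertools import combinations

U = frozenset({0, 1, 2, 3})

def subsets(s):
    """All subsets of s, i.e., the full powerset lattice."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def f(x):
    # A monotone operator: always yields 0, and n+1 whenever n is present.
    return frozenset({0} | {n + 1 for n in x if n + 1 in U})

# Knaster-Tarski: the least fixed point is the meet of all pre-fixed points.
prefixed = [x for x in subsets(U) if f(x) <= x]
least = frozenset.intersection(*prefixed)

assert f(least) == least  # the meet of the pre-fixed points is itself fixed
```

For this particular f, the only pre-fixed point is U itself, so the least fixed point is all of U; a different monotone operator would in general give a smaller meet.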
1.1 Complete lattices

1.1.1 Least upper bounds and greatest lower bounds

Let (E, ≤

≥ μx.f_d(x) = μx.f(x, d) = g″(d). On the other hand, for any d ∈ D, g″(d) = f(g″(d), d) = f̂(g″)(d), hence g″ = f̂(g″), which implies g″ ≥ g′, i.e., g″(d) ≥ g′(d) for any d. The proof for ν is similar, by the principle of symmetry. □

Note that, with a slight abuse of notation, the equality of the previous proposition can also be presented as μx.f(x, y) = μx(y).f(x(y), y), and similarly for ν, as suggested by Robert Maron.

As a consequence of the previous propositions, we get the following property.

Proposition 1.2.23. Let E be a complete lattice and D be an ordered set. If f : E × D → E is monotonic in its two arguments, then μx.f(x, y) and νx.f(x, y) are monotonic mappings from D to E.
In particular, if D is equal to E, then μx.f(x, y) and νx.f(x, y) are two monotonic mappings from the complete lattice E into E, which have least and greatest fixed points. It is natural to denote the extremal fixed points of μx.f(x, y) (resp. νx.f(x, y)) by μy.μx.f(x, y) and νy.μx.f(x, y) (resp. μy.νx.f(x, y) and νy.νx.f(x, y)).

Another important case is when D is a product of complete lattices. In a general setting, let E₁, ..., Eₙ be complete lattices and let

f(x₁, ..., xₙ) : E₁ × ... × Eᵢ₋₁ × Eᵢ × Eᵢ₊₁ × ... × Eₙ → Eᵢ

be monotonic in all its arguments. By Proposition 1.2.23, the mappings μxᵢ.f(x₁, ..., xₙ) and νxᵢ.f(x₁, ..., xₙ) are also monotonic in all their arguments. If Eⱼ is equal to Eᵢ, we can also consider the mappings

νxⱼ.μxᵢ.f(x₁, ..., xₙ), μxⱼ.νxᵢ.f(x₁, ..., xₙ), etc.,

as well as the mappings

μxⱼ.μxᵢ.f(x₁, ..., xₙ), νxⱼ.νxᵢ.f(x₁, ..., xₙ), etc.

If, moreover, for some k not equal to i or j, Eₖ = Eᵢ = Eⱼ, then the mappings θxⱼ.θ′xᵢ.f(x₁, ..., xₙ), where θ, θ′ ∈ {μ, ν}, are monotonic and range over Eₖ. Hence, the mappings denoted by

μxₖ.θxⱼ.θ′xᵢ.f(x₁, ..., xₙ) and νxₖ.θxⱼ.θ′xᵢ.f(x₁, ..., xₙ)

are also well defined. We shall consider such mappings in Section 1.3 (page 19), usually when all the Eᵢ's are equal.

1.2.4 Duality
Definition 1.2.24. If E is a Boolean algebra, for any mapping f : E → E, we define the dual mapping f̃ : E → E by f̃(x) = ¬f(¬x).

It is obvious that f ≤ g implies g̃ ≤ f̃. If f is monotonic, f̃ is monotonic too. The extremal fixed points μx.f̃(x) and νx.f̃(x) are related to μx.f(x) and νx.f(x) by the following proposition.

Proposition 1.2.25.

μx.f̃(x) = ¬(νx.f(x)),   νx.f̃(x) = ¬(μx.f(x)).
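Over the two-element Boolean algebra this duality can be verified exhaustively. A small sketch, with monotone maps encoded as pairs (f(0), f(1)) (the encoding is ours):

```python
def lfp(f):
    """Least fixed point over {0, 1}, iterating from 0."""
    x = 0
    while f(x) != x:
        x = f(x)
    return x

def gfp(f):
    """Greatest fixed point over {0, 1}, iterating from 1."""
    x = 1
    while f(x) != x:
        x = f(x)
    return x

# Every monotone f : B -> B, given by the pair (f(0), f(1)) with f(0) <= f(1).
for table in [(0, 0), (0, 1), (1, 1)]:
    f = lambda x, t=table: t[x]
    fdual = lambda x, t=table: 1 - t[1 - x]   # dual mapping: complement of f(complement)
    assert lfp(fdual) == 1 - gfp(f)           # mu x.f~(x) = complement of nu x.f(x)
    assert gfp(fdual) == 1 - lfp(f)           # nu x.f~(x) = complement of mu x.f(x)
print("duality checked")
```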
1. Complete lattices and fixed-point theorems
Proof. Observe first that if a is a fixed point of f, i.e., a = f(a), then ¬a = f̃(¬a), i.e., ¬a is a fixed point of f̃. Next, in any Boolean algebra, we have a ≤ b if and only if ¬b ≤ ¬a

[...] ⋃k≥0 Πk(T').

2.6.3 The syntactic hierarchy

When T is the μ-calculus over fixT(F) (see Section 2.3.1, page 47, and Proposition 2.3.6, page 48) we denote by Σk(F) and Πk(F) the syntactic hierarchy of fixed-point terms defined by

Σ0(F) = Π0(F) = functT(F)

and, for k < ω,

Σk+1(F) = μ(Πk(F)),
Πk+1(F) = ν(Σk(F)).

Moreover, it is clear that fixT(F) = ⋃k≥0 Σk(F) = ⋃k≥0 Πk(F).
2.6.4 The Emerson–Lei hierarchy

A slightly different definition of a hierarchy for the set fixT(F) has been proposed by Emerson and Lei [35], in the context of the modal μ-calculus (see Section 6.2, page 145). Their definition is originally based on a concept of an alternation depth of a formula which is defined "top-down". We can rephrase that definition in our setting, by inductively defining the classes Σk^EL and Πk^EL of fixed-point terms as follows. Let μ^EL(T') be the closure of a set T' under the application of symbols in F and under the μ-operator; note that this class may be not closed under composition. Let ν^EL(T') be defined similarly. Let Σ0^EL(F) = Π0^EL(F) = functT(F) and let

Σk+1^EL(F) = Comp(μ^EL(Πk^EL(F))),
Πk+1^EL(F) = Comp(ν^EL(Σk^EL(F))).

Of course, ⋃k≥0 Σk^EL(F) = ⋃k≥0 Πk^EL(F) = fixT(F) and it is easy to see that Σk^EL(F) ⊆ Σk(F) and Πk^EL(F) ⊆ Πk(F), but these inclusions are strict. For example, the term μx.νy.f(x, y, μz.νw.f(x, z, w)), where f ∈ F, is in Σ2(F) but not in Σ2^EL(F). To see that it is in Σ2(F), note that so are
Fig. 2.1. The hierarchy of clones: Σ0 = Π0 at the bottom; each pair Σk, Πk is included in Comp(Σk, Πk), which in turn is included in both Σk+1 and Πk+1.
2. The μ-calculi: Syntax and semantics
the terms μz.νw.f(x, z, w), νy.f(x, y, v) and νy.f(x, y, μz.νw.f(x, z, w)). On the other hand, our term cannot be obtained by composition of two terms in μ^EL(Π1^EL(F)), since the variable x occurs free in μz.νw.f(x, z, w). This term is actually of alternation depth 3 in the sense of Emerson and Lei [35].
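The "top-down" alternation-depth count can be made concrete. In the sketch below, the term encoding and helper names are ours, and the depth function is one common formulation of the Emerson–Lei definition (an outer fixed point raises the count only for dependent inner operators of the other kind); it reports depth 3 for the term above:

```python
# Terms: ('var', x), ('f', t1, ..., tn), ('mu', x, t), ('nu', x, t)

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] in ('mu', 'nu'):
        return free_vars(t[2]) - {t[1]}
    return set().union(*(free_vars(s) for s in t[1:]))

def subterms(t):
    yield t
    if t[0] in ('mu', 'nu'):
        yield from subterms(t[2])
    elif t[0] == 'f':
        for s in t[1:]:
            yield from subterms(s)

def alt_depth(t):
    """Emerson-Lei style alternation depth: an outer fixed point raises the
    count of an inner operator of the other kind only if that inner operator
    depends on (contains free) the outer bound variable."""
    if t[0] == 'var':
        return 0
    if t[0] == 'f':
        return max(alt_depth(s) for s in t[1:])
    kind, x, body = t
    other = 'nu' if kind == 'mu' else 'mu'
    d = max(alt_depth(body), 1)
    for s in subterms(body):
        if s[0] == other and x in free_vars(s):
            d = max(d, 1 + alt_depth(s))
    return d

inner = ('mu', 'z', ('nu', 'w', ('f', ('var', 'x'), ('var', 'z'), ('var', 'w'))))
term = ('mu', 'x', ('nu', 'y', ('f', ('var', 'x'), ('var', 'y'), inner)))
print(alt_depth(term))   # 3
```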
2.7 Vectorial μ-calculi

In Section 1.4, page 24, we have introduced a notation for vectorial fixed-point terms. We now wish to incorporate this issue in the formalism of the present chapter. To this end, it is convenient to introduce the concept of a vectorial μ-calculus, i.e., a vectorial extension of the μ-calculus in the sense of Section 2.1 (page 42). Thus, in particular, vectorial fixed-point terms will be formally presented as vectors of the ordinary (i.e., scalar) fixed-point terms. By the Bekič principle (Lemma 1.4.2, page 27), vectorial fixed-point terms do not lead to a greater expressive power. However, vectorial notation has several advantages. In particular, it admits a prefix normal form which gives rise to an elegant characterization of the alternation-depth fixed-point hierarchy. Also, vectorial fixed-point terms are usually more succinct than the equivalent scalar ones.

2.7.1 Vectorial extension of a μ-calculus
Definition 2.7.1. A vector v of length n is any sequence ⟨v1, ..., vn⟩ of n elements. For i ∈ {1, ..., n}, πi(v) is the i-th element of the sequence v (see Section 1.1.7, page 7). Obviously, a vector is defined by its components, so that if v and v' are two vectors of length n,

∀i ∈ {1, ..., n}, πi(v) = πi(v')   implies   v = v'.

If v is a vector of length n and v' is a vector of length n', we denote by ⟨v, v'⟩ the vector of length n + n' defined by

πi(⟨v, v'⟩) = πi(v)        if 1 ≤ i ≤ n,
πi(⟨v, v'⟩) = π_{i-n}(v')  if n + 1 ≤ i ≤ n + n'.

Obviously, this product is associative: ⟨v, ⟨v', v''⟩⟩ = ⟨⟨v, v'⟩, v''⟩, and we may denote this vector by ⟨v, v', v''⟩. If v is a vector of length n and if i = ⟨i1, ..., im⟩ is a sequence of elements of {1, ..., n}, then πi(v) is the vector of length m defined by πj(πi(v)) = π_{ij}(v), i.e., πi(⟨v1, ..., vn⟩) = ⟨v_{i1}, ..., v_{im}⟩.
Definition 2.7.2. Let T = (T, id, ar, comp, μ, ν) be a μ-calculus. The vectorial extension of the μ-calculus T is the structure

T* = (T*, id*, ar*, comp*, μ*, ν*)

where
– T* = ⋃_{n≥1} T^n,
– id* = id,
– for t = ⟨t1, ..., tn⟩ ∈ T^n, ar*(t) = ⋃_{i=1}^{n} ar(ti),
– for t = ⟨t1, ..., tn⟩ ∈ T^n and ρ : Var → T, comp*(t, ρ) = ⟨t1[ρ], ..., tn[ρ]⟩ ∈ T^n,
– for t = ⟨t1, ..., tn⟩ ∈ T^n and for x = ⟨x1, ..., xn⟩ a vector of distinct variables, θ*x.t is the vector ⟨t'1, ..., t'n⟩ of T^n where t'i is recursively defined as follows.
  – If n = 1 then t'1 = θx1.t1.
  – If n > 1, let

⟨t1^(i), ..., t_{i-1}^(i), t_{i+1}^(i), ..., tn^(i)⟩ = θ*⟨x1, ..., x_{i-1}, x_{i+1}, ..., xn⟩.⟨t1, ..., t_{i-1}, t_{i+1}, ..., tn⟩.

Then

t'i = θxi.(ti[id{t1^(i)/x1, ..., t_{i-1}^(i)/x_{i-1}, t_{i+1}^(i)/x_{i+1}, ..., tn^(i)/xn}]).
Note that T = T^1 ⊆ T* and that all the operations on T* are extensions of the operations on T; therefore in the sequel we will omit the superscript *. We will also denote by t[ρ] the object comp*(t, ρ).

Proposition 2.7.3. Let T be a μ-calculus. If x = ⟨x1, ..., xn⟩ is a vector of distinct variables, and if t = ⟨t1, ..., tn⟩ ∈ T^n, then ar(θx.t) = ar(t) − {x1, ..., xn}.
Proof. The proof is by induction on n. For n = 1, this property is just Axiom 3 of the μ-calculus. Let θx.t = ⟨t'1, ..., t'n⟩. Then ar(θx.t) = ⋃_{i=1}^{n} ar(t'i), where

t'i = θxi.(ti[id{t1^(i)/x1, ..., t_{i-1}^(i)/x_{i-1}, t_{i+1}^(i)/x_{i+1}, ..., tn^(i)/xn}])

and

⟨t1^(i), ..., t_{i-1}^(i), t_{i+1}^(i), ..., tn^(i)⟩ = θ⟨x1, ..., x_{i-1}, x_{i+1}, ..., xn⟩.⟨t1, ..., t_{i-1}, t_{i+1}, ..., tn⟩.

By the induction hypothesis,

ar(tj^(i)) ⊆ ar(⟨t1, ..., t_{i-1}, t_{i+1}, ..., tn⟩) − {x1, ..., x_{i-1}, x_{i+1}, ..., xn} ⊆ ar(t) − {x1, ..., x_{i-1}, x_{i+1}, ..., xn},

and ar(t'i) = ar(ti[id{t1^(i)/x1, ..., t_{i-1}^(i)/x_{i-1}, t_{i+1}^(i)/x_{i+1}, ..., tn^(i)/xn}]) − {xi}. But

ar(ti[id{t1^(i)/x1, ..., t_{i-1}^(i)/x_{i-1}, t_{i+1}^(i)/x_{i+1}, ..., tn^(i)/xn}]) = (ar(ti) − {x1, ..., x_{i-1}, x_{i+1}, ..., xn}) ∪ ⋃_{j≠i} ar(tj^(i)) ⊆ ar(t) − {x1, ..., x_{i-1}, x_{i+1}, ..., xn},

and the result follows. □
Let T1 and T2 be two μ-calculi. A homomorphism h from T1 to T2 can be extended into a mapping h* from the vectorial μ-calculus over T1 to the vectorial μ-calculus over T2 by h*(⟨t1, ..., tn⟩) = ⟨h(t1), ..., h(tn)⟩. The following result is a consequence of the definition of a vectorial μ-calculus.

Proposition 2.7.4. h*(θx.t) = θx.h*(t).

Proof. The proof is similar to the previous one, by induction on the length of t. □
2.7.2 Interpretations of vectorial fixed-point terms

In case T is the μ-calculus of fixed-point terms (see Section 2.3.2, page 47), the elements of its vectorial extension are called vectorial fixed-point terms. Note that an expression like θk x(k). ... .θ1 x(1).t is not, properly speaking, a vectorial fixed-point term, but rather a meta-expression denoting such a term, which, by definition, is a vector of ordinary (i.e., scalar) fixed-point terms. However, we will call such a meta-expression a vectorial fixed-point term, keeping in mind its real meaning.

Since the mapping that sends a term t to its interpretation [t]_I is a homomorphism (see Proposition 2.3.10, page 50) we get, as a consequence of the previous proposition, the following property, which is basic for all applications of vectorial fixed-point terms.

Proposition 2.7.5. Let τ = θk x(k). ... .θ1 x(1).t be a vectorial fixed-point term. Let I be an interpretation. Then [τ]_I = θk x(k). ... .θ1 x(1).[t]_I, the right-hand side being computed in the vectorial extension of the functional μ-calculus F(D_I).

The following proposition shows that [θx.t]_I = θx.[t]_I is indeed a fixed point and justifies our definition of θx.t in a vectorial μ-calculus. It can also be seen as a generalization of the Bekič principle.

Proposition 2.7.6. Let F(D) be the functional μ-calculus over D, and let v : Var → D be a valuation. For any vector x = ⟨x1, ..., xn⟩ of distinct
variables and any vector f = ⟨f1, ..., fn⟩ of elements of F(D), the vector (θx.f)[v], which belongs to the vectorial extension of F(D), is equal to the extremal fixed point of the mapping g : D^n → D^n defined by g(d1, ..., dn) = f[v{d1/x1, ..., dn/xn}], and is also equal to θ⟨y1, ..., yn⟩.f[v{y1/x1, ..., yn/xn}].
Proof. The proof is by induction on n. For n = 1, the result is a consequence of the definition of θx.f in F(D). Now, let us assume that θ = μ (the proof is similar for θ = ν), and let d = ⟨d1, ..., dn⟩ be the least fixed point of g. Let μx.f = ⟨f'1, ..., f'n⟩, so that (μx.f)[v] = ⟨f'1[v], ..., f'n[v]⟩. By definition of μx.f,

f'i[v] = (μxi.fi[id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn}])[v].

By Proposition 2.3.10,

f'i[v] = μyi.(fi[id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn}][v{yi/xi}])
       = fi[id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn}][v{f'i[v]/xi}].

Since id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn} composed with v{f'i[v]/xi} is equal to

v{h1^(i)/x1, ..., h_{i-1}^(i)/x_{i-1}, f'i[v]/xi, h_{i+1}^(i)/x_{i+1}, ..., hn^(i)/xn},

where hj^(i) = fj^(i)[v{f'i[v]/xi}], we get

f'i[v] = fi[v{h1^(i)/x1, ..., h_{i-1}^(i)/x_{i-1}, f'i[v]/xi, h_{i+1}^(i)/x_{i+1}, ..., hn^(i)/xn}].

Applying the induction hypothesis to

⟨h1^(i), ..., h_{i-1}^(i), h_{i+1}^(i), ..., hn^(i)⟩ = (μ⟨x1, ..., x_{i-1}, x_{i+1}, ..., xn⟩.⟨f1, ..., f_{i-1}, f_{i+1}, ..., fn⟩)[v{f'i[v]/xi}],

we get, for j ≠ i,

hj^(i) = fj[v{h1^(i)/x1, ..., h_{i-1}^(i)/x_{i-1}, f'i[v]/xi, h_{i+1}^(i)/x_{i+1}, ..., hn^(i)/xn}].

Thus ⟨h1^(i), ..., h_{i-1}^(i), f'i[v], h_{i+1}^(i), ..., hn^(i)⟩ is a fixed point of g, by which it follows that d ≤ ⟨h1^(i), ..., h_{i-1}^(i), f'i[v], h_{i+1}^(i), ..., hn^(i)⟩. In particular, di ≤ f'i[v].

Conversely, let d'j = fj^(i)[v{di/xi}] for j ≠ i; by the induction hypothesis, ⟨d'1, ..., d'_{i-1}, d'_{i+1}, ..., d'n⟩ = (μ⟨x1, ..., x_{i-1}, x_{i+1}, ..., xn⟩.⟨f1, ..., f_{i-1}, f_{i+1}, ..., fn⟩)[v{di/xi}], hence d'j ≤ dj, and thus, by monotonicity,

di = fi[v{d1/x1, ..., dn/xn}] ≥ fi[v{d'1/x1, ..., d'_{i-1}/x_{i-1}, di/xi, d'_{i+1}/x_{i+1}, ..., d'n/xn}].

But v{d'1/x1, ..., d'_{i-1}/x_{i-1}, di/xi, d'_{i+1}/x_{i+1}, ..., d'n/xn} is equal to id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn} composed with v{di/xi}. Thus,

di ≥ fi[id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn}][v{di/xi}],

by which it follows that

di ≥ (μxi.fi[id{f1^(i)/x1, ..., f_{i-1}^(i)/x_{i-1}, f_{i+1}^(i)/x_{i+1}, ..., fn^(i)/xn}])[v] = f'i[v]. □
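The content of Proposition 2.7.6 — that the componentwise nested fixed points of Definition 2.7.2 compute the simultaneous extremal fixed point — can be illustrated on a small finite lattice. The functions and helper names below are ours, not the book's:

```python
# A finite-lattice illustration of Proposition 2.7.6 / the Bekic principle.
# D is the chain 0 <= 1 <= 2 <= 3.
TOP = 3

def lfp(f):
    """Scalar least fixed point by iteration from 0."""
    x = 0
    while f(x) != x:
        x = f(x)
    return x

f1 = lambda x, y: min(max(x, 1), y + 1, TOP)   # monotone in x and y
f2 = lambda x, y: x                            # monotone in x and y

def simultaneous():
    """Least fixed point of g(x, y) = (f1(x,y), f2(x,y)) on D x D."""
    x, y = 0, 0
    while (f1(x, y), f2(x, y)) != (x, y):
        x, y = f1(x, y), f2(x, y)
    return x, y

def nested():
    """Component-wise scalar fixed points as in Definition 2.7.2:
    each component solves its own variable after the other variable
    has been eliminated by an inner fixed point."""
    x = lfp(lambda x: f1(x, lfp(lambda y: f2(x, y))))
    y = lfp(lambda y: f2(lfp(lambda x: f1(x, y)), y))
    return x, y

print(simultaneous(), nested())   # the two coincide
```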
2.7.3 The vectorial hierarchy

Definition 2.7.7. Let T be a μ-calculus and let T* be its vectorial extension. A subset C ⊆ T* is called a vectorial clone if
– it contains Var,
– if ⟨t1, ..., tn⟩ ∈ C then ti ∈ C for any i ∈ {1, ..., n},
– if t and t' are in C, so is ⟨t, t'⟩,
– it is closed under composition in the following sense: if t ∈ C and if ρ is a substitution such that ρ(y) ∈ C ∩ T for any y ∈ ar(t), then t[ρ] ∈ C.

A vectorial clone C is a μ-vectorial clone if it is additionally closed under the μ operator in the following sense: if t = ⟨t1, ..., tn⟩ is in C, and if x is a vector of n distinct variables, then μx.t is in C. Similarly, C is a ν-vectorial clone if it is closed under the ν operator, and it is a fixed-point vectorial clone if it is closed under both μ and ν, that is, if it is both a μ-vectorial clone and a ν-vectorial clone.

It is easy to see that the intersection of a nonempty family of vectorial clones of any type is again a vectorial clone of this type. Therefore, for any
set T' ⊆ T*, there exists a least vectorial clone containing T'; we shall denote it by Comp+(T'). Similarly, there exist a least μ-vectorial clone, a least ν-vectorial clone, and a least fixed-point vectorial clone containing T', which we shall denote respectively by μ+(T'), ν+(T'), and fix+(T').

For T' ⊆ T*, let pr(T') = {t ∈ T | ∃⟨t1, ..., tn⟩ ∈ T', ∃i : t = ti} be the set of components of vectors of T'. It is easy to see that if C is a vectorial clone, then T' ⊆ C ⟺ pr(T') ⊆ C. Therefore, any kind of the above vectorial clones is indeed generated by a subset of T. Given a subset T' of T we define a hierarchy of elements of T*, relative to T', by

Σ0+(T') = Π0+(T') = Comp+(T'),

and, for k < ω,

Σk+1+(T') = μ+(Πk+(T')),
Πk+1+(T') = ν+(Σk+(T')).

The following proposition is an immediate consequence of the previous definitions.
Proposition 2.7.8. For any k ≥ 0, pr(Σk+(T')) ⊆ Σk(T') ⊆ Σk+(T') and pr(Πk+(T')) ⊆ Πk(T') ⊆ Πk+(T').

2.7.4 Vectorial fixed-point terms in normal form

Let us consider the μ-calculus of fixed-point terms defined in Section 2.3.2 and its vectorial extension, as defined in Section 2.7.1. For each integer n ≥ 1 we define the following classification on the n-tuples of fixed-point terms in fixT(F).
– 𝒮0^n(F) = 𝒫0^n(F) is the set of n-tuples of base terms (see Definition 2.3.1, page 47).
– If t ∈ 𝒮0^n(F) = 𝒫0^n(F) then μx.t ∈ 𝒮1^n(F) and νx.t ∈ 𝒫1^n(F).
– For k > 0, if t ∈ 𝒮k^n(F) then μx.t ∈ 𝒮k^n(F) and νx.t ∈ 𝒫_{k+1}^n(F); if t ∈ 𝒫k^n(F) then μx.t ∈ 𝒮_{k+1}^n(F) and νx.t ∈ 𝒫k^n(F).
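The classification above is just bookkeeping on the alternations in the quantifier prefix. A small sketch (the encoding, and the exact rules, follow our reconstruction of the classification as stated), reading the prefix from the innermost operator outwards:

```python
def classify(prefix):
    """Level of theta_1 x_1. ... .theta_p x_p.t (t a tuple of base terms)
    in the S/P classification; returns ('S', k) or ('P', k), our encoding
    of S_k^n(F) / P_k^n(F). Base tuples (empty prefix) give (None, 0),
    standing for S_0 = P_0."""
    cls, k = None, 0
    for theta in reversed(prefix):        # innermost operator first
        if k == 0:
            cls, k = ('S', 1) if theta == 'mu' else ('P', 1)
        elif cls == 'S':
            cls, k = ('S', k) if theta == 'mu' else ('P', k + 1)
        else:
            cls, k = ('S', k + 1) if theta == 'mu' else ('P', k)
    return cls, k

print(classify(['mu', 'nu', 'mu']))   # ('S', 3)
```

For instance μx1.νx2.μx3.t climbs 𝒮1 → 𝒫2 → 𝒮3, while μx1.μx2.t stays at 𝒮1.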
Definition 2.7.9. A vectorial fixed-point term of length n is in normal form if it belongs to 𝒮k^n(F) or to 𝒫k^n(F) for some k ≥ 0.

Note that this classification does not cover all the n-tuples. Moreover it is purely syntactic, as exemplified in the next proposition, and it is not a hierarchy, since, for instance, 𝒮k^n(F) ⊄ 𝒮_{k+1}^n(F).

Proposition 2.7.10. Let t be a vector of base terms. Then θ1x1. ... .θp xp.t is in 𝒮k^n(F) if and only if p = k = 0, or p ≥ 1, k ≥ 1, θ1 = μ and θ2x2. ... .θp xp.t ∈ 𝒮k^n(F) ∪ 𝒫_{k-1}^n(F).

A symmetrical result holds for 𝒫k^n(F).
By Proposition 2.7.8 (page 63), it is clear that every component of an n-tuple in 𝒮k^n(F) (resp. 𝒫k^n(F)) is in Σk(F) (resp. Πk(F)). In this section we prove a kind of converse, transforming the previous classification into a hierarchy (see Proposition 2.7.14) by using the notion of a variant. This gives rise to a normal form for describing fixed-point terms.

First we introduce the following notation. If x = ⟨x1, ..., xn⟩ then {x} = {x1, ..., xn}. If ρ is a substitution and t = ⟨t1, ..., tn⟩ an n-tuple of terms, then ρ{t/x} denotes ρ{t1/x1, ..., tn/xn}. For any vector v = ⟨v1, ..., vn⟩ and any i, 1 ≤ i ≤ n, we denote by v_{-i} the vector ⟨v1, ..., v_{i-1}, v_{i+1}, ..., vn⟩. For instance, we can shorten the definition of the components t'i of θx.t given in Definition 2.7.2 (page 59) by writing

t'i = θxi.ti[id{θx_{-i}.t_{-i}/x_{-i}}].

We say that an n-tuple t = ⟨t1, ..., tn⟩ is a variant of t' = ⟨t'1, ..., t'n⟩, denoted by t ≈ t', if ∀i, 1 ≤ i ≤ n, ti ≈ t'i. The following proposition generalizes Axiom 7 of a μ-calculus.

Proposition 2.7.11. Let ρ be a substitution and let t' = (θx.t)[ρ]. Let y be a vector of variables such that {y} ∩ ar(t') = ∅. Then t' ≈ θy.(t[ρ{y/x}]).

Proof. The proof is by induction on the length n of the tuples. For n = 1, the result is immediate by definition of ≈. Otherwise

t'i = (θxi.ti[id{s^(i)/x_{-i}}])[ρ] = θz.ti[id{s^(i)/x_{-i}}][ρ{z/xi}],

where s^(i) = ⟨s1^(i), ..., s_{i-1}^(i), s_{i+1}^(i), ..., sn^(i)⟩ = θx_{-i}.t_{-i}. Thus,

t'i ≈ θyi.ti[id{s^(i)/x_{-i}}][ρ{z/xi}][id{yi/z}].

Taking into account the choice of z and yi, we can check that, on ar(ti), this composition of substitutions coincides with ρ{s^(i)[ρ{yi/xi}]/x_{-i}, yi/xi}. By the induction hypothesis, s^(i)[ρ{yi/xi}] ≈ θy_{-i}.(t_{-i}[ρ{y/x}]),

thus t'i ≈ θyi.ti[ρ{θy_{-i}.(t_{-i}[ρ{y/x}])/x_{-i}, yi/xi}]. On the other hand, the i-th component of θy.(t[ρ{y/x}])

is

θyi.(t i[ρ{y/x}][id{θy_{-i}.(t_{-i}[ρ{y/x}])/y_{-i}}]),

which is equal to t'i since, on ar(ti), the two substitutions coincide.
□

Applying the previous proposition with ρ = id we get the vectorial generalization of the property (v3) in the definition of a variant (Section 2.4.2, page 51).

Corollary 2.7.12. If {y} ∩ (ar(t) − {x}) = ∅ then θx.t ≈ θy.(t[id{y/x}]).

We also have a generalization of (v4).

Proposition 2.7.13. If {x} ∩ ar(t) = ∅ then θx.t ≈ t.
Proof. Let t = ⟨t1, ..., tn⟩ and θx.t = ⟨t'1, ..., t'n⟩. By definition we have t'i = θxi.ti[id{θx_{-i}.t_{-i}/x_{-i}}]. Since {x_{-i}} ∩ ar(ti) = ∅, ti[id{θx_{-i}.t_{-i}/x_{-i}}] = ti[id] ≈ ti, thus t'i ≈ θxi.ti. Since xi ∉ ar(ti), θxi.ti ≈ ti. □

As a consequence of this proposition, we get the following

Proposition 2.7.14. For any n > 0 and any k ≥ 0, we have, up to ≈,

𝒮k^n(F) ∪ 𝒫k^n(F) ⊆ 𝒮_{k+1}^n(F) ∩ 𝒫_{k+1}^n(F).
Proposition 2.7.15. Let x, y, and t be of length n, and x', y', and t' of length n'. If the sets {y}, {y'}, and ar(θx.t) ∪ ar(θx'.t') are pairwise disjoint, then

⟨θx.t, θx'.t'⟩ ≈ θ⟨y, y'⟩.⟨t[id{y/x}], t'[id{y'/x'}]⟩.

Proof. By Corollary 2.7.12, we have

θx.t ≈ θy.t[id{y/x}]   and   θx'.t' ≈ θy'.t'[id{y'/x'}].
Thus we have only to show that ⟨θx.t, θx'.t'⟩ ≈ θ⟨x, x'⟩.⟨t, t'⟩ provided x ∩ x' = x ∩ ar(t') = x' ∩ ar(t) = ∅. The proof is by induction on n + n'. For n = n' = 1, we have θ⟨x, x'⟩.⟨t, t'⟩ = ⟨θx.(t[id{θx'.t'/x'}]), θx'.(t'[id{θx.t/x}])⟩ = ⟨θx.t, θx'.t'⟩, since x' ∉ ar(t) and x ∉ ar(t').

... n'' > 0, k > 0. In this case we have θ'1 = θ''1 = θ and t' = θx'1.r', t'' = θx''1.r'', where, assuming that θ = μ, t' ∈ 𝒮k^{n'}(F), t'' ∈ 𝒮k^{n''}(F) and

r' ∈ 𝒮k^{n'}(F) ∪ 𝒫_{k-1}^{n'}(F),   r'' ∈ 𝒮k^{n''}(F) ∪ 𝒫_{k-1}^{n''}(F).

If r' and r'' are both in 𝒮k(F) or both in 𝒫_{k-1}(F), we can apply the induction hypothesis. Otherwise, assume that r' ∈ 𝒮k^{n'}(F) and r'' ∈ 𝒫_{k-1}^{n''}(F). Then, by Proposition 2.7.13, t'' ≈ μx''1.t'' and we can apply the induction hypothesis to r' and t''.

The second step is to show that if

t' = θ1z'1. ... .θm z'm.s'   and   t'' = θ1z''1. ... .θm z''m.s'',

where s' and s'' are base terms, then ⟨t', t''⟩ ≈ θ1u1. ... .θm um.⟨s'1, s''1⟩ for some vectors of variables ui, where s'1 and s''1 are base terms. Here the proof is by induction on m. If m = 0 there is nothing to do. If m ≥ 1, let r' = θ2z'2. ... .θm z'm.s' and r'' = θ2z''2. ... .θm z''m.s'', so that t' = θ1z'1.r' and t'' = θ1z''1.r''. By Proposition 2.7.15, we get

⟨t', t''⟩ ≈ θ1⟨y', y''⟩.⟨r'[id{y'/z'1}], r''[id{y''/z''1}]⟩.

Now, by applying Proposition 2.7.11 (page 64) we get that r'[id{y'/z'1}] is equivalent to some vector of the form θ2v2. ... .θm vm.s1, where s1 is a vector of base terms, and similarly for r''[id{y''/z''1}], so that we can apply the induction hypothesis to ⟨r'[id{y'/z'1}], r''[id{y''/z''1}]⟩. □

Now, we are ready to prove the main theorem of this section.
Theorem 2.7.19. Let t be a term in Σk(F) (resp. Πk(F)). If k > 0, there exist n and a vector 𝐭 in 𝒮k^n(F) (resp. 𝒫k^n(F)) such that t is a variant of the first component of 𝐭. If k = 0 then there exist n, a vector 𝐭1 ∈ 𝒮1^n(F) and a vector 𝐭2 ∈ 𝒫1^n(F) such that t is a variant of the first component of both 𝐭1 and 𝐭2. Moreover, ar(𝐭) = ar(t) in the former case and ar(𝐭1) = ar(𝐭2) = ar(t) in the latter.
Proof. The proof is by induction on k.

For k = 0 the proof is by induction on t. If t is a base term, we have t ≈ θx.t, which is in 𝒮1^1(F) or in 𝒫1^1(F), according to the value of θ. If t = f(t1, ..., tm) then each ti is in Σ0(F) = Π0(F), and, by the induction hypothesis, ti is a variant of the first component of a vector 𝐭i, which can be chosen either in 𝒮1^{ni}(F) or in 𝒫1^{ni}(F). Let us choose 𝒮1^{ni}(F) (the proof is similar for 𝒫1^{ni}(F)). By Proposition 2.7.18, 𝐭 = ⟨𝐭1, ..., 𝐭m⟩ is in 𝒮1^n(F). Now, ⟨t1, ..., tm⟩ consists of the components of indices i1, ..., im of the vector 𝐭. Let 𝐲 be a vector of length n such that {𝐲} ∩ ar(𝐭) = ∅ and let y be a variable neither in {𝐲} nor in ar(𝐭). Let s = μ⟨y, 𝐲⟩.⟨μy.f(y_{i1}, ..., y_{im}), 𝐭⟩, which is a variant of a vector in 𝒮1^{n+1}(F), since ⟨μy.f(y_{i1}, ..., y_{im}), 𝐭⟩ is a variant of a vector in 𝒮1^{n+1}(F) by Proposition 2.7.18. The hypotheses of Proposition 2.7.17 are satisfied, thus the first component of s is a variant of (μy.f(y_{i1}, ..., y_{im}))[id{𝐭/𝐲}], and since μy.f(y_{i1}, ..., y_{im}) ≈ f(y_{i1}, ..., y_{im}), it is also a variant of f(y_{i1}, ..., y_{im})[id{𝐭/𝐲}] = f(t1, ..., tm).

For k > 0, the proof is by induction on the definition of Σk(F) (the proof is similar for Πk(F)).

Let t = μx.t' with t' in Σk(F) or in Π_{k-1}(F). By the induction hypothesis t' is a variant of the first component t'1 of a vector 𝐭' of length n which is in 𝒮k^n(F) if t' is in Σk(F), in 𝒫_{k-1}^n(F) if t' is in Π_{k-1}(F) and k > 1, and in 𝒮1^n(F) if t' is in Π_{k-1}(F) and k = 1. Let 𝐲 be a vector of length n − 1 such that ar(t') ∩ {𝐲} = ∅. Then μ⟨x, 𝐲⟩.𝐭' is always a variant of a vector in 𝒮k^n(F), and its first component s1 is equal to μx.(t'1[id{μ𝐲.𝐭'_{-1}/𝐲}]). By the choice of 𝐲, t'1[id{μ𝐲.𝐭'_{-1}/𝐲}] = t'1[id] ≈ t'1 ≈ t', thus s1 ≈ μx.t' = t.

Let t = t'[ρ] where t' ∈ Σk(F) and ρ(x) ∈ Σk(F) for any x ∈ ar(t'). Let x = ⟨x1, ..., xm⟩ be such that {x} = ar(t'), and let s = ⟨ρ(x1), ..., ρ(xm)⟩, so that t'[ρ] = t'[id{s/x}]. By the induction hypothesis, t' is a variant of the first component of a vector 𝐭' ∈ 𝒮k^{n'}(F) and each ρ(xi) is a variant of the first component of a vector 𝐭i ∈ 𝒮k^{ni}(F). Let 𝐭 = ⟨𝐭1, ..., 𝐭m⟩, let ij be the index of ρ(xj) in 𝐭, and let 𝐲 be a vector of length n such that ar(𝐭) ∩ {𝐲} = ∅ = {x} ∩ {𝐲}
and let z = ⟨y_{i1}, ..., y_{im}⟩. By Proposition 2.7.11 (page 64), 𝐭'' = 𝐭'[id{z/x}] is a variant of a vector in 𝒮k^{n'}(F), thus ⟨𝐭'', 𝐭⟩ is a variant of a vector in 𝒮k^{n'+n}(F), and so is μ⟨x', 𝐲⟩.⟨𝐭'', 𝐭⟩, where {x'} ∩ {𝐲} = {x'} ∩ ar(𝐭) = {x'} ∩ ar(𝐭'') = ∅. We can apply Proposition 2.7.17 to get that the first component of μ⟨x', 𝐲⟩.⟨𝐭'', 𝐭⟩ is a variant of t'[id{z/x}][id{𝐭/𝐲}]. It is easy to check that this term is equal to t'[ρ]. □

From Proposition 2.7.8 (page 63) and the previous theorem we get:

Corollary 2.7.20. For a term t the following properties are equivalent:
– t ∈ Σk(F);
– t ∈ Σk+(F);
– t is a component of some vectorial term in 𝒮k^n(F) for some n ≥ 1;
and similarly for Π and 𝒫.
2.8 Bibliographic notes and sources
We have already mentioned, in the bibliographic notes after Chapter 1, the prototypes of μ-calculi considered by Park [80] and by Emerson and Clarke [36]. The μ-calculus as a general purpose logical system, not confined to a particular application, originated with the work of Kozen [55], who proposed the modal μ-calculus as an extension of propositional modal logic by a least fixed point operator (by duality, the logic comprises the greatest fixed points as well). Similar ideas appeared slightly earlier in the work of Pratt [82], who based his calculus on the concept of a least root rather than a least fixed point. Nearly at the same time, Immerman [42] and Vardi [98] independently gave a model-theoretic characterization of polynomial time complexity, using an extension of first-order logic by fixed-point operators (a subject not covered in this book). The modal μ-calculus of Kozen has subsequently received much study, motivated both by the mathematical appeal of the logic and by its potential usefulness for program verification. These studies revealed that the μ-calculus subsumes most of the previously defined logics of programs (see [41] and references therein), is decidable in exponential time (Emerson and Jutla [32]), and admits a natural complete proof system (Walukiewicz [103]). A slightly different approach to the μ-calculus was taken by the authors of this book [74, 75, 77, 10, 78], who considered an algebraic calculus of fixed-point terms interpretable over arbitrary complete lattices or, more restrictively, over powerset algebras. We will reconcile the two views in Chapter 6, by presenting the modal μ-calculus in our framework of powerset algebras. The concept of an abstract μ-calculus as presented in Section 2.1 of this chapter has not been presented before; it can be viewed as a remote analogue of the concept of a combinatory algebra in the λ-calculus (see Barendregt [11]).
3. The Boolean μ-calculus
The simplest nontrivial complete lattice consists only of ⊥ and ⊤ with ⊥ ≤ ⊤; this is the celebrated Boolean algebra. We denote it 𝔹 and use the classical notations: 0 for ⊥, 1 for ⊤, + for ∨, and the operator ∧ is denoted by a dot or omitted. We call the functional μ-calculus over 𝔹 (see Section 2.2, page 44), F(𝔹), the Boolean μ-calculus.

The importance of the Boolean μ-calculus stems from the role of powerset interpretations (see Section 2.5, page 53), via the obvious isomorphism of 𝔹^E and P(E). Note that if E is finite, F(𝔹^E) is captured by the vectorial extension of F(𝔹) (see Section 2.7, page 58). As E can be infinite, we will also be led to consider infinite vectors of Boolean fixed-point terms (see Sections 3.3.2 and 3.3.3, pages 78 and sq.).

The Boolean μ-calculus is the core of this book. As we will see later in Chapters 10 and 11, most of the algorithmic problems of the μ-calculus amount to evaluating Boolean vectorial fixed-point terms. Also, the selection property shown in Section 3.3 (page 75) turns out to be crucial for the determinacy of the infinite games of Chapter 4. These games in turn are essential for the equivalence between the μ-calculus and automata (Chapter 7), as well as for the hierarchy problem (Chapter 8).
3.1 Monotone Boolean functions
Let h : 𝔹^X → 𝔹. If x ∈ X we write h = h(x) to make it clear that h depends on x, and we write h(t) for h[id{t/x}], if x is known from the context. Then the classical Shannon lemma is written h(x) = x̄h(0) + xh(1). Indeed, if h is monotonic with respect to x, Shannon's lemma becomes

Lemma 3.1.1 (Shannon's lemma). h(x) = h(0) + xh(1).

Proof. Let g(x) = h(0) + xh(1). We have g(0) = h(0) and g(1) = h(0) + h(1), which is equal to h(1) since h(0) ≤ h(1). □

Here are some consequences of this lemma.

Proposition 3.1.2. μx.h(x) = h(0); νx.h(x) = h(1).
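Both Shannon's lemma for monotone h and Proposition 3.1.2 can be checked mechanically over the three monotone functions of one Boolean variable; a brute-force sketch (the encoding is ours):

```python
from itertools import product

# All monotone h : {0,1} -> {0,1}, encoded as pairs (h(0), h(1)) with h(0) <= h(1).
monotone = [h for h in product((0, 1), repeat=2) if h[0] <= h[1]]

for h in monotone:
    for x in (0, 1):
        # Shannon's lemma for monotone h: h(x) = h(0) + x.h(1)
        assert h[x] == h[0] | (x & h[1])
    # Proposition 3.1.2: mu x.h(x) = h(0) and nu x.h(x) = h(1)
    fixpoints = [x for x in (0, 1) if h[x] == x]
    assert min(fixpoints) == h[0] and max(fixpoints) == h[1]

print("checked", len(monotone), "monotone functions")
```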
Proof. Since h(x) = h(0) + xh(1) with h(0) ≤ h(1), we get h(h(0)) = h(0) + h(0)h(1) = h(0) and h(h(1)) = h(0) + h(1)h(1) = h(1). Therefore h(0) and h(1) are fixed points of h(x). But we know that h(0) ≤ μx.h(x) ≤ νx.h(x) ≤ h(1), hence the result. □

The following proposition shows that any monotonic mapping is homomorphic with respect to Boolean sum and product.

Proposition 3.1.3. If h(x) is monotonic with respect to x then h(x + y) = h(x) + h(y) and h(xy) = h(x)h(y).

Proof. We have to prove that for any b, b' ∈ 𝔹, h(b + b') = h(b) + h(b'). Without loss of generality, we may assume that b ≤ b', hence h(b) ≤ h(b'). Then h(b + b') = h(b') = h(b) + h(b'). The proof is similar for the product. Moreover, it is easy to see that ar(h(x + y)) = ar(h(xy)) = ar(h(x) + h(y)) = ar(h(x)h(y)), which is equal to ar(h(x)) ∪ ar(h(y)). □

Another consequence of the previous lemma is the following corollary.

Corollary 3.1.4. For any x ∈ Var, if h is monotonic with respect to x, then h[ρ] = h[ρ{0/x}] + ρ(x)h[ρ{1/x}].

Proof. It is easy to see that

(h[id{0/x}] + xh[id{1/x}])[ρ] = h[id{0/x}][ρ] + ρ(x)(h[id{1/x}][ρ]) = h[ρ{0/x}] + ρ(x)h[ρ{1/x}]. □
3.2 Powerset interpretations and the Boolean μ-calculus
Due to the obvious isomorphism of the powerset lattice (P(E), ⊆) and the product lattice 𝔹^E, we can identify any powerset interpretation I of the universe P(E) (see Section 2.5, page 53) with a μ-interpretation of the universe 𝔹^E, i.e., such that the interpretation of any fixed-point term is an object of F(𝔹^E). Now, any function in F(𝔹^E) can readily be represented by a possibly infinite vector of (possibly infinite) monotone Boolean terms. In particular, with any fixed-point term t we can associate its characteristic Boolean function, that is, the representation of [t]_I. In this section, we fix a notation for these Boolean characteristics, and justify it by some basic results. The purpose of this section is technical¹, as we will later wish to exploit the aforementioned connection between powerset and Boolean interpretations.

¹ This section is not related to the subsequent sections of this chapter.
Let I be a powerset interpretation for F with domain P(E). Let h be the canonical bijection from 𝔹^E to P(E), i.e., if g is a mapping from E to 𝔹, h(g) = {e ∈ E | g(e) = 1}. Note that this bijection is monotonic. This bijection is extended into a bijection from (𝔹^E)^D = 𝔹^{E×D} to P(E)^D, still denoted by h. Thus, if g is a mapping from E × D to 𝔹, h(g) is the mapping from D to P(E) defined by h(g)(d) = {e ∈ E | g(e, d) = 1}.

Let Var' be a new set of variables, large enough for there to exist an injection u : E × Var → Var'. For e ∈ E and x ∈ Var, we denote by u_{e,x} the variable u(e, x) of Var'. If x is a vector of distinct variables of Var, indexed by an arbitrary set J, we denote by u_x the vector of variables of Var', indexed by E × J, such that if the component of index j of x is y, the component of index (e, j) of u_x is u_{e,y}; we also denote this vector by (u_{e,x_j})_{e∈E, j∈J}. In particular, if x is a variable of Var, u(x) is the vector indexed by E whose component of index e is u_{e,x}. Likewise, if X is a subset of Var, u_X is the subset {u_{e,x} | e ∈ E, x ∈ X} of Var'.

For any valuation w : Var' → 𝔹, h*(w) is the mapping from Var to P(E) defined by h*(w)(x) = {e ∈ E | w(u_{e,x}) = 1}. If we see w(u(x)) as an element of 𝔹^E we can also write h*(w)(x) = h(w(u(x))). From these definitions the following lemma follows immediately.

Lemma 3.2.1. Let x be a vector of length n of distinct variables in Var. Let w : Var' → 𝔹 and let a be a vector in (𝔹^E)^n, which can also be viewed as a vector of Boolean values indexed by E × {1, ..., n}. Then h*(w{a/u_x}) = h*(w){h(a)/x}.

Proof. By the remark above, h*(w{a/u_x})(y) = h(w{a/u_x}(u(y))). Let x be ⟨x1, ..., xn⟩ and a be ⟨a1, ..., an⟩. Then h(w{a/u_x}(u(xi))) = h(ai) = h*(w){h(a)/x}(xi), and for any y which is not a component of x,

h(w{a/u_x}(u(y))) = h(w(u(y))) = h*(w)(y) = h*(w){h(a)/x}(y). □

Definition 3.2.2. For any valuation v : Var → P(E) we define char(v) : 𝔹^{Var'} → 𝔹, the characteristic of v, by

char(v) = ∏_{x∈Var} ∏_{e∈v(x)} u_{e,x},

so that for any w : Var' → 𝔹,

char(v)(w) = ∏_{x∈Var} ∏_{e∈v(x)} w(u_{e,x}).

Lemma 3.2.3. Let v : Var → P(E) and w : Var' → 𝔹 be two valuations. Then char(v)(w) = 1 if and only if ∀x ∈ Var, v(x) ⊆ h*(w)(x).
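On a tiny universe the characteristic of a valuation, and Lemma 3.2.3, can be checked by brute force. A sketch with our own encoding of Var' as pairs (e, x):

```python
# Brute-force check of Lemma 3.2.3 on a two-element universe.
from itertools import combinations, product

E = [0, 1]
VARS = ['x', 'y']
PAIRS = [(e, x) for e in E for x in VARS]          # indices of the u_{e,x}

def h_star(w):
    """h*(w)(x) = {e in E : w(u_{e,x}) = 1}."""
    return {x: {e for e in E if w[(e, x)]} for x in VARS}

def char(v, w):
    """char(v)(w): the product of w(u_{e,x}) over x in Var and e in v(x)."""
    return int(all(w[(e, x)] for x in VARS for e in v[x]))

subsets = [set(c) for r in range(len(E) + 1) for c in combinations(E, r)]

for bits in product((0, 1), repeat=len(PAIRS)):
    w = dict(zip(PAIRS, bits))
    for vx, vy in product(subsets, subsets):
        v = {'x': vx, 'y': vy}
        # Lemma 3.2.3: char(v)(w) = 1 iff v(x) is contained in h*(w)(x) for all x
        assert (char(v, w) == 1) == all(v[x] <= h_star(w)[x] for x in VARS)

print("Lemma 3.2.3 checked on", len(PAIRS), "Boolean variables")
```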
Proof. We have char(v)(w) = 1 if and only if Vx E Var, Ve E v(x), w(u~,x) = 1. B u t w ( u ~ , ~ ) = 1 if and only if e E h*(w)(x). [7 D e f i n i t i o n 3.2.4. Let t be any fixedpoint t e r m over F and let/7 be a powerset interpretation over P ( E ) . W i t h t and 27 we associate the characteristic function x z ( t ) " ][3 Vat' _~ ]BE such t h a t the component of index e of x z ( t ) is equal to y'~v.eEi~]z[v] char(v).
Proposition 3.2.5. Let t be a term and let I be a powerset interpretation. For any w : Var' → 𝔹, h(χ_I(t)(w)) = [t]_I[h*(w)]. In particular, if t is closed, h(χ_I(t)) = [t]_I.

Proof. By definition of h and χ_I, e ∈ h(χ_I(t)(w)) if and only if the component of index e of χ_I(t)(w) is equal to 1, if and only if there exists v : Var → 𝒫(E) such that e ∈ [t]_I[v] and char(v)(w) = 1. But, by the previous lemma, char(v)(w) = 1 if and only if ∀x ∈ Var, v(x) ⊆ h*(w)(x). Therefore, e ∈ h(χ_I(t)(w)) if and only if there exists v such that e ∈ [t]_I[v] and ∀x ∈ Var, v(x) ⊆ h*(w)(x). If such a v exists, by monotonicity of [t]_I, e ∈ [t]_I[h*(w)]. Conversely, if e ∈ [t]_I[h*(w)], we can take v = h*(w) and thus e ∈ h(χ_I(t)(w)). □

As we have seen before, h : 𝔹^E → 𝒫(E) can be extended componentwise into a mapping from (𝔹^E)^n to 𝒫(E)^n, and the mapping χ_I is extended to vectorial terms by χ_I((t_1,…,t_n)) = (χ_I(t_1),…,χ_I(t_n)). It is easy to see that the previous proposition is still valid for vectorial terms.

Proposition 3.2.6. Let t be a vectorial term and let I be a powerset interpretation. For any w : Var' → 𝔹, h(χ_I(t)(w)) = [t]_I[h*(w)].

As a consequence, χ_I preserves fixed-point operators in the following sense.

Proposition 3.2.7. Let t be a vectorial term of length n, and let x be a vector of variables of Var of length n. Then χ_I(θx.t) = θu_x.χ_I(t).
Proof. By Proposition 2.2.2 (page 45), it is sufficient to prove that for any w : Var' → 𝔹, χ_I(θx.t)(w) = (θu_x.χ_I(t))(w). As usual, we prove the result for θ = μ; the case θ = ν is symmetric. Let a = χ_I(μx.t)(w) and b = (μu_x.χ_I(t))(w).

By the previous proposition, h(a) = [μx.t]_I[h*(w)] = (μx.[t]_I)[h*(w)], which is equal, by Proposition 2.2.4 (page 46), to μx.([t]_I[h*(w){x/x}]). It follows that h(a) = [t]_I[h*(w){h(a)/x}], which is also equal, by Lemma 3.2.1, to [t]_I[h*(w{a/u_x})]. Again by the previous proposition,

  [t]_I[h*(w{a/u_x})] = h(χ_I(t)(w{a/u_x})),

and since h is a bijection, this implies a = χ_I(t)(w{a/u_x}). Thus a is a fixed point of c ↦ χ_I(t)(w{c/u_x}), whose least fixed point is b; it follows that a ≥ b.

On the other hand, b = (μu_x.χ_I(t))(w) is the least fixed point of c ↦ χ_I(t)(w{c/u_x}), so b = χ_I(t)(w{b/u_x}). It follows that

  h(b) = h(χ_I(t)(w{b/u_x})) = [t]_I[h*(w{b/u_x})] = [t]_I[h*(w){h(b)/x}].

Hence, h(b) ≥ μx.([t]_I[h*(w){x/x}]) = (μx.[t]_I)[h*(w)] = [μx.t]_I[h*(w)]. By the previous proposition, [μx.t]_I[h*(w)] is equal to h(χ_I(μx.t)(w)) = h(a), and since h is a monotonic bijection, h(b) ≥ h(a) implies b ≥ a. □
By iterated applications of the previous proposition, we get:

Proposition 3.2.8. If τ is the vectorial term θ_k x_k. ⋯ .θ_1 x_1.t, then χ_I(τ) = θ_k u_{x_k}. ⋯ .θ_1 u_{x_1}.χ_I(t).

3.3 The selection property
It is obvious that for any closed Boolean terms b_1, b_2, the disjunction b_1 + b_2 is equivalent to one of the disjuncts, b_i. Although this property fails for non-closed terms, it generalizes, somewhat surprisingly, to closed terms with an arbitrary fixed-point prefix. That is, for any closed fixed-point term θ_1 x_1.θ_2 x_2.⋯.θ_k x_k.(b_1 + b_2) (where b_1, b_2 need not, of course, be closed) we can select i ∈ {1,2} such that the original term is equivalent to θ_1 x_1.θ_2 x_2.⋯.θ_k x_k.b_i. This property further generalizes to vectorial fixed-point terms, and also to infinite vectors of infinite disjunctions. We call it the selection property since it allows the replacement of the sum by an adequately selected summand.

The selection property underlies several important results in the μ-calculus. In particular, it is at the basis of the determinacy result for parity games proved in the next chapter (Theorem 4.3.8, page 92). We give a simple proof of this result for vectorial terms of finite length. Then we generalize it to vectorial terms indexed by sets of arbitrary cardinality like those used in the previous section.

3.3.1 Finite vectorial fixed-point terms

First we need some lemmas. As a consequence of the Gauss elimination principle (Proposition 1.4.7, page 30), we get
Lemma 3.3.1. Let x = (x_1,…,x_n) and let x be a variable not in x. Let f = f(x, x_1,…,x_n) be a monotonic function over a complete lattice and let f = (f_1,…,f_n) be a vector of such functions. Let us denote by (x,x) the sequence (x, x_1,…,x_n) and by (f,f) the sequence (f, f_1,…,f_n). Let g = θx.f and g = θx.f[id{g/x}]. Then θ(x,x).(f,f) = (g, g[id{g/x}]).

Remark. We will use the obvious notation f(g) for f[id{g/x}] and g(g) for g[id{g/x}].

The next lemma is a particular case of Proposition 1.4.5, page 29.

Lemma 3.3.2. Let f(x,x) be a function and f(x,x) be a vector of functions. Then

  θ(x,x).(f(x,x), f(f(x,x), x)) = θ(x,x).(f(x,x), f(x,x)).
Lemma 3.3.3. For i = 0, 1, 2, let f_i(x,x) be monotonic functions such that f_0 = f_1 + f_2, and let f be a vector of functions of the same length as x. Then there exist g_0, g_1, g_2, g such that θ(x,x).(f_i, f) = (g_i, g(g_i)) and g_0 = g_1 + g_2.

Proof. Let g = θx.f and g_i = θx.f_i(g). By Lemma 3.3.1,

  θ(x,x).(f_i, f) = (g_i, g(g_i)).

Let Ω be 0 or 1 according to whether θ is μ or ν. Then, by Proposition 3.1.2 (page 71), g_i = f_i(Ω, g(Ω)). Since f_0 = f_1 + f_2, we get g_0 = g_1 + g_2 by Proposition 3.1.3 (page 72). □

Proposition 3.3.4. For i = 0, 1, 2 and k > 0, let

  k_i = θ_1(x_1,x_1).θ_2(x_2,x_2).⋯.θ_k(x_k,x_k).(f_i, f)

with f_0 = f_1 + f_2. Then there exist g_0, g_1, g_2, g such that k_i = (g_i, g(g_i)) and g_0 = g_1 + g_2.
Proof. The proof is by induction on k. For k = 1, the result is proved in Lemma 3.3.3. By the induction hypothesis, k_i = θ_1(x_1,x_1).(g_i, g(g_i)) with g_0 = g_1 + g_2. By Lemma 3.3.2, k_i = θ_1(x_1,x_1).(g_i, g(x_1)), and, again by Lemma 3.3.3, k_i = (h_i, h(h_i)) with h_0 = h_1 + h_2. □

Theorem 3.3.5. Let f : 𝔹^k × 𝔹^{kn} → 𝔹^n (with k > 0), and, for i = 0, 1, 2, let f_i : 𝔹^k × 𝔹^{kn} → 𝔹, and

  k_i = θ_1(x_1,x_1).θ_2(x_2,x_2).⋯.θ_k(x_k,x_k).(f_i, f) ∈ 𝔹^{n+1}.

If f_0 = f_1 + f_2 then k_0 = k_1 + k_2, and, moreover, there exists i ∈ {1, 2} such that k_0 = k_i.
Proof. By the above proposition, k_i = (g_i, g(g_i)) with g_0 = g_1 + g_2 ∈ 𝔹. By Proposition 3.1.3 (page 72), g(g_0) = g(g_1) + g(g_2), hence k_0 = k_1 + k_2. Since g_0 = g_1 + g_2 ∈ 𝔹, there exists i such that g_0 = g_i, hence the result. □

The following "dual" theorem is proved in exactly the same way.

Theorem 3.3.6. Let f : 𝔹^k × 𝔹^{kn} → 𝔹^n (with k > 0), and, for i = 0, 1, 2, let f_i : 𝔹^k × 𝔹^{kn} → 𝔹, and

  k_i = θ_1(x_1,x_1).θ_2(x_2,x_2).⋯.θ_k(x_k,x_k).(f_i, f) ∈ 𝔹^{n+1}.

If f_0 = f_1 · f_2 then k_0 = k_1 · k_2.
As a consequence of Theorem 3.3.5, we get the following result.

Theorem 3.3.7. Let, for i = 1,…,n, f_i = f_i^(1) + f_i^(2), where f_i^(j) : 𝔹^{kn} → 𝔹. Then, for each i = 1,…,n, there exists s(i) ∈ {1, 2} such that

  θ_1 x_1.⋯.θ_k x_k.(f_1,…,f_i,…,f_n) = θ_1 x_1.⋯.θ_k x_k.(f_1^(s(1)),…,f_i^(s(i)),…,f_n^(s(n))).

Proof. Apply Theorem 3.3.5 n times. □
We can express this theorem another way. Add 2n Boolean variables z_1,…,z_n and z_1',…,z_n', and replace f_i = f_i^(0) + f_i^(1) by f_i' = z_i·f_i^(0) + z_i'·f_i^(1).

Theorem 3.3.8. Let

  g(z_1,…,z_i,…,z_n, z_1',…,z_i',…,z_n') = θ_1 x_1.θ_2 x_2.⋯.θ_k x_k.(f_1',…,f_n').

Then there exist b_1,…,b_n ∈ 𝔹 such that

  g(1,…,1,…,1, 1,…,1,…,1) = g(b_1,…,b_i,…,b_n, b̄_1,…,b̄_i,…,b̄_n).
Example 3.3.9. Let

  f(z_1, z_2, z_1', z_2') = ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(z_1·y_2 + z_1'·x_3, z_2·y_1 + z_2'·x_3, x_3).

We have

  μ(y_1,y_2,y_3).(z_1·y_2 + z_1'·x_3, z_2·y_1 + z_2'·x_3, x_3) = ((z_1·z_2' + z_1')·x_3, (z_2·z_1' + z_2')·x_3, x_3)

and

  f(z_1, z_2, z_1', z_2') = (z_1·z_2' + z_1', z_2·z_1' + z_2', 1).

Hence,

  f(1,1,1,1) = (1,1,1),
  f(1,1,0,0) = (0,0,1),
  f(0,0,1,1) = (1,1,1),
  f(1,0,0,1) = (1,1,1),
  f(0,1,1,0) = (1,1,1).

It follows that ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(y_2+x_3, y_1+x_3, x_3) is also equal to

  ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(y_2, x_3, x_3),

to

  ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(x_3, y_1, x_3),

and to ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(x_3, x_3, x_3), but not to ν(x_1,x_2,x_3).μ(y_1,y_2,y_3).(y_2, y_1, x_3).
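The equalities of Example 3.3.9 can be verified by brute-force fixed-point iteration over the finite Boolean lattice; the following is an illustrative sketch (all function names are assumptions of this illustration):

```python
# nu(x1,x2,x3).mu(y1,y2,y3).(y2+x3, y1+x3, x3) compared with its
# selected variants, by naive fixed-point iteration on {0,1}^3.
def mu(f, n):
    """Least fixed point of monotone f on {0,1}^n, iterating from bottom."""
    v = (0,) * n
    while f(v) != v:
        v = f(v)
    return v

def nu(f, n):
    """Greatest fixed point, iterating from top."""
    v = (1,) * n
    while f(v) != v:
        v = f(v)
    return v

def make_term(first, second):
    # first, second: components 1 and 2 as functions of (y, x);
    # component 3 is always x3.
    def outer(x):
        inner = lambda y: (first(y, x), second(y, x), x[2])
        return mu(inner, 3)          # inner mu for a fixed outer x
    return lambda: nu(outer, 3)      # outer nu over x

y1 = lambda y, x: y[0]
y2 = lambda y, x: y[1]
x3 = lambda y, x: x[2]
plus = lambda a, b: (lambda y, x: a(y, x) | b(y, x))

original = make_term(plus(y2, x3), plus(y1, x3))()
assert original == make_term(y2, x3)()   # select (y2, x3, x3)
assert original == make_term(x3, y1)()   # select (x3, y1, x3)
assert original == make_term(x3, x3)()   # select (x3, x3, x3)
assert original != make_term(y2, y1)()   # but not (y2, y1, x3)
```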
3.3.2 Infinite vectors of fixed-point terms

Theorem 3.3.8 continues to hold for infinite systems of equations, i.e., when we consider that the sequences x_j and f are infinite instead of being of finite length n. Let, for j = 1,…,k, x_j be a vector of variables, indexed by some set I of arbitrary cardinality, and let f be a vector of monotonic Boolean functions over the variables in the vectors x_j, indexed by the same set I. Indeed, f can be viewed as a monotonic mapping from (𝔹^I)^k to 𝔹^I. Then, since 𝔹^I is a complete lattice, θ_1 x_1.θ_2 x_2.⋯.θ_k x_k.f(x_1, x_2,…,x_k) exists.

Let us consider two vectors of Boolean variables z and z', indexed by I, and let us assume that f = (f_i)_{i∈I}, with f_i = z_i·f_i^(0) + z_i'·f_i^(1). For any vector u ∈ 𝔹^I, let us denote by ū the vector (ū_i)_{i∈I} of complements. Then the following result is a generalization of Theorem 3.3.8, where 1 denotes the infinite vector of 1's of appropriate length.

Theorem 3.3.10. Let g(z, z') = θ_1 x_1.θ_2 x_2.⋯.θ_k x_k.f. Then there exists u ∈ 𝔹^I such that g(1, 1) = g(u, ū).

To prove this theorem we need a definition.

Definition 3.3.11. We say that a monotonic mapping
" B ~ x B* x ( B I ) ~ + 1B*
has property S (S for Selection) if Vu, u', v, v' E ]BI such t h a t u < v and u' < v', V e l , e 2 C ( B I ) m such t h a t el < e2, if u + u' < f ( u , u', e l ) then there exist w, w ' C B I such t h a t  u < w < v and u ' < w ' < v' 
"t/.lt ! W'tO
I,
3.3 The selection p r o p e r t y
 ~ + ~,
f(~,
~,,,~)
79
 f(~, ~,,,~),
w h e r e u + u ' a n d u 9u ' a r e t h e p o i n t w i s e e x t e n s i o n s t o ]]31 of t h e s u m a n d p r o d u c t of ]!3. Lemma
Lemma 3.3.12. If f(z, z', x) = (f_i)_{i∈I} is such that, for all i in I, f_i = z_i·f_i^(0) + z_i'·f_i^(1) for some f_i^(0) and f_i^(1), then it has property S.

Proof. It is sufficient to show that f(z, z', x) = z·g_1(x) + z'·g_2(x) has property S in the following restricted sense: for all u, u', v, v' ∈ 𝔹 such that u ≤ v and u' ≤ v', and for all e_1, e_2 ∈ (𝔹^I)^m such that e_1 ≤ e_2, if u + u' ≤ f(u, u', e_1) then there exist w, w' ∈ 𝔹 satisfying the conditions of Definition 3.3.11.
5. The μ-calculus on words

5.1 Rational languages

A word u belongs to L+ if and only if there exist u_1,…,u_n ∈ L, with n > 0, such that u = u_1 u_2 ⋯ u_n. In other words, L+ = ⋃_{n≥1} L^n, where L^n = {u_1 ⋯ u_n ∈ A* | u_i ∈ L}, and we have:

  - L+ = L ∪ LL+ and L* = {ε} ∪ L+; thus LL* = L+,
  - u ∈ L^ω if and only if there exists an infinite sequence u_1,…,u_n,…, with u_i ∈ L, such that u = u_1 ⋯ u_n ⋯.

Let us remark that if u = u_0 u_1 ⋯, the length |u| of u is equal to Σ_{n≥0} |u_n|. If u_n is ε for almost all n, then u is a finite word. Therefore, if ε belongs to L then L* ⊆ L^ω. Conversely, if L^ω contains finite words then ε ∈ L. More precisely, L^ω ∩ A* is empty if ε ∉ L and is equal to L* if ε ∈ L.

The rational languages of A* are defined by induction as follows:

  - ∅ is a rational language; for each letter a, {a} is a rational language,
  - if L and L' are rational languages, so are L ∪ L' and LL',
  - if L is a rational language, so is L*.

Let Rat(A*) be the set of all rational languages of A*. If we also use the infinite iteration L^ω in the above inductive definition, we get the set Rat(A^∞) of rational languages of A^∞:

  - if L is in Rat(A*), it is also in Rat(A^∞),
  - if L and L' are in Rat(A^∞), so are L ∪ L' and LL',
  - if L is in Rat(A^∞), so are L* and L^ω.
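These identities can be spot-checked on finite approximations, with every language truncated to a maximum word length; the following Python sketch is such an approximation (the bound N and all names are assumptions of the illustration):

```python
# Bounded check of L+ = LL* and, since ε ∉ L here, L+ = L* − {ε}.
N = 5

def trunc(lang):
    return {w for w in lang if len(w) <= N}

def cat(l1, l2):
    return trunc({u + v for u in l1 for v in l2})

def star(lang):
    result, frontier = {""}, {""}
    while frontier:
        frontier = cat(frontier, lang) - result
        result |= frontier
    return result

def plus(lang):
    result, frontier = trunc(set(lang)), trunc(set(lang))
    while frontier:
        frontier = cat(frontier, lang) - result
        result |= frontier
    return result

L = {"a", "bb"}
assert plus(L) == cat(L, star(L))   # L+ = LL*
assert plus(L) == star(L) - {""}    # ε ∉ L, so L+ = L* − {ε}
```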
5.1.3 Arden's lemma

We are going to prove that L* and L^ω can be defined as the least and greatest fixed points of some monotonic mapping over 𝒫(A^∞).
Lemma 5.1.4 (Arden's Lemma). Let L ⊆ A* and L' ⊆ A^∞. The extremal fixed points of the monotonic mapping that associates L' ∪ LK with K ⊆ A^∞ are

  μx.(L' ∪ Lx) = L*L',

  νx.(L' ∪ Lx) = A^∞ if ε ∈ L, and L*L' ∪ L^ω if ε ∉ L.
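The μ half of the lemma can be illustrated by iterating K ↦ L' ∪ LK on truncated finite-word languages; the ν half involves infinite words and is not captured by this sketch (L, L' and the bound N are arbitrary examples):

```python
# Iterating K -> L' ∪ LK from the empty set converges to L*L'
# (all languages truncated to words of length at most N).
N = 6

def trunc(lang):
    return {w for w in lang if len(w) <= N}

def cat(l1, l2):
    return trunc({u + v for u in l1 for v in l2})

def star(lang):
    result, frontier = {""}, {""}
    while frontier:
        frontier = cat(frontier, lang) - result
        result |= frontier
    return result

L, Lp = {"ab", "a"}, {"c", ""}   # example with ε ∉ L

K = set()
while True:
    K2 = Lp | cat(L, K)
    if K2 == K:
        break
    K = K2

assert K == cat(star(L), Lp)     # μx.(L' ∪ Lx) = L*L'
```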
Proof. Let K = μx.(Lx ∪ L') and K' = νx.(Lx ∪ L'). Since L' ∪ LL*L' = L*L', we have K ⊆ L*L'. Since K = L' ∪ LK, we have LK ⊆ K and L' ⊆ K. It follows that LL' ⊆ LK ⊆ K. More generally, if L^n L' ⊆ K then L^{n+1} L' = L L^n L' ⊆ LK ⊆ K. Hence L*L' ⊆ K.

If ε ∈ L, we have A^∞ = {ε}A^∞ ⊆ LA^∞ ⊆ L' ∪ LA^∞. Hence A^∞ ⊆ K', thus K' = A^∞.

Let us assume ε ∉ L. Since, by definition of L^ω, LL^ω = L^ω, we have L' ∪ L(L*L' ∪ L^ω) = L' ∪ LL*L' ∪ LL^ω = L*L' ∪ L^ω. It follows that L*L' ∪ L^ω ⊆ K'.

Let us show that, if ε ∉ L, any set K'' such that K'' ⊆ LK'' is included in L^ω. Let w_0 ∈ K''; then w_0 ∈ LK'' and w_0 = u_0 w_1 with ε ≠ u_0 ∈ L and w_1 ∈ K''. By induction, it can be shown that there exist two sequences u_0, u_1,… ∈ L and w_0, w_1,… ∈ K'' such that, for any n ≥ 0, w_0 = u_0 u_1 ⋯ u_n w_{n+1}. It follows that the word w = u_0 u_1 ⋯ ∈ L^ω is a prefix of w_0, and since none of the u_i is the empty word, w is an infinite word, which implies w = w_0.

Let K'' = K' − K. Since K ⊆ K', we have K' = K ∪ K''. Hence, K' = LK' ∪ L' = LK ∪ LK'' ∪ L' = K ∪ LK''. It follows that K'' = K' − K is included in LK'', and, by the previous result, K'' ⊆ L^ω. Hence, K' ⊆ K ∪ L^ω = L*L' ∪ L^ω. □
It should be observed that νx.(L' ∪ Lx) = μx.(L' ∪ Lx) ∪ νx.Lx. This is a special case of a more general result relating least and greatest fixed points in some specific cases (see [90]). It should also be observed that L^ω A^∞ is equal to A^∞ if ε ∈ L and to L^ω otherwise, so that νx.(L' ∪ Lx) is always equal to L*L' ∪ L^ω A^∞.

Corollary 5.1.5. Let L ⊆ A^∞ and let L_∗ = L ∩ A*, L_ω = L ∩ A^ω. Since LK = L_ω ∪ L_∗ K, we have

  μx.Lx = (L_∗)* L_ω,

  νx.Lx = A^∞ if ε ∈ L, and (L_∗)* L_ω ∪ (L_∗)^ω if ε ∉ L.

There is a matricial version of this lemma.
Let M^{n×m} be the set of n×m matrices L = (L_{ij}), 1 ≤ i ≤ n, 1 ≤ j ≤ m, where the L_{ij} are subsets of A^∞. Obviously, we make M^{n×m} a complete lattice by using the product ordering (see Section 1.1.7, page 7). We denote by + the least upper bound in this lattice. We also define a matrix product from M^{n×m} × M^{m×p} to M^{n×p} by (LL')_{ij} = Σ_{k=1}^{m} L_{ik} L'_{kj}. This product is monotonic in its two arguments, so that for L ∈ M^{n×n} and L' ∈ M^{n×m}, the mapping x ↦ Lx + L' : M^{n×m} → M^{n×m} has extremal fixed points.

Let L ∈ M^{n×n}. We define L+, L* ∈ M^{n×n} and L^ω ∈ M^{n×1} as follows:

  - L+ = Σ_{p≥1} L^p,
  - L* = [ε_n] + L+, where [ε_n] is the matrix defined by [ε_n]_{ij} = ε if i = j, ∅ otherwise,
  - u ∈ (L^ω)_i if and only if for every j ≥ 0 there exist i_j ∈ {1,…,n} and u_j ∈ A^∞ such that i_0 = i, u_j ∈ L_{i_j i_{j+1}}, and u = u_0 u_1 ⋯.

It is easy to see that for any K ∈ M^{n×m}, [ε_n]K = K = K[ε_m], and that for any L ∈ M^{n×n}, L+ = L + LL+ = LL*. Let us also remark that u ∈ (L^p)_{ij} if and only if there are i_0, i_1,…,i_p ∈ {1,…,n} and u_1,…,u_p ∈ A^∞ such that i_0 = i, i_p = j, u_j ∈ L_{i_{j-1} i_j}, and u = u_1 ⋯ u_p.
Lemma 5.1.6.

  μx.(Lx + L') = L*L',

  νx.(Lx + L') = L*L' + L^ω A^∞.

Proof. The proof is similar to the scalar case. Let K = μx.(Lx + L') and K' = νx.(Lx + L'). Since L*L' = L' + LL*L', K ⊆ L*L'. Since K = L' + LK, we have LK ⊆ K and L' ⊆ K. It follows that LL' ⊆ LK ⊆ K. More generally, if L^n L' ⊆ K then L^{n+1} L' = L L^n L' ⊆ LK ⊆ K. Hence L*L' ⊆ K.

It is easy to see that L_{ij}(L^ω)_j ⊆ (L^ω)_i, hence L(L^ω) ⊆ L^ω. On the other hand, if u ∈ (L^ω)_i, then u = u_0 v with u_0 ∈ L_{ij} and v ∈ (L^ω)_j, hence L(L^ω) = L^ω. It follows that L(L*L' + L^ω A^∞) + L' = L*L' + L^ω A^∞, hence L*L' + L^ω A^∞ ⊆ K'.

To prove the converse inclusion, it is enough to prove that K' − K ⊆ L^ω A^∞. Let w_0 ∈ (K' − K)_{i_0}; then there exist i_1 and u_0 ∈ L_{i_0 i_1} such that w_0 = u_0 w_1 with w_1 ∈ (K' − K)_{i_1}. By induction, it can be shown that there exist three sequences i_0, i_1,…, then u_0, u_1,…, with u_j ∈ L_{i_j i_{j+1}}, and w_0, w_1,…, with w_j ∈ (K' − K)_{i_j}, such that, for any j ≥ 0, w_0 = u_0 u_1 ⋯ u_j w_{j+1}. It follows that the word w = u_0 u_1 ⋯ ∈ (L^ω)_{i_0} is a prefix of w_0, which implies w_0 ∈ (L^ω A^∞)_{i_0}. □
5.1.4 The μ-calculus of extended languages

Let t be a term with ar(t) = {x}. Let us assume that there exists L ⊆ A* such that for any v : Var → 𝒫(A^∞), [t][v] = L v(x). By analogy with [ax][v] = a v(x), we find it convenient to write [t] = Lx. Then [μx.(y ∨ t)][v] (with y ≠ x) is the least fixed point of the mapping that associates with K the set [y ∨ t][v{K/x}] = [y][v{K/x}] ∪ [t][v{K/x}] = v(y) ∪ LK. By Arden's Lemma (Lemma 5.1.4, page 107), [μx.(y ∨ t)][v] = L*v(y) and, by the same analogy as above, we could write [μx.(y ∨ t)] = L*y. We are going to formalize this way of denoting the interpretation [t] of an intersection-free term by introducing the following μ-calculus whose objects are called extended languages.

Definition 5.1.7. An extended language is a pair E = (X, L) where X is a subset of Var and L is a subset of A^∞ ∪ A*X. If E = (X, L) is an extended language, we denote by E_∞ the set L ∩ A^∞, and by E_x, for x ∈ X, the set {u ∈ A* | ux ∈ L}, so that L can be uniquely written E_∞ ∪ ⋃_{x∈X} E_x x. Moreover, since E_x is empty for any x not in X, we can also write L = E_∞ ∪ ⋃_{x∈Var} E_x x.

Note that if E = (X, L) is an extended language and Y ⊇ X, then E' = (Y, L) is still an extended language, not equal to E, where for any y ∈ Y − X, E'_y = ∅. In particular, if L ⊆ A^∞, then for any X ⊆ Var, (X, L) is an extended language.

We denote by ℰℒ the set of all extended languages. We show that ℰℒ, together with the following operations, is a μ-calculus.

  - ar((X, L)) = X.
  - For each variable x, the extended language x̃ is the pair ({x}, {x}), i.e., x̃_∞ = ∅ and x̃_x = {ε}.
  - Let us define E' = (X', L') = (X, L)[ρ], for (X, L) ∈ ℰℒ and ρ : Var → ℰℒ, by
      X' = ⋃_{x∈X} ar(ρ(x)),
      E'_∞ = E_∞ ∪ ⋃_{x∈Var} E_x ρ(x)_∞,
      E'_y = ⋃_{x∈Var} E_x ρ(x)_y, for y ∈ X'.
    It is easy to see that E' = (X, L)[ρ] is still an extended language: if E'_y ≠ ∅ then there exists x such that E_x ≠ ∅ and ρ(x)_y ≠ ∅, which implies x ∈ X and y ∈ ar(ρ(x)), hence y ∈ X'.
  - If E = (X, L) is an extended language, then μx.E is the extended language E' of arity X' = X − {x} defined by
      E'_∞ = E_x* E_∞,
      E'_y = E_x* E_y, for y ∈ X'.
  - νx.E is equal to (X', A^∞ ∪ A*X') if ε ∈ E_x; otherwise it is equal to the extended language (X', E') where E' is defined by
      E'_∞ = E_x* E_∞ ∪ E_x^ω,
      E'_y = E_x* E_y, for y ∈ X'.

Note that if x ∉ X, then E_x = ∅. Since ∅* = {ε} and ∅^ω = ∅, we get μx.E = νx.E = E in this case.

Proposition 5.1.8. The set of extended languages is a μ-calculus. Moreover it satisfies a strengthened version of Axiom 7, namely: for any z not in ar((θx.E)[ρ]), (θx.E)[ρ] = θz.(E[ρ{z̃/x}]).

Proof.

1. Obviously ar(x̃) = {x}.
2. By definition, ar(E[ρ]) is always equal to ar'(E, ρ).
3. Obviously, by construction, ar(θx.E) = ar(E) − {x}.
4. If E' = x̃[ρ] then ar(E') = ar(ρ(x)), E'_∞ = ρ(x)_∞, and E'_y = ρ(x)_y for any variable y ∈ ar(E'). It follows that E' = ρ(x).
5. Only the sets ρ(x) for x ∈ ar(E) are needed to define E[ρ].
6. Let us show that E[ρ][σ] = E[ρ·σ]. It is easy to see that

     E[ρ][σ]_∞ = E[ρ]_∞ ∪ ⋃_{y∈Var} E[ρ]_y σ(y)_∞
               = E_∞ ∪ ⋃_{x∈Var} E_x ρ(x)_∞ ∪ ⋃_{x∈Var} ⋃_{y∈Var} E_x ρ(x)_y σ(y)_∞.

   On the other hand,

     E[ρ·σ]_∞ = E_∞ ∪ ⋃_{x∈Var} E_x (ρ(x)[σ])_∞
              = E_∞ ∪ ⋃_{x∈Var} E_x (ρ(x)_∞ ∪ ⋃_{y∈Var} ρ(x)_y σ(y)_∞).

   We also have, for z ∈ ar(E[ρ][σ]) = ar(E[ρ·σ]),

     E[ρ][σ]_z = ⋃_{y∈Var} E[ρ]_y σ(y)_z = ⋃_{x∈Var} ⋃_{y∈Var} E_x ρ(x)_y σ(y)_z,

   and, on the other hand,

     E[ρ·σ]_z = ⋃_{x∈Var} E_x (ρ(x)[σ])_z = ⋃_{x∈Var} E_x (⋃_{y∈Var} ρ(x)_y σ(y)_z).

   Both pairs of expressions are equal since concatenation distributes over union.
7. Let E' = μx.E, X = ar(E), Y = X − {x}, and Z = ar(E'[ρ]). Then

     (E'[ρ])_∞ = E'_∞ ∪ ⋃_{y∈Y} E'_y ρ(y)_∞ = E_x* E_∞ ∪ ⋃_{y∈Y} E_x* E_y ρ(y)_∞,

   and, for any z ∈ Z, (E'[ρ])_z = ⋃_{y∈Y} E'_y ρ(y)_z = ⋃_{y∈Y} E_x* E_y ρ(y)_z.

   Let v be any variable not in Z, let F = E[ρ{ṽ/x}] and let F' = μv.F. First, we have ar(F') = ar(E'[ρ]) = Z. This is because ar(F') = ar'(E, ρ{ṽ/x}) − {v}, where

     ar'(E, ρ{ṽ/x}) = ⋃_{y∈Y} ar(ρ(y)) ∪ {v | x ∈ X} = Z ∪ {v | x ∈ X}.

   Since v ∉ Z, ar(F') = Z. Now, F'_∞ = F_v* F_∞ and, for z ∈ Z, F'_z = F_v* F_z. But it is easy to see that F_v = E_x, that F_∞ = E_∞ ∪ ⋃_{y∈Y} E_y ρ(y)_∞, and that F_z = ⋃_{y∈Y} E_y ρ(y)_z. Hence it follows that E'[ρ] = F'. The proof is similar for E' = νx.E. □

If E is an extended language, we denote by aE the unique extended language E' such that

  - ar(E') = ar(E),
  - E'_∞ = aE_∞,
  - ∀x ∈ ar(E), E'_x = aE_x.
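The μ operation of Definition 5.1.7 can be sketched on finite-word approximations. Below, an extended language is a dict from variables to sets of words, with the key None standing for the E_∞ part; the truncation bound N and the encoding are assumptions of this illustration.

```python
# Finite sketch of mu x.E on extended languages; E encodes σ(ax ∨ y),
# i.e. E_∞ = ∅, E_x = {a}, E_y = {ε}.
N = 4

def trunc(lang):
    return {w for w in lang if len(w) <= N}

def cat(l1, l2):
    return trunc({u + v for u in l1 for v in l2})

def star(lang):
    result, frontier = {""}, {""}
    while frontier:
        frontier = cat(frontier, lang) - result
        result |= frontier
    return result

def mu(x, E):
    # mu x.E: drop x from the arity; prefix every component by (E_x)*
    s = star(E.get(x, set()))
    return {y: cat(s, lang) for y, lang in E.items() if y != x}

E = {None: set(), "x": {"a"}, "y": {""}}
M = mu("x", E)
assert M["y"] == {"", "a", "aa", "aaa", "aaaa"}   # a* up to length N
assert M[None] == set()
```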
If E' and E'' are two extended languages, we define their union E' ∪ E'' as the unique extended language E of arity ar(E') ∪ ar(E'') such that

  - E_∞ = E'_∞ ∪ E''_∞,
  - for any x ∈ ar(E), E_x = E'_x ∪ E''_x.

Note that if E_x ≠ ∅ then E'_x ≠ ∅ or E''_x ≠ ∅, which implies x ∈ ar(E') or x ∈ ar(E'').

Then we can define the mapping σ : fixT(A{∨}) → ℰℒ by σ(x) = x̃, σ(at) = aσ(t), σ(t ∨ t') = σ(t) ∪ σ(t'), σ(θx.t) = θx.σ(t).

Proposition 5.1.9. The mapping σ is a homomorphism of μ-calculi.
Proof. We have just to prove that ar(t) = ar(σ(t)) and σ(t[ρ]) = σ(t)[σ(ρ)], by induction on t. The first point is obvious. Let us show the second one.

  - σ(x[ρ]) = σ(ρ(x)) = σ(ρ)(x) = x̃[σ(ρ)].
  - σ((at)[ρ]) = σ(a(t[ρ])) = aσ(t[ρ]) = a(σ(t)[σ(ρ)]) = (aσ(t))[σ(ρ)] = σ(at)[σ(ρ)], since, in ℰℒ, (aE)[ρ] = a(E[ρ]).
  - σ((t ∨ t')[ρ]) = σ(t[ρ] ∨ t'[ρ]) = σ(t[ρ]) ∪ σ(t'[ρ]) = σ(t)[σ(ρ)] ∪ σ(t')[σ(ρ)] = (σ(t) ∪ σ(t'))[σ(ρ)] = σ(t ∨ t')[σ(ρ)], since, in ℰℒ, (E ∪ E')[ρ] = E[ρ] ∪ E'[ρ].
  - Since (θx.t)[ρ] = θz.(t[ρ{z/x}]) with z ∉ ar((θx.t)[ρ]), we have

      σ((θx.t)[ρ]) = θz.σ(t[ρ{z/x}]) = θz.(σ(t)[σ(ρ){z̃/x}]).

    On the other hand, σ(θx.t)[σ(ρ)] = (θx.σ(t))[σ(ρ)]. Since z ∉ ar((θx.t)[ρ]) ⊆ ar(t[ρ]) = ar(σ(t)[σ(ρ)]), z is not in ar((θx.σ(t))[σ(ρ)]) and, by Proposition 5.1.8 (page 111), (θx.σ(t))[σ(ρ)] = θz.(σ(t)[σ(ρ){z̃/x}]). □
Identifying any subset of A^∞ with an extended language of arity ∅, we get:

Proposition 5.1.10. For any term t ∈ fixT(A{∨}) and any v : Var → 𝒫(A^∞), [t][v] = σ(t)[v]. In particular, if t is closed, [t] = σ(t).

Proof. The proof is by induction on t, applying Proposition 5.1.9. □

As a direct consequence of the definition of the μ-calculus of extended languages, we get:

Proposition 5.1.11. For every term t, the set σ(t)_∞ is in Rat(A^∞) and, for each x ∈ Var, the set σ(t)_x is in Rat(A*).

Before proving the converse we need two results.

Proposition 5.1.12. For any rational language L in Rat(A^∞), the language L − {ε} is also in Rat(A^∞).
Proof. The proof is by induction on L.

  - If L = ∅ or L = {a}, then L − {ε} = L.
  - (L ∪ L') − {ε} = (L − {ε}) ∪ (L' − {ε}).
  - Note that ε ∈ LL' if and only if ε ∈ L ∩ L'. In this case

      (LL') − {ε} = (L − {ε})(L' − {ε}) ∪ (L − {ε}) ∪ (L' − {ε});

    otherwise, (LL') − {ε} = LL'.
  - L* − {ε} = (L − {ε})+.
  - Note that ε ∈ L^ω if and only if ε ∈ L. In this case L^ω = L* ∪ (L − {ε})^ω, hence L^ω − {ε} = (L − {ε})+ ∪ (L − {ε})^ω; otherwise, L^ω − {ε} = L^ω. □
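The concatenation case of the proof, when ε belongs to both factors, can be confirmed on a tiny finite example:

```python
# Check (LL') − {ε} = (L−{ε})(L'−{ε}) ∪ (L−{ε}) ∪ (L'−{ε}) for ε ∈ L ∩ L'.
L, Lp = {"", "a"}, {"", "b"}
prod = {u + v for u in L for v in Lp}
m = lambda lang: lang - {""}
assert m(prod) == m(L) | m(Lp) | {u + v for u in m(L) for v in m(Lp)}
```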
Definition 5.1.13. A language is ε-free if it does not contain ε.

Proposition 5.1.14. Every language in Rat(A^∞) is equal to a finite union

  L_0 ∪ ⋃_{i=1}^{m} L_i K_i^ω,

where the L_i are rational languages in Rat(A*) and the K_i are ε-free rational languages in Rat(A*).

Proof. The proof is by induction on L.

  - The result is true for L = ∅ and L = {a}, and the induction step for L' ∪ L'' is obvious.
  - Assume that L = L_0 ∪ ⋃_{i=1}^{m} L_i K_i^ω where each K_i is ε-free. Then LL' = L_0 L' ∪ ⋃_{i=1}^{m} L_i K_i^ω. If, moreover, L' = L_0' ∪ ⋃_{i=1}^{m'} L_i' K_i'^ω where each K_i' is ε-free, then L_0 L' = L_0 L_0' ∪ ⋃_{i=1}^{m'} L_0 L_i' K_i'^ω and the result is proved.
  - If L = L_0 ∪ ⋃_{i=1}^{m} L_i K_i^ω where each K_i is ε-free, then L* = L_0* ∪ ⋃_{i=1}^{m} L_0* L_i K_i^ω and L^ω = L_0^ω ∪ ⋃_{i=1}^{m} L_0* L_i K_i^ω. But L_0^ω = L_0* ∪ (L_0 − {ε})^ω. □

Corollary 5.1.15. If L is in Rat(A^∞) then L ∩ A* is in Rat(A*) and L ∩ A^ω is in Rat(A^ω).

Proposition 5.1.16. If E is an extended language such that E_∞ and E_x, for any variable x ∈ ar(E), are rational, then there exists a term t of arity ar(E) such that σ(t) = E. If E_∞ is a subset of A* then t is in Σ1, otherwise it is in Π2.
Proof. The proof is in several steps.

In a first step we prove, by induction on L, that for every language L in Rat(A*) and any variable x there exists a term t ∈ Σ1 of arity {x} such that σ(t) = ({x}, Lx).

  - If L = ∅ then we take t = μx.x. Thus σ(μx.x) = μx.σ(x). But σ(x)_∞ = ∅ and σ(x)_x = {ε}, so μx.σ(x) has the component {ε}*∅ = ∅, i.e., it is identified with ({x}, ∅).
  - If L = {a} then we take t = ax, and σ(t) = ({x}, {a}x).
  - If L = L_1 ∪ L_2 with ({x}, L_i x) = σ(t_i), then we take t = t_1 ∨ t_2.
  - If L = L_1 L_2 with ({x}, L_i x) = σ(t_i), then we take t = t_1[id{t_2/x}]. Then σ(t) = σ(t_1)[id{σ(t_2)/x}] = ({x}, L_1 L_2 x).
  - If L = L'* with ({x}, L'x) = σ(t'), then we take t = μy.(t'[id{y/x}] ∨ x), where y ≠ x. In this case σ(t) = μy.(σ(t'[id{y/x}]) ∪ σ(x)) = μy.({x,y}, L'y ∪ {x}) = ({x}, L'*x).

Next, we prove that if L ⊆ A^ω is rational, there exists a term t in Σ1 or in Π2 of arity ∅ such that σ(t) = (∅, L). It is easy to see that if σ(t) = ({x}, Kx) where K is ε-free and t ∈ Σ1, then σ(νx.t) = (∅, K^ω). It follows that if σ(t) = ({x}, Lx) and σ(t') = (∅, K^ω), then σ(t[id{t'/x}]) = (∅, LK^ω). Since t ∈ Σ1 and t' ∈ Π2, the term t[id{t'/x}] is in Π2. Finally, if σ(t) = ({x}, Lx) then σ(t)[id{E_ε/x}] = (∅, L), where E_ε is the extended language (∅, {ε}). Thus, by using Proposition 5.1.14, we get the result.

Now let t_∞ of arity ∅ be such that σ(t_∞) = (∅, E_∞) and, for x ∈ ar(E), let t_x of arity {x} be such that σ(t_x) = ({x}, E_x x). Then E = σ(t_∞ ∨ ⋁_{x∈ar(E)} t_x). □

Theorem 5.1.17. Every rational language over an alphabet A is a component of the standard interpretation of some vectorial fixed-point term νx.μy.t(x,y), where t is built up only with ε, ∨, and the unary symbols a for a in A. Moreover, if this language is included in A*, it is the standard interpretation of a simpler term μx.t(x).
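For a language of finite words, the simpler term of Theorem 5.1.17 can be illustrated for L = a*b, which is the standard interpretation of μx.(ax ∨ bε); the sketch below computes this least fixed point by truncated iteration (the bound N is an assumption of the illustration):

```python
# Interpreting mu x.({a}x ∪ {b}) by iteration, words truncated to length N.
N = 4

K = set()
while True:
    K2 = {"a" + w for w in K if len(w) < N} | {"b"}
    if K2 == K:
        break
    K = K2

assert K == {"b", "ab", "aab", "aaab"}   # a*b up to length N
```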
5.2 Nondeterministic automata
This section is devoted to the presentation of nondeterministic automata on words. In a first part, we recall the definition of an automaton over finite words. Then we focus on automata for infinite words, emphasizing the notion of chain automaton, or parity automaton, because of its close relationship with the μ-calculus. In contrast to the previous section, we shall not consider automata on both finite and infinite words: by Corollary 5.1.15 (page 114), we can deal separately with languages of finite words and languages of infinite words.

5.2.1 Automata on finite words

An automaton is a tuple 𝒜 = (S, T, I, F) where

  - S is a nonempty finite set of states,
  - T is a subset of S × A × S, the set of transitions,
  - I is a nonempty subset of S, the initial states,
  - F is a subset of S, elements of which are called the accepting states.
An automaton 𝒜 is said to be deterministic if I is a singleton and if for all (s, a) ∈ S × A there exists at most one s' in S such that (s, a, s') ∈ T.
A run of 𝒜 on a word u = a_1 ⋯ a_n in A* is a sequence of states ρ = s_0 s_1 ⋯ s_n such that s_0 ∈ I and, for all i : 1 ≤ i ≤ n, (s_{i−1}, a_i, s_i) ∈ T. A run is accepting if s_n ∈ F; a word u is accepted by 𝒜 if there exists an accepting run of 𝒜 on u, and the set of accepted words is the language recognized by 𝒜. For s, s' ∈ S, let L_{ss'} = {u ∈ A* | there is a run of 𝒜 on u from s to s'}. Let K be the vector defined by: ∀s ∈ S, K_s = ⋃_{s'∈F} L_{ss'}. It is easy to see that K = T*U, where T is the n×n matrix (n being the number of states) defined by

  T_{ss'} = {a ∈ A | (s, a, s') ∈ T}

and U is the vector defined by U_s = ε if s ∈ F, ∅ otherwise.

Now we prove the following result, which can be viewed as a formulation of the celebrated Kleene's theorem.

Theorem 5.2.3. For any language L included in A*, the following statements are equivalent.

1. L is recognizable.
2. There exists a term t in Σ1(A{∨}) of arity ∅ such that [t] = L.
3. L is rational.

Proof. 1 ⟹ 2. The language recognized by 𝒜 is equal to ⋃_{s∈I, s'∈F} L_{ss'} = ⋃_{s∈I} K_s, where K = T*U. By Lemma 5.1.6 (page 109), K = μx.(Tx ∨ U). Then K is the interpretation of a vectorial term in Σ1+(A{∨}) and, by Proposition 2.7.8 (page 63), each K_s is the interpretation of a term t_s in Σ1(A{∨}). Since the recognized language is the union of some of the K_s, it is the interpretation of the union of some of the t_s, which is also in Σ1(A{∨}).

2 ⟹ 3. By Proposition 5.1.11 (page 113) and Corollary 5.1.15 (page 114).

3 ⟹ 1. This is a classical result of the elementary theory of languages and automata. □
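The vector K = μx.(Tx ∨ U) from the proof can be computed by fixed-point iteration on languages truncated to length N; the two-state automaton below (which recognizes a*ba*) is an arbitrary example, and all names are assumptions of this sketch:

```python
# Compute K_s = U_s ∪ ⋃ a·K_{s'} by iteration and compare against direct
# subset-simulation acceptance, for words of length at most N.
from itertools import product

N = 5
S, A = {0, 1}, "ab"
T = {(0, "a", 0), (0, "b", 1), (1, "a", 1)}   # transitions, as triples
I, F = {0}, {1}

K = {s: set() for s in S}
changed = True
while changed:
    changed = False
    for s in S:
        new = ({""} if s in F else set()) | {
            a + w for (p, a, q) in T if p == s for w in K[q] if len(w) < N
        }
        if new != K[s]:
            K[s], changed = new, True

def accepts(u):
    states = I
    for a in u:
        states = {q for (p, b, q) in T if p in states and b == a}
    return bool(states & F)

accepted = {"".join(w) for n in range(N + 1)
            for w in product(A, repeat=n) if accepts("".join(w))}
assert set.union(*(K[s] for s in I)) == accepted
```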
5.2.2 Automata on infinite words

An automaton for infinite words is almost the same as an automaton for finite words. The main difference lies in the criterion for accepting a word or not. An automaton is a tuple 𝒜 = (S, T, I, C) where

  - S is a nonempty finite set of states,
  - T is a subset of S × A × S, the set of transitions,
  - I is a nonempty subset of S, the initial states,
  - C is a subset of S^ω, called the acceptance criterion.
A run of ,4 on t h e word u  a o a l . . . a n . . , in A ~ is a sequence of s t a t e s p SoSl...s~... such t h a t so C I and Vi >_ 0, (si, ai, si+l} C T. A r u n p is accepting if it belongs to t h e a c c e p t a n c e criterion C. A word u is a c c e p t e d by ,4 if t h e r e exists an a c c e p t i n g r u n of A on u. It follows t h a t t h e set of words a c c e p t e d by an a u t o m a t o n w l  (S, T, I, C) does not change if S is r e s t r i c t e d to its subset of reachable states defined inductively by  every initial s t a t e is reachable,  s' is reachable if t h e r e exist a letter a a n d a reachable s t a t e s such t h a t
(s, a, s′) ∈ T. We denote by L(𝒜) the set of words accepted by 𝒜, and we say that this set is recognized by 𝒜. The following closure properties of recognizable languages are also given without proofs. We refer to [95].

Theorem 5.2.4. Let L, L′ ⊆ A^ω, K ⊆ B^ω, and let h : A → B, extended into a mapping from A^ω to B^ω. If L and L′ are recognizable, so are L ∪ L′, L ∩ L′, and h(L) ⊆ B^ω. If K is recognizable, so is h⁻¹(K) ⊆ A^ω.

An automaton is deterministic if (i) there is only one initial state and (ii) for any s in S and any a in A, there is at most one s′ in S such that (s, a, s′) ∈ T. In this case, for any word u, there is at most one run of 𝒜 on u. An automaton is complete if for any s in S and any a in A, there is at least one s′ in S such that (s, a, s′) ∈ T. In this case, for any word u, there is at least one run of 𝒜 on u.

Now we define several kinds of automata according to how the acceptance criterion C is defined. Firstly, for any element ρ ∈ S^ω we denote by Inf(ρ) the set of states that occur infinitely often in ρ:

Inf(s_0 s_1 … s_n …) = {s ∈ S | ∀m ≥ 0, ∃n ≥ m : s = s_n}.
An acceptance criterion C is a Muller criterion if there is a subset ℱ of 𝒫(S) (ℱ is called a Muller condition) such that C = {ρ ∈ S^ω | Inf(ρ) ∈ ℱ}. In this case we write M(ℱ) instead of C.
5. The μ-calculus on words
It is a Rabin criterion if there is a finite set of pairs (L_i, U_i) (i = 1, …, k) (called a Rabin condition) such that ρ ∈ C if and only if L_i ∩ Inf(ρ) = ∅ and U_i ∩ Inf(ρ) ≠ ∅ for some i, 1 ≤ i ≤ k. We write R((L_i, U_i)_{i=1,…,k}) for C.

It is a chain Rabin criterion if it is a Rabin criterion R((L_i, U_i)_{i=1,…,k}) such that L_i ⊆ U_i for i = 1, …, k and U_{i+1} ⊆ L_i for i = 1, …, k − 1. In other words, C is a chain Rabin criterion if there is an increasing sequence E_{2k−1} ⊆ E_{2k−2} ⊆ ⋯ ⊆ E_1 ⊆ E_0 with k ≥ 1 (called a chain Rabin condition) such that C = R((E_{2i−1}, E_{2i−2})_{i=1,…,k}).

It is a Büchi criterion if there is a subset F of S (called a Büchi condition) such that C = {ρ ∈ S^ω | Inf(ρ) ∩ F ≠ ∅}, and we write B(F) for C. We recall the trivial observation that B(F ∪ F′) = B(F) ∪ B(F′).

We will refer to the automata with Muller criterion as Muller automata, and similarly for the other criteria. Obviously, a Büchi criterion is a special case of a chain Rabin criterion since B(F) = R((∅, F)). On the other hand, a Rabin criterion is a special case of a Muller criterion since
R((L_i, U_i)_{i=1,…,k}) = M({X ⊆ S | ∃i, 1 ≤ i ≤ k, X ∩ L_i = ∅ and X ∩ U_i ≠ ∅}).

Proposition 5.2.5.

R((L_i, U_i)_{i=1,…,k}) = ⋃_{i=1}^{k} (B(U_i) − B(L_i)),

M(ℱ) = ⋃_{F∈ℱ} ((⋂_{s∈F} B({s})) − B(S − F)).

Proof. Obvious. □
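To make the criteria concrete, here is a small Python sketch (our own encoding, with hypothetical names not taken from the text) that evaluates each criterion on an ultimately periodic run u·v^ω, for which Inf is exactly the set of states occurring in the cycle v; it also checks the observation that a Büchi condition F is the Rabin condition with the single pair (∅, F).

```python
# Acceptance criteria evaluated on an ultimately periodic run u v^omega:
# the Inf set of such a run is exactly the set of states occurring in v.
def inf_states(cycle):
    return set(cycle)

def muller(F, cycle):                 # C = { rho | Inf(rho) in F }
    return inf_states(cycle) in [set(f) for f in F]

def rabin(pairs, cycle):              # exists i: Inf & L_i empty, Inf & U_i nonempty
    inf = inf_states(cycle)
    return any(not (inf & set(L)) and bool(inf & set(U)) for L, U in pairs)

def buchi(F, cycle):                  # Inf & F nonempty
    return bool(inf_states(cycle) & set(F))

# B(F) = R((emptyset, F)):
cycle = ['s0', 's1']
assert buchi({'s1'}, cycle) == rabin([(set(), {'s1'})], cycle)
```

The three evaluators are only a finite-state illustration: for general (not ultimately periodic) ω-words one cannot compute Inf this way.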
It follows that a non-empty chain Rabin criterion can always be defined by a strictly increasing sequence E_{2k−1} ⊂ E_{2k−2} ⊂ ⋯ ⊂ E_1 ⊂ E_0. If E_{2i−1} = E_{2i−2}, we can withdraw this pair because B(E_{2i−2}) − B(E_{2i−1}) = ∅. If E_{2i} = E_{2i−1}, we can replace the two pairs (E_{2i+1}, E_{2i}) and (E_{2i−1}, E_{2i−2}) by the single pair (E_{2i+1}, E_{2i−2}) because (B(E_{2i}) − B(E_{2i+1})) ∪ (B(E_{2i−2}) − B(E_{2i−1})) = B(E_{2i−2}) − B(E_{2i+1}).

5.2.3 Parity automata
Another way of expressing the chain Rabin criterion is the parity criterion, similar to the one used in the definition of games in Chapter 4. Let r be a mapping from S into ℕ (called a parity condition). For any subset E of
S, let r(E) be {r(s) | s ∈ E}. Then the parity criterion P(r) is the set {ρ ∈ S^ω | max(r(Inf(ρ))) is even}. Equivalently, ρ = s_0 s_1 … s_n … ∈ P(r) if and only if lim sup_{n→∞} r(s_n) is even.

Proposition
5.2.6. A criterion C ⊆ S^ω is a chain Rabin criterion if and only if it is a parity criterion.
Proof. Let C = R((E_{2i+1}, E_{2i})_{i=1,…,k}), with E_j ⊆ E_{j−1}, be a chain Rabin criterion. Let E_1 = S and E_{2k+2} = ∅. For any s ∈ S, let r(s) be the least i ∈ {1, …, 2k+1} such that s ∈ E_i − E_{i+1}. It follows that ρ ∈ C if and only if max(r(Inf(ρ))) is even. Conversely, let r : S → ℕ. Note that if r′ is defined by r′(s) = r(s) + 2, we have P(r) = P(r′). Thus, we may assume that r(S) ⊆ {1, …, 2k}, and we define E_i = {s ∈ S | r(s) ≥ i}. Obviously E_i ⊆ E_{i−1}. It follows that the least i such that s ∈ E_i − E_{i+1} is r(s), and P(r) = R((E_{2i+1}, E_{2i})_{i=1,…,k}) for the same reason as above. □
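The construction in the proof can be tested mechanically. The sketch below (an illustration under our own naming conventions, not the book's) builds the sets E_i = {s | r(s) ≥ i} from a parity condition r with r(S) ⊆ {1, …, 2k} and compares the parity criterion with the chain Rabin criterion whose pairs are (E_{2i+1}, E_{2i}), on every candidate Inf set of a small example.

```python
# Proposition 5.2.6, checked on samples: max r(Inf) is even  iff
# for some i, Inf misses E_{2i+1} and meets E_{2i}, where E_i = {s | r(s) >= i}.
def parity_accepts(r, inf):
    return max(r[s] for s in inf) % 2 == 0

def chain_rabin_accepts(r, k, inf):
    E = {i: {s for s in r if r[s] >= i} for i in range(1, 2 * k + 2)}
    return any(not (inf & E[2 * i + 1]) and bool(inf & E[2 * i])
               for i in range(1, k + 1))

r = {'a': 1, 'b': 2, 'c': 3, 'd': 4}          # ranks in {1,...,4}, so k = 2
samples = [{'a'}, {'a', 'b'}, {'c'}, {'b', 'c'}, {'d'}, {'a', 'b', 'c', 'd'}]
for inf in samples:
    assert parity_accepts(r, inf) == chain_rabin_accepts(r, 2, inf)
```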
The fact that a chain Rabin criterion can always be defined by a strictly increasing sequence is equivalent to the fact that a parity criterion can always be defined by some r such that r(S) is one of the following four sets: {1, …, 2k}, {1, …, 2k + 1}, {0, …, 2k − 1}, {0, …, 2k − 2}. In this case the associated chain Rabin condition has k pairs, so we say that this parity condition has k pairs. However, the four sets above provide a finer classification of parity criteria, called the Mostowski index (see Definition 7.1.7, page 160). In particular, the Büchi criterion is a special case of the chain Rabin criterion, and it is easy to see that Büchi automata are the parity automata whose rank function has range equal to {1, 2}. We shall see at the end of this section that parity automata are tightly related to the μ-calculus on words without intersection.

We first show that any language recognized by an automaton can be recognized by a parity automaton. First we are going to transform a Muller or Rabin automaton into a chain Rabin one, using the technique of LAR (later appearance record) with hit, introduced by Büchi (see [96] for references). Let S be any finite set of cardinality n. We denote by S̄ the set, of cardinality n!, of all permutations of S, i.e., the set of words s_1 ⋯ s_n in S^n such that i ≠ j ⟹ s_i ≠ s_j. For any w in S̄ and any s in S, w can be written in a unique way as usv with u, v ∈ S*. We define an (infixed) binary operation · from S̄ × S into S̄ by w · s = uvs, where w = usv. Let us choose an arbitrary element w* of S̄. Given this element, we define inductively a mapping λ : S* → S̄ by λ(ε) = w*, λ(us) = λ(u) · s. Now let ρ = s_0 s_1 ⋯ s_m ⋯ be an element of S^ω. We define the following sequence of elements of S̄: w_0 = w*, w_{m+1} = λ(s_0 ⋯ s_m), so that w_{m+1} =
uvs_m, where w_m = us_m v. For any letter s ∈ S, let rank_m(s) be the position of s in w_m. Since w_{m+1} is obtained from w_m by moving s_m to the end, we have:

(i) rank_{m+1}(s) = n if and only if s = s_m;
(ii) if s_m ≠ s then rank_{m+1}(s) ≤ rank_m(s);
(iii) if rank_{m+1}(s) < rank_m(s), then rank_m(s_m) < rank_m(s);
(iv) if rank_m(s′) < rank_m(s) = rank_{m+1}(s), then rank_m(s′) = rank_{m+1}(s′);
(v) if rank_m(s) = rank_{m+1}(s) ≠ n, then rank_m(s_m) > rank_m(s).

We also define the sequences x_m and y_m by: x_m y_m is the unique decomposition of w_m such that the first letter of y_m is s_m. Finally, let X_m (resp. Y_m) be the set of letters of x_m (resp. y_m). Since (X_m, Y_m) is a partition of S, X_i = X_j ⟺ Y_i = Y_j.
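The LAR update and the rank properties can be illustrated directly; the following sketch (our own encoding, representing a permutation as a Python list) applies one update w · s and checks properties (i) and (ii).

```python
# The LAR update: w * s moves the hit letter s to the end (w = usv -> uvs).
def lar_update(w, s):
    i = w.index(s)
    return w[:i] + w[i + 1:] + [s]

def rank(w, s):                 # position of s in w, counted from 1
    return w.index(s) + 1

w = ['a', 'b', 'c', 'd']
w1 = lar_update(w, 'b')                       # -> ['a', 'c', 'd', 'b']
assert rank(w1, 'b') == len(w)                # (i): the hit letter gets rank n
assert all(rank(w1, s) <= rank(w, s)          # (ii): other ranks never increase
           for s in w if s != 'b')
```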
Proposition 5.2.7. Let Z = Inf(ρ) be the set of letters that occur infinitely often in ρ. Then
(1) for almost all m, Y_m ⊆ Z;
(2) there exist infinitely many m such that Y_m = Z.

Proof. Let Z′ = S − Z be the set of letters that occur only finitely many times in ρ. For s ∈ Z′, let p be the position of its last occurrence in ρ. Then we have, by property (ii) above, rank_{p+1}(s) ≥ rank_{p+2}(s) ≥ ⋯ ≥ rank_{p+i}(s) ≥ ⋯, thus rank_m(s) is ultimately constant: there exists m_s such that ∀m ≥ m_s, rank_m(s) = rank_{m_s}(s), and by property (i), this constant rank cannot be n. By property (iv), we get

(vi) for any s′ such that rank_{m_s}(s′) < rank_{m_s}(s), we have ∀m ≥ m_s, rank_m(s′) = rank_{m_s}(s′), which implies, by property (i), ∀m ≥ m_s, s′ ≠ s_m, thus s′ ∈ Z′.

Let m′ = max{m_s | s ∈ Z′}, and let ℓ be equal to max{rank_{m′}(s) | s ∈ Z′} (if Z′ is empty, ℓ is equal to 0); this maximum is not equal to n. By property (vi) we get s ∈ Z′ ⟺ 1 ≤ rank_{m′}(s) ≤ ℓ. Therefore there exists a word x of length ℓ, consisting of all the letters in Z′, which is a prefix of all w_m for m ≥ m′. Moreover, for m ≥ m′, s_m ∉ Z′, so that x is also a prefix of x_m. It follows that Z′ ⊆ X_m, hence Y_m = S − X_m ⊆ S − Z′ = Z. This proves point (1).

Now assume that point (2) is false, i.e., there is an m′ such that for any m ≥ m′, Y_m is strictly included in Z, or equivalently, Z′ is strictly included in X_m. We can choose m′ large enough so that x is a prefix of all x_m for m ≥ m′, and indeed a strict prefix since Z′ is strictly included in X_m. Therefore, for all m ≥ m′ there exists s′_m ∈ Z such that xs′_m is a prefix of x_m. It is easy to see that s′_m ≠ s_{m+1} and s′_{m+1} = s′_m, for no letter in xs′_m can be equal to s_{m+1}. Since s′_{m′} is in Z there exists a p > m′ such that s_p = s′_{m′}; let us consider the least p that has this property.

Note that none of the entries of L^{(j)} contains the empty word. We define also the vectors K^{(j)} by: K_i^{(j)} is the set of all infinite words u = a_0 … a_p … such that

Definition 5.3.4. A constrained language is a pair C = ⟨X, L⟩ where X is a subset of Var and L is a subset of A^ω × 𝒫(X)^ω, i.e., a set of pairs ⟨u, ξ⟩ with u ∈ A^ω and ξ : ℕ → 𝒫(X), which is closed in the following sense:

∀u ∈ A^ω, ∀ξ, ξ′ : ℕ → 𝒫(X),  ξ ⊆ ξ′ & ⟨u, ξ⟩ ∈ L ⟹ ⟨u, ξ′⟩ ∈ L.
5.3 Terms with intersection
For instance, the expression a{x}b{y} we have considered above will be identified with the pair ⟨ab, {x}{y}∅∅⋯⟩. The arity of the constrained language C = ⟨X, L⟩ is equal to X. Let Cs be the set of all constrained languages. We make this set a μ-calculus as follows.

Definition 5.3.5. If C = ⟨X, L⟩ is a constrained language and if ρ : Var → Cs is a substitution, where ρ(x) = ⟨Y_x, L_x⟩, we define C[ρ] as the pair ⟨Y, L′⟩ where Y = ar′(C, ρ) and L′ is defined by: ⟨u, ξ⟩ ∈ L′ if and only if
- ∀i ∈ ℕ, ξ(i) ⊆ Y,
- there exists ⟨u, ζ⟩ ∈ L such that ∀i ∈ ℕ, ∀x ∈ ζ(i), ⟨u[i), ξ[i)|Y_x⟩ ∈ L_x.

If ξ ⊆ ξ′, then ⟨u[i), ξ[i)|Y_x⟩ ∈ L_x ⟹ ⟨u[i), ξ′[i)|Y_x⟩ ∈ L_x. It follows that ⟨u, ξ⟩ ∈ L′ ⟹ ⟨u, ξ′⟩ ∈ L′. This implies that ⟨Y, L′⟩ is a constrained language. The following proposition can be seen as an alternative definition of C[ρ], which is sometimes more useful.

Proposition
5.3.6. Let C[ρ] = ⟨Y, L′⟩. Given ⟨u, ξ⟩ with ξ : ℕ → 𝒫(Y), we define ξ̄ : ℕ → 𝒫(X) by ξ̄(i) = {x ∈ X | ⟨u[i), ξ[i)|Y_x⟩ ∈ L_x}. Then ⟨u, ξ⟩ ∈ L′ ⟺ ⟨u, ξ̄⟩ ∈ L.

Proof. Let us assume ⟨u, ξ̄⟩ ∈ L. For any i ∈ ℕ and any x ∈ ξ̄(i), we have ⟨u[i), ξ[i)|Y_x⟩ ∈ L_x; hence ξ̄ itself witnesses the second condition of Definition 5.3.5, and ⟨u, ξ⟩ ∈ L′. Conversely, if ⟨u, ξ⟩ ∈ L′ with witness ζ, then ζ ⊆ ξ̄ by the definition of ξ̄, and by the closure property of L we get ⟨u, ξ̄⟩ ∈ L. □
Proof of νx.C ⊆ g(νx.C). Assume ⟨u, ξ⟩ ∈ νx.C. By definition, there exists ζ : ℕ → 𝒫({x}) such that ⟨u, ξ ∪ ζ⟩ ∈ C and ∀i, x ∈ ζ(i) ⟹ ⟨u[i), ξ[i) ∪ ζ[i)⟩ ∈ C. Let i be such that x ∈ ζ(i). We have ⟨u[i), ξ[i) ∪ ζ[i)⟩ ∈ C. We also have, for any j such that x ∈ ζ[i)(j) = ζ(i + j),

⟨u[i)[j), ξ[i)[j) ∪ ζ[i)[j)⟩ = ⟨u[i + j), ξ[i + j) ∪ ζ[i + j)⟩ ∈ C.

Hence ⟨u[i), ξ[i)⟩ ∈ νx.C. □
Proposition 5.3.14. ĩd = id.

Proof. For any x, ⟨u, ζ⟩ (with ζ : ℕ → 𝒫({x})) is in the dual of x if and only if ⟨u, ζ̄⟩ ∉ x, if and only if x ∉ ζ̄(0), if and only if x ∈ ζ(0), if and only if ⟨u, ζ⟩ ∈ x. □
Proposition 5.3.15. μ̃x.C = νx.C̃ and ν̃x.C = μx.C̃.

Proof. The second equality is a consequence of the first one and of the fact that the dual of C̃ is C. If x ∉ ar(C) then μx.C = νx.C = C. Let us consider the case where x ∈ ar(C). First we remark that, for ξ : ℕ → 𝒫(X − {x}) and ζ : ℕ → 𝒫({x}), the complement of ξ ∪ ζ is ξ̄ ∪ ζ̄, where the complements are implicitly taken in the adequate sets of variables. Using the definitions of μx.C and νx.C, we get that ⟨u, ξ⟩ ∈ μ̃x.C if and only if ⟨u, ξ̄⟩ ∉ μx.C.

A characterization of regular languages by means of fixed-point terms with intersection of the alternation-free level Comp(Σ₁ ∪ Π₁) was given in [10].
6. The μ-calculus over powerset algebras
In Chapter 2, Section 2.3.3 (page 49), we have discussed the interplay of syntax and semantics of the μ-calculus on an abstract level, using the concept of a μ-interpretation, the universe of which can be an arbitrary complete lattice. We have subsequently observed that any μ-interpretation is in some sense equivalent to a powerset interpretation, i.e., one whose universe is a powerset (see Section 2.5). In Chapters 3 and 5 we have studied two important powerset interpretations: the Boolean algebra 𝔹 of logical values (the universe of 𝔹 can be viewed as the powerset of a singleton set), and the lattice of languages of (finite or infinite) words. Now we are going to concentrate on a wide subclass of powerset interpretations, subsuming those two and many others (in particular, the modal μ-calculus of [55]). The main feature of these interpretations is that the basic operations are not only monotonic, but also additive; in other words, they can be constructed "from below", from some underlying simpler structure. Our interest in such interpretations is motivated mainly by the considerations of the next chapter, where we will establish a connection between the μ-calculus and a very general concept of automata. But we will show some applications of the new concept already in this chapter, by generalizing the notion of a bisimulation between transition systems.
6.1 Powerset algebras

6.1.1 Semialgebras
A signature (sometimes called type) is any finite set of function symbols Sig ⊆ Fun. Recall (see Section 2.3.1, page 47) that each symbol f in Fun has a fixed finite arity; we will denote it by ρ(f). According to the common sense of the word, an algebra over Sig consists of a set (the universe) and an interpretation of each symbol in Sig by a function (operation) of corresponding arity over this universe. In the more general case of a partial algebra, operations need not be everywhere defined. We will consider a yet more general situation where the operations may also be overdefined.
Definition 6.1.1. A semialgebra over a signature Sig is presented by B = ⟨B, {f^B | f ∈ Sig}⟩, where B is a set called the universe of B, and, for each f ∈ Sig of arity ρ(f), f^B is a relation over B of arity ρ(f) + 1, that is, f^B ⊆ B^{ρ(f)+1}. We call f^B a ρ(f)-ary basic operation from B^{ρ(f)} to B, and write b ∈ f(b_1, …, b_{ρ(f)}) to mean (b_1, …, b_{ρ(f)}, b) ∈ f^B. Note that a basic operation need not be a total function from B^{ρ(f)} to B; if this is the case for each f ∈ Sig, the semialgebra is an algebra over Sig (briefly: Sig-algebra), and then clearly ∈ coincides with =.
Example 6.1.2. A directed graph G = ⟨V, E⟩ with a set of vertices V and a set of edges E ⊆ V × V can be viewed as a semialgebra over a signature consisting of one unary symbol, say Sig = {f}. The universe of this semialgebra is V and f^G = E. Note that w ∈ f^G(v) if and only if there is an edge from v to w.

Example 6.1.3. A syntactic tree over an arbitrary signature Sig can be presented as a mapping t : dom t → Sig, where dom t is a set of finite sequences (i.e., words) over natural numbers, dom t ⊆ ℕ*, and moreover the following conditions are satisfied.

1. dom t is closed under prefixes: if xy ∈ dom t then x ∈ dom t. In particular, the empty word ε is always in dom t.
2. If w ∈ dom t and t(w) has arity k, then w·i ∈ dom t if and only if 1 ≤ i ≤ k.

Note that w ∈ dom t is a leaf of the tree (a node without successors) if and only if t(w) has arity 0, i.e., is a constant symbol. Finite syntactic trees coincide with closed terms over Sig in an obvious manner; for example, the term f(c, f(c, d)) can be presented as the tree t with dom t = {ε, 1, 2, 2·1, 2·2}, such that t(ε) = t(2) = f, t(1) = t(2·1) = c, and t(2·2) = d. Therefore, infinite syntactic trees can be viewed as infinite terms. A syntactic tree t can be organized into a semialgebra t =
⟨dom t, {f^t | f ∈ Sig}⟩

of universe dom t, where, for each f ∈ Sig,

f^t = {(w·1, …, w·ρ(f), w) | t(w) = f}.

Note that w ∈ f^t(v_1, …, v_k) only if t(w) = f and v_1, …, v_k are the successors of w. This t is in general not an algebra since no operation is defined at the root ε, unless t is a constant. For f ∈ Sig of arity ρ(f) > 0, the basic operation f^t is a partial function from (dom t)^{ρ(f)} to dom t, but if ρ(f) = 0, f^t ⊆ dom t is not a partial function of arity 0 whenever it contains at least two elements.
Let T_Sig denote the set of all syntactic trees over Sig. This set can also be organized into a semialgebra (which is actually an algebra),

𝒯_Sig = ⟨T_Sig, {f^{𝒯_Sig} | f ∈ Sig}⟩,

where, for f ∈ Sig and t_1, …, t_{ρ(f)} ∈ T_Sig, f^{𝒯_Sig}(t_1, …, t_{ρ(f)}) is the syntactic tree of domain

{ε} ∪ ⋃_{i=1,…,ρ(f)} {i·w | w ∈ dom t_i}

defined by t(ε) = f and t(i·w) = t_i(w) for w ∈ dom t_i.

The finite syntactic trees form a subalgebra of this algebra, which coincides with the classical free algebra of closed terms. Therefore, 𝒯_Sig can be viewed as an algebra of infinite terms.

6.1.2 Powerset algebras

For each signature Sig we fix a signature Sig~ which can be presented by Sig~ = Sig ∪ {f̄ | f ∈ Sig}, where, for each f ∈ Sig, f̄ is a symbol not in Sig of arity ρ(f̄) = ρ(f).

Definition 6.1.4. Let B = ⟨B, {f^B | f ∈ Sig}⟩ be a semialgebra over signature Sig. The powerset algebra over B is an algebra over the signature Sig~ of the form
𝒫B = ⟨𝒫(B), {f^{𝒫B} | f ∈ Sig} ∪ {f̄^{𝒫B} | f ∈ Sig}⟩

where 𝒫(B) is the set of all subsets of B and, for each f ∈ Sig and L_1, …, L_{ρ(f)} ⊆ B,

f^{𝒫B}(L_1, …, L_{ρ(f)}) = {b | ∃a_1 ∈ L_1, …, ∃a_{ρ(f)} ∈ L_{ρ(f)}, b ∈ f^B(a_1, …, a_{ρ(f)})},

f̄^{𝒫B}(L_1, …, L_{ρ(f)}) = {b | ∀a_1, …, a_{ρ(f)} ∈ B, b ∈ f^B(a_1, …, a_{ρ(f)}) ⟹ a_i ∈ L_i for some i}.

Clearly, ⟨𝒫(B), ⊆⟩ is a complete lattice and the operations f^{𝒫B} and f̄^{𝒫B}, for f ∈ Sig, are monotonic with respect to ⊆. Therefore the powerset algebra of B constitutes a μ-interpretation of Sig~ in the sense of Definition 2.3.7 (page 49). It induces an interpretation of the fixed-point terms over Sig~,
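For a unary symbol, the two induced operations can be computed directly from the relation f^B. The sketch below (our own encoding) does so for a small edge relation in the spirit of Example 6.1.2, and checks that f̄^{𝒫B}(L) coincides with the complement of f^{𝒫B} applied to the complement of L.

```python
# The two powerset operations induced by a unary basic operation
# fB, stored as a set of (argument, result) pairs.
B = {1, 2, 3}
fB = {(1, 2), (2, 3), (3, 3)}          # an edge relation on B

def f_pow(L):        # f^{PB}(L)  = { b | exists a in L with b in f(a) }
    return {b for (a, b) in fB if a in L}

def f_dual(L):       # fbar^{PB}(L) = { b | forall a, b in f(a) => a in L }
    return {b for b in B if all(a in L for (a, bb) in fB if bb == b)}

# The dual can equivalently be computed by complementation:
for L in [set(), {1}, {1, 2}, B]:
    assert f_dual(L) == B - f_pow(B - L)
```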
according to Definition 2.3.8 (page 49). We shall denote this interpretation by [t]^{𝒫B}. Considering ⟨𝒫(B), ⊆⟩ as a Boolean algebra, we can further observe that the operation f̄^{𝒫B} is dual to f^{𝒫B} in the sense of Definition 1.2.27 (page 18), since

f̄^{𝒫B}(L_1, …, L_{ρ(f)}) = ∁ f^{𝒫B}(∁L_1, …, ∁L_{ρ(f)})

for all L_1, …, L_{ρ(f)} ⊆ B (where ∁ denotes complementation in B). This remark allows us to syntactically define the dual t̃ of a fixed-point term t: it is the term obtained by exchanging in t, μ and ν, as well as f and f̄. More precisely, we define t̃ by induction on t as follows.

Definition 6.1.5. The dual t̃ of a term t is defined inductively by:
- if t is a variable x, then t̃ = x,
- if t = f(t_1, …, t_{ρ(f)}) then t̃ = f̄(t̃_1, …, t̃_{ρ(f)}),
- if t = f̄(t_1, …, t_{ρ(f)}) then t̃ = f(t̃_1, …, t̃_{ρ(f)}),
- if t = μx.t′ (resp. νx.t′) then t̃ = νx.t̃′ (resp. μx.t̃′).

As an immediate consequence, we get:

Proposition 6.1.6. The mapping [t̃]^{𝒫B} : 𝒫(B)^{ar(t)} → 𝒫(B) is the dual of
[t]^{𝒫B}. In particular, if t is closed, then [t̃]^{𝒫B} = ∁[t]^{𝒫B}.

6.1.3 Logical operations

The lattice operations ∨ and ∧ in 𝒫(B) coincide with the usual set-theoretical operations of union and intersection. We will see that they can also be "constructed from below" if the underlying semialgebra is enriched by one particular operation. Let B be a semialgebra over signature Sig, and let eq ∈ Fun be a symbol of arity 2 not in Sig. We let B_eq be a semialgebra over signature Sig ∪ {eq}, such that, for f ∈ Sig, the interpretation f^{B_eq} coincides with f^B, and eq^{B_eq} = {(b, b, b) | b ∈ B}. In other words, c ∈ eq^{B_eq}(a, b) if and only if a = b = c. Then, in the powerset algebra of B_eq, we have

eq^{𝒫B_eq}(L_1, L_2) = L_1 ∩ L_2  and  ēq^{𝒫B_eq}(L_1, L_2) = L_1 ∪ L_2

for any L_1, L_2 ⊆ B.
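These two equalities can be verified by instantiating the definitions of Definition 6.1.4 with the ternary relation eq^{B_eq} = {(b, b, b) | b ∈ B}; a small sketch (our own encoding):

```python
# eq interpreted as {(b, b, b) | b in B}: in the powerset algebra the
# induced operation is intersection and its dual is union.
B = {1, 2, 3}
eqB = {(b, b, b) for b in B}

def eq_pow(L1, L2):      # {b | exists a1 in L1, a2 in L2 with (a1,a2,b) in eqB}
    return {b for (a1, a2, b) in eqB if a1 in L1 and a2 in L2}

def eq_dual(L1, L2):     # {b | forall (a1,a2,b) in eqB: a1 in L1 or a2 in L2}
    return {b for b in B
            if all(a1 in L1 or a2 in L2
                   for (a1, a2, bb) in eqB if bb == b)}

L1, L2 = {1, 2}, {2, 3}
assert eq_pow(L1, L2) == L1 & L2
assert eq_dual(L1, L2) == L1 | L2
```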
Example 6.1.7. Let Sig be a signature whose only symbol is eq. Consider a semialgebra B over this signature, whose universe consists of a single element, say B = {∗}, and eq is interpreted as above (which, in this case, is the only non-trivial interpretation). Then it is plain to see that the powerset algebra 𝒫B is isomorphic to the Boolean algebra 𝔹 of Chapter 3 (via the mapping ∅ ↦ 0, {∗} ↦ 1), up to the identification of the symbols eq with ∧, and ēq with ∨.
6.2 Modal μ-calculus
The propositional modal μ-calculus introduced by Kozen [55] is perhaps the best known realization of the μ-calculus. It extends propositional modal logic, possibly with many modalities (sometimes referred to as system K), by the least fixed-point operator; the greatest fixed point is definable by the duality law νX.φ(X) = ¬μX.¬φ(¬X) (see Section 1.2.4, page 17). In our setting, Kozen's logic can be presented as a calculus of fixed points interpreted in a powerset algebra, under the only restriction that all symbols in the signature except for eq have arity at most one. To be more precise, let us first recall the classical definition. Let Prop ("propositions") and Act ("actions") be two finite sets of symbols. Let Var, as always, be a countably infinite set of variables. The formulas of the modal μ-calculus are defined inductively by the following clauses.

- Any variable x ∈ Var is a formula (called, in this context, a propositional variable).
- For a proposition p ∈ Prop, p, as well as p̄, are formulas.
- If φ, ψ are formulas, so are (φ ∨ ψ) and (φ ∧ ψ).
- If φ is a formula and a ∈ Act an action, then ⟨a⟩φ, as well as [a]φ, are formulas.
- If φ is a formula and x ∈ Var a variable, then μx.φ, as well as νx.φ, are formulas.
The semantics of the calculus is provided by Kripke structures. Such a structure can be presented as a tuple

M = ⟨S^M, {p^M ⊆ S^M | p ∈ Prop}, {a^M ⊆ S^M × S^M | a ∈ Act}⟩

where S^M is an underlying set of states (or worlds), and the p^M's and a^M's are interpretations of propositions and actions, respectively. Note that M can be viewed as a labeled graph (or a transition system) whose nodes (S^M) are labeled by sets of propositions, and whose edges are labeled by actions. The interpretation of formulas is defined relative to a valuation val : Var → 𝒫(S^M). The interpretation [φ]^M[val] of a formula φ is a set
of states; the relation s ∈ [φ]^M[val] is also written M, val, s ⊨ φ, and read: φ is true in M at the state s, under the valuation val. It is defined inductively by the following clauses.

- M, val, s ⊨ x if and only if s ∈ val(x),
- M, val, s ⊨ p if and only if s ∈ p^M,
- M, val, s ⊨ p̄ if and only if s ∉ p^M,
- M, val, s ⊨ (φ ∨ ψ) if and only if M, val, s ⊨ φ or M, val, s ⊨ ψ,
- M, val, s ⊨ (φ ∧ ψ) if and only if M, val, s ⊨ φ and M, val, s ⊨ ψ,
- M, val, s ⊨ ⟨a⟩φ if and only if there exists s′ such that (s, s′) ∈ a^M and M, val, s′ ⊨ φ,
- M, val, s ⊨ [a]φ if and only if for all s′, (s, s′) ∈ a^M implies M, val, s′ ⊨ φ,
- M, val, s ⊨ μx.φ if and only if for all W ⊆ S^M, [φ]^M[val{W/x}] ⊆ W implies s ∈ W,
- M, val, s ⊨ νx.φ if and only if there exists W such that s ∈ W and W ⊆ [φ]^M[val{W/x}].

Note that, by the Knaster-Tarski theorem (Theorem 1.2.8, page 10), the last two conditions mean that the state s belongs to the least (respectively, the greatest) fixed point (in 𝒫(S^M)) of the operator W ↦ [φ]^M[val{W/x}] induced by the formula φ and the variable x.
Remark. In the above presentation, the negation operator is absent, except that p̄ can be viewed as the negation of p. Some authors include the negation operator in the language, by letting (¬φ) be a formula whenever φ is, and setting, of course, M, val, s ⊨ ¬φ if and only if M, val, s ⊭ φ. But then the clauses concerning fixed-point formulas need to be modified, in order to guarantee the existence of the fixed points, e.g., by forcing monotonicity of the induced operators. To this end, it is usually assumed that μx.φ (as well as νx.φ) is a formula only if x occurs under an even number of negations. Note that the new set of formulas is an extension of the previous one (up to identifying p̄ with ¬p). In the extended logic, one of the modalities, as well as one of the fixed-point operators, becomes redundant, because a formula [a]φ is semantically equivalent to ¬⟨a⟩(¬φ), and νx.φ(x) is equivalent to ¬μx.¬φ(¬x). On the other hand, any sentence (i.e., a formula without free variables) of the extended logic can be transformed into a sentence in which negation is applied only to propositional variables, i.e., into a sentence of the μ-calculus that we have presented above. This can be achieved by using the aforementioned equivalences, as well as the De Morgan laws, and the size of the resulting formula increases at most by a constant factor. Therefore, since we are ultimately interested in sentences, the absence of explicit negation is not an essential restriction.

The propositional modal μ-calculus can be presented in our setting as follows. Let Sig = Prop ∪ Act ∪ {eq} be a signature in which the propositions are
considered of arity 0, and actions of arity 1. Then there is a straightforward translation f2t of the formulas of the modal μ-calculus into the fixed-point terms over signature Sig~, as well as a converse translation t2f (terms to formulas). The f2t mapping is given by the following rules.

f2t : x ↦ x
      p ↦ p
      p̄ ↦ p̄
      (φ ∧ ψ) ↦ eq(f2t(φ), f2t(ψ))
      (φ ∨ ψ) ↦ ēq(f2t(φ), f2t(ψ))
      ⟨a⟩φ ↦ a(f2t(φ))
      [a]φ ↦ ā(f2t(φ))
      μx.φ ↦ μx.f2t(φ)
      νx.φ ↦ νx.f2t(φ)

The converse transformation is analogous:

t2f : x ↦ x
      p ↦ p
      p̄ ↦ p̄
      eq(t_1, t_2) ↦ (t2f(t_1) ∧ t2f(t_2))
      ēq(t_1, t_2) ↦ (t2f(t_1) ∨ t2f(t_2))
      a(t) ↦ ⟨a⟩t2f(t)
      ā(t) ↦ [a]t2f(t)
      μx.t ↦ μx.t2f(t)
      νx.t ↦ νx.t2f(t)

It is plain to see that t2f ∘ f2t and f2t ∘ t2f are identity mappings on their respective domains. Now, a Kripke structure M as above can be identified with a semialgebra over Sig, let us denote it by 𝒦_M, as follows. The universe of 𝒦_M is S^M; for each p ∈ Prop and s ∈ S^M, we let s ∈ p^{𝒦_M} if and only if s ∈ p^M; for each a ∈ Act and s_1, s_2 ∈ S^M, we let s_1 ∈ a^{𝒦_M}(s_2) if and only if (s_1, s_2) ∈ a^M. In other words, the binary relation a^{𝒦_M} amounts precisely to the converse of a^M (recall that s_1 ∈ a^{𝒦_M}(s_2) if and only if (s_2, s_1) ∈ a^{𝒦_M}), while the relations p^{𝒦_M} and p^M coincide. Passing to the powerset algebra 𝒫𝒦_M, we have the following.
Proposition 6.2.1. Let Sig be a signature constructed as above from the sets Prop and Act. Let φ be a formula of the modal μ-calculus, and t a fixed-point term over Sig~. Then, for any Kripke structure M and any valuation val : Var → 𝒫(S^M),

[φ]^M[val] = [f2t(φ)]^{𝒫𝒦_M}[val]  and  [t]^{𝒫𝒦_M}[val] = [t2f(t)]^M[val].

Proof. The claims follow easily from the definitions by induction on φ and on t, respectively. Let us check the induction step for ⟨a⟩φ. We have

[⟨a⟩φ]^M[val] = {s | ∃s′, (s, s′) ∈ a^M and s′ ∈ [φ]^M[val]}
             = a^{𝒫𝒦_M}([φ]^M[val])
             = a^{𝒫𝒦_M}([f2t(φ)]^{𝒫𝒦_M}[val])  (by the induction hypothesis)
             = [a(f2t(φ))]^{𝒫𝒦_M}[val]
             = [f2t(⟨a⟩φ)]^{𝒫𝒦_M}[val]. □
The above considerations show that Kripke structures can be identified with certain semialgebras, and the modal μ-calculus over the Kripke structures coincides with the calculus of fixed-point terms over the corresponding powerset algebras. But the converse interpretation is also easy. Let Sig be a signature such that all symbols except for eq are of arity at most 1. Viewing the symbols of arity 0 as propositions, and those of arity 1 as actions, we can identify any semialgebra B with a Kripke structure, say, M_B = ⟨B, {p^B | p ∈ Sig, ρ(p) = 0}, {(f^B)^{−1} | f ∈ Sig, ρ(f) = 1}⟩. It is obvious that, by applying the previous transformation M ↦ 𝒦_M to M_B, we shall obtain again 𝒦_{M_B} = B. Thus, from Proposition 6.2.1, we can also get the equalities [t]^{𝒫B}[val] = [t2f(t)]^{M_B}[val] and [φ]^{M_B}[val] = [f2t(φ)]^{𝒫B}[val]. This shows that the modal μ-calculus can indeed be derived from the μ-calculus over powerset algebras, by merely restricting the arity of the function symbols in the signatures.
6.3 Homomorphisms, μ-homomorphisms, and bisimulations

The classical notion of a homomorphism for algebras extends easily to semialgebras.

Definition 6.3.1. Let
B = ⟨B, {f^B | f ∈ Sig}⟩ and B′ = ⟨B′, {f^{B′} | f ∈ Sig}⟩ be two semialgebras over the same signature Sig. A mapping h : B → B′ is a homomorphism if for any f ∈ Sig, and for any b, b_1, …, b_{ρ(f)} ∈ B, b ∈ f^B(b_1, …, b_{ρ(f)}) implies h(b) ∈ f^{B′}(h(b_1), …, h(b_{ρ(f)})). Note that if both B and B′ are algebras, h is a homomorphism in the usual sense.

Definition
6.3.2. We call a homomorphism of semialgebras h : B → B′ reflective if whenever h(b) ∈ f^{B′}(b′_1, …, b′_{ρ(f)}), for some b ∈ B and b′_1, …, b′_{ρ(f)} ∈ B′, there exist b_1, …, b_{ρ(f)} ∈ B such that
- b ∈ f^B(b_1, …, b_{ρ(f)}),
- b′_i = h(b_i), for i = 1, …, ρ(f).

Example 6.3.3. Let t ∈ T_Sig be a syntactic tree over Sig as defined in Example 6.1.3, page 142. For w ∈ dom t, the subtree of t induced by w is the mapping t.w over the domain {v : wv ∈ dom t} defined by t.w(v) = t(wv). It is plain to see that t.w is a syntactic tree.
Now, it follows easily from our definitions that the mapping h : dom t → T_Sig defined by h(w) = t.w is a homomorphism from the semialgebra t to the algebra 𝒯_Sig. If w ∈ f^t(v_1, …, v_k) then t(w) = f and v_i = w·i. Then h(w) = t.w = f^{𝒯_Sig}(t.w·1, …, t.w·k) = f^{𝒯_Sig}(h(v_1), …, h(v_k)). Let us observe that this homomorphism is reflective. Indeed, suppose t.w = f^{𝒯_Sig}(t_1, …, t_k). Then t(w) = f, and w has k successors in dom t: w·1, …, w·k; moreover, an easy calculation shows that t.w·i = t_i, for i = 1, …, k. Hence, we have w ∈ f^t(w·1, …, w·k), and h(w·i) = t_i, for i = 1, …, k, as required.

Adding the special symbol eq to a signature Sig, interpreted as explained in Section 6.1.3 (page 144), does not change the nature of a homomorphism h. The following proposition shows that if we want to check whether or not a mapping is a (reflective) homomorphism, we do not have to care about the special symbol eq.

Proposition 6.3.4. Let B and B′ be two semialgebras over Sig and let B_eq and B′_eq be the semialgebras over Sig ∪ {eq} obtained by extending B and B′ by eq^{B_eq} and eq^{B′_eq}. If h : B → B′ is a (reflective) homomorphism from B to B′, it is also a (reflective) homomorphism from B_eq to B′_eq.

Proof. Any mapping h is homomorphic with respect to eq: if b ∈ eq^{B_eq}(b′, b″), then b = b′ = b″, which implies h(b) ∈ eq^{B′_eq}(h(b′), h(b″)). Any mapping h is reflective with respect to eq: if h(b) ∈ eq^{B′_eq}(b′, b″), then b ∈ eq^{B_eq}(b, b) with h(b) = b′ = b″. □

Our first observation is that a reflective homomorphism induces a homomorphism of the respective powerset algebras.

Proposition 6.3.5. Let B and B′ be semialgebras over Sig, and let h : B → B′ be a reflective homomorphism. Then the mapping h⁻¹ : 𝒫(B′) → 𝒫(B), where h⁻¹(L) = {b ∈ B | h(b) ∈ L}, is a homomorphism, whenever 𝒫(B) and 𝒫(B′) are considered as algebras over Sig~.

Proof. Let f ∈ Sig.
We first verify that, for any L_1, …, L_{ρ(f)} ⊆ B′,

h⁻¹(f^{𝒫B′}(L_1, …, L_{ρ(f)})) = f^{𝒫B}(h⁻¹(L_1), …, h⁻¹(L_{ρ(f)})).

Indeed, the inclusion ⊇ is true for any homomorphism, and the inclusion ⊆ is a straightforward consequence of the reflectiveness of h. It remains to verify that

h⁻¹(f̄^{𝒫B′}(L_1, …, L_{ρ(f)})) = f̄^{𝒫B}(h⁻¹(L_1), …, h⁻¹(L_{ρ(f)})).

Using the above equality, the characterization of f̄ in terms of complementation (page 143), and the fact that h⁻¹(∁L) = ∁h⁻¹(L), for any L ⊆ B′, we have

h⁻¹(f̄^{𝒫B′}(L_1, …, L_{ρ(f)})) = h⁻¹(∁ f^{𝒫B′}(∁L_1, …, ∁L_{ρ(f)}))
                               = ∁ h⁻¹(f^{𝒫B′}(∁L_1, …, ∁L_{ρ(f)}))
                               = ∁ f^{𝒫B}(h⁻¹(∁L_1), …, h⁻¹(∁L_{ρ(f)}))
                               = ∁ f^{𝒫B}(∁h⁻¹(L_1), …, ∁h⁻¹(L_{ρ(f)}))
                               = f̄^{𝒫B}(h⁻¹(L_1), …, h⁻¹(L_{ρ(f)})). □
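Proposition 6.3.5 is easy to test on a concrete reflective homomorphism. In the sketch below (a toy example of our own, not from the text) B is a cycle of length 4 and B′ a cycle of length 2 over one unary symbol f, with h(b) = b mod 2; this h is reflective, and h⁻¹ commutes with the induced powerset operation.

```python
# A reflective homomorphism between two graph semialgebras (one unary
# symbol f); basic operations are sets of (argument, result) pairs.
B  = {0, 1, 2, 3};  fB  = {(a, (a + 1) % 4) for a in B}
Bp = {0, 1};        fBp = {(a, (a + 1) % 2) for a in Bp}
h = lambda b: b % 2

def f_pow(rel, L):                 # f^{P}(L) = {b | exists a in L, (a,b) in rel}
    return {b for (a, b) in rel if a in L}

def h_inv(L):                      # h^{-1}(L) = {b in B | h(b) in L}
    return {b for b in B if h(b) in L}

# h^{-1} commutes with the induced powerset operation (Proposition 6.3.5):
for L in [set(), {0}, {1}, {0, 1}]:
    assert h_inv(f_pow(fBp, L)) == f_pow(fB, h_inv(L))
```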
As we have said at the beginning of this chapter, we wish to consider powerset algebras as interpretations for the μ-calculus. Therefore, we are interested in morphisms that preserve the whole clone of fixed-point definable operations, and not only the basic algebraic structure. Recall that for any semialgebra B over Sig, and a fixed-point term t over Sig~, the interpretation [t]^{𝒫B} is well defined; that is, for any valuation v : ar(t) → 𝒫(B), [t]^{𝒫B}(v) is a subset of B.

Definition 6.3.6. Let 𝒞 and 𝒟 be two powerset algebras over the signature Sig~. A homomorphism h : 𝒞 → 𝒟 is a μ-homomorphism if, for every fixed-point term t and for every valuation v : Var → C,

h([t]_𝒞[v]) = [t]_𝒟[h ∘ v]

where h ∘ v : Var → D is defined by (h ∘ v)(x) = h(v(x)), for x ∈ Var. Since t can be a base term (see Definition 2.3.1, page 47), a μ-homomorphism is always a homomorphism.

Remark. In the general perspective of Chapter 2, we could organize the set of all interpretations [t]^{𝒫B} into a μ-calculus, actually a subcalculus of the functional calculus over ⟨𝒫(B), ⊆⟩. In this setting, the property defining a μ-homomorphism means precisely that h is a homomorphism of μ-calculi.

Proposition 6.3.7. The homomorphism h⁻¹ : 𝒫(B′) → 𝒫(B) considered in Proposition 6.3.5 is a μ-homomorphism.
Proof. We show by induction on a term t that, for any valuation v : Var → 𝒫(B'), $h^{-1}([t]^{\mathcal P(B')}[v]) = [t]^{\mathcal P(B)}[h^{-1} \circ v]$. If t is a variable, the claim is obvious.
6.3 Homomorphisms, μ-homomorphisms, and bisimulations
If $t = f(t_1,\dots,t_{p(f)})$ or $t = \tilde f(t_1,\dots,t_{p(f)})$, the result follows from the induction hypothesis about $t_1,\dots,t_{p(f)}$ and the fact that $h^{-1}$ is a homomorphism (Proposition 6.3.5). Finally, assume that t = θx.t'. Then $[t]^{\mathcal P(B')}[v]$ is the extremal fixed point of the mapping f : 𝒫(B') → 𝒫(B') defined by $f(L') = [t']^{\mathcal P(B')}[v\{L'/x\}]$, and $[t]^{\mathcal P(B)}[h^{-1} \circ v]$ is the extremal fixed point of the mapping g : 𝒫(B) → 𝒫(B) defined by $g(L) = [t']^{\mathcal P(B)}[(h^{-1} \circ v)\{L/x\}]$. By the induction hypothesis, $h^{-1}(f(L')) = g(h^{-1}(L'))$, and since $h^{-1} : \mathcal P(B') \to \mathcal P(B)$ is obviously β-inf- and β-sup-continuous for any ordinal β, the result is a consequence of Lemma 1.2.15 (page 13). □

From Propositions 6.3.5 and 6.3.7 we get immediately:

Corollary 6.3.8. Let ℬ and ℬ' be semialgebras over Sig, and let h : ℬ → ℬ' be a reflective homomorphism. For any closed fixed-point term τ and any b ∈ B, b ∈ $[\tau]^{\mathcal P(B)}$ if and only if h(b) ∈ $[\tau]^{\mathcal P(B')}$.

As an example of use of the previous proposition, let us consider the reflective homomorphism $h : \mathfrak T_t \to \mathcal T_{Sig}$ defined in Example 6.3.3 by h(w) = t.w. Let τ be a closed fixed-point term. We have w ∈ $[\tau]^{\mathcal P(\mathfrak T_t)}$ if and only if t.w ∈ $[\tau]^{\mathcal P(\mathcal T_{Sig})}$, and we get, in particular, the following characterization of $[\tau]^{\mathcal P(\mathcal T_{Sig})}$:

Proposition 6.3.9. t ∈ $[\tau]^{\mathcal P(\mathcal T_{Sig})}$ if and only if ε ∈ $[\tau]^{\mathcal P(\mathfrak T_t)}$.
An important notion related to Kripke structures and the modal μ-calculus is the notion of bisimulation. Indeed, this notion is easily extended to semialgebras of arbitrary signatures.

Definition 6.3.10. A bisimulation between semialgebras ℬ and ℬ' over the same signature Sig is a binary relation R ⊆ B × B' such that

- ∀b ∈ B, ∃b' ∈ B', bRb';
- ∀b' ∈ B', ∃b ∈ B, bRb';
- ∀f ∈ Sig, ∀b ∈ B, ∀b' ∈ B' with bRb':
  - if b ∈ f_ℬ(b_1,...,b_k), then there exist b'_1,...,b'_k ∈ B' such that b' ∈ f_ℬ'(b'_1,...,b'_k) and b_i R b'_i, for i = 1,...,k;
  - if b' ∈ f_ℬ'(b'_1,...,b'_k), then there exist b_1,...,b_k ∈ B such that b ∈ f_ℬ(b_1,...,b_k) and b_i R b'_i, for i = 1,...,k.

It is easy to check that this definition, applied to the semialgebras associated with Kripke structures, is exactly the usual definition of a bisimulation relation.
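For finite semialgebras, the clauses of this definition can be checked mechanically. The sketch below (Python; the encoding of a semialgebra as a dict from function symbols to sets of decompositions, and all names, are ours) tests whether a given relation is a bisimulation:

```python
def is_bisimulation(R, B, Bp, opsB, opsBp):
    """Check Definition 6.3.10 for finite semialgebras.  R is a set of
    pairs (b, b'); opsB and opsBp map each function symbol f to the set
    of decompositions (b, (b1,...,bk)) with b in f(b1,...,bk)."""
    # totality in both directions
    if {b for b, _ in R} != B or {b2 for _, b2 in R} != Bp:
        return False
    for b, b2 in R:
        for f in set(opsB) | set(opsBp):
            for head, args in opsB.get(f, set()):
                if head == b and not any(
                        h2 == b2 and len(a2) == len(args)
                        and all(p in R for p in zip(args, a2))
                        for h2, a2 in opsBp.get(f, set())):
                    return False
            for h2, a2 in opsBp.get(f, set()):
                if h2 == b2 and not any(
                        head == b and len(args) == len(a2)
                        and all(p in R for p in zip(args, a2))
                        for head, args in opsB.get(f, set())):
                    return False
    return True

# a two-element cycle and a one-element loop are bisimilar
opsB = {"a": {(0, (1,)), (1, (0,))}}
opsBp = {"a": {("p", ("p",))}}
assert is_bisimulation({(0, "p"), (1, "p")}, {0, 1}, {"p"}, opsB, opsBp)
```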
If $(R_i)_{i \in I}$ is a family of bisimulations between ℬ and ℬ', then the relation $\bigcup_{i \in I} R_i$ is also a bisimulation between ℬ and ℬ'. Thus, if there exists a bisimulation between ℬ and ℬ', there exists a greatest one. The following proposition is a straightforward consequence of the above definitions.

Proposition 6.3.11. If h : ℬ → ℬ' is a homomorphism of semialgebras, then the relation {(b, h(b)) | b ∈ B} ⊆ B × B' is a bisimulation if and only if h is surjective and reflective.

We will call such a homomorphism a bisimulation homomorphism. If h_1 : ℬ' → ℬ and h_2 : ℬ'' → ℬ are two bisimulation homomorphisms, the composition product $h_1 \cdot h_2^{-1}$ of the two binary relations h_1 and $h_2^{-1}$ is a bisimulation between ℬ' and ℬ''. Conversely, we can generalize a result of [3].

Proposition 6.3.12. If R is a bisimulation between ℬ' and ℬ'', then there exists a semialgebra ℬ and two bisimulation homomorphisms h_1 : ℬ' → ℬ and h_2 : ℬ'' → ℬ such that $R \subseteq h_1 \cdot h_2^{-1}$.
Proof. Obviously the relation $(R \cdot R^{-1})^*$ is an equivalence over the domain B' of ℬ', which we denote by $\equiv_R$. This relation is also a bisimulation between ℬ' and itself, since for any n, $(R \cdot R^{-1})^n$ is a bisimulation between ℬ' and itself. Let $B = B'/{\equiv_R}$ be the set of equivalence classes of B'. We give B a structure of semialgebra over Sig by setting b ∈ f_ℬ(b_1,...,b_n) if and only if there exist b' ∈ b, b'_1 ∈ b_1, ..., b'_n ∈ b_n such that b' ∈ f_ℬ'(b'_1,...,b'_n). Let h_1 : B' → B be the mapping defined by: h_1(b') is the equivalence class of b'. Let us show that it is a bisimulation homomorphism. By construction h_1 is surjective, and, by definition of ℬ, b' ∈ f_ℬ'(b'_1,...,b'_n) implies h_1(b') ∈ f_ℬ(h_1(b'_1),...,h_1(b'_n)). It remains to prove that h_1 is reflective. Let h_1(b') ∈ f_ℬ(b_1,...,b_n). By definition of ℬ there exist c', c'_1,...,c'_n such that c' ∈ f_ℬ'(c'_1,...,c'_n), h_1(c') = h_1(b'), and h_1(c'_i) = b_i for i = 1,...,n. Since c' ≡_R b' and since ≡_R is a bisimulation, there exist b'_i ≡_R c'_i (and thus h_1(b'_i) = h_1(c'_i)) such that b' ∈ f_ℬ'(b'_1,...,b'_n).

Now let us remark that for any b'' ∈ B'' there exists at least one b' ∈ B' such that b'Rb'', and that if b'_1 R b'' and b'_2 R b'', then b'_1 ≡_R b'_2. Therefore we can define a surjective mapping h_2 : B'' → B by h_2(b'') = h_1(b'), for any b' such that b'Rb''. It immediately follows that $R \subseteq h_1 \cdot h_2^{-1}$. Let us show that h_2 is a reflective homomorphism. If b'' ∈ f_ℬ''(b''_1,...,b''_n), then h_2(b'') = h_1(b') for some b' such that b'Rb''. Since R is a bisimulation, there exist b'_i R b''_i such that b' ∈ f_ℬ'(b'_1,...,b'_n). But then h_2(b'') = h_1(b') ∈ f_ℬ(h_1(b'_1),...,h_1(b'_n)) = f_ℬ(h_2(b''_1),...,h_2(b''_n)). If b ∈ f_ℬ(b_1,...,b_n) and if h_2(b'') = b, there exists b' R b'' such that h_1(b') = h_2(b'') = b. Since h_1 is reflective, there exist b'_1,...,b'_n such that b' ∈ f_ℬ'(b'_1,...,b'_n) and h_1(b'_i) = b_i. Since R is a bisimulation, there exist b''_1,...,b''_n such that b'' ∈ f_ℬ''(b''_1,...,b''_n) with b'_i R b''_i; hence h_2(b''_i) = h_1(b'_i) = b_i. □
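The quotient at the heart of this proof is easy to compute for finite relations. The sketch below (Python; representation and names are ours) builds the classes of $(R \cdot R^{-1})^*$ over B' with a small union-find: two elements of B' are merged whenever they share an R-partner in B'', and the star is the transitive closure of that merging.

```python
def quotient_classes(R, Bp):
    """Equivalence classes of (R . R^-1)* over B', as in the proof of
    Proposition 6.3.12.  R is a set of pairs (b', b''); Bp is B'."""
    parent = {b: b for b in Bp}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # group the B'-elements sharing an R-partner in B''
    partners = {}
    for b1, b2 in R:
        partners.setdefault(b2, []).append(b1)
    for group in partners.values():
        for b in group[1:]:
            parent[find(b)] = find(group[0])

    classes = {}
    for b in Bp:
        classes.setdefault(find(b), set()).add(b)
    return list(classes.values())

# b1 and b3 share the R-partner c1, so they fall into one class
R = {("b1", "c1"), ("b3", "c1"), ("b2", "c2")}
cls = quotient_classes(R, {"b1", "b2", "b3"})
assert sorted(map(sorted, cls)) == [["b1", "b3"], ["b2"]]
```

The map h_1 of the proof then sends each b' to its class, and h_2 sends b'' to the class of any of its R-partners.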
Together with Proposition 6.3.7, it follows, in particular, that if there is a bisimulation between ℬ_1 and ℬ_2, then for any closed fixed-point term t, $[t]^{\mathcal P(B_1)} \neq \emptyset$ if and only if $[t]^{\mathcal P(B_2)} \neq \emptyset$. Another consequence of the previous proposition is that if there is a bisimulation between ℬ' and ℬ'', there exist two bisimulation homomorphisms h_1 : ℬ' → ℬ and h_2 : ℬ'' → ℬ such that $h_1 \cdot h_2^{-1}$ is the greatest bisimulation between ℬ' and ℬ''.
6.4 Bibliographic notes and sources
The idea of a powerset algebra can be traced back to the work of Jónsson and Tarski [47, 48]. The definition presented in this chapter extends the concept of a powerset algebra (without dualities) considered in [77, 10, 78]. Similar structures were considered in the work of McAllester, Givan, Witty and Kozen [60]. The notion of bisimulation for semialgebras, as well as its characterization by bisimulation homomorphisms, are straightforward extensions of similar notions for transition systems [81, 63, 3].
7. The μ-calculus vs. automata
We have seen in Chapter 5 that finite automata running over finite or infinite words can be adequately characterized by suitably designed μ-calculi. We will now set up a similar characterization on a much more general level: for automata interpreted in arbitrary semialgebras. To this end, we will view automata as syntactic objects which, like terms, can have multiple interpretations. Technically, we will define the semantics of automata over semialgebras in terms of parity games played by Adam and Eva, as in Chapter 4. We shall see that this setting comprises many familiar examples, in particular the nondeterministic and alternating automata on trees. We will further organize our automata into a μ-calculus similar to the μ-calculus of fixed-point terms of Section 2.3. As there, we shall see that an interpretation is a homomorphism from the μ-calculus of automata into a functional μ-calculus. Then, we will establish a connection between automata and fixed-point terms, by showing that a natural translation from terms to automata is a homomorphism which, though not surjective, captures all the automata up to semantic equivalence. We conclude that automata and fixed-point terms are two notations for one and the same thing.
7.1 Automata over semialgebras
Basically, we view an automaton as a finite set of transitions, i.e. equations of the form x = f(y_1,...,y_k), where f is a function symbol in Sig, and the symbols x, y_1,... are the automaton's states. We qualify each state as existential or universal, and associate with it a natural number called its rank. A proviso is made that each state is a head (i.e., the left-hand side) of exactly one transition. This may appear as a drastic restriction, but we shall see that it is not the case, since any Boolean combination of transitions can be easily expressed in such a form thanks to the operation eq.
Classical automata take as input special objects, e.g. words or trees, possibly infinite. A computation typically consists in matching the positions of the input object with the transitions of the automaton.
This gives rise to a marking of the positions by the automaton's states, usually called a run. Such a run is sometimes viewed as a strategy in a game that the automaton plays in favor of the acceptance of the input object. We will adopt the game perspective while generalizing the above situation to arbitrary semialgebras.

7.1.1 Informal description

Let us fix an automaton, and a semialgebra ℬ. We define a game of Eva and Adam, thinking that Eva plays for the automaton's benefit. The play consists of a possibly infinite sequence of rounds in which the players move alternately. Each round starts at a position of the form (b, x), where x is a state of the automaton, and b is an element of B. The rules of the game applicable to the position (b, x) are as follows. Suppose the transition for x is

x = f(y_1,...,y_k).

Then, if x is universal, Eva has to find some elements d_1,...,d_k ∈ B such that b ∈ f_ℬ(d_1,...,d_k). Adam answers by selecting i ∈ {1,...,k}, and the starting position of the next round is (d_i, y_i). If x is existential, the development is symmetric. Thus, Adam starts by choosing d_1,...,d_k ∈ B such that b ∈ f_ℬ(d_1,...,d_k), and then Eva picks an i in {1,...,k}. Again, the next-round position is (d_i, y_i). If any of the players cannot make her or his move, the other player wins the game. Note that this may happen, e.g., if there is no decomposition b ∈ f_ℬ(d_1,...,d_k); on the other hand, if the arity of f is p(f) = 0, and b ∈ f_ℬ (i.e., a decomposition is possible), then Eva wins if x is universal, and Adam wins if x is existential, as, obviously, the other player cannot pick an i ∈ ∅. If the players play ad infinitum, the win is determined by the parity criterion applied to the sequence of (ranked) states subsequently assumed in the play. Thus, if the highest rank occurring infinitely often is even, Eva wins the game; otherwise Adam is the winner.
We say that the automaton accepts an element b ∈ B if Eva has a winning strategy at the position (b, x_0), where x_0 is a distinguished initial state. In the next section, we will present these concepts more formally, in terms of the parity games defined in Chapter 4. However, aiming at a parallel between automata and fixed-point terms, we will allow the former, as well as the latter, to define operators 𝒫(B)^k → 𝒫(B) rather than merely subsets of B. To this end, we will equip our automata with an additional feature, namely variables. (In some sense, these variables can be seen as free variables of the automaton, while the states are its bound variables.)
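The round structure just described can be rendered directly as a successor function on positions. In the sketch below (Python; the encoding of positions, transitions, and decompositions is ours, not the book's), who owns each position depends on the qualification of the state, but the available moves do not:

```python
# Positions, encoded ad hoc:
#   ("state", b, x)              -- a round starts here
#   ("trans", x, (d1,...,dk,b))  -- a decomposition of b has been chosen
# trans maps each state x to its unique transition (f, (y1,...,yk));
# decomp maps f to the set of pairs (b, (d1,...,dk)) with b in f(d1,...,dk).

def moves(pos, trans, decomp):
    """Successor positions of one round of the membership game; a
    position with no successors is lost by the player who must move."""
    if pos[0] == "state":
        _, b, x = pos
        f, _ys = trans[x]
        # one player proposes a decomposition of b along f
        return [("trans", x, ds + (b,))
                for head, ds in sorted(decomp.get(f, set())) if head == b]
    _, x, w = pos
    ds, b = w[:-1], w[-1]
    _f, ys = trans[x]
    # the other player picks a direction i
    return [("state", ds[i], ys[i]) for i in range(len(ds))]

trans = {"x0": ("a", ("x0",))}
decomp = {"a": {(0, (1,)), (1, (0,))}}   # a two-element cycle
assert moves(("state", 0, "x0"), trans, decomp) == [("trans", "x0", (1, 0))]
assert moves(("trans", "x0", (1, 0)), trans, decomp) == [("state", 1, "x0")]
```

Deciding the winner of the resulting infinite plays additionally requires solving the parity condition, which is the subject of Chapter 4.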
7.1.2 Basic definitions

We fix a countably infinite set of variables Var = {v_0, v_1,...}, and a finite signature Sig, assuming the following
Proviso. The symbol eq of arity 2 belongs to Sig and, in any semialgebra ℬ, is interpreted by eq_ℬ(a, b) ∋ c if and only if a = b = c (see Sections 6.1.2 and 6.1.3).

Definition 7.1.1. An automaton over a signature Sig is presented as a tuple

A = (Sig, Q, V, x_I, Tr, qual, rank), where

- Q is a finite set of states.
- V ⊆ Var, V ∩ Q = ∅, is a finite set, the set of the automaton's variables. The elements of Q ∪ V are referred to as the automaton's symbols. The automaton is said to be closed if its set V of variables is empty.
- x_I ∈ Q ∪ V is an initial symbol, referred to as the initial state if x_I ∈ Q.
- Tr is a set of transitions, which are pairs of the form (x, f(y_1,...,y_{p(f)})), also presented as equations x = f(y_1,...,y_{p(f)}), where f ∈ Sig, x ∈ Q, and y_1,...,y_{p(f)} ∈ Q ∪ V. We assume that for each state x ∈ Q there is exactly one transition of the form x = t; we call x the head of the transition, and write t = Tr(x).
- qual : Q → {∃, ∀} is the qualification function. We call those states x ∈ Q for which qual(x) = ∃ existential, and those for which qual(x) = ∀ universal.
- rank : Q → ℕ is a mapping that with each state x associates its rank, rank(x).
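A direct transcription of Definition 7.1.1 into a data structure is straightforward; the Python sketch below is ours (field names and the abbreviations "E"/"A" for ∃/∀ are not from the book), with the "exactly one transition per state" proviso enforced at construction time:

```python
from dataclasses import dataclass

@dataclass
class Automaton:
    """A container for Definition 7.1.1 (finite signatures only)."""
    sig: dict        # function symbol f -> arity p(f)
    states: set      # Q
    variables: set   # V
    initial: object  # x_I, a state or a variable
    trans: dict      # x -> (f, (y1,...,yk)): the unique transition x = f(y1,...,yk)
    qual: dict       # x -> "E" (existential) or "A" (universal)
    rank: dict       # x -> natural number

    def __post_init__(self):
        assert not (self.states & self.variables)
        assert self.initial in self.states | self.variables
        # each state is the head of exactly one transition
        assert set(self.trans) == self.states
        for x, (f, ys) in self.trans.items():
            assert f in self.sig and len(ys) == self.sig[f]
            assert set(ys) <= self.states | self.variables

a = Automaton(sig={"f": 2, "c": 0}, states={"x"}, variables={"v"},
              initial="x", trans={"x": ("f", ("x", "v"))},
              qual={"x": "E"}, rank={"x": 1})
```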
Definition 7.1.2. The semantics of automata will be given in terms of the parity games of Chapter 4. Let A be an automaton as above, and let ℬ be a semialgebra. Let val : V → 𝒫(B) be a mapping that associates a subset val(z) ⊆ B with each variable z ∈ V. With these data, we associate a game G(A, ℬ, val) = (Pos_E, Pos_A, Mov, rank) as follows. The set of positions Pos = Pos_E ∪ Pos_A is a disjoint union of

- the set of state positions B × Q^A,
- the set of transition positions, that is, the set of all elements (x = f(y_1,...,y_k), w) ∈ Tr^A × B* such that w = (d_1,...,d_k,b) with b ∈ f_ℬ(d_1,...,d_k). We sometimes denote such a position by

  ((b,x) = f((d_1,y_1),...,(d_k,y_k))),
- the set of variable positions B × V^A.

Note that the symbols x, y_1,...,y_k in the transition position above need not be different, and that an equality x = y_i (or y_i = y_j) does not imply b = d_i (or d_i = d_j). For this reason, we cannot view the tuple (d_1,...,d_k,b) as a valuation of the variables y_1,...,y_k,x.

The positions are distributed to the players as follows.

- A state position (b,x) is a position of Eva whenever qual^A(x) = ∀.
- A transition position (x = f(y_1,...,y_k), (d_1,...,d_k,b)) is a position of Eva whenever qual^A(x) = ∃.
- A variable position (b,z) is a position of Eva whenever b ∉ val(z).
- The remaining positions are positions of Adam.

The relation Mov is defined by the following rules.

- There is an edge from a state position (b,x) to a transition position (x = f(y_1,...,y_k), (d_1,...,d_k,b)).
- There is an edge from a transition position (x = f(y_1,...,y_k), (d_1,...,d_k,b)) to the position (d_i,y_i), for each i = 1,...,k. (Note that the latter can be a state or a variable position.)
- There are no other edges.

Note that no edge comes out of a variable position (b,z), z ∈ V^A; whenever a play reaches such a position, Eva wins if and only if b ∈ val(z). Finally, we define the function rank over positions according to the rank of the state part of the position. That is, for a state position (b,x) we let rank(b,x) = rank^A(x), and we let rank(p) = 0 for all other positions.

Definition 7.1.3. An element b of B is recognized (or accepted) by A with respect to a mapping val : V → 𝒫(B) if the position (b, x_I) is winning for Eva in the game G(A, ℬ, val). We denote the set of all such elements by $[A]^{\mathcal P(B)}(val)$. Consequently, $[A]^{\mathcal P(B)}$ denotes the mapping $\mathcal P(B)^V \to \mathcal P(B)$ that sends val to $[A]^{\mathcal P(B)}(val)$.

Two concepts of equivalence between automata will be in order. We say that two automata A and A' with the same set of variables are semantically equivalent if, for any semialgebra ℬ, the mappings $[A]^{\mathcal P(B)}$ and $[A']^{\mathcal P(B)}$ are identical. A closer relation is defined via the concept of isomorphism. We call two automata over Sig, A and A', isomorphic if they have the same set of variables, i.e. V^A = V^{A'}, and there exists a bijective mapping h : Q^A → Q^{A'}, which we call an isomorphism, such that
- for each x ∈ Q^A, rank^A(x) = rank^{A'}(h(x)) and qual^A(x) = qual^{A'}(h(x));
- for any f ∈ Sig, there is a transition x = f(x_1,...,x_k) in Tr^A if and only if there is a transition h(x) = f(h*(x_1),...,h*(x_k)) in Tr^{A'}, where

  h*(y) = h(y) if y ∈ Q^A, and h*(y) = y if y ∈ V^A.
Clearly, the inverse mapping $h^{-1}$ satisfies a symmetric condition, and so $h^{-1}$ is an isomorphism from A' to A. The following is an easy consequence of the definitions.

Proposition 7.1.4. If two automata are isomorphic, they are also semantically equivalent.

We also consider a useful construction which eliminates the useless symbols of an automaton, i.e. the symbols not reachable from the initial one. Let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton. Consider a directed graph whose set of nodes is Q ∪ V, with an edge from x to y if and only if there is a transition of the form x = f(...,y,...), for some f ∈ Sig. Let red(Q) (respectively, red(V)) be the set of states (resp. variables) y such that there is a path in this graph from x_I to y. Note that x_I ∈ red(Q) ∪ red(V), and, if x_I ∈ V, then the set red(Q) is empty and red(V) = {x_I}.

Definition 7.1.5. An automaton A is in reduced form (or simply: reduced) if red(Q^A) = Q^A and red(V^A) = V^A. For any automaton A, we define the reduced version red(A) as the automaton whose set of states is red(Q^A), whose set of variables is red(V^A), and whose set of transitions, as well as the functions qual and rank, are obtained from those of A by restriction to red(Q^A) and red(V^A) in the obvious way. Note that the set of variables of red(A) can be smaller than that of A. However, it is easy to see that the construction preserves semantics in the following sense.

Proposition 7.1.6. For any semialgebra ℬ, and any mapping val from V to 𝒫(B), $[A]^{\mathcal P(B)}(val) = [red(A)]^{\mathcal P(B)}(val \restriction red(V^A))$.

7.1.3 Hierarchy of indices
We will show in Section 7.3 of this chapter that automata are, in some precise sense, equivalent to fixed-point terms. We now define a parameter of automata which, in this equivalence, will correspond to the alternation depth of terms.
First, let us remark that an automaton whose minimal value of rank is at least 2 is equivalent to the one with the function rank modified by rank(q) := rank(q) − 2. Therefore, from now on, we assume that all the automata we consider satisfy the property min(rank(Q)) ∈ {0, 1} (provided Q is not empty!).

Definition 7.1.7. For any automaton A which has a nonempty set of states, we define the Mostowski index of A as the pair (ι, k), where k = max(rank(Q)) and ι ∈ {0, 1} is given by ι = min(rank(Q)), so that it is the pair (min(rank(Q)), max(rank(Q))).

Essentially, the Mostowski index measures the amplitude of the rank of an automaton. As we have remarked in Section 5.2.3 (page 118), the Büchi acceptance criterion is the special case of the parity criterion characterized by the fact that the rank function ranges over the set {1, 2}. Therefore, a Büchi automaton can be presented as a parity automaton of Mostowski index (1, 2). It is sometimes convenient to compare indices of automata. We will say that an index (ι, k) is compatible with an index (ι', k') if either ι' ≤ ι and k ≤ k', or ι = 0, ι' = 1, and k + 2 ≤ k'. It is easy to see that if (ι, k) is compatible with (ι', k'), then for any automaton of index (ι, k) there exists a semantically equivalent automaton of index (ι', k'). This concept gives rise to a formal hierarchy of automata (see Figure 7.1). We shall see later, in Chapter 8, that the expressive power of the automata indeed increases with the level in the hierarchy.

7.1.4 Dual automata
We have seen in Proposition 6.1.6 (page 144) that any term t has a dual $\tilde t$ such that $[\tilde t]^{\mathcal P(B)}$ is the dual mapping of $[t]^{\mathcal P(B)} : \mathcal P(B)^{ar(t)} \to \mathcal P(B)$. We define in this section the notion of a dual automaton, which enjoys a similar property.

Definition 7.1.8. Let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton, and let (ι, k) be its Mostowski index. The dual automaton of A is $\tilde A$ = (Sig, Q, V, x_I, Tr, qual', rank'), where

- qual'(q) = ∀ if and only if qual(q) = ∃,
- rank'(q) = rank(q) + 1 if ι = 0, and rank'(q) = rank(q) − 1 if ι = 1.
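Both the index of Definition 7.1.7 and the dualization of Definition 7.1.8 are mechanical on the rank and qualification tables. A Python sketch (ours; it assumes the ranks are already normalized so that their minimum is 0 or 1, as the text stipulates, and abbreviates ∃/∀ as "E"/"A"):

```python
def mostowski_index(rank):
    """The pair (min, max) of the rank function (Definition 7.1.7);
    assumes min(rank) is already 0 or 1."""
    return (min(rank.values()), max(rank.values()))

def dual(qual, rank):
    """Definition 7.1.8: swap qualifications; shift every rank by +1
    when the index starts at 0, by -1 when it starts at 1."""
    iota, _k = mostowski_index(rank)
    delta = 1 if iota == 0 else -1
    return ({x: ("A" if q == "E" else "E") for x, q in qual.items()},
            {x: r + delta for x, r in rank.items()})

qual = {"x": "E", "y": "A"}
rank = {"x": 0, "y": 2}                 # Mostowski index (0, 2)
dq, dr = dual(qual, rank)
assert mostowski_index(dr) == (1, 3)    # the dual has index (1, k+1)
assert dual(dq, dr) == (qual, rank)     # Proposition 7.1.9: dualizing twice gives A back
```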
[Figure: a diagram of the Mostowski indices (0,0); (1,1), (0,1); (1,2), (0,2); (1,3), (0,3); (1,4); ...; (0,2k), (1,2k+1); (0,2k+1), (1,2k+2), arranged by level.]

Fig. 7.1. The hierarchy of Mostowski indices
It follows that the Mostowski index of $\tilde A$ is (1, k+1) if ι = 0, and (0, k−1) if ι = 1, so that duality exchanges the two classes on a same row of the diagram of Figure 7.1. The following proposition is a direct consequence of the definition.

Proposition 7.1.9. The dual of $\tilde A$ is exactly A.

Proposition 7.1.10. For any automaton A, the mapping $[\tilde A]^{\mathcal P(B)} : \mathcal P(B)^{ar(A)} \to \mathcal P(B)$ is the dual of $[A]^{\mathcal P(B)}$.

Proof. Let val be a mapping from ar(A) to 𝒫(B), and let $\overline{val}$ be defined by $\overline{val}(v) = B - val(v)$. Let G (resp. G') be the game associated with A and val (resp. $\tilde A$ and $\overline{val}$), as in Definition 7.1.2, page 157. These two games have the same sets of positions and moves, but the positions of Adam and Eva are interchanged, as are the parities of the ranks of the state positions. Thus a play in G is also a play in G' (and vice versa), and a play is won by Eva in G if and only if it is won by Adam in G'. Therefore, if Eva has a winning strategy at some position in G, this very strategy is winning for Adam at the same position in G', and, by the determinacy theorem (Theorem 4.3.10, page 92), we get as a consequence of Definition 7.1.3 (page 158),

$[A]^{\mathcal P(B)}(val) = B - [\tilde A]^{\mathcal P(B)}(\overline{val})$. □
7.1.5 Relation to classical automata

We will now illustrate our general concept of automata by some familiar examples presented in our setting. To this end, it is convenient to slightly modify the definition of an automaton in order to provide a possibility of choice between multiple transitions, in existential or universal mode. We will later see that this modification can be easily incorporated in our setting.

Alternating automata. Rather than release our requirement that each state is the head of only one transition, we extend the structure of transitions by means of the operations ∨ and ∧. In an alternating automaton A, we let transitions be of the form x = τ, where Tr(x) = τ is any element of the least set of expressions D^A containing the base terms f(y_1,...,y_k), for f ∈ Sig and y_1,...,y_k ∈ Q^A ∪ V^A, and closed under (binary) ∨ and ∧; i.e. if τ_1 and τ_2 are in D^A, so are (τ_1 ∨ τ_2) and (τ_1 ∧ τ_2). Note that these terms are all guarded functional terms as defined in Section 9.2.2, page 208. Given a semialgebra ℬ and a mapping val : V^A → 𝒫(B), the semantics is again defined by the game G(A, ℬ, val) = (Pos_E, Pos_A, Mov, rank),
but with slightly modified positions and moves. We extend the concept of state positions by letting a state position be any pair (b, x = τ'), where τ' is a subterm of Tr(x) different from a variable; in other words, a subterm of Tr(x) in D^A. (Recall that, in the previous game, we have presented the state positions by (b,x), but we could unambiguously present them also by (b, x = Tr(x)).) Next, we let the transition positions be of the form (x = f(y_1,...,y_k), w), where f(y_1,...,y_k) is a base subterm of Tr(x). A state position of the form (b, x = (τ_1 ∘ τ_2)) is a position of Eva for ∘ = ∨, and of Adam for ∘ = ∧. A state position of the form (b, x = f(y_1,...,y_k)) belongs to Eva or Adam depending on whether x is universal or existential, respectively. (Exactly as for the position (b,x) in the previous game.) Dually, a transition position (x = f(y_1,...,y_k), w) belongs to Eva or Adam depending on whether x is existential or universal, respectively. There is a move from a position (b, x = (τ_1 ∘ τ_2)) to (b, x = τ_i), i = 1,2. The remaining moves are like in the previous game: there is a move from (b, x = f(y_1,...,y_k)) to (x = f(y_1,...,y_k), (d_1,...,d_k,b)), for any d_1,...,d_k ∈ B such that b ∈ f_ℬ(d_1,...,d_k), and from (x = f(y_1,...,y_k), (d_1,...,d_k,b)) to (d_i,y_i) if y_i is a variable, and to (d_i, y_i = Tr(y_i)) if y_i is a state. The rank of a state position (b, x = τ') is rank^A(x), and the rank of the remaining positions is 0. Finally, $[A]^{\mathcal P(B)}(val)$ is defined as the set of all b ∈ B such that the state position (b, x_I^A = Tr(x_I^A)) is winning for Eva.
It should be clear that if an alternating automaton A happens to be in the form of Section 7.1.2, then the above game coincides with the game defined there, up to identification of positions (b, x = τ) with (b, x). We will show that, in general, any alternating automaton A can be transformed into an automaton A' in the previous sense, semantically equivalent to A. The construction consists in consecutively eliminating ∨ and ∧. We will show the case of ∨. Suppose the transition for x is

x = (τ_1 ∨ τ_2).

Let us modify A by removing this transition, and adding instead the three transitions

x = eq(x_1, x_2),    x_1 = τ_1,    x_2 = τ_2,

where x_1 and x_2 are some fresh symbols, now considered as states of A, with qual(x_1) = qual(x_2) = qual(x), and rank(x_1) = rank(x_2) = rank(x). We additionally reset the qualification of x by qual(x) := ∃. The graph of the game is modified accordingly. Note that in the former game we had a position (b, x = (τ_1 ∨ τ_2)), for b ∈ B, from which Eva could move to (b, x = τ_1) or to (b, x = τ_2). In the new game we have a position (b, x = eq(x_1,x_2)), from which Adam can move to (x = eq(x_1,x_2), (b,b,b)), and then Eva can move to (b, x_1 = τ_1) or to (b, x_2 = τ_2). Clearly, the resulting state position is determined solely by Eva. Now, to see that the modified automaton is equivalent to the original one, it is enough to show
that any positional strategy for Eva in the original game, winning at a position (a, x_I^A = Tr(x_I^A)), can be transformed into a strategy for Eva in the modified game winning at the position (a, x_I^A = Tr'(x_I^A)), and vice versa, for a ∈ B. (Here Tr'(x_I^A) = Tr(x_I^A), unless x_I^A coincides with the state x for which the transition has actually been modified as above.) The transformation is fairly easy: for example, if a strategy in the original game, at the position (b, x = (τ_1 ∨ τ_2)), chooses a move to (b, x = τ_1), then the corresponding strategy in the modified game will choose (b, x_1 = τ_1) at the position (x = eq(x_1,x_2), (b,b,b)). Conversely, given a strategy for Eva in the modified game, we determine her move at the position (b, x = (τ_1 ∨ τ_2)) in the original game depending on that strategy's choice at the position (x = eq(x_1,x_2), (b,b,b)). It is easy to see that the modified strategies behave as expected. If the transition for x in the original game is x = (τ_1 ∧ τ_2), the modification is similar, except that the new qualification of x shall become ∀. The justification is similar, and we leave it to the reader. Now, by a repeated application of these two steps, we eventually obtain an automaton in the required form, semantically equivalent to the original one.
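The two elimination steps can be sketched as a recursive rewriting of transition bodies. In the Python sketch below (the tuple encoding of bodies and the fresh-state names `_s0`, `_s1`, ... are ours), each connective is replaced by an eq-transition on two fresh states that inherit the rank of the original state and its old qualification, exactly as described above:

```python
import itertools

def eliminate_connectives(trans, qual, rank):
    """Turn alternating transitions into the one-transition-per-state
    form of Section 7.1.2, using eq (arity 2).  Bodies are
    ("base", f, ys) or ("or"/"and", t1, t2); "E"/"A" abbreviate the
    existential/universal qualifications."""
    fresh = itertools.count()
    out_trans, out_qual, out_rank = {}, {}, dict(rank)

    def install(x, body, q):
        if body[0] == "base":
            _, f, ys = body
            out_trans[x] = (f, ys)
            out_qual[x] = q
            return
        op, t1, t2 = body
        # x itself becomes existential for "or", universal for "and"
        out_qual[x] = "E" if op == "or" else "A"
        x1, x2 = f"_s{next(fresh)}", f"_s{next(fresh)}"
        out_trans[x] = ("eq", (x1, x2))
        for xi, ti in ((x1, t1), (x2, t2)):
            out_rank[xi] = out_rank[x]   # fresh states keep the rank of x
            install(xi, ti, q)           # ... and the old qualification

    for x, body in trans.items():
        install(x, body, qual[x])
    return out_trans, out_qual, out_rank

t, q, r = eliminate_connectives(
    {"x": ("or", ("base", "a", ("x",)), ("base", "b", ("x",)))},
    {"x": "A"}, {"x": 1})
assert t["x"] == ("eq", ("_s0", "_s1"))
assert q["x"] == "E" and q["_s0"] == q["_s1"] == "A"
```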
Remark. The alternating automata considered above look closer to classical automata than our generic automata defined in Section 7.1.2. For example, the transitions of a finite automaton over finite words, say p →^a q and p →^b r, can be represented as a single alternating transition p = (aq ∨ br) (see below). But of course, we could also use here, e.g., the transition p = (br ∨ aq), and it seems natural not to distinguish between these two possibilities. More generally, we could let the right-hand sides of our transitions be elements of a free distributive lattice generated by the base terms f(y_1,...,y_k), rather than formal expressions. Thus we could also identify, e.g., ((τ_1 ∨ τ_2) ∨ τ_3) with (τ_1 ∨ (τ_2 ∨ τ_3)), ((τ_1 ∨ τ_2) ∧ τ_3) with ((τ_1 ∧ τ_3) ∨ (τ_2 ∧ τ_3)), etc. Such an approach may appear more natural, although it is less convenient for a formal presentation. However, it is easy to see that if the terms τ and τ' are equivalent over the free distributive lattice, then an exchange of τ for τ' in an alternating automaton A defined as above shall not alter the semantics of A. Therefore, in the examples below, we will abuse notation, writing freely, e.g., x = τ_1 ∨ τ_2 ∨ τ_3, by which we mean that the transition in question can be x = ((τ_1 ∨ τ_2) ∨ τ_3) or x = (τ_1 ∨ (τ_2 ∨ τ_3)), or perhaps x = ((τ_3 ∨ τ_1) ∨ τ_2), and we leave to the reader the verification that the actual choice is unimportant.

Nondeterministic automata. A nondeterministic automaton over Sig is an alternating automaton whose transitions are in the form x = τ_1 ∨ ⋯ ∨ τ_m, where m ≥ 1, and each τ_i is a base term of the form f(y_1,...,y_{p(f)}), for some y_1,...,y_{p(f)} ∈ Q^A ∪ V^A, and some f ∈ Sig − {eq}. (The special
symbol eq is excluded from transitions for technical reasons. Note that it reappears if we convert a nondeterministic automaton into the general form of Section 7.1.2, by the procedure described above for alternating automata.) The qualification of all the states is universal, while no constraint is imposed on the rank. For such automata, we will simplify the game G(A, ℬ, val) by letting the state positions be only of the form (b, x = τ_1 ∨ ⋯ ∨ τ_m) or (b, x = τ_i), where Tr^A(x) = τ_1 ∨ ⋯ ∨ τ_m. We assume that there is a move of Eva from (b, x = τ_1 ∨ ⋯ ∨ τ_m) (directly) to (b, x = τ_i), for i = 1,...,m. It is easy to see that this simplification does not alter the semantics of the automata. By calling the above automata "nondeterministic", we have wished to contrast them with the general alternating automata, rather than to emphasize nondeterminism. To complete the picture, let us call a nondeterministic automaton deterministic if, in each transition x = τ_1 ∨ ⋯ ∨ τ_m, the function symbols in the τ_i's are pairwise different. (Such a contradictory phrasing is common in automata theory, since nondeterministic is understood as not necessarily deterministic.) We note, however, that the qualification of such an automaton as deterministic is problematic if the semialgebra on which it is interpreted is not itself deterministic in the following sense: no element can be decomposed in two different ways.
In Section 10.4.1 (page 242) we introduce this property formally, under the name codeterminism, and show that the semialgebras of syntactic trees (see Example 6.1.3, page 142) have it. Therefore an adequate notion of determinism should involve both an automaton and an interpretation. However, since determinism is not an issue in this chapter, we do not go further into these considerations.
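The "pairwise different function symbols" condition on the automaton side is a purely syntactic check. A one-line sketch in Python (the encoding of a nondeterministic transition as a list of base terms is ours):

```python
def is_deterministic(nd_trans):
    """A nondeterministic transition x = t1 v ... v tm, with each ti a
    base term encoded as (fi, (y1,...,yk)), is deterministic when the
    fi are pairwise different (Section 7.1.5)."""
    return all(len({f for f, _ in terms}) == len(terms)
               for terms in nd_trans.values())

assert is_deterministic({"x": [("a", ("x",)), ("b", ("x",))]})
assert not is_deterministic({"x": [("a", ("x",)), ("a", ("y",))]})
```

As the text notes, this syntactic condition alone does not settle determinism of the behaviour; that also depends on the semialgebra being codeterministic.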
In the examples below, the variables of automata do not play any essential role, and we can always assume V^A = ∅. Therefore we will simply omit this item in the presentation of automata.

Automata on words. Let Σ be a finite alphabet. The set of finite or infinite words over Σ, Σ^∞ = Σ* ∪ Σ^ω, can be organized into a semialgebra (in fact, an algebra) over the signature Sig = Σ ∪ {e} ∪ {eq}, where each symbol a ∈ Σ is interpreted by the operation u ↦ au of Definition 5.1.2 (page 106), e is a constant symbol interpreted by the empty word ε, and eq is interpreted as usual. Then a finite nondeterministic automaton over finite words, A = (S, T, I, F) (see Section 5.2.1, page 115), can be presented in our setting as follows. Suppose first that there is a single initial state, I = {x_I}. Then we can keep S as the set of states, and let the transitions be of the form x = a1y1 ∨ ⋯ ∨ am ym or x = a1y1 ∨ ⋯ ∨ am ym ∨ e, where x →^{ai} yi is a transition of the original automaton, and the summand e occurs whenever x is an accepting state. Note that in the new automaton, there is a single transition x = …, for each state x. We let rank(x) = 1, for all states x ∈ S, so that no infinite play can be won by Eva. The qualification of states is
7. The μ-calculus vs. automata
unimportant, because of the monadicity of operations. (There is at most one move from a position (u, x = ay), and thus the next state position is determined, whoever moves first.) If there is more than one initial state, say I = {z1, …, zℓ}, we add a fresh initial state x_I and a transition x_I = z1 ∨ ⋯ ∨ zℓ. An automaton over infinite words with parity acceptance condition defined as in Section 5.2.2 (page 117) can be presented similarly, except that the constant e does not occur on the right-hand sides of transitions. Readily, we can extend the definition and, by allowing e, let an automaton accept finite as well as infinite words. Let us consider a simple example. To avoid confusion, we will refer to the presentation of a classical word automaton in our setting as an abstract automaton. Let Σ = {a, b}, and suppose an automaton (in classical presentation) has transitions p →^a p, p →^b q, q →^a q, p →^b r, where the state p is initial, and the state q accepting. The abstract automaton has the transitions p = ap ∨ bq ∨ br, q = aq ∨ e. The following sequence of positions forms a possible play in the game associated with the corresponding abstract automaton interpreted in the algebra of words: (aba, p = ap ∨ bq), (aba, p = ap), (p, aba) = a(p, ba), (ba, p = bq ∨ br), (ba, p = bq), (p, ba) = b(q, a), (a, q = aq ∨ e), (a, q = aq), (q, a) = a(q, ε), (ε, q = aq ∨ e), (ε, q = e). The play is won by Eva since Adam is not able to move from the last position. Intuitively, we can think that Eva has used a strategy suggested by an accepting run on the word aba, p →^a p →^b q →^a q. That is, at a position of the form (w, x = τ1 ∨ τ2), the strategy told Eva which transition to choose: x = τ1 or x = τ2. Here the choice at the position (aba, p = ap ∨ bq) was obviously indicated by the first letter of aba, while at the position (ba, p = bq ∨ br) the advice of the strategy was essential, because of the nondeterminism of our automaton. It is not hard to prove that the accepting runs of a classical word automaton always induce winning strategies for Eva in the game associated with the abstract automaton, and conversely, the strategies induce runs. Consequently, the game semantics of the abstract automata interpreted in the algebra of words coincides with the classical semantics of word automata. We will show it in detail for automata on trees, which can be viewed as a generalization of words to an arbitrary signature.

Automata on trees.
The reader familiar with tree automata may have found our presentation of nondeterministic automata quite close to the usual presentation of such automata (assuming the parity acceptance condition), except that a transition x = τ1 ∨ ⋯ ∨ τk is sometimes presented as a set of transitions {x = τ1, …, x = τk}. Let A = (Sig, Q^A, x_I^A, Tr^A, qual^A, rank^A) be a nondeterministic automaton over a signature Sig. With the classical interpretation, a tree automaton takes as an input a syntactic tree
7.1 Automata over semialgebras
over Sig − {eq}, i.e. an element t of T_{Sig−{eq}} (see Example 6.1.3, page 142). A computation consists in examining the nodes of the tree, level by level, according to the transition table, starting from the root. Formally, a run of A on t is defined as a mapping r: dom t → Q^A such that r(ε) = x_I^A and, for each w ∈ dom t, if r(w) = x, t(w) = f, and the transition for x is x = τ1 ∨ ⋯ ∨ τk, then f(r(w1), …, r(wρ(f))) coincides with τi, for some i (recall that w1, …, wρ(f) are the successors of w in dom t). A run is considered accepting if, for each infinite path in dom t, i.e. a sequence w0, w1, … such that each wℓ+1 is a successor of wℓ in dom t, limsup_{ℓ→∞} rank^A(r(wℓ)) is even. The tree language recognized by A consists of those trees t for which there exists an accepting run of A on t. For example, an automaton with transitions x = a(x, x) ∨ b(z, z) and z = a(x, x) ∨ b(z, z), where rank(x) = 0 and rank(z) = 1, recognizes those trees over the signature {a, b} (with ar(a) = ar(b) = 2) which do not have a path with infinitely many b's. This example can be generalized to an automaton over a signature {a0, a1, …, ak} (again with all the symbols binary) whose states are {x0, x1, …, xk} with rank(xi) = i, and the transitions are of the form
xi = a0(x0, x0) ∨ a1(x1, x1) ∨ ⋯ ∨ ak(xk, xk)
It is plain to see that this automaton accepts a tree t if and only if, for each infinite path w0, w1, …, limsup_{ℓ→∞} Ω_{t(wℓ)} is even, where Ω_{ai} = i. An important class of tree automata is induced by the Büchi acceptance criterion (see Section 5.2.2, page 117). A Büchi automaton differs from the automaton above only in that the acceptance condition is given by a set of states F ⊆ Q. A run on a tree t is considered accepting if for each infinite path (w0, w1, …), r(wn) ∈ F for infinitely many n's. It is plain to see that the Büchi automata can be equivalently presented as the parity automata of Mostowski index (1, 2) (the states of F are ranked 2, and the remaining states are ranked 1). In Example 6.1.3 (page 142), we have introduced two semialgebras related to syntactic trees; from now on we shall reconsider them enriched by the operation eq. Hence, there are two ways of presenting a tree automaton in our general setting: it can be interpreted in the algebra of all trees T_Sig, or, for any tree t ∈ T_{Sig−{eq}}, in the semialgebra t. We shall see that both ways are in some precise sense equivalent to the classical semantics of automata on trees. Consider first the semialgebra t. Recall that its universe is dom t, and w = f^t(v1, …, vk) whenever t(w) = f and v1, …, vk are the successors of w. Consider the game associated with A and t. Note that the positions of Eva are of the form (w, x = τ1 ∨ ⋯ ∨ τm) or (w, x = τi), where τ1 ∨ ⋯ ∨ τm = Tr(x), and the positions of Adam are of the form (w, x) = f((w1, y1), …, (wk, yk)). At a position of the first kind, Eva selects one τi by moving to (w, x = τi). Suppose τi = f(y1, …, yk).
Then Eva can make a next move only if f = t(w), and then she has no choice: she moves to (w, x) = f((w1, y1), …, (wk, yk)). From there, Adam can move to any (wi, yi = Tr(yi)), i = 1, …, k (provided k > 0). Thus, in the course of a play, the first components of the positions of Eva form a path in dom t. In fact, the path is selected by the moves of Adam; in the early version of this game, due to Gurevich and Harrington [40], the player corresponding to our Adam was called Pathfinder. Now suppose there is an accepting run r: dom t → Q^A. We construct an induced positional strategy s_r for Eva, as follows. For each position p = (w, x = τ1 ∨ ⋯ ∨ τm) such that r(w) = x, we let s_r: p ↦ (w, x = τi), where τi is the term chosen by the automaton in the run, i.e., it is of the form f(y1, …, yk), such that f = t(w) and yi = r(wi), for i = 1, …, k. For the remaining positions, we let s_r be defined arbitrarily. (Note that the move of Eva from a position (w, x = τ) is determined by the unique decomposition of w in t, w = f^t(w1, …, wk), with f = t(w).) Now it is plain to see that in each play starting from the position (ε, x_I^A = Tr(x_I^A)) and consistent with the strategy s_r, the positions of Eva are of the form (w, x = …), where x = r(w). Therefore Eva cannot lose in a finite play, and each infinite play corresponds in a natural way to a path in r and is won by Eva since r is accepting. Thus, the strategy s_r is winning for Eva at the position (ε, x_I^A = Tr(x_I^A)). Conversely, suppose Eva has a positional strategy s which is winning at the position (ε, x_I^A = Tr(x_I^A)). We construct a run r_s on t by induction on w ∈ dom t. Let r_s(ε) = x_I^A. Next, suppose r_s(w) = x, and s maps the position (w, x = Tr(x)) to (w, x = f(y1, …, yk)). Since s is winning, obviously t(w) = f. We let r_s(wi) = yi, for i = 1, …, k.
Again, it is easy to see that any infinite path in r_s corresponds to some play consistent with the strategy s, and thus it satisfies the parity acceptance condition. We conclude that a tree t is accepted by the automaton A according to the classical definition if and only if the position (ε, x_I^A = Tr(x_I^A)) is winning for Eva in the game associated with A and t. A similar characterization can be shown for the interpretation of A in the algebra of trees T_Sig: t is accepted by A if and only if Eva has a winning strategy from the position (t, x_I^A = Tr(x_I^A)). To see it, it is enough to realize a connection between the previous game and the game associated with A and T_Sig. Let us denote these games by G(A, t) and G(A, T_Sig), respectively. Given a positional strategy s1 for Eva in G(A, t), we define a strategy s2 in G(A, T_Sig) as follows. For each w ∈ dom t, whenever s1 maps (w, x = Tr(x)) to (w, x = f(y1, …, yk)), we let s2 map the position (t.w, x = Tr(x)) to (t.w, x = f(y1, …, yk)), where t.w is the subtree of t induced by w (see Example 6.3.3, page 148), i.e., dom t.w = {v : wv ∈ dom t}, and t.w(v) = t(wv), for v ∈ dom t.w. For the remaining positions, s2 can be defined arbitrarily.
It is easy to show that if s1 is winning at (ε, x_I^A = Tr(x_I^A)) then s2 is winning at (t, x_I^A = Tr(x_I^A)). Similarly, given a positional strategy for Eva in G(A, T_Sig), say s2, and given a tree t ∈ T_Sig, we consider a strategy s1 in G(A, t), such that, whenever s2 maps (t.w, x = Tr(x)) to (t.w, x = f(y1, …, yk)), s1 maps (w, x = Tr(x)) to (w, x = f(y1, …, yk)). Again, it is easy to see that whenever s2 is winning at (t, x_I^A = Tr(x_I^A)), so is s1 at (ε, x_I^A = Tr(x_I^A)) in G(A, t). Consequently, the tree language recognized by A in the classical sense coincides precisely with the interpretation of A in the algebra T_Sig. Finally, let us remark that the deterministic automata over trees are essentially weaker than nondeterministic automata.

Example 7.1.11. Consider a nondeterministic automaton with the transitions x_I = f(x0, x1) ∨ f(x1, x0), x0 = a, and x1 = b, where a and b are constant symbols. Interpreted over trees, this automaton accepts precisely the trees f(a, b) and f(b, a), but it is easy to see that any deterministic automaton accepting these two trees should also accept the trees f(a, a) and f(b, b). The use of a non-unary symbol f was essential here. If we restrict ourselves to signatures with symbols of arity at most 1, then automata over trees can be easily identified with automata over (finite or infinite) words, and McNaughton's theorem can be applied. The concept of a nondeterministic tree automaton can be naturally extended to that of an alternating automaton on trees. We will consider such automata in Chapter 8. However, the alternating automata on trees are not essentially simpler than the alternating automata in any other semialgebra (in contrast to nondeterministic automata); therefore we do not discuss them here.

Other acceptance conditions. In Section 5.2.2 (page 117), we have considered automata over infinite words with a somewhat weaker Büchi acceptance criterion, but also with acceptance criteria apparently more general than the parity condition, as, e.g., the Rabin criterion, given by a set of pairs of sets of states, and the yet more general Muller criterion, given by any family of sets of states. We have seen however that any nondeterministic automaton over infinite words with Muller (or Rabin) condition can be transformed into an equivalent deterministic automaton with a parity condition (by Theorem 5.2.8, page 121, and McNaughton's theorem). As far as determinism is concerned, this analogy does not extend to automata over semialgebras, and not even to automata over trees, as we have seen in Example 7.1.11. On the other hand, the parity acceptance condition turns out to preserve its universal power also for our most general concept of automata, and it is in
fact an easy consequence of the aforementioned properties of the automata on words. To see it, let us consider an automaton A defined as in Section 7.1.2 (page 156), except that the rank function is replaced by a recognizable set of infinite words C^A ⊆ (Q^A)^ω. The game G(A, B, val) is defined as previously, except that the winning criterion for infinite plays is given by C^A as follows. Consider a play p0, p1, … starting from a state position p0 = (b, x). By definition, the play forms an alternating sequence of state positions and transition positions. We can derive an infinite sequence of states consisting of the second components of the state positions, i.e., of p0, p2, p4, …. We let, by definition, the play be won by Eva if this sequence is in C^A; otherwise Adam is the winner. A parity acceptance condition can be easily presented in this way, by taking C^A = {u ∈ Q^ω : limsup_{n→∞} rank^A(un) is even}. Similarly, a Muller acceptance condition given by a set F ⊆ P(Q^A) can be presented by the set of strings u ∈ Q^ω such that the set of states that occur infinitely often in u is an element of F; clearly, this set is ω-regular. Now, by the McNaughton theorem and Theorem 5.2.8 (page 121), there exists a deterministic automaton D with the parity acceptance condition such that L(D) = C^A. We construct a parity automaton A × D equivalent to A as a kind of product of A and D. (For simplicity, we consider automata without variables.) We let Q^{A×D} = Q^A × Q^D, x_I^{A×D} = (x_I^A, x_I^D), and the transitions of A × D be of the form

(x, p) = f((y1, q), …, (yk, q))

where x = f(y1, …, yk) is a transition of A, and p →^x q is a transition of D. (By this writing, we allow a transition of the form (x, p) = c, whenever x = c is a transition of A.) We let qual^{A×D}(x, p) = qual^A(x), and rank^{A×D}(x, p) = rank^D(p). Now, let B be a semialgebra, and let b ∈ B. Suppose Eva has a winning strategy in the game G(A, B) from a position (b, x_I^A). Then it is not difficult to see that she can also win while playing the game associated with the product automaton, G(A × D, B), from the position (b, (x_I^A, x_I^D)), with essentially the same strategy (the missing states of D are determined by the states of A and the transition table of D). Indeed, any infinite play consistent with the strategy shall derive a sequence of states in C^A, hence accepted by D. So the play will satisfy the parity condition induced by rank^{A×D}. Conversely, suppose Eva has a strategy s winning at position (b, (x_I^A, x_I^D)) in the parity game G(A × D, B). We can assume that s is positional. Then Eva can use the strategy s while playing from (b, x_I^A) in G(A, B). More specifically, suppose an initial play in G(A, B) consists of a sequence of positions p0, p1, …, p2m, and p2m = (a, x) is a position of Eva. Let the sequence of states of A derived from the state positions be x_I^A = y0, y1, …, ym = x,
and suppose x D yoyl...~ym q, i.e., the automaton D enters the state q after reading the word y o y l . . . Ym. Then Eva uses the advice of the strategy s at the position (a, (x, q)) (neglecting the second components of the states). The argument for the case of an initial play of odd length is similar. Again, it is plain to see t h a t the resulted strategy is winning for Eva. (Note however, t h a t it need not be positional. In general, in a game with the winning condition given by the Muller criterion, the winner may not have a positional strategy, and an analogue to Corollary 4.4.3 (page 97) fails to hold [62].) Thus, the parity automaton A • D is indeed semantically equivalent to A, as desired.
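The product construction just described can be sketched concretely. The encoding below (transition lists for A, an explicit transition function for D) is our own illustration, not the book's formalism; it shows the two essential points: D advances by reading the current state of A, and the rank of a product state comes from D alone.

```python
def product(trans_A, delta_D, states_D, rank_D):
    """Build the transitions and ranks of the parity automaton A x D.

    trans_A[x]      -- list of terms (f, (y1,...,yk)) of the transition of x;
    delta_D[(p,x)]  -- deterministic transition of D on letter x (a state of A);
    rank_D[p]       -- parity rank of the state p of D.
    """
    trans, rank = {}, {}
    for x, terms in trans_A.items():
        for p in states_D:
            q = delta_D[(p, x)]  # D advances on the current state of A
            trans[(x, p)] = [(f, tuple((y, q) for y in args))
                             for (f, args) in terms]
            rank[(x, p)] = rank_D[p]  # rank^{AxD}(x, p) = rank^D(p)
    return trans, rank

# Tiny example: A loops with x = f(x, x); D has a single state of rank 0.
trans, rank = product({"x": [("f", ("x", "x"))]},
                      {("p", "x"): "p"}, ["p"], {"p": 0})
```

On this toy input the only product transition is (x, p) = f((x, p), (x, p)), of rank 0.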
7.2 Automata in the μ-calculus perspective
7.2.1 The μ-calculus of automata

We will now organize the automata into a μ-calculus in the sense of Chapter 2. The basic idea goes back to the classical construction of finite automata for regular expressions. Given automata for languages L and M, one constructs automata for the languages L ∪ M, LM, and L* by a kind of operations uniformly defined over automata. This can be further extended to a construction of a nondeterministic automaton for an ω-language L^ω, and more generally for ⋃_i Mi Li^ω (where L, Mi, Li are regular sets of finite words), i.e. for any rational set of infinite words. As we have seen in Chapter 5, the two kinds of iteration, L* and L^ω, correspond to the least and greatest fixed-point operators respectively. Situating automata in the μ-calculus frame will allow us to express this correspondence both generally and precisely. However, to organize the automata into a μ-calculus is slightly more difficult than the analogous task for fixed-point terms in Section 2.3.2 (page 47). The difficulty will arise in composition. Intuitively, while composing an automaton A with a substitution p into an automaton, say A[p], one should convert each variable z of A into the initial symbol of the automaton p(z). Thus, if in the course of a computation of the automaton A[p] the symbol z is assumed, the computation (or the play, if we think in terms of games) will not stop but switch to the automaton p(z), and proceed. Now, a confusion may obviously arise if the states of the automaton A and those of the automata p(z) are not different. An easy remedy for this is to make all the sets of symbols in question pairwise disjoint. However, if we wish to satisfy Axiom 7 of the μ-calculus, we need to do it very carefully. We will define the universe of our μ-calculus as the set of reduced automata presented in some canonical way. The idea behind it comes from a
graphical representation which one can naturally associate with any automaton in reduced form. Intuitively, as a first step, we unravel the automaton into a possibly infinite tree labeled by the automaton's symbols. That is, we put the initial symbol x_I at the root, and, whenever the label of a node is x, x is a state, and the unique transition with the head x is x = f(x1, …, xk), we create k successors of this node labeled by x1, …, xk, respectively. At the second step, we prune this tree, leaving only the first (lexicographically) occurrence of each state. More specifically, let ⊑ be a linear ordering of ℕ* that orders sequences first by length, and then lexicographically. That is, u ⊑ v if |u| < |v|, or |u| = |v| and there exist w, u1, v1 ∈ ℕ* and i, j ∈ ℕ, such that u = w i u1, v = w j v1, and i < j (as natural numbers).

Definition 7.2.1. An automaton A is in canonical form if
- A is reduced,
- Q^A ⊆ ℕ*,
- if the initial symbol is a state, it is ε,
- if x = f(x1, …, xk) is a transition in Tr^A and xi is a state (not a variable) then xi ⊑ x·i (the word x followed by the letter i),
and, moreover, if Q^A is not empty then min(rank(Q^A)) ∈ {0, 1}.
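The ordering ⊑ of Definition 7.2.1 is easy to realize concretely. In the following sketch (our own encoding, not the book's), words over ℕ are Python tuples of integers, compared first by length and then lexicographically:

```python
def strictly_before(u, v):
    """True iff word u comes strictly before word v in the
    length-lexicographic order: shorter words first, and words
    of equal length compared lexicographically."""
    return (len(u), u) < (len(v), v)

# e.g. (0, 1) precedes (0, 0, 0) because it is shorter,
# and (0, 1) precedes (0, 2) lexicographically.
```

Python's built-in tuple comparison is lexicographic, so pairing each word with its length yields exactly the order ⊑.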
We denote the set of all automata over Sig in canonical form by Aut_Sig. The following can be seen as a justification of the term "canonical".

Proposition 7.2.2. For any automaton A, one can compute an automaton in canonical form isomorphic to red(A).

Proof. By Proposition 7.1.6, we may assume that the automaton A = (Sig, Q, V, x_I, Tr, qual, rank) is already in reduced form. If x_I is a variable then, by virtue of reduced form, we have Q = Tr = ∅, and thus A is obviously in canonical form. Suppose x_I is a state. Define a tree t_A: dom t_A → Q, with dom t_A ⊆ ℕ*, inductively as follows.
- t_A(ε) = x_I.
- If t_A(w) = x, and the transition with head x is x = f(x1, …, xk) then, for each i such that xi is a state, w has a successor wi in dom t_A valued t_A(wi) = xi. The node w has no other successors.
Note that by virtue of reduced form, each x ∈ Q is a value of t_A. Now, for each x ∈ Q, let h(x) be the least (with respect to the ordering ⊑) node w ∈ dom t_A such that t_A(w) = x. Clearly, h is one to one, and h(x_I) = ε. Let C be
the unique automaton with the set of states Q^C = {h(x) : x ∈ Q}, such that h is an isomorphism from the original automaton A to C. It is straightforward to check that this C is in canonical form, and by Proposition 7.1.4, it is semantically equivalent to A. The condition concerning rank is easily obtained by a translation (see Section 7.1.3, page 159). □

We are going to organize the set Aut_Sig into a μ-calculus. The first two items of Definition 2.1.1 (page 42) are easy to define: for each variable z ∈ Var, we let ẑ be the trivial automaton (Sig, ∅, {z}, z, ∅, ∅, ∅). We also define the arity of any automaton A in Aut_Sig by ar(A) = V^A. Note that by assumption, all variables in ar(A) are accessible from the initial symbol. Now let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton in Aut_Sig, and let p: Var → Aut_Sig be a substitution. We define the automaton comp(A, p) = A[p] by the following construction. At first, for each z ∈ V, we define an auxiliary set add(z) ⊆ ℕ* whose elements we call addresses of z. If the initial symbol x_I is a variable then we let add(x_I) = {ε}. Note that, since A is reduced, we have in this case V = {x_I}, and so the definition of the addresses is completed. Suppose the initial symbol x_I is a state. For each z ∈ V, we let add(z) ⊆ ℕ* be the set of all words of the form x·i (the word x followed by the letter i), such that x ∈ Q, the transition of head x is x = f(y1, …, yk), and yi = z. Note that, since A is reduced, we have add(z) ≠ ∅, for each z ∈ V. Also, if z ≠ z′ then add(z) ∩ add(z′) = ∅. It is also convenient to distinguish those variables z in V for which the initial symbol of p(z) is a state (not a variable); we will call these variables active and write V_p = {z ∈ V : x_I^{p(z)} ∈ Q^{p(z)}}. The automaton comp(A, p) = A[p] is defined by the following items.
(Recall that, by the superscript convention, Q^{p(z)} denotes the set of states of the automaton p(z), etc.)
- Q^{A[p]} = Q ∪ ⋃_{z∈V} add(z)·Q^{p(z)}. That is, the states of A[p] are either states of A, or sequences of the form xy, where x is an address of z, and y is a state of the automaton p(z).
- V^{A[p]} = ⋃_{z∈V} V^{p(z)}.
- If x_I is a variable then x_I^{A[p]} is the initial symbol of the automaton p(x_I). Otherwise, x_I^{A[p]} = x_I = ε.
- The transitions of A[p] are of two kinds.
- For each transition x = f(x1, …, xk) of A, we have a transition x = f(x1′, …, xk′) of A[p], where
xi′ = xi if xi ∈ Q,
xi′ = x·i if xi ∈ V_p,
xi′ = x_I^{p(xi)} if xi ∈ V − V_p.
That is, whenever xi is a variable in V_p, we replace it by its address x·i, and whenever xi ∈ V − V_p, we replace it by the initial symbol (a variable) of the corresponding automaton p(xi).
- For each transition y = f(y1, …, yk) of p(z), where z ∈ V_p, and for each x ∈ add(z), we have a transition xy = f(y1′, …, yk′) of A[p], where
yi′ = x·yi if yi ∈ Q^{p(z)},
yi′ = yi if yi ∈ V^{p(z)}.
That is, for each address x of z, we have a separate copy of the set of transitions of p(z), obtained by prefixing the states of p(z) by x.
- For x ∈ Q, qual^{A[p]}(x) = qual(x), and rank^{A[p]}(x) = rank(x). For z ∈ V_p, y ∈ Q^{p(z)} and w ∈ add(z), we let qual^{A[p]}(wy) = qual^{p(z)}(y), and rank^{A[p]}(wy) = rank^{p(z)}(y).

It follows from the construction that the automaton A[p] is in canonical form. Note that if the initial symbol x_I of A is a variable then A[p] = p(x_I). In order to define the operators μ and ν over automata, it will be convenient first to fix two automata of arity 0, A_tt and A_ff, with the property that, for any semialgebra B, ⟦A_tt⟧^B = B, while ⟦A_ff⟧^B = ∅. Using the proviso that the symbol eq ∈ Sig is always interpreted in the standard way (Section 6.1.2), we let

A_tt = (Sig, {ε}, ∅, ε, {ε = eq(ε, ε)}, qual: ε ↦ ∃, rank: ε ↦ 0)
A_ff = (Sig, {ε}, ∅, ε, {ε = eq(ε, ε)}, qual: ε ↦ ∀, rank: ε ↦ 1)

so that A_ff is the dual of A_tt. Note that substituting ∀ for ∃ in the qualification of ε would not change the semantics of these two automata. Now let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton in Aut_Sig, and let z ∈ Var. We define the automaton μz.A as follows. If z ∉ V, we let μz.A = A. If z ∈ V and x_I ∈ V then, by virtue of reduced form, we must have x_I = z. In this case we let μz.A = A_ff. Now suppose z ∈ V and x_I is a state. Let add(z) be the set of addresses of z defined as above. Again, by virtue of reduced form, add(z) ≠ ∅. Let min-add(z) be the least element of add(z) with respect to the ordering ⊑. We define μz.A by the following items.
- Q^{μz.A} = Q ∪ {min-add(z)}.
- V^{μz.A} = V − {z}.
- x_I^{μz.A} = x_I = ε.
- For each transition x = τ of A, we have a transition x = τ′ in Tr^{μz.A}, where τ′ is obtained from τ by replacing each occurrence of z (if any) by min-add(z). Additionally, we have in Tr^{μz.A} a transition min-add(z) = λ, where λ is obtained from Tr(x_I) by replacing each occurrence of z by min-add(z).
- The mapping qual^{μz.A} is the extension of qual by qual^{μz.A}(min-add(z)) = qual(ε).
- Let k = max{rank(x) : x ∈ Q}. The mapping rank^{μz.A} is the extension of rank by

rank^{μz.A}(min-add(z)) = 2·⌊k/2⌋ + 1
That is, rank^{μz.A}(min-add(z)) is k if k is odd, and k + 1 otherwise. It is easy to see that μz.A is in canonical form. The automaton νz.A is defined in a similar manner. Namely, if z ∉ V, we let νz.A = A, and if z = x_I, we let νz.A = A_tt. If z ∈ V and x_I is a state then the automaton νz.A is defined analogously to μz.A above, with the only difference that the mapping rank^{νz.A} now extends rank by the equation
rank^{νz.A}(min-add(z)) = 2·⌈k/2⌉

That is, rank^{νz.A}(min-add(z)) is k if k is even, and k + 1 otherwise.

Proposition 7.2.3. The dual of μz.A is νz.Ā, and the dual of νz.A is μz.Ā (where Ā denotes the dual of A).
Proof. By Proposition 7.1.9 (page 162), it is sufficient to show that the dual of μz.A is νz.Ā. If z ∉ ar(A) or if z = x_I, the result is obvious. Otherwise, we have just to check that the ranks of min-add(z) are equal in the dual of μz.A and in νz.Ā. If A is of Mostowski index (0, k) then
- μz.A is of Mostowski index (0, k′) with k′ = 2·⌊k/2⌋ + 1, and the rank of min-add(z) in μz.A is k′,
- the rank of min-add(z) in the dual of μz.A is k′ + 1,
- Ā is of Mostowski index (1, k + 1),
- νz.Ā is of Mostowski index (1, k″) with k″ = 2·⌈(k+1)/2⌉, and the rank of min-add(z) in νz.Ā is k″.
It is easy to check that k′ + 1 = k″, i.e., 2·⌊k/2⌋ + 2 = 2·⌈(k+1)/2⌉. The proof is similar if A is of Mostowski index (1, k) and amounts to checking that 2·⌊k/2⌋ = 2·⌈(k−1)/2⌉. □
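The rank bookkeeping of the two constructions, and the arithmetic identity used in the proof above, can be checked mechanically. This is a throwaway Python sketch; the function names are ours:

```python
def mu_rank(k):
    """Rank given to min-add(z) in mu z.A: the least odd number >= k."""
    return 2 * (k // 2) + 1

def nu_rank(k):
    """Rank given to min-add(z) in nu z.A: the least even number >= k."""
    return 2 * ((k + 1) // 2)

# The identity k' + 1 = k'' from the proof, i.e.
# 2*floor(k/2) + 2 == 2*ceil((k+1)/2), checked for small k.
# (In integer arithmetic, ceil((k+1)/2) is (k+2)//2.)
identity_holds = all(mu_rank(k) + 1 == 2 * ((k + 2) // 2) for k in range(100))
```

The two functions make explicit that μ always introduces an odd top rank and ν an even one, which is exactly what the fixed-point semantics of the parity condition requires.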
We are ready to show that our system satisfies the axioms of Definition 2.1.1.

Theorem 7.2.4. The tuple (Aut_Sig, id, ar, comp, μ, ν) defined above is a μ-calculus.

Proof. The satisfaction of Axioms 1-5 follows directly from the definitions. To verify Axiom 6, we need to show that the automata (A[p])[π] and A[p∗π] are identical (and not only isomorphic), where the substitution p∗π is defined by p∗π(x) = p(x)[π]. We will prove that these automata have the same sets of states. Since we consider several automata, we will now distinguish notationally the set of addresses of a variable z in an automaton, say B, by writing add^B(z). By definition, Q^{A[p]} = Q ∪ ⋃_{z∈V} add^A(z)·Q^{p(z)}, and V^{A[p]} = ⋃_{z∈V} V^{p(z)}. Therefore,
Q^{(A[p])[π]} = Q ∪ ⋃_{z∈V} add^A(z)·Q^{p(z)} ∪ ⋃_{ẑ ∈ ⋃_{z∈V} V^{p(z)}} add^{A[p]}(ẑ)·Q^{π(ẑ)}
Now, it is easy to see that

add^{A[p]}(ẑ) = ⋃_{z∈V} add^A(z)·add^{p(z)}(ẑ)
and hence, using some basic set algebra, we get

Q^{(A[p])[π]} = Q ∪ ⋃_{z∈V} add^A(z)·Q^{p(z)} ∪ ⋃_{z∈V} ⋃_{ẑ∈V^{p(z)}} add^A(z)·add^{p(z)}(ẑ)·Q^{π(ẑ)}
On the other hand, we have

Q^{A[p∗π]} = Q ∪ ⋃_{z∈V} add^A(z)·Q^{p∗π(z)}
where Q^{p∗π(z)} = Q^{p(z)[π]} = Q^{p(z)} ∪ ⋃_{ẑ∈V^{p(z)}} add^{p(z)}(ẑ)·Q^{π(ẑ)}. Hence we clearly get Q^{(A[p])[π]} = Q^{A[p∗π]}. The proof that the remaining items of (A[p])[π] and A[p∗π] coincide can be carried out similarly, and we omit the details. Finally, in order to satisfy Axiom 7, we have to show that, for some variable y, (θx.A)[p] = θy.A[p{ŷ/x}] (for θ ∈ {μ, ν}). It is easy to check that this equality holds indeed for any variable y, provided y ∉ ar((θx.A)[p]) = ⋃_{z∈V−{x}} ar(p(z)). □

In the sequel, by abuse of notation, we use the symbol Aut_Sig also for denoting the μ-calculus of Theorem 7.2.4 (page 176).
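The set-algebra step in the proof of Axiom 6 rests on pointwise concatenation of address sets. Here is a toy check in Python; the string encoding of words and all the concrete sets are our own illustrations, not data from the text:

```python
def concat(S, T):
    """Pointwise concatenation of two sets of words (encoded as strings)."""
    return {s + t for s in S for t in T}

# Toy data: the variable z has addresses {"01", "2"} in A, and a variable
# zh of p(z) has addresses {"0", "11"} inside p(z).
add_A = {"z": {"01", "2"}}
add_p = {("z", "zh"): {"0", "11"}}

# add^{A[p]}(zh) = union over z of add^A(z) . add^{p(z)}(zh)
add_Ap_zh = set().union(*(concat(add_A[z], add_p[(z, "zh")]) for z in add_A))
```

On this toy instance the composed address set is {"010", "0111", "20", "211"}, exactly the prefixings used to name the states of (A[p])[π].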
7.2.2 The interpretation as homomorphism

We will now show that the μ- and ν-constructions over automata behave indeed as one should expect. That is, the semantics of the automaton μx.A is the least fixed point of the interpretation associated with A, etc. In our general perspective of μ-calculi, this amounts to the fact that the interpretation of automata is a homomorphism of μ-calculi. We have here a property analogous to that of fixed-point terms, Proposition 2.3.10 (page 50), recalling the definition of the functional μ-calculus over a complete lattice (Section 2.2, page 44).

Theorem 7.2.5. Let B be a semialgebra over Sig. The mapping that with each automaton A in Aut_Sig associates its interpretation ⟦A⟧^B is a homomorphism from the μ-calculus of automata to the functional μ-calculus over the complete lattice P(B).

Proof. We need to verify the four conditions of the definition of a homomorphism (page 44). For each condition, we first explain what it actually means for the mapping ⟦·⟧^B, and then verify the claim.

Identity. For any variable z in Var, ⟦ẑ⟧^B coincides with the ẑ of the functional μ-calculus, i.e. with the identity mapping ẑ: P(B)^{{z}} → P(B) that sends each mapping val: {z} → P(B) to val(z). Recall that ẑ amounts to the trivial automaton (Sig, ∅, {z}, z, ∅, ∅, ∅). Then the game G(ẑ, B, val) has only variable positions, and a position (b, z) is winning for Eva if and only if b ∈ val(z). Thus ⟦ẑ⟧^B(val) = val(z), as required.
Arity. The arity of lA]p B equals to the arity of A. This property is an immediate consequence of the definition. Composition. Let A = (Sig, Q, V, xi, Tr, qual, rank), be an automaton in A u t s i g , and let p : Var + Autsig be a substitution. Let V' = ar(A[p]). Then the mapping [A[p]]~B 7)(B) y' ~ P ( B ) coincides with [A]~B[[p]~B] , where [ p ] ~ " Var + P ( B ) is defined by [p]~B(z) [p(z)]~Note that, by definition of the functional itcalculus,
[AI~B[IPI~B]"~(B) v' + P(B) sends a mapping val' " V' + 7)(B) to [A]pB(val), where val " V + P ( B ) is defined by val(z) [p(z)]pB(val' r YP(Z)) 9 We have to verify
[A[p]]~B(val' )
[A]~B(val )
178 7. The μ-calculus vs. automata
If x_I is a variable then A[ρ] = ρ(x_I) and, by virtue of the reduced form, A = x̂_I. Hence the right-hand side amounts to ⟦x̂_I⟧_{𝒫𝔅}(val) = val(x_I) = ⟦ρ(x_I)⟧_{𝒫𝔅}(val′), and so the claim is satisfied. Suppose x_I is a state. Let a ∈ B. So we have two games, and we need to show the following.

Claim. Eva can win the game G(A[ρ], 𝔅, val′) from the position (a, x_I^{A[ρ]}) if and only if she can win the game G(A, 𝔅, val) from the position (a, x_I).

The following observation will be useful (see Section 7.2.1).

Lemma 7.2.6. Suppose that x ∈ add_A(z) is an address of z ∈ V, and that w is a state of the automaton ρ(z). Then Eva wins the game G(A[ρ], 𝔅, val′) from a state position (b, xw) if and only if she wins the game G(ρ(z), 𝔅, val′) from the position (b, w).

Proof. It follows from the fact that the automaton ρ(z) is isomorphic to the automaton (not in normal form!) obtained from A[ρ] by restricting the states to xQ^{ρ(z)} and taking x as the initial state. □

We are ready to prove the claim.

(⇒) Suppose s′ is a globally winning positional strategy for Eva in the game G(A[ρ], 𝔅, val′). We define a positional strategy s for Eva in the game G(A, 𝔅, val) as follows. Let x ∈ Q and let the transition in A with the head x be x = f(x₁, …, x_k). Then the corresponding transition in A[ρ] is x = f(x′₁, …, x′_k), where x′₁, …, x′_k are defined as on page 173. Suppose (b, x) is a position of Eva in G(A, 𝔅, val), hence also in G(A[ρ], 𝔅, val′) (recall Q ⊆ Q^{A[ρ]}). So s′ is defined for the position (b, x) and maps it to (x = f(x′₁, …, x′_k), v), for some v = ⟨d₁, …, d_k, b⟩. We let s be defined for the position (b, x), and map it to (x = f(x₁, …, x_k), v). Now suppose that, for an x as above, s′ is defined for a transition position (x = f(x′₁, …, x′_k), ⟨d₁, …, d_k, b⟩), and maps it to (x′_i, d_i). We let s be defined for the position (x = f(x₁, …, x_k), ⟨d₁, …, d_k, b⟩) and map it to (x_i, d_i). Note that whenever x_i is a state, the positions (x′_i, d_i) and (x_i, d_i) coincide. We will show that, if x_i is a variable, and Eva wins the game G(A[ρ], 𝔅, val′) from the position (x′_i, d_i) by strategy s′, then d_i ∈ val(x_i), and hence the position (x_i, d_i) is winning for Eva in the game G(A, 𝔅, val). There are two cases.

If x′_i is a state then the claim follows by Lemma 7.2.6. Indeed, x′_i = x_i x_I^{ρ(x_i)}, where x_i is an address of x_i (see page 173), and val(x_i) is precisely ⟦ρ(x_i)⟧_{𝒫𝔅}(val′ ↾ V^{ρ(x_i)}).

If x′_i is a variable then it must be x′_i = x_I^{ρ(x_i)}, and ρ(x_i) = x̂′_i. Hence val(x_i) = ⟦x̂′_i⟧_{𝒫𝔅}(val′ ↾ V^{ρ(x_i)}) = val′(x′_i), and we have d_i ∈ val(x_i), because this position is winning for Eva in G(A[ρ], 𝔅, val′).
For all other positions, s can be defined arbitrarily. Now suppose Eva wins the game G(A[ρ], 𝔅, val′) from a position (a, c) using the strategy s′. Observe that either this position or all its successors are in the domain of s′ (depending on the qualification of x_I^{A[ρ]} = c). Then it follows by the considerations above that every play consistent with s is also won by Eva. Indeed, such a play either coincides with a play in G(A[ρ], 𝔅, val′) (in particular, this is always the case if the play is infinite), or terminates in a variable position (d, x_i) such that d ∈ val(x_i).

(⇐) Now suppose r is a globally winning positional strategy for Eva in the game G(A, 𝔅, val). We know, moreover, that for each x_i ∈ V, if b ∈ val(x_i) then Eva wins the game G(ρ(x_i), 𝔅, val′ ↾ V^{ρ(x_i)}) from the position (b, x_I^{ρ(x_i)}). If ρ(x_i) = ẑ for some z, we have b ∈ val′(z), and so (b, z) is a variable position winning for Eva in G(A[ρ], 𝔅, val′). If x_I^{ρ(x_i)} is a state, we have by Lemma 7.2.6 a winning strategy in the game G(A[ρ], 𝔅, val′) from the corresponding position. By combining all these strategies we obtain a global winning strategy for Eva in G(A[ρ], 𝔅, val′). The argument is similar to that of the previous paragraph, and we omit the details.
Fixed points. The condition splits into two cases. We find it convenient to start with the greatest fixed point.

(ν) Let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton in Aut_Sig, and let z ∈ Var. We have to show that, for each mapping val : V − {z} → 𝒫(B), ⟦νz.A⟧_{𝒫𝔅}(val) is indeed the greatest fixed point of the mapping that sends M ∈ 𝒫(B) to ⟦A⟧_{𝒫𝔅}(val ∪ {z ↦ M}), where val ∪ {z ↦ M} : V → 𝒫(B) is the mapping which coincides with val on all variables y ∈ V − {z}, and whose value on z is M. If z ∉ V then, by definition, νz.A = A, and the above mapping is constant; so the claim is obvious. If z = x_I then the mapping above is the identity and, by definition, the interpretation of νz.A is the greatest element of 𝒫(B), which is the greatest fixed point of the identity; so the claim is also true. Suppose z ∈ V and x_I is a state. Let us abbreviate ⟦νz.A⟧_{𝒫𝔅}(val) by L. By the Knaster–Tarski Theorem (see Corollary 1.2.10, page 11), it is enough to show two things.

(i) For all M ⊆ B, M ⊆ ⟦A⟧_{𝒫𝔅}(val ∪ {z ↦ M}) implies M ⊆ L.
(ii) L ⊆ ⟦A⟧_{𝒫𝔅}(val ∪ {z ↦ L}).

Ad (i). By hypothesis, there exists a globally winning strategy for Eva in the game G(A, 𝔅, val ∪ {z ↦ M}), say s, that is winning at each position (b, x_I^A), provided b ∈ M. We will show that from each such position Eva can also win the game G(νz.A, 𝔅, val) (which will prove M ⊆ L). To this end, we define a strategy s′ as follows. Suppose x ∈ Q and the transition of A with the head x is x = f(x₁, …, x_k). Then the corresponding
transition in νz.A is x = f(x′₁, …, x′_k), where x′_i = x_i if x_i ∈ Q, and x′_i = minadd(z) whenever x_i = z. Now, if s is defined for a position (b, x) and maps it to (x = f(x₁, …, x_k), v), v = ⟨d₁, …, d_k, b⟩, we let s′ be also defined for (b, x) and map it to (x = f(x′₁, …, x′_k), v). In turn, if s is defined for a position (x = f(x₁, …, x_k), v) (with v as above) and maps it to, say, (x_i, d_i), we let s′ be defined for the position (x = f(x′₁, …, x′_k), v) and map it to (x′_i, d_i). Moreover, for any b ∈ M, if (b, minadd(z)) is a position of Eva, we define s′ on this position as follows. Note that, by hypothesis, s is defined on the position (b, x_I), say s : (b, x_I) ↦ (x_I = f(x₁, …, x_k), v). We let s′ : (b, minadd(z)) ↦ (minadd(z) = f(x′₁, …, x′_k), v), where the x′_i's are defined as above. Now suppose that a play starting from a position (b, c), where b ∈ M, is consistent with s′. We claim that it is won by Eva. There are two possibilities. Either this play visits positions of the form (d, minadd(z)) only finitely many times, and so, from some moment on, it coincides with a play in G(A, 𝔅, val ∪ {z ↦ M}) consistent with the winning strategy s. Or it enters such positions infinitely often. But, since the rank of the state minadd(z) in νz.A is maximal and even, the play is won by Eva also in this case.

Ad (ii). Now, by hypothesis, we have a globally winning strategy for Eva in the game G(νz.A, 𝔅, val), say r, which is winning at each position (b, x_I^{νz.A}) = (b, c), provided b ∈ L. We wish to show that, from each such position, Eva also wins the game G(A, 𝔅, val ∪ {z ↦ L}). The situation is somewhat opposite to the previous case: whenever a play played by Eva according to the strategy r reaches a state position (d, minadd(z)), the corresponding play in G(A, 𝔅, val ∪ {z ↦ L}) would reach a variable position (d, z). But clearly we have d ∈ L, since the right-hand side of the transition for minadd(z) coincides with that for c. So Eva wins in this case. We leave a formal construction of a suitable strategy r′ to the reader.

(μ) By duality, we reduce this case to the ν-case by using Proposition 7.2.3 (page 175) and Proposition 7.1.10 (page 162). □
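The appeal to the Knaster–Tarski theorem in the proof above can be made concrete: on a finite powerset lattice, conditions (i) and (ii) pin down the greatest fixed point, which is reached by iterating the monotone map from the top element (and, dually, the least fixed point from the bottom). A hedged sketch, with a toy monotone map of our own choosing:

```python
# Sketch: extremal fixed points of a monotone map on a finite powerset
# lattice, as in the Knaster-Tarski theorem cited above.

def gfp(f, base):
    """Greatest fixed point: iterate f from the top element (the full set)."""
    m = frozenset(base)
    while f(m) != m:
        m = f(m)              # monotonicity gives a decreasing chain
    return m

def lfp(f, base):
    """Least fixed point: iterate f from the bottom element (the empty set)."""
    m = frozenset()
    while f(m) != m:
        m = f(m)              # increasing chain, stabilizes on a finite base
    return m

base = {0, 1, 2, 3}
# A monotone map: keep the elements whose successor (mod 4) is already in M.
f = lambda m: frozenset(x for x in base if (x + 1) % 4 in m)
print(sorted(lfp(f, base)), sorted(gfp(f, base)))   # prints: [] [0, 1, 2, 3]
```

Here the least fixed point is empty while the greatest is the whole base set, which is exactly the kind of gap that makes the distinction between μ and ν meaningful.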
7.3 Equivalences

7.3.1 From fixed-point terms to automata

In view of Theorem 7.2.5 of the previous section and the connection between the functional μ-calculus and games established in Section 4.3 (page 88), automata and fixed-point terms appear very close. We will make this intuition precise by showing that any fixed-point term can be transformed into an automaton which defines the same semantics. For this aim, the concept of homomorphism of μ-calculi will prove beneficial.
We first equip the set Aut_Sig with some operations which actually organize it into an algebra over Sig∼.

Definition 7.3.1. For each f ∈ Sig of arity k, let f̄ be the automaton with the unique state c, the set of variables {z₁, …, z_k}, the unique transition c = f(z₁, …, z_k), and such that qual_{f̄}(c) = ∀ and rank_{f̄}(c) = 0. Let f̃ be defined similarly, with the only exception that qual_{f̃}(c) = ∃. (For concreteness, we assume that z₁, …, z_k are the first k variables in Var.)

Now, using f̄ and the composition (see page 173), for any A₁, …, A_k ∈ Aut_Sig, we define an automaton f^{Aut_Sig}(A₁, …, A_k) by

f^{Aut_Sig}(A₁, …, A_k) = f̄[id{A₁/z₁, …, A_k/z_k}].

Similarly, we let

f̃^{Aut_Sig}(A₁, …, A_k) = f̃[id{A₁/z₁, …, A_k/z_k}].

The following property will be useful.

Lemma 7.3.2. For any substitution ρ,

f^{Aut_Sig}(A₁, …, A_k)[ρ] = f^{Aut_Sig}(A₁[ρ], …, A_k[ρ])

and

f̃^{Aut_Sig}(A₁, …, A_k)[ρ] = f̃^{Aut_Sig}(A₁[ρ], …, A_k[ρ]).

Proof. We consider the case of f, the case of f̃ being similar. Let us abbreviate id{A₁/z₁, …, A_k/z_k} by id{A/z}. Then, by Axiom 6 of the μ-calculus, we have f^{Aut_Sig}(A₁, …, A_k)[ρ] = f̄[id{A/z}][ρ] = f̄[id{A/z} ∗ ρ], where id{A/z} ∗ ρ(z_i) = A_i[ρ]. Hence, the last term amounts to f̄[id{A₁[ρ]/z₁, …, A_k[ρ]/z_k}] = f^{Aut_Sig}(A₁[ρ], …, A_k[ρ]), as required. □

We are ready to define the aforementioned translation from the set of fixed-point terms over Sig∼ to Aut_Sig,

auto : fixT(Sig∼) → Aut_Sig.

It is defined by induction on the structure of fixed-point terms, using the constructions of the μ-calculus of automata:

− for z ∈ Var, auto : z ↦ ẑ,
− auto : f(t₁, …, t_k) ↦ f^{Aut_Sig}(auto(t₁), …, auto(t_k)),
− auto : f̃(t₁, …, t_k) ↦ f̃^{Aut_Sig}(auto(t₁), …, auto(t_k)),
− auto : θx.t ↦ θx.auto(t).
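The first three clauses of auto are easy to prototype. The sketch below uses our own encodings and covers only the variable and application clauses; each application contributes one fresh state, as in the composition f^{Aut_Sig}(A₁, …, A_k) = f̄[id{A₁/z₁, …, A_k/z_k}]. The fixed-point clause, which additionally redirects occurrences of the bound variable to a state of appropriate rank, is omitted here.

```python
# Schematic sketch of the algebraic part of auto : fixT(Sig~) -> Aut_Sig.
# Our encodings: a term is ('var', z) or (f, qual, t1, ..., tk), with qual
# in {'A', 'E'} distinguishing f from its dual.  An automaton is a tuple
# (states, variables, initial, trans, qual, rank).

import itertools

_fresh = itertools.count()

def auto(term):
    if term[0] == 'var':                    # auto(z) = the trivial automaton z^
        z = term[1]
        return (set(), {z}, z, {}, {}, {})
    f, q, *args = term                      # auto(f(t1, ..., tk))
    subautomata = [auto(a) for a in args]
    c = f'c{next(_fresh)}'                  # the unique state of f's automaton
    states, variables = {c}, set()
    trans, qual, rank = {}, {c: q}, {c: 0}
    heads = []
    for st, vs, init, tr, ql, rk in subautomata:
        states |= st
        variables |= vs
        trans.update(tr); qual.update(ql); rank.update(rk)
        heads.append(init)                  # plug A_i in place of z_i
    trans[c] = (f, tuple(heads))
    return (states, variables, c, trans, qual, rank)
```

For the term f(x, g̃(y)) this produces two states, with initial transition c = f(x, c′), mirroring the composition of the one-state automata of f and g̃ with the trivial automata x̂ and ŷ.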
Proposition 7.3.3. The mapping auto : fixT(Sig∼) → Aut_Sig defined above is a homomorphism of the μ-calculi.

Proof. The first and the last condition required of a homomorphism in Definition 2.1.4 (page 44) follow directly from the definition of auto, and the equation ar(auto(t)) = ar(t) can be easily checked by induction on t. It remains to show that, for any substitution ρ : Var → fixT(Sig∼),

auto(t[ρ]) = auto(t)[auto ∘ ρ].

We will verify this condition by induction on t. For t = z, we have
auto(z[ρ]) = auto(ρ(z)) = ẑ[auto ∘ ρ] = auto(z)[auto ∘ ρ].
For t = f(t₁, …, t_k), using the definition of substitution of fixed-point terms, the induction hypothesis and Lemma 7.3.2, we have

auto(f(t₁, …, t_k)[ρ]) = auto(f(t₁[ρ], …, t_k[ρ]))
  = f^{Aut_Sig}(auto(t₁[ρ]), …, auto(t_k[ρ]))
  = f^{Aut_Sig}(auto(t₁)[auto ∘ ρ], …, auto(t_k)[auto ∘ ρ])
  = auto(f(t₁, …, t_k))[auto ∘ ρ].
The argument for t = f̃(t₁, …, t_k) is similar. Finally, let t = θx.t′. Recall (see Definition 2.3.4, page 48) that (θx.t′)[ρ] is equal to θz.t′[ρ{ẑ/x}], where z is the first variable in Var that does not belong to ar′(t, ρ) = ⋃_{w∈ar(t)−{x}} ar(ρ(w)). Therefore, by the induction hypothesis, we have

auto((θx.t′)[ρ]) = auto(θz.t′[ρ{ẑ/x}])
  = θz.auto(t′[ρ{ẑ/x}])
  = θz.auto(t′)[auto ∘ ρ{ẑ/x}].

On the other hand, we have auto(θx.t′) = θx.auto(t′). Moreover, while verifying the last axiom of the μ-calculus in the case of Aut_Sig, we have remarked that (θx.A)[ρ] = θy.A[ρ{ŷ/x}] holds whenever y ∉ ar((θx.A)[ρ]) (see page 176). Hence, in the present case, we have, by the choice of z,

(θx.auto(t′))[auto ∘ ρ] = θz.auto(t′)[(auto ∘ ρ){ẑ/x}].

The claim then follows by the observation that the substitutions auto ∘ ρ{ẑ/x} and (auto ∘ ρ){ẑ/x} coincide, since auto(ẑ) = auto(z) = ẑ. □

Now we are ready to show the crucial property of the homomorphism auto, namely that it preserves semantics.

Theorem 7.3.4. For each fixed-point term t over Sig∼, and each semi-algebra 𝔅,

⟦auto(t)⟧_{𝒫𝔅} = ⟦t⟧_{𝒫𝔅}.

Proof. We have noted in Proposition 2.3.10 (page 50) that the interpretation of fixed-point terms over a set of function symbols F under a μ-interpretation 𝒵, t ↦ ⟦t⟧_𝒵, is the unique homomorphism h from fixT(F) to the functional μ-calculus over D_𝒵 satisfying the property

h(f(x₁, …, x_{ar(f)}))(val) = f^𝒵(val(x₁), …, val(x_{ar(f)})),
for each f ∈ F and any mapping val : {x₁, …, x_{ar(f)}} → D_𝒵. So it holds in particular for F = Sig∼ and 𝒵 = 𝒫𝔅. Now, the mapping t ↦ ⟦auto(t)⟧_{𝒫𝔅} is a composition of two homomorphisms: auto : fixT(Sig∼) → Aut_Sig (Proposition 7.3.3) and the interpretation of automata (Theorem 7.2.5), and hence it is a homomorphism as well (see the remark after Definition 2.1.4, page 44). It remains to verify the equality ⟦auto(φ(x₁, …, x_k))⟧_{𝒫𝔅}(val) = φ^{𝒫𝔅}(val(x₁), …, val(x_k)), for φ equal to f or f̃, f ∈ Sig. This property follows easily from the definition of auto. □
7.3.2 From automata to fixed-point terms

The reader may have noticed that the automata produced from fixed-point terms by the homomorphism auto have a special form; for example, the initial state c does not occur at the right-hand side of transitions. As a matter of fact, the mapping auto is not an isomorphism. A priori one could expect that the automata, apparently more complex than the fixed-point terms, are also more powerful semantically. We shall see, however, that this is not so: any automaton is semantically equivalent to some fixed-point term. The construction of such a fixed-point term given below is indeed a natural generalization of the construction given in Section 5.2.4 (page 122) in the case of words.

Let A = (Sig, Q, V, x_I, Tr, qual, rank) be an automaton in Aut_Sig. We will construct a term t_A semantically equivalent to A. We will also exhibit a relation between the Mostowski index of A and the level of the term t_A in the alternating fixed-point hierarchy. If the initial symbol x_I is a variable, we let t_A = x_I. If x_I is a state, we will find our term t_A as a component of a certain vectorial fixed-point term τ_A. Let n = |Q| and k = max rank(Q). We fix n(k+1) fresh variables x_q^{(i)} (distinct from V), doubly indexed by i = 0, 1, …, k and q ∈ Q. These variables (some of them) will be "new names" of the states of A. More precisely, we define a mapping ren : Q ∪ V → Var by ren(q) = x_q^{(rank(q))}, for q ∈ Q, and ren(z) = z, for z ∈ V. Next, for each q ∈ Q, we define a term t_q over Sig∼ by
t_q = f(ren(y₁), …, ren(y_p))   if qual(q) = ∀,
t_q = f̃(ren(y₁), …, ren(y_p))   if qual(q) = ∃,

where q = f(y₁, …, y_p) is the transition with head q. We fix some ordering of Q (e.g. ⊑ of Definition 7.2.1), and let x^{(i)} be the vector of variables x_q^{(i)}, q ∈ Q, listed in this order. Similarly, we let t be the vector of the t_q's. As the reader may have guessed, we are going to define τ_A by closing the vectorial fixed-point term t by fixed-point operators. However, since we
wish to obtain an exact correspondence between the hierarchy of Mostowski indices and the fixed-point hierarchy, we will carefully distinguish two cases. Let the Mostowski index of A be (ι, k). Then, if ι = 0, we let

τ_A = θx^{(k)}. ⋯ .νx^{(2)}.μx^{(1)}.νx^{(0)}.t

and, if ι = 1, we let

τ_A = θx^{(k)}. ⋯ .νx^{(2)}.μx^{(1)}.t,

where the operator binding x^{(i)} is μ if i is odd and ν if i is even. Therefore τ_A is in normal form in the sense of Section 2.7.4 and, moreover:

− if k is even then τ_A is in Π_{k+(1−ι)}(Sig),
− if k is odd then τ_A is in Σ_{k+(1−ι)}(Sig).

For example, if the Mostowski index of A is (1, 3) then the fixed-point prefix of τ_A has the form μνμ, and if the index is (0, 3) then the prefix is μνμν; see also Figure 7.2, page 188. Recall that, according to Definition 2.7.2, τ_A is formally a vector of fixed-point terms and, by the remark above and Proposition 2.7.8, its components are in the class Σ_{k+(1−ι)}(Sig) or Π_{k+(1−ι)}(Sig), depending on the parity of k. For q ∈ Q, let (τ_A)_q be the component of τ_A corresponding to the index q. It follows by construction that the set of free variables of τ_A is ar(τ_A) = V. Since A is reduced (in particular, any variable is reachable from x_I), it can be easily seen that this is also the set of free variables of (τ_A)_{x_I}. We let t_A = (τ_A)_{x_I}.

Proposition
7.3.5. For any automaton A ∈ Aut_Sig, and any semi-algebra 𝔅,

⟦t_A⟧_{𝒫𝔅} = ⟦A⟧_{𝒫𝔅}.
Proof. Let A = (Sig, Q, V, x_I, Tr, qual, rank). If x_I is a variable, the result is obvious. Without loss of generality we may assume that A is closed. Indeed, if this is not the case originally, with each variable v of A we associate a constant symbol c_v not in Sig, and we let Sig′ = Sig ∪ {c_v | v ∈ V}. We consider the closed automaton A′ obtained from A by adding the universal transitions v = c_v. With each Sig-semi-algebra 𝔅 and any val : V → 𝒫(B), we associate the Sig′-semi-algebra 𝔅′ with the same domain B as 𝔅, where b = c_v^{𝔅′} holds if and only if b ∈ val(v). It is easy to see that ⟦A⟧_{𝒫𝔅}(val) = ⟦A′⟧_{𝒫𝔅′}, and that ⟦t_A⟧_{𝒫𝔅}(val) = ⟦t_{A′}⟧_{𝒫𝔅′}. Therefore, it is enough to prove ⟦A⟧_{𝒫𝔅} = ⟦t_A⟧_{𝒫𝔅} for any closed automaton A. We will show this by comparing two vectorial Boolean
fixed-point terms: the one associated with ⟦t_A⟧_{𝒫𝔅}, and the one characterizing the game associated with the automaton A and the semi-algebra 𝔅.

Let us consider the game associated with the closed automaton A by Definition 7.1.2, page 157. Its set of state positions is B × Q; its set Pos_Tr of transition positions is the set of elements (q = f(q₁, …, q_p), ⟨b₁, …, b_p, b⟩) such that b = f^𝔅(b₁, …, b_p). The vectorial Boolean fixed-point term associated with this game (see Definition 4.3.2, page 89) is

T = θ(X^{(k)}, Y^{(k)}). ⋯ .ν(X^{(2)}, Y^{(2)}).μ(X^{(1)}, Y^{(1)}).ν(X^{(0)}, Y^{(0)}).(F, G),

where each F and each X^{(j)} (resp. G and each Y^{(j)}) is a vector indexed by B × Q (resp. Pos_Tr). The component of index (b, q) of F, where q = f(q₁, …, q_p), is

− Σ{Y^{(0)}_tr | tr = (q = f(q₁, …, q_p), ⟨b₁, …, b_p, b⟩)} if (b, q) is an Eva's position, i.e., if qual(q) = ∀,
− ∏{Y^{(0)}_tr | tr = (q = f(q₁, …, q_p), ⟨b₁, …, b_p, b⟩)} if qual(q) = ∃.

The component of index tr = (q = f(q₁, …, q_p), ⟨b₁, …, b_p, b⟩) of G is

− ∏_{i=1}^{p} X^{(rank(q_i))}_{(b_i, q_i)} if tr is an Adam's position, i.e., if qual(q) = ∀,
− Σ_{i=1}^{p} X^{(rank(q_i))}_{(b_i, q_i)} if qual(q) = ∃.

By using Proposition 1.4.5 (page 29), we can substitute the definition of Y^{(0)} for Y^{(0)} in F, so that T is equal to
T = θ(X^{(k)}, Y^{(k)}). ⋯ .ν(X^{(2)}, Y^{(2)}).μ(X^{(1)}, Y^{(1)}).ν(X^{(0)}, Y^{(0)}).(F′, G),

where the component of index (b, q) of F′, where q = f(q₁, …, q_p), is

− Σ_{b = f^𝔅(b₁, …, b_p)} ∏_{i=1}^{p} X^{(rank(q_i))}_{(b_i, q_i)} if qual(q) = ∀,
− ∏_{b = f^𝔅(b₁, …, b_p)} Σ_{i=1}^{p} X^{(rank(q_i))}_{(b_i, q_i)} if qual(q) = ∃.

Because F′ does not depend on the variables in the Y^{(j)}, we have, by Proposition 1.4.4 (page 28), that the component of index (b, q) of T is equal to the component of index (b, q) of

T′ = θX^{(k)}. ⋯ .νX^{(2)}.μX^{(1)}.νX^{(0)}.F′.
Now, by Proposition 3.2.6 (page 74), we have

⟦A⟧_{𝒫𝔅} = h(χ_{𝒫𝔅}(τ_A)),

and, by Proposition 3.2.8 (page 75),

χ_{𝒫𝔅}(τ_A) = θu^{(k)}. ⋯ .νu^{(2)}.μu^{(1)}.νu^{(0)}.χ_{𝒫𝔅}(t).
Note that χ_{𝒫𝔅}(τ_A) is indexed by B × Q and that there is a natural bijection between the X^{(j)} and the u^{(j)} (namely, X^{(j)}_{(b,q)} is identified with u^{(j)}_{b,q}). It follows that ⟦A⟧_{𝒫𝔅} = ⟦t_A⟧_{𝒫𝔅} if and only if, for any b ∈ B, the component of index (b, x_I) of T′ is equal to the component of index (b, x_I) of θu^{(k)}. ⋯ .νu^{(2)}.μu^{(1)}.νu^{(0)}.χ_{𝒫𝔅}(t). Indeed, we show that F′ = χ_{𝒫𝔅}(t), i.e., for any w : B × Var → 𝔹, we have F′(w) = χ_{𝒫𝔅}(t)(w). By Proposition 3.2.5 (page 74), h(χ_{𝒫𝔅}(t)(w)) = ⟦t⟧_{𝒫𝔅}[h*(w)]. Therefore, we have to prove that, for any b and q, F′_{(b,q)}(w) = 1 if and only if b ∈ ⟦t_q⟧_{𝒫𝔅}[h*(w)]. Let q = f(q₁, …, q_p) be the transition of head q. If qual(q) = ∀ then
− F′_{(b,q)}(w) = 1 if and only if there is b = f^𝔅(b₁, …, b_p) such that, for i = 1, …, p, w(X^{(rank(q_i))}_{(b_i, q_i)}) = 1;
− b ∈ ⟦t_q⟧_{𝒫𝔅}[h*(w)] if and only if there is b = f^𝔅(b₁, …, b_p) such that, for i = 1, …, p, b_i ∈ h*(w)(x^{(rank(q_i))}_{q_i}).

But b_i ∈ h*(w)(x^{(rank(q_i))}_{q_i}) if and only if w(u^{(rank(q_i))}_{b_i, q_i}) = w(X^{(rank(q_i))}_{(b_i, q_i)}) = 1.

If qual(q) = ∃ then

− F′_{(b,q)}(w) = 0 if and only if there is b = f^𝔅(b₁, …, b_p) such that, for i = 1, …, p, w(X^{(rank(q_i))}_{(b_i, q_i)}) = 0;
− b ∉ ⟦t_q⟧_{𝒫𝔅}[h*(w)] if and only if there is b = f^𝔅(b₁, …, b_p) such that, for i = 1, …, p, b_i ∉ h*(w)(x^{(rank(q_i))}_{q_i}),

and we conclude in the same way as above. □
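The shallow part of the construction of t_A can be prototyped directly: states are renamed to doubly indexed variables x_q^{(rank(q))}, each transition yields a guarded term t_q, and the alternating fixed-point prefix is recorded symbolically. A sketch under our own encodings, assuming the ranks of A fill the interval [ι, k]:

```python
# Sketch of the term construction of Section 7.3.2 (our encodings).
# A state q with transition q = f(y1,...,yp) yields the term
# t_q = f(ren(y1), ..., ren(yp)), with f marked by the qualification of q;
# the vectorial term tau_A is closed by the alternating prefix
# theta x^(k) ... nu x^(2). mu x^(1) (with nu x^(0) when iota = 0).

def ren(y, states, rank):
    return ('x', y, rank[y]) if y in states else y   # x_q^(rank(q)) or z

def term_of(A):
    states, trans, qual, rank, x_init = A
    t = {q: (f, qual[q], tuple(ren(y, states, rank) for y in ys))
         for q, (f, ys) in trans.items()}
    k = max(rank.values())
    iota = min(rank.values()) % 2     # Mostowski index (iota, k), assuming
    low = 0 if iota == 0 else 1       # the ranks fill the interval [iota, k]
    prefix = [('mu' if i % 2 else 'nu', i) for i in range(k, low - 1, -1)]
    return prefix, t, ('x', x_init, rank[x_init])

A = ({'p', 'q'},                                  # a two-state toy automaton
     {'p': ('f', ('q', 'z')), 'q': ('g', ('p',))},
     {'p': 'A', 'q': 'E'}, {'p': 2, 'q': 1}, 'p')
prefix, t, root = term_of(A)
```

On this toy automaton of index (1, 2) the prefix comes out as ν x^{(2)}. μ x^{(1)}, i.e., a ν μ (Büchi-like) term, in accordance with the index bookkeeping of the construction.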
7.3.3 Conclusion

We summarize the above considerations in the following.

Theorem 7.3.6. For any automaton A over signature Sig, one can construct a fixed-point term t_A over the signature Sig∼ with the same semantics, i.e., such that

⟦A⟧_{𝒫𝔅} = ⟦t_A⟧_{𝒫𝔅}

for any semi-algebra 𝔅. Moreover, if the Mostowski index of A is (ι, k) then t_A can be chosen in Π_{k+(1−ι)}(Sig∼) whenever k is even, and in Σ_{k+(1−ι)}(Sig∼) whenever k is odd.

Conversely, for any fixed-point term t over signature Sig∼, an automaton with the same semantics is given by the homomorphism of μ-calculi auto : fixT(Sig∼) → Aut_Sig. Moreover, if t is in Π_k(Sig∼), for k > 0, then
the Mostowski index of auto(t) is compatible with (1, k) or with (0, k−1), depending on whether k is even or odd, respectively; and if t is in Σ_k(Sig∼), for k > 0, the index is compatible with (1, k) or (0, k−1), depending on whether k is odd or even, respectively.

That is, with the assumed semantics, automata and fixed-point terms have the same definability power, and the equivalence refines to the levels of the alternation-depth hierarchy and of the hierarchy of the Mostowski indices, correspondingly, as shown on Figure 7.2.

Proof. The first part is a consequence of Proposition 7.2.2 and Proposition 7.3.5, as well as of the construction of τ_A used there. The second part is a consequence of Theorem 7.3.4. To see the property regarding indices, recall the definition of the μ-calculus of automata (Section 7.2.1). Observe, in particular, that the composition of automata preserves indices, while the operators μ and ν do not alter the first component of the index. Then the claim can be easily checked using the fact that auto is a homomorphism. □

Remark. We can note that although the classes of the hierarchy of Mostowski indices (Figure 7.1, page 161) and those of the alternation-depth hierarchy (Figure 2.1, page 57) coincide, Figure 7.2 is not obtained by superposition of Figures 2.1 and 7.1; instead, one of the hierarchies must be "switched". (The situation is somewhat analogous to the parallel between the Borel hierarchy and the arithmetical hierarchy.)
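The index bookkeeping of Theorem 7.3.6 can be spelled out as two small functions (our own encoding of the statement, with levels returned as ('Pi' | 'Sigma', k) pairs):

```python
# Sketch of the correspondence in Theorem 7.3.6 between Mostowski indices
# and alternation-depth levels (our own encoding of the statement).

def term_level(iota, k):
    """Level of t_A for an automaton A of Mostowski index (iota, k)."""
    level = k + (1 - iota)
    return ('Pi' if k % 2 == 0 else 'Sigma', level)

def automaton_index(cls, k):
    """Index compatible with auto(t) for t in class cls of level k > 0."""
    if cls == 'Pi':
        return (1, k) if k % 2 == 0 else (0, k - 1)
    return (1, k) if k % 2 == 1 else (0, k - 1)

# A Buchi-like automaton of index (1, 2) yields a Pi_2 (nu mu) term,
assert term_level(1, 2) == ('Pi', 2)
# and a Pi_2 term maps back to an automaton of index compatible with (1, 2):
assert automaton_index('Pi', 2) == (1, 2)
```

Note the "switch" mentioned in the Remark: the parity of k decides which of Σ/Π pairs with which index class.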
7.4 Bibliographic notes and sources

The concept of a finite automaton took its shape in the fundamental work of Kleene [53]; the equivalence between finite automata and regular expressions (Kleene's theorem) established a general paradigm which is confirmed, in particular, by Theorem 7.3.6 above. We refer the reader to the bibliographic notes in Chapter 5 (page 138) for more references to automata on finite and infinite words. Thatcher and Wright [94] and Doner [28] extended the concept of a finite automaton to (finite) trees, and Rabin [83] used automata on infinite trees in his landmark proof of the decidability of the monadic second-order arithmetic of n successors. We refer the reader to the expositions of the subject by Thomas [95, 97]. The alternating version of a finite automaton was introduced at the same time as the alternating Turing machine [24]; alternating automata on infinite trees were first considered by Muller and Schupp [71].
Fig. 7.2. The hierarchy of Mostowski indices and of alternation-depth
A transformation of the formulas of the modal μ-calculus into Rabin tree automata was an essential part of the proof of the elementary decidability of this logic given by Streett and Emerson [91]. A more direct transformation via alternating automata was given later by Emerson and Jutla [33]. A converse translation, from automata to the μ-calculus, was shown in [77], by which the equivalence in expressive power between the Rabin automata and the μ-calculus of fixed-point terms without intersection, interpreted in the powerset algebra of trees, was established. (Note that, by the Rabin theorem, adding intersection would not increase the expressive power of the μ-calculus.) In fact, [77] shows an equivalence on a more general level, namely between the μ-calculus over powerset algebras constructed from arbitrary algebras, and a suitable concept of an automaton over arbitrary algebras. This result was subsequently extended to automata on semi-algebras in [78]. A similar concept of automaton over transition systems was introduced by Janin and Walukiewicz [45]. A yet more general concept of automata over complete lattices with monotonic operations was considered by Janin [44]. An equivalence between nondeterministic Büchi automata on trees and a kind of fixed-point expressions (essentially equivalent to fixed-point terms without intersection of the level Π₂) was discovered earlier by Takahashi [92]. The correspondence between fixed-point terms of level Π₂ with intersection, interpreted in the powerset algebra of trees, and nondeterministic Büchi automata was shown by the authors of this book [9]. (Note that this also yields the equivalence between nondeterministic and alternating Büchi automata on trees.) A more direct proof of this result was given later by Kaivola [51].
8. Hierarchy problems
8.1 Introduction

A fundamental question about any μ-calculus is: are the fixed-point operators redundant? For example, any monotone Boolean function f : {0, 1}^n → {0, 1} can be defined using the operations ∨ and ∧, and so any term of the Boolean μ-calculus is equivalent to a term without fixed-point operators. (This example also shows that the redundancy of fixed points does not make a μ-calculus uninteresting.) It is not hard to give an example of a μ-calculus where both extremal fixed-point operators are actually essential: recall the μ-calculus over infinite words discussed in Chapter 5. But then a more subtle question arises: do the alternations between the least and the greatest fixed points add to the expressive power of terms? In Section 2.6 (page 55), we introduced a formal hierarchy which measures the complexity of the objects of a μ-calculus by the number of alternations between μ and ν. While this hierarchy is obviously strict for fixed-point terms considered syntactically (see Section 2.6.3, page 56), it need not be so for their interpretations within functional μ-calculi. Note that in the latter case we ask for the minimal number of alternations required in any term t′ such that ⟦t′⟧_𝒵 = ⟦t⟧_𝒵. Indeed, in the case of the μ-calculus on words, we have seen that the hierarchy actually collapses to the alternation-free level Comp(Σ₁, Π₁) in the presence of intersection (see Theorem 5.3.1, page 126), and to the level Π₂ (i.e., νμ) without intersection (see Proposition 5.1.17, page 115). A priori it is not obvious that there must exist any functional μ-calculus with an infinite hierarchy. A first example of a μ-interpretation with an infinite hierarchy was provided [75] by the powerset algebra of trees 𝒫T_Sig (cf. Example 6.1.3, page 142) restricted to the signature Sig ∪ {∨} (with ∨, standing for ẽq, interpreted as binary set-union). Note the absence of ∧ and of the duals f̃ of f ∈ Sig.
The alternation-depth hierarchy over this interpretation corresponds precisely to the hierarchy of Mostowski indices of nondeterministic automata. The following terms witness the infinity of the hierarchy:

θx_n. ⋯ .νx₂.μx₁.(c₁(x₁) ∨ c₂(x₂) ∨ ⋯ ∨ c_n(x_n)),   n = 1, 2, …,

where the operator binding x_i is μ if i is odd and ν if i is even.
The interpretation comprises all trees (over the signature {c₁, …, c_n}) such that, for each path, the maximal subscript i repeating infinitely often is even. (This can be easily seen, for instance, by considering the automaton associated with the above term by the mapping auto of Section 7.3.1, page 180.) The above terms can be further encoded within a single finite signature, provided it contains at least two symbols, one of them of arity ≥ 1 (see [78] for details). This example does not settle, however, the hierarchy problem for the (unrestricted) powerset algebra of trees. As a matter of fact, all the above languages have their complements recognizable by Büchi automata, and therefore are in the Σ₂ class. The problem, better known as the hierarchy problem for the modal μ-calculus, had remained open for a while and was solved in 1996 independently by J. Bradfield and G. Lenzi. One example, given in a subsequent paper by Bradfield [19], consisted of the game terms presented above in Section 4.4.1, page 95 (originally given in a modal form). These formulas, defining the set of winning positions in a parity game, were first introduced by Emerson and Jutla for bipartite games, and then generalized by I. Walukiewicz for arbitrary parity games (see also the bibliographic notes at the end of this chapter).
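The set of winning positions of a finite parity game, the object defined by the game terms just mentioned, can also be computed directly. The sketch below is the classical recursive (Zielonka-style) algorithm, not the book's term-based definition; the encodings (successor dictionaries, Eva as player 0, Eva winning when the maximal rank seen infinitely often is even) are ours:

```python
# Hedged sketch: winning regions of a finite parity game (Zielonka-style).

def attractor(nodes, edges, owner, player, target):
    """Positions from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succ = edges[v] & nodes
            if (owner[v] == player and succ & attr) or \
               (owner[v] != player and succ <= attr):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, edges, owner, rank):
    """Returns (W0, W1): the winning regions of Eva (0) and Adam (1)."""
    if not nodes:
        return set(), set()
    m = max(rank[v] for v in nodes)
    p = m % 2                                 # player favoured by the top rank
    top = {v for v in nodes if rank[v] == m}
    sub = nodes - attractor(nodes, edges, owner, p, top)
    w = zielonka(sub, {v: edges[v] & sub for v in sub}, owner, rank)
    if not w[1 - p]:                          # the opponent wins nowhere below
        return (nodes, set()) if p == 0 else (set(), nodes)
    b = attractor(nodes, edges, owner, 1 - p, w[1 - p])
    rest = nodes - b
    w2 = zielonka(rest, {v: edges[v] & rest for v in rest}, owner, rank)
    result = [set(w2[0]), set(w2[1])]
    result[1 - p] |= b
    return result[0], result[1]

# Eva (0) wins on the 1-2 cycle (top rank 2 is even); Adam owns the loop at 3.
nodes = {1, 2, 3}
edges = {1: {2}, 2: {1}, 3: {3}}
owner = {1: 0, 2: 0, 3: 1}
rank = {1: 1, 2: 2, 3: 1}
print(zielonka(nodes, edges, owner, rank))    # prints ({1, 2}, {3})
```

The recursion on the top rank mirrors the outermost fixed-point operator of the corresponding game term, which is why these terms sit exactly at level n of the alternation hierarchy for games with n ranks.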
8.2 The hierarchy of alternating parity tree automata

8.2.1 Games on the binary tree

A game G = (Pos_A, Pos_E, Mov, rank) is a game on the binary tree if the underlying graph is the full binary tree, i.e., Pos = Pos_A ∪ Pos_E is the set {1, 2}* of all words over {1, 2} and Mov = {(u, ua) | u ∈ {1, 2}*, a ∈ {1, 2}}. With this game, assuming that rank(Pos) ⊆ {1, …, k}, we associate a syntactic tree t_G over the binary signature S_k = {c_i | 1 ≤ i ≤ k} ∪ {d_i | 1 […] ≥ 2 there is a simply alternating closed automaton B of the same Mostowski index such that
⟦A⟧_{𝒫T_Sig} = ⟦B⟧_{𝒫T_Sig}.
Proof. We construct B by the following transformation. First we add to the states of A a new universal state q_⊤ of rank 2 and the transition q_⊤ = eq(q_⊤, q_⊤). Next we replace each transition q = f(q₁, …, q_m), where q is existential and f ∈ Sig, by the transition

q = f(q₁, q_⊤, …, q_⊤) ∨ f(q_⊤, q₂, …, q_⊤) ∨ ⋯ ∨ f(q_⊤, q_⊤, …, q_m) ∨ ⋁_{g ≠ f} g(q_⊤, …, q_⊤).

Moreover, q is now qualified universal. By Proposition 7.3.5 (page 184), we have to prove that ⟦t_A⟧_{𝒫T_Sig} = ⟦t_B⟧_{𝒫T_Sig}. By construction of t_A and t_B, ⟦t_A⟧_{𝒫T_Sig} is the component of index x_I of the vector L_A = θx^{(k)}. ⋯ .νx^{(2)}.μx^{(1)}.⟦t⟧_{𝒫T_Sig}, indexed by Q, the set of states of A, and ⟦t_B⟧_{𝒫T_Sig} is the component of index x_I of the vector L_B = θy^{(k)}. ⋯ .νy^{(2)}.μy^{(1)}.⟦t′⟧_{𝒫T_Sig}, indexed by Q ∪ {q_⊤}, where t and t′ are the vectors of the terms t_q of A and of B, respectively. Obviously, the component of index q_⊤ of L_B is T_Sig. For the states q of which we have modified the transition, the component of index q of
196
8. Hierarchy problems
[t_A]_{PT_Sig} is f^{PT_Sig}(q_1, q_2, ..., q_m), and the same component of [t_B]_{PT_Sig} is f^{PT_Sig}(q_1, q_T, ..., q_T) ∪ f^{PT_Sig}(q_T, q_2, ..., q_T) ∪ ... ∪ f^{PT_Sig}(q_T, q_T, ..., q_m) ∪ ⋃_{g ≠ f} g^{PT_Sig}(q_T, q_T, ..., q_T). Since T_Sig can be substituted for q_T, the result is a consequence of the following equality, obviously true for any T_1, ..., T_m ⊆ T_Sig:

f^{PT_Sig}(T_1, T_2, ..., T_m)
  = f^{PT_Sig}(T_1, T_Sig, ..., T_Sig)
  ∪ f^{PT_Sig}(T_Sig, T_2, ..., T_Sig)
  ∪ ...
  ∪ f^{PT_Sig}(T_Sig, T_Sig, ..., T_m)
  ∪ ⋃_{g ≠ f} g^{PT_Sig}(T_Sig, T_Sig, ..., T_Sig).
This automaton B is an alternating automaton, and we transform it into a "standard" one as explained in Section 7.1.5 (page 162). By construction this automaton is simply alternating. □
If A is a simply alternating closed automaton, the game G(A, T_Sig) that allows us to define the semantics of A in the semialgebra T_Sig can be simplified by using the fact that from an arbitrary state position (t, q) there is at most one possible move to a transition position: if the transition for q is q → eq(q_1, q_2), the move is to (q → eq(q_1, q_2), ⟨t, t, t⟩); if q → f(q_1, ..., q_m), and if t = f(t_1, ..., t_m), the move is to (q → f(q_1, ..., q_m), ⟨t_1, ..., t_m, t⟩). This remark allows us to simplify this game by cancelling transition positions, as we did for the automaton W_k, so that with A we associate the following game G(A).
- The set of all positions is the set T_Sig × Q extended by two positions Loss and Win. The positions Loss and Win belong to Eva and are respectively of rank 1 and 2. A position (t, q) belongs to Eva if and only if q is existential. The rank of this position is the rank of q.
- The only move from Loss is to Loss, the only move from Win is to Win. Thus, Win is a winning position and Loss is not a winning position for Eva.
- The moves from (t, q) are defined as follows.
  - If q → eq(q_1, q_2), there is a move to (t, q_1) and to (t, q_2).
  - If q → f(q_1, ..., q_m) and t = f(t_1, ..., t_m) (note that in this case, (t, q) is an Adam's position in G(A)), there are moves to the m positions (t_i, q_i).
  - If q → f(q_1, ..., q_m) and t has not the form f(t_1, ..., t_m), there is a unique move to Loss. Thus, (t, q) is not a winning position for Eva. This is because in the original game, (t, q) is an Eva's position from which she cannot move to any transition position.
  - If q → a, with a a constant symbol, there is a unique move to Win if t = a, and to Loss otherwise.
Note that from each Eva's position (t, q) there are 1 or 2 possible moves (according to whether, in the transition q → eq(q_1, q_2), q_1 = q_2 or not), and that from each Adam's position, the number of moves, always greater than 0, is bounded by the maximal arity of symbols in Sig. Thus, any choice of a move by Adam can be done by a sequence of binary choices. The above remark is the motivation for the following definition.

Definition 8.2.4. Let A be a simply alternating closed automaton of Mostowski index (1, k) with k ≥ 2. First we consider the two trees t_⊥ and t_⊤ of T_{S_k}, which are the unique two trees satisfying the equalities t_⊥ = d_1(t_⊥, t_⊥) and t_⊤ = d_2(t_⊤, t_⊤). Next, we define the mapping γ_A : T_Sig × Q → T_{S_k} by the following conditions. It is easy to see that there is one and only one mapping satisfying them. Let t ∈ T_Sig and q ∈ Q, and let i ∈ {1, ..., k} be the rank of q.
- If q → eq(q_1, q_2) then γ_A(t, q) is equal to d_i(γ_A(t, q_1), γ_A(t, q_2)) if qual(q) = ∃ and to c_i(γ_A(t, q_1), γ_A(t, q_2)) if qual(q) = ∀.
- If q → f(q_1, ..., q_m) then:
  - if t = f(t_1, ..., t_m) then γ_A(t, q) is equal to c_i(γ_A(t_1, q_1), c_i(γ_A(t_2, q_2), ..., c_i(γ_A(t_{m-1}, q_{m-1}), γ_A(t_m, q_m)) ⋯ )) if m > 1, and to c_i(γ_A(t_1, q_1), γ_A(t_1, q_1)) if m = 1;
  - otherwise γ_A(t, q) = t_⊥.
- If q → a then γ_A(t, q) is equal to t_⊤ if t = a, and to t_⊥ otherwise.
By adapting the proof of Proposition 8.2.1, we can prove the following result.

Proposition 8.2.5. Let A be a simply alternating closed automaton of Mostowski index (1, k) with k ≥ 2, and let q_I be its initial state. Then, ∀t ∈ T_Sig, t ∈ [A]_{PT_Sig} ⟺ γ_A(t, q_I) ∈ L_k.
Finally, let Γ_A : T_Sig → T_{S_k} be defined by

Γ_A(t) = d_1(γ_A(t, q_I), γ_A(t, q_I)).

Since, obviously, Γ_A(t) ∈ L_k if and only if γ_A(t, q_I) ∈ L_k, the previous proposition implies the following one.

Proposition 8.2.6. Let A be a simply alternating closed automaton of Mostowski index (1, k) with k ≥ 2. Then, ∀t ∈ T_Sig, t ∈ [A]_{PT_Sig} ⟺ Γ_A(t) ∈ L_k.
8.2.3 A diagonal argument

Let us recall the definition of the usual ultrametric distance Δ_Sig on T_Sig, which makes T_Sig a complete, and even compact, metric space [8].

Definition 8.2.7. If t = t′ then Δ_Sig(t, t′) = 0. If the symbols at the root of t and t′ are distinct, then Δ_Sig(t, t′) = 1. Otherwise, there exists a non-constant symbol f such that t = f(t_1, ..., t_m), t′ = f(t′_1, ..., t′_m) and t_i ≠ t′_i for some i; in this case we set Δ_Sig(t, t′) = (1/2) max_{i=1,...,m} Δ_Sig(t_i, t′_i) ≠ 0.

A mapping h : T_Sig → T_Sig′ is said to be nonexpansive if ∀t, t′ ∈ T_Sig, Δ_Sig′(h(t), h(t′)) ≤ Δ_Sig(t, t′). It is contracting if there is a constant c < 1 such that ∀t, t′ ∈ T_Sig, Δ_Sig′(h(t), h(t′)) ≤ c Δ_Sig(t, t′).

Therefore the game associated with t and W_k is the same as the game associated with t and W′_k, and W_k accepts t if and only if W′_k accepts t. Hence, L_k ∩ K_k = L′_k ∩ K_k.
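The distance of Definition 8.2.7 can be sketched in Python for finite trees (the book's trees may be infinite, in which case the definition is read coinductively; the tuple encoding and the names below are ours, not the book's). The sketch also exhibits the kind of contracting map, with constant 1/2, that a diagonal argument can exploit: a contracting map on a complete metric space has a fixed point.

```python
from fractions import Fraction

def dist(t, u):
    """Delta_Sig of Definition 8.2.7, on finite trees.

    A tree is a nested tuple: ('f', child1, ..., childm); a constant is ('a',).
    """
    if t == u:
        return Fraction(0)
    if t[0] != u[0]:                      # distinct root symbols
        return Fraction(1)
    # same root symbol: half of the maximal distance between children
    return Fraction(1, 2) * max(dist(ti, ui) for ti, ui in zip(t[1:], u[1:]))

def wrap(t):
    """The map t |-> g(t, t); it halves every nonzero distance, so it is contracting."""
    return ('g', t, t)
```

For instance, two trees differing only at depth 2 are at distance 1/4, and dist(wrap(x), wrap(y)) = (1/2) dist(x, y) for x ≠ y.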
Unfortunately, W′_k is not weak when k is odd; the transition q_k → c_k(q_k, q_k) ∨ d_k(q_k, q_T) ∨ d_k(q_T, q_k) of W′_k contains q_T of rank k − 1. But in state q_k the automaton cannot accept any tree when the rank of q_k is odd, for along at least one path in the tree it will remain for ever in this state. Therefore we can replace this transition by q_k → c_k(q_k, q_k) ∨ d_k(q_k, q_k). This automaton W″_k is now weak. Let L″_k be the language recognized by this automaton when k is odd, and be L′_k when k is even. Then L″_k is recognized by a weak automaton of Mostowski index (1, k). If the complement of L″_k were also recognized by a weak automaton A of Mostowski index (1, k), we would have t ∉ L″_k ⟺ Γ_A(t) ∈ L″_k, which is a contradiction when t is a fixed point of Γ_A. □

We have seen in Proposition 8.2.3 (page 195) that every automaton A is equivalent to a simply alternating automaton B of the same Mostowski index. But the construction we gave does not preserve weakness in general: if there is in A a transition q → f(q_1, ..., q_m) with q existential, the construction of B replaces this transition by q → f(q_1, q_T, ..., q_T) ∨ f(q_T, q_2, ..., q_T) ∨ ... ∨ f(q_T, q_T, ..., q_T, q_m) ∨ ⋁_{g ≠ f} g(q_T, q_T, ..., q_T). If A is weak, B is weak if we can assign to q_T an even rank greater than or equal to the rank of q. In particular, if k is even, we can assign the rank k to q_T and then B is weak. But if k is odd and if there is a transition q → f(q_1, ..., q_m) with q existential
of rank k, then B is not weak unless we assign to q_T the rank k + 1, and then B is of Mostowski index (1, k + 1). Since we may assume that each weak automaton of Mostowski index (1, k), with k even, is simply alternating, we get as a consequence of the previous theorem:

Theorem 8.3.2. The Mostowski index hierarchy of weak automata is strict.
8.4 Bibliographic notes and sources
The fact that the alternation hierarchy collapses to the level Π2 (i.e., νμ) for the powerset algebra of infinite words without intersection, but at the same time Π2 is different from Σ2, was discovered by D. Park in collaboration with J. Tiuryn (see [81]). The complexity of reduction of fixed-point terms to a Π2 form was studied recently in [90]. Collapsing of the analogous hierarchy with intersection to the level Comp(Σ1 ∪ Π1) was shown in [10]. On the other hand, Wagner [100] showed that (in our present terminology) the hierarchy induced by the Mostowski indices of deterministic automata over infinite words is infinite. A different μ-calculus of finite or infinite words was examined in [76], where the concatenation of ω-languages (but not intersection) was allowed. That μ-calculus captures the power of context-free grammars (over finite or infinite words), and the alternation hierarchy again collapses at the level
Comp(Σ1 ∪ Π1). For trees, Rabin [85] showed that (in our present terminology) Büchi automata have less expressive power than Rabin automata; the counterexample was the set of binary trees over alphabet {a, b} such that, along each path, b occurs only finitely often. This easily implies that the hierarchy in the powerset algebra of trees without intersection does not collapse at the level Π2. The strictness of that hierarchy was shown [75] in 1986 (see also [78]). For the powerset algebra of trees with intersection, as well as for the closely related modal μ-calculus, the hierarchy problem remained open for a decade (it was only known that the hierarchy does not collapse to the level Π2 or Σ2 [9]). In 1996, Bradfield [17] showed the strictness of the hierarchy for the modal μ-calculus, by transferring a hierarchy from arithmetic previously studied by Lubarsky [58] (a simpler and self-contained proof of this result was given later by the same author [18]). An independent proof of this result was given in the same year by Lenzi [57], who gave examples of formulas over n-ary trees requiring an increasing number of alternations. Neither proof automatically yielded the existence of a single powerset algebra of trees (with intersection) with an infinite hierarchy. This fact was shown later
independently in [19] and in [5]; it is this last proof that we have presented in this book. The μ-calculus formulas describing the winning positions in parity games were introduced for bipartite games by Emerson and Jutla [33] and generalized to arbitrary games by Walukiewicz [102]. The fact that these formulas also witness the strictness of the alternation-depth hierarchy was observed by Bradfield [18].
9. Distributivity and normal form results
We have seen in Chapter 5 that, in the μ-calculus over infinite words, the intersection operation ∧ is redundant: any closed fixed-point term is semantically equivalent to a fixed-point term without ∧. A similar property can be shown for the powerset algebra of trees, PT_Sig. It was first discovered by Muller and Schupp [72] and phrased in terms of automata: any alternating tree automaton can be simulated by a nondeterministic automaton. In this chapter, we generalize these results by providing a condition on μ-interpretations which is sufficient for elimination of intersection. (By analogy, we continue to call this property simulation.) Intuitively, this technical condition states that ∧ commutes with the other operators, as in the μ-calculus of words, where we have, e.g., aL ∩ aL′ = a(L ∩ L′).
9.1 The propositional μ-calculus
Let Γ be a set of symbols, consisting of two binary symbols ∨ and ∧, which will be used as infix operators, and two 0-ary symbols ⊥ and ⊤. The terms built up from this set of symbols are called propositional. Let D be the set of all interpretations I such that the domain D_I is a distributive complete lattice and ∨ and ∧ are interpreted as the least upper bound and the greatest lower bound in this domain. We say that two terms t and t′ of the same arity are equivalent modulo D, denoted by t ≡_D t′, if for any I ∈ D, [t]_I = [t′]_I. Similarly, we say that two vectors of the same length are equivalent if their components are pairwise equivalent. The following lemma is a straightforward generalization of Shannon's lemma (Lemma 3.1.1, page 71).

Lemma 9.1.1. For any functional term t over Γ and any variable x ∈ Var,

t ≡_D t[id{⊥/x}] ∨ (x ∧ t[id{⊤/x}]).
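Since the two-element lattice B = {0, 1} is a distributive complete lattice, and so a member of D, Lemma 9.1.1 can be sanity-checked there by brute force. The following Python sketch is ours, not the book's: it tests the expansion t ≡_D t[id{⊥/x}] ∨ (x ∧ t[id{⊤/x}]) for a Boolean function given as a callable, expanding on its first argument.

```python
from itertools import product

def shannon_holds(f, n):
    """Check f(x1,...,xn) == f[0/x1] OR (x1 AND f[1/x1]) on all of {0,1}^n."""
    for xs in product((0, 1), repeat=n):
        lhs = f(*xs)
        rhs = f(0, *xs[1:]) | (xs[0] & f(1, *xs[1:]))
        if lhs != rhs:
            return False
    return True
```

As expected, the check succeeds for functions built from ∨ and ∧ only, e.g. (x ∧ y) ∨ z, and fails for negation, which is not a term over Γ and is not monotonic.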
Proof. The proof is by induction on t, using the distributivity property.
- If t = ⊤, t = ⊥, or if t = y ≠ x, the result is obvious, since then t[id{⊥/x}] = t[id{⊤/x}] = t and t ∨ (x ∧ t) ≡_D t.
- If t = x, we have t[id{⊥/x}] ∨ (x ∧ t[id{⊤/x}]) = ⊥ ∨ (x ∧ ⊤) ≡_D x.
- Let a = t[id{⊥/x}], b = t[id{⊤/x}], a′ = t′[id{⊥/x}], b′ = t′[id{⊤/x}], and let us assume t ≡_D a ∨ (x ∧ b) and t′ ≡_D a′ ∨ (x ∧ b′). Then

t ∨ t′ ≡_D a ∨ (x ∧ b) ∨ a′ ∨ (x ∧ b′) ≡_D (a ∨ a′) ∨ (x ∧ (b ∨ b′))

and

t ∧ t′ ≡_D (a ∨ (x ∧ b)) ∧ (a′ ∨ (x ∧ b′))
      ≡_D (a ∧ a′) ∨ (x ∧ a ∧ b′) ∨ (a′ ∧ x ∧ b) ∨ (x ∧ b ∧ b′)
      ≡_D (a ∧ a′) ∨ (x ∧ ((a ∧ b′) ∨ (a′ ∧ b) ∨ (b ∧ b′))).
But, since a ≤ b and a′ ≤ b′, we have (a ∧ b′) ∨ (a′ ∧ b) ∨ (b ∧ b′) ≡_D b ∧ b′, hence t ∧ t′ ≡_D (a ∧ a′) ∨ (x ∧ (b ∧ b′)).

If p(F) > 0 then b can be uniquely decomposed in C by b = F^C(b_1, ..., b_{p(F)}), for some b_1, ..., b_{p(F)} ∈ B. We let, for i = 1, ..., p(F), α(wi) = b_i and T_{C,b0}(wi) = F_i, where F_i is the unique symbol in Sig^K − {eq} such that b_i = F_i^C(d_1, ..., d_{p(F_i)}). For all words v of length n + 1 that are not successors of such w's, we let α(v) = T_{C,b0}(v) = ⊥. Clearly, the invariant is maintained. Finally, we let dom T_{C,b0} = {w : T_{C,b0}(w) ≠ ⊥}. Recall that, in general, with any syntactic tree T over Sig^K − {eq} we associate a semialgebra T over Sig^K, with eq interpreted as always, and the remaining symbols interpreted as in Example 6.1.3 (see also Section 7.1.5). The following property comes easily from the above construction.

Lemma 10.4.11. T_{C,b0} restricted to dom T_{C,b0} is a syntactic tree over Sig^K − {eq}, and α restricted to dom T_{C,b0} is a reflective homomorphism from the semialgebra T_{C,b0} to C, such that α(ε) = b0.

In the sequel, by abuse of notation, we will denote these restrictions simply by T_{C,b0} and α, respectively. We are ready to state the following tree-model property.

Proposition 10.4.12. For any closed fixed-point term t over Sig~, one can compute a type K such that the following conditions are equivalent.
1. t has a model;
2. t^K (given by Definition 10.4.9) has a model which is a semialgebra T associated with a syntactic tree over Sig^K;
3. t^K has a model T as above, and moreover ε ∈ [t^K]_{PT}.

Proof. Let K be the type computed in Corollary 10.4.4.
(1 ⇒ 3). From Corollary 10.4.4 and Propositions 10.4.7 and 10.4.10, we know that if t has any model then t^K is satisfied in some co-deterministic semialgebra C. Let b0 ∈ [t^K]_{PC}. We construct the tree semialgebra T_{C,b0} and the homomorphism α : T_{C,b0} → C as above. Since α is reflective, we
248
10. Decision problems
have, by Corollary 6.3.8 (page 151), w ∈ [t^K]_{PT_{C,b0}} ⟺ α(w) ∈ [t^K]_{PC}; in particular, ε ∈ [t^K]_{PT_{C,b0}} ⟺ b0 ∈ [t^K]_{PC}.
(3 ⇒ 2) is trivial.
(2 ⇒ 1). If T is a model of t^K then, again by Proposition 10.4.10, the derived semialgebra δ(T) is a model of t. □
10.4.3 Finite models and decidability

Recall that the satisfiability problem for fixed-point terms is the question whether, for a given closed term t over Sig~, there exists a model, i.e., a semialgebra B such that [t]_{PB} is nonempty. We are ready to state the decidability result.

Theorem 10.4.13. The satisfiability problem for closed fixed-point terms is decidable.
Proof. From Proposition 10.4.12, we know that the problem whether a closed fixed-point term t over Sig~ has a model can be effectively reduced to the question whether, for some syntactic tree T over Sig^K, ε ∈ [t^K]_{PT}. By Proposition 6.3.9 (page 151), this last condition is equivalent to T ∈ [t^K]_{PT_{Sig^K}}, i.e., to the membership of T in the interpretation of t^K over the powerset algebra of all syntactic trees over Sig^K. Now, by the equivalence of automata and fixed-point terms (Theorem 7.3.6, page 186), we can construct an automaton A_{t^K} over Sig^K such that T ∈ [t^K]_{PT_{Sig^K}} if and only if T ∈ [A_{t^K}]_{PT_{Sig^K}}. Therefore, the original question can be effectively reduced to the problem whether [A_{t^K}]_{PT_{Sig^K}} ≠ ∅; in other words, to the nonemptiness problem of an automaton interpreted in the powerset tree algebra. By the simulation theorem for tree automata (Theorem 9.6.10, page 229), we can effectively construct a nondeterministic automaton A′_{t^K} such that [A′_{t^K}]_{PT_{Sig^K}} = [A_{t^K}]_{PT_{Sig^K}}. The decidability of the nonemptiness problem for nondeterministic tree automata is shown in Section 10.2.3, page 239. This remark completes the proof. □

From the equivalence of automata and fixed-point terms (Theorem 7.3.6, page 186), we immediately get the following.

Corollary 10.4.14. It is decidable whether, for a given automaton A (without variables), there exists a semialgebra B such that [A]_{PB} ≠ ∅.

Another consequence is the decidability of the equivalence problem.

Corollary 10.4.15. It is decidable whether two fixed-point terms have the same interpretation in all powerset algebras.
10.4 The satisfiability over powerset algebras
249
Proof. Assume first that the fixed-point terms t1 and t2 are closed. By Proposition 6.1.6, page 144, for each closed term t we can construct its dual t̄ such that, for any semialgebra B, [t̄]_{PB} is the complement of [t]_{PB}. Now, the inequality [t1]_{PB} ≠ [t2]_{PB} amounts to ([t1]_{PB} ∩ [t̄2]_{PB}) ∪ ([t̄1]_{PB} ∩ [t2]_{PB}) ≠ ∅, i.e., to

[dq(eq(t1, t̄2), eq(t̄1, t2))]_{PB} ≠ ∅.

If t1, t2 are not closed, we have ar(t1) ∪ ar(t2) = {z1, ..., zm}. Then we enrich the signature by some fresh constants d1, ..., dm. Now, it is easy to see that the terms t1 and t2 do not have the same interpretation in a powerset algebra PB if and only if there exists an extension B′ of the semialgebra B by an interpretation of the constants d1, ..., dm, such that

[t1{d1/z1, ..., dm/zm}]_{PB′} ≠ [t2{d1/z1, ..., dm/zm}]_{PB′}.

Thus, in any case, the nonequivalence of t1 and t2 reduces to the nonemptiness of a closed fixed-point term. □

An analogous claim can, of course, be stated for automata.
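At the level of sets, the reduction in this proof rests on the fact that two subsets differ exactly when (A ∩ co-B) ∪ (co-A ∩ B), their symmetric difference, is nonempty; reading eq as intersection and its dual dq as union, this is the nonemptiness test applied to dq(eq(t1, t̄2), eq(t̄1, t2)). A tiny Python illustration (names and sets ours):

```python
def nonequivalent(A, B, universe):
    """A != B  iff  (A ∩ co-B) ∪ (co-A ∩ B) is nonempty,
    where co-X is the complement of X with respect to the universe."""
    co_A, co_B = universe - A, universe - B
    return bool((A & co_B) | (co_A & B))
```

The duals t̄ play exactly the role of the complements co_A and co_B above.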
The second main result of this section is the following.

Theorem 10.4.16 (Finite Model Property). If a fixed-point term has a model, then it has also a finite model.

Proof. As in the proof of the previous theorem, we use the existence of an automaton A_{t^K}, and of a nondeterministic automaton A′_{t^K}, such that t has a model if and only if [A′_{t^K}]_{PT_{Sig^K}} ≠ ∅. Then, if t has a model, A′_{t^K} accepts some syntactic tree over Sig^K. The latter, by the Regularity Theorem (page 241), implies that A′_{t^K} accepts some regular tree, say T. Since the automaton A′_{t^K} is equivalent to A_{t^K} over the powerset tree algebra, and A_{t^K} is semantically equivalent to t^K, we have T ∈ [t^K]_{PT_{Sig^K}}. By Proposition 6.3.9, this implies ε ∈ [t^K]_{PT}. Now, by Proposition 10.3.3 (page 241), the quotient of T under the equivalence ∼ is a finite semialgebra T/∼ over Sig^K, and the canonical homomorphism h : w ↦ [w]∼ is reflective (where [w]∼ denotes the equivalence class of w under ∼). Again, by Corollary 6.3.8 (page 151), ε ∈ [t^K]_{PT} implies [ε]∼ ∈ [t^K]_{P(T/∼)}. By Proposition 10.4.10 we conclude that [ε]∼ ∈ [t]_{Pδ(T/∼)}, and hence the derived semialgebra δ(T/∼) is the desired finite model of t. □
Remark on complexity. Let n = |t|. It follows from the definition of the homomorphism auto translating fixed-point terms to automata (Proposition 7.3.3, page 181) that the automaton A_t equivalent to a fixed-point term t can have a size linear in the size of t. Thus the size of the signature Sig^K constructed in Corollary 10.4.4 can be bounded by n|Sig|. If we consider the signature Sig as fixed, we can estimate the size of Sig^K as n^{O(1)}, and the same can be said about the size of the term t^K constructed in Definition 10.4.9. Now, again, the size of the automaton A_{t^K} equivalent to t^K (see the proof of Theorem 10.4.13) is linear in the size of t^K, and hence can be estimated by n^{O(1)}. The complexity of the whole procedure relies mainly on the complexity of the translation of the automaton A_{t^K} into the nondeterministic automaton A′_{t^K}. This is probably best accomplished in the work by Muller and Schupp [72], who translate an alternating automaton with the Rabin chain condition (equivalent to the parity condition) of m states into a nondeterministic tree automaton with 2^{m^{O(1)}} states and of Mostowski index m^{O(1)}. (The way presented in this book, via the Simulation Theorem, is less direct.) This gives us a nondeterministic tree automaton of size 2^{n^{O(1)}} and of Mostowski index n^{O(1)}, whose nonemptiness is equivalent to the satisfiability of t (with n = |t|). Now, the size of a minimal regular tree accepted by a nondeterministic automaton (if any) is polynomial in the number of states of this automaton (in fact, linear, as remarked by Emerson [31]). The size of a minimal finite model of t amounts to the size of a minimal regular tree accepted by A′_{t^K}. The complexity of the nonemptiness problem of a nondeterministic tree automaton with m states and Mostowski index k is best estimated by Emerson and Jutla [32] as m^{O(k)}.
Thus we can test the nonemptiness of A′_{t^K} in time (2^{n^{O(1)}})^{n^{O(1)}} = 2^{n^{O(1)}}. This finally allows us to estimate the complexity of the satisfiability problem for fixed-point terms, as well as the size of a finite model of a satisfiable term t, by a single exponential function in the size of t. It should also be noticed that, since all the constructions used in the proof of the above theorem are effective, if t is satisfiable, one can effectively construct a finite model of t.
10.5 Bibliographic notes and sources
Let us review a number of related decidability results. The decidability of the nonemptiness problem for Rabin automata on infinite trees was shown by Rabin [83]; it constituted one of the steps in the proof of the Rabin Tree Theorem (see the notes after the preceding chapter). The Regularity Theorem was slightly later discovered also by Rabin [84]. Emerson [31] used it to show that the aforementioned nonemptiness problem is in the class NP (in fact, NP-complete, as shown later by Emerson and Jutla [32]). The decidability of the satisfiability problem for the modal μ-calculus was first shown by Kozen and Parikh [56], by a reduction to the monadic second-order theory of the binary tree and an application of the Rabin Tree Theorem. Streett and Emerson [91] gave an elementary decision procedure by a direct reduction to the nonemptiness problem of tree automata; they also established the small model theorem for the modal μ-calculus. A single-exponential deterministic-time upper bound was shown later by Emerson and Jutla [32], building on the result of Safra [86] on the complexity of determinization of automata on infinite words. The exponential lower bound follows from the work of Fischer and Ladner [37] on propositional dynamic logic. This lower bound applies, of course, also to the satisfiability problem for our μ-calculus over arbitrary powerset algebras. A simple, polynomial-time procedure for the nonemptiness problem in the powerset algebra of trees without intersection, as well as for the satisfiability problem over all powerset algebras without intersection, was given in [78]. A similar result for disjunctive formulas of the modal μ-calculus (see bibliographic notes in Chapter 9) was independently shown by Janin and Walukiewicz [45]. Both procedures use a reduction to (scalar) terms of the Boolean μ-calculus. The decidability (in exponential time) of the satisfiability problem for the μ-calculus over powerset algebras (with intersection and dualities), shown in Section 10.4, can also be derived as a corollary to the results of McAllester, Givan, Witty and Kozen on Tarskian set constraints [60].
11. Algorithms
We have seen in the previous chapter that some important decision questions for the μ-calculus amount to computing the value of a finite vectorial Boolean fixed-point term. We can also note that computing the value of [t]_I in any finite μ-interpretation I can be reduced to this problem, by first constructing the induced powerset interpretation (cf. Section 2.5, page 53), and then exploiting the correspondence between powerset interpretations and Boolean terms (explained in Section 3.2, page 72). A similar reduction applies to the problem of determining the winner at a given position of a parity game. (By the results of Chapter 4, the two problems are, in fact, equivalent.) The problem of evaluating finite vectorial Boolean fixed-point terms is, at the time of writing, a fascinating challenge. No algorithm polynomial in the size of the term has been discovered. On the other hand, we know that the problem (more precisely, the decision version of it, namely, whether the first component is 1) is in the complexity class NP ∩ co-NP [34] (even in UP ∩ co-UP [49]). Thus we may believe that the question is tantalizingly close to feasible. We devote this last chapter to presenting some known algorithmic solutions to the problem.
11.1 Evaluation of vectorial fixed-point terms
We already know that if h is a monotonic mapping from B to B, then νx.h(x) = h(1) and μx.h(x) = h(0). It follows that a (scalar) Boolean fixed-point term can be evaluated in time linear in the size of this term. Very often, however, the fixed points to be evaluated are given in the vectorial form

θ1x1.θ2x2. ⋯ .θkxk.t(x1, x2, ..., xk).

Of course, this form can be transformed into scalar ones (indeed, this is its very definition: see Section 2.7, page 58), but it costs an exponential blow-up in the size of the term to be evaluated. Fortunately, more efficient algorithms exist, which deal directly with vectorial expressions.
First of all, for any monotonic function t : B^x → B^x, where x is a vector of variables of length n (for the sake of notational simplicity, here we identify a vector of variables with the set of its components), let us consider the following sequences of elements of B^x:

a0 = 0,  a1 = t(0),  a2 = t(a1),  ...,  a_{i+1} = t(a_i),  ...

and

b0 = 1,  b1 = t(1),  b2 = t(b1),  ...,  b_{i+1} = t(b_i),  ...

Lemma 11.1.1. μx.t(x) = a_n and νx.t(x) = b_n.
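Lemma 11.1.1 yields the obvious evaluation procedure: iterate t at most n times from the bottom, respectively the top, element of B^x. A minimal Python sketch (the encoding is ours: an element of B^n is a tuple of 0/1 values):

```python
def lfp(t, n):
    """mu x.t(x): iterate a monotonic t : B^n -> B^n from 0 = (0,...,0).

    By Lemma 11.1.1, the increasing chain 0 <= t(0) <= t(t(0)) <= ...
    stabilizes after at most n steps, so a_n is the least fixed point.
    """
    a = (0,) * n
    for _ in range(n):
        a = t(a)
    return a

def gfp(t, n):
    """nu x.t(x): the dual iteration, starting from 1 = (1,...,1)."""
    b = (1,) * n
    for _ in range(n):
        b = t(b)
    return b
```

For a scalar h (n = 1) this specializes to μx.h(x) = h(0) and νx.h(x) = h(1), as recalled at the beginning of this section.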
Proof. Obviously, the sequence (a_i) is increasing and the sequence (b_i) is decreasing, so both stabilize after at most n steps; hence a_n is the least and b_n the greatest fixed point of t. □

As an example, let x = (x1, x2, x3), y = (y1, y2, y3), and

t(x, y) = ⟨x2 y3 + x3 y2, x1 + x3 + y1, 0⟩.

We want to compute νy.μx.t(x, y). Let f(y) = μx.t(x, y). By substituting 0 for y4 in Example 11.1.4, we know the value of f(y1, y2, y3). Then νy.μx.t(x, y) = a3, where a1 = f(1), a2 = f(a1), a3 = f(a2). We have a1 = f(1) = μx.t(x, 1) = b_{1,3}, where

b_{1,1} = t(0, 1) = (0, 1, 0),
b_{1,2} = t(b_{1,1}, 1) = t((0, 1, 0), (1, 1, 1)) = (1, 1, 0),
b_{1,3} = t(b_{1,2}, 1) = t((1, 1, 0), (1, 1, 1)) = (1, 1, 0).
Thus, a1 = (1, 1, 0), and we compute a2 = b_{2,3}, where

b_{2,1} = t(0, a1) = t((0, 0, 0), (1, 1, 0)) = (0, 1, 0),
b_{2,2} = t(b_{2,1}, a1) = t((0, 1, 0), (1, 1, 0)) = (0, 1, 0),
b_{2,3} = t(b_{2,2}, a1) = t((0, 1, 0), (1, 1, 0)) = (0, 1, 0).

Finally, a3 = b_{3,3}, where

b_{3,1} = t(0, a2) = t((0, 0, 0), (0, 1, 0)) = (0, 0, 0),
b_{3,2} = t(b_{3,1}, a2) = t((0, 0, 0), (0, 1, 0)) = (0, 0, 0),
b_{3,3} = t(b_{3,2}, a2) = t((0, 0, 0), (0, 1, 0)) = (0, 0, 0).
□

The time complexity of this procedure relies on the following lemma, which is an immediate corollary of Lemma 11.1.2 (page 254).

Lemma 11.1.6. Let t : B^x × B^y → B^x be monotonic, and g(y) = θx.t(x, y). If for any a ∈ B^x and any b ∈ B^y the time needed for computing t(a, b) is bounded from above by T, then the time needed for computing g(b) is bounded from above by nT, where n is the length of the vector x.

By iteratively applying this lemma, we get the following result.

Proposition 11.1.7. The time needed for computing θ1x1.θ2x2. ⋯ .θkxk.t(x1, x2, ..., xk) is at most n^k |t|.
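The naive nested evaluation just analyzed can be replayed on the example νy.μx.t(x, y) computed above, with t(x, y) = ⟨x2 y3 + x3 y2, x1 + x3 + y1, 0⟩: the inner μ restarts from 0 at every step of the outer ν iteration, which itself starts from 1. The Python encoding below is ours (| stands for ∨, & for ∧):

```python
def t(x, y):
    # t(x, y) = < x2 y3 + x3 y2, x1 + x3 + y1, 0 >
    x1, x2, x3 = x
    y1, y2, y3 = y
    return ((x2 & y3) | (x3 & y2), x1 | x3 | y1, 0)

def f(y):
    """f(y) = mu x.t(x, y): three inner iterations, restarting from x = 0."""
    x = (0, 0, 0)
    for _ in range(3):
        x = t(x, y)
    return x

def nu_mu():
    """nu y.mu x.t(x, y): three outer iterations, starting from y = 1."""
    y = (1, 1, 1)
    for _ in range(3):
        y = f(y)
    return y  # -> (0, 0, 0)
```

Running nu_mu() reproduces the hand computation: f(1) = (1, 1, 0), then f((1, 1, 0)) = (0, 1, 0), and finally f((0, 1, 0)) = (0, 0, 0), so νy.μx.t(x, y) = (0, 0, 0).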
11.1.2 Improved algorithms

The naive algorithm can be improved in several ways.

Avoiding useless recomputations. In the naive algorithm for computing

θ1x1.θ2x2. ⋯ .θkxk.t(x1, x2, ..., xk),

each fixed point

θixi.θ_{i+1}x_{i+1}. ⋯ .θkxk.t(b1, b2, ..., b_{i-1}, xi, x_{i+1}, ..., xk)

is computed a number of times, for different values b1, ..., b_{i-1}. Each time, the iterative computation of this fixed point starts with the initial value 0 or 1.
The algorithm proposed in [20] is based on the observation that it is not always necessary to start the computation with 0 or 1 (see Lemma 11.1.9 below), and thus avoids a lot of useless computations, which makes the algorithm run in time n^{k/2}|t| instead of n^k|t| for the naive algorithm. Let us state this result more precisely. Without loss of generality we may restrict ourselves to vectorial Boolean terms

θ1x1.θ2x2. ⋯ .θkxk.t(x1, x2, ..., xk),

where θi is equal to μ if i is odd and to ν if i is even. (By the golden lemma, Proposition 1.3.2, page 19, any vectorial term can be reduced to such a form without increasing its size; see also Section 2.7.4, page 63.) We may also assume that k is odd, since t(x1, x2, ..., xk) = θz.t(x1, x2, ..., xk) when no variable of z occurs in t.
Proposition 11.1.8. There is an algorithm which computes

μxk.νyk.μx_{k-1}.νy_{k-1}. ⋯ .μx1.νy1.μx0.t(xk, yk, ..., x1, y1, x0)

in time (1 + (k + 1)n) n^k |t|.

Let p > 0, q > 0, let f(x, y, u) : B^{pq} × B^q × B^r → B^{pq} be a vectorial functional Boolean term, and let I be a subset of {1, ..., pq} of cardinality q. Then there exist a vectorial functional Boolean term h(z, u) : B^{pq²} × B^r → B^{pq²} and a subset J of {1, ..., pq²} of cardinality q such that

νy.π_I μx.f(x, y, u) = π_J μz.h(z, u).

Moreover, the size of h is equal to q × |f|, and the construction of h is done in time q × |f|.
U ) , . . . , f ( Z i , 7 1 1 Z i _ l , U ) , . . . , f ( Z q , 7[IZq1, U)>.
Let t t z . h ( z , u ) be ( h l ( u ) , h 2 ( u ) , . . . ,hq(u)>. We claim that for i 1 , . . . , q, h i ( u )  # x . f ( x , 7~ihi1 (u), u) where, by convention, 7~xho(u)  1. By Proposition 1.3.2 (page 19) we have # z . h ( z , u ) #.