HANDBOOK OF PROOF THEORY
STUDIES IN LOGIC AND THE FOUNDATIONS OF MATHEMATICS VOLUME 137
Honorary Editor: P. Suppes
Editors:
S. Abramsky, London
S. Artemov, Moscow
R. A. Shore, Ithaca
A. S. Troelstra, Amsterdam

ELSEVIER
AMSTERDAM · LAUSANNE · NEW YORK · OXFORD · SHANNON · SINGAPORE · TOKYO
HANDBOOK OF PROOF THEORY
Edited by SAMUEL R. BUSS
University of California, San Diego
1998
ELSEVIER SCIENCE B.V. Sara Burgerhartstraat 25 P.O. Box 211, 1000 AE Amsterdam The Netherlands
Library of Congress Cataloging-in-Publication Data
Handbook of proof theory / edited by Samuel R. Buss.
p. cm. -- (Studies in logic and the foundations of mathematics ; v. 137)
Includes bibliographic references and indexes.
ISBN 0-444-89840-9 (alk. paper)
1. Proof theory. I. Buss, Samuel R. II. Series.
QA9.54.H35 1998  511.3--dc21  98-18922 CIP
ISBN: 0-444-89840-9
© 1998 Elsevier Science B.V. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science B.V., Copyright & Permissions Department, P.O. Box 521, 1000 AM Amsterdam, The Netherlands.
Special regulations for readers in the U.S.A.: This publication has been registered with the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923. Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the U.S.A. All other copyright questions, including photocopying outside of the U.S.A., should be referred to the copyright owner, Elsevier Science B.V., unless otherwise specified.
No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.
The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
Printed in The Netherlands.
Preface
Proof theory is the study of proofs as formal objects and is concerned with a broad range of related topics. It is one of the central topics of mathematical logic and has applications in many areas of mathematics, philosophy, and computer science. Historically, proof theory was developed by mathematicians and philosophers as a formalization for mathematical reasoning; however, proof theory has gradually become increasingly important for computer science, and nowadays proof theory and theoretical computer science are recognized as being very closely connected.

This volume contains articles covering a broad spectrum of proof theory, with an emphasis on its mathematical aspects. The articles should not only be interesting to specialists in proof theory, but should also be accessible to a diverse audience, including logicians, mathematicians, computer scientists and philosophers. We have attempted to include many of the central topics in proof theory, but have opted for self-contained expository articles rather than encyclopedic coverage. Thus, a number of important topics have been largely omitted, but with the advantage that the included material is covered in more detail and at greater depth. The chapters are arranged so that the two introductory articles come first; these are then followed by articles from the core classical areas of proof theory; finally, the handbook ends with articles that deal with topics closely related to computer science.

This handbook was initiated at the suggestion of the publisher, as a partial successor to the very successful Handbook of Mathematical Logic, edited by J. Barwise. Only one quarter of the 1977 Handbook of Mathematical Logic was devoted to proof theory, and since then there has been considerable progress in this area; as a result, there is remarkably little overlap between the contents of the Handbook of Mathematical Logic and the present volume.

Sam Buss
La Jolla, California
November 1997
List of Contributors
J. Avigad, Carnegie Mellon University (Ch. V)
S.R. Buss, University of California, San Diego (Ch. I and II)
R.L. Constable, Cornell University (Ch. X)
M. Fairtlough, University of Sheffield (Ch. III)
S. Feferman, Stanford University (Ch. V)
G. Jäger, Universität Bern (Ch. IX)
G. Japaridze, University of Pennsylvania (Ch. VII)
D. de Jongh, University of Amsterdam (Ch. VII)
P. Pudlák, Academy of Sciences of the Czech Republic (Ch. VIII)
W. Pohlers, Westfälische Wilhelms-Universität (Ch. IV)
R.F. Stärk, Universität Freiburg (Ch. IX)
A.S. Troelstra, University of Amsterdam (Ch. VI)
S.S. Wainer, University of Leeds (Ch. III)
Table of Contents

Preface ..... v
List of Contributors ..... vii
Chapter I. An Introduction to Proof Theory
    Samuel R. Buss ..... 1
Chapter II. First-Order Proof Theory of Arithmetic
    Samuel R. Buss ..... 79
Chapter III. Hierarchies of Provably Recursive Functions
    Matt Fairtlough and Stanley S. Wainer ..... 149
Chapter IV. Subsystems of Set Theory and Second Order Number Theory
    Wolfram Pohlers ..... 209
Chapter V. Gödel's Functional ("Dialectica") Interpretation
    Jeremy Avigad and Solomon Feferman ..... 337
Chapter VI. Realizability
    Anne S. Troelstra ..... 407
Chapter VII. The Logic of Provability
    Giorgi Japaridze and Dick de Jongh ..... 475
Chapter VIII. The Lengths of Proofs
    Pavel Pudlák ..... 547
Chapter IX. A Proof-Theoretic Framework for Logic Programming
    Gerhard Jäger and Robert F. Stärk ..... 639
Chapter X. Types in Logic, Mathematics and Programming
    Robert L. Constable ..... 683
Name Index ..... 787
Subject Index ..... 797
CHAPTER I
An Introduction to Proof Theory

Samuel R. Buss
Departments of Mathematics and Computer Science, University of California, San Diego, La Jolla, California 92093-0112, USA
Contents

1. Proof theory of propositional logic ..... 3
   1.1. Frege proof systems ..... 5
   1.2. The propositional sequent calculus ..... 10
   1.3. Propositional resolution refutations ..... 18
2. Proof theory of first-order logic ..... 26
   2.1. Syntax and semantics ..... 26
   2.2. Hilbert-style proof systems ..... 29
   2.3. The first-order sequent calculus ..... 31
   2.4. Cut elimination ..... 36
   2.5. Herbrand's theorem, interpolation and definability theorems ..... 48
   2.6. First-order logic and resolution refutations ..... 59
3. Proof theory for other logics ..... 64
   3.1. Intuitionistic logic ..... 64
   3.2. Linear logic ..... 70
References ..... 74

HANDBOOK OF PROOF THEORY
Edited by S. R. Buss
© 1998 Elsevier Science B.V. All rights reserved
Proof Theory is the area of mathematics which studies the concepts of mathematical proof and mathematical provability. Since the notion of "proof" plays a central role in mathematics as the means by which the truth or falsity of mathematical propositions is established, Proof Theory is, in principle at least, the study of the foundations of all of mathematics. Of course, the use of Proof Theory as a foundation for mathematics is of necessity somewhat circular, since Proof Theory is itself a subfield of mathematics.

There are two distinct viewpoints of what a mathematical proof is. The first view is that proofs are social conventions by which mathematicians convince one another of the truth of theorems. That is to say, a proof is expressed in natural language plus possibly symbols and figures, and is sufficient to convince an expert of the correctness of a theorem. Examples of social proofs include the kinds of proofs that are presented in conversations or published in articles. Of course, it is impossible to precisely define what constitutes a valid proof in this social sense, and the standards for valid proofs may vary with the audience and over time. The second view of proofs is more narrow in scope: in this view, a proof consists of a string of symbols which satisfies some precisely stated set of rules and which proves a theorem, which itself must also be expressed as a string of symbols. According to this view, mathematics can be regarded as a 'game' played with strings of symbols according to some precisely defined rules. Proofs of the latter kind are called "formal" proofs to distinguish them from "social" proofs.

In practice, social proofs and formal proofs are very closely related. Firstly, a formal proof can serve as a social proof (although it may be very tedious and unintuitive) provided it is formalized in a proof system whose validity is trusted.
Secondly, the standards for social proofs are sufficiently high that, in order for a proof to be socially accepted, it should be possible (in principle!) to generate a formal proof corresponding to the social proof. Indeed, this offers an explanation for the fact that there are generally accepted standards for social proofs; namely, the implicit requirement that proofs can be expressed, in principle, in a formal proof system enforces and determines the generally accepted standards for social proofs. Proof Theory is concerned almost exclusively with the study of formal proofs: this is justified, in part, by the close connection between social and formal proofs, and it is necessitated by the fact that only formal proofs are subject to mathematical analysis. The principal tasks of Proof Theory can be summarized as follows. First, to formulate systems of logic and sets of axioms which are appropriate for formalizing mathematical proofs and to characterize what results of mathematics follow from certain axioms; or, in other words, to investigate the prooftheoretic strength of particular formal systems. Second, to study the structure of formal proofs; for instance, to find normal forms for proofs and to establish syntactic facts about proofs. This is the study of proofs as objects of independent interest. Third, to study what kind of additional information can be extracted from proofs beyond the truth of the theorem being proved. In certain cases, proofs may contain computational or constructive information. Fourth, to study how best to construct formal proofs; e.g., what kinds of proofs can be efficiently generated by computers?
The study of Proof Theory is traditionally motivated by the problem of formalizing mathematical proofs; the original formulation of first-order logic by Frege [1879] was the first successful step in this direction. Increasingly, there have been attempts to extend Mathematical Logic to be applicable to other domains; for example, intuitionistic logic deals with the formalization of constructive proofs, and logic programming is a widely used tool for artificial intelligence. In these and other domains, Proof Theory is of central importance because of the possibility of computer generation and manipulation of formal proofs.

This handbook covers the central areas of Proof Theory, especially the mathematical aspects of Proof Theory, but largely omits the philosophical aspects of proof theory. This first chapter is intended to be an overview and introduction to mathematical proof theory. It concentrates on the proof theory of classical logic, especially propositional logic and first-order logic. This is for two reasons: firstly, classical first-order logic is by far the most widely used framework for mathematical reasoning, and secondly, many results and techniques of classical first-order logic frequently carry over with relatively minor modifications to other logics. This introductory chapter will deal primarily with the sequent calculus and resolution, and to a lesser extent, Hilbert-style proof systems and the natural deduction proof system. We first examine proof systems for propositional logic, then proof systems for first-order logic. Next we consider some applications of cut elimination, which is arguably the central theorem of proof theory. Finally, we review the proof theory of some non-classical logics, including intuitionistic logic and linear logic.

1. Proof theory of propositional logic
Classical propositional logic, also called sentential logic, deals with sentences and propositions as abstract units which take on distinct True/False values. The basic syntactic units of propositional logic are variables which represent atomic propositions which may have value either True or False. Propositional variables are combined with Boolean functions (also called connectives): a k-ary Boolean function is a mapping from {T, F}^k to {T, F}, where we use T and F to represent True and False. The most frequently used examples of Boolean functions are the connectives ⊤ and ⊥, which are the 0-ary functions with values T and F, respectively; the binary connectives ∧, ∨, ⊃, ↔ and ⊕ for "and", "or", "if-then", "if-and-only-if" and "parity"; and the unary connective ¬ for negation. Note that ∨ is the inclusive-or and ⊕ is the exclusive-or.

We shall henceforth let the set of propositional variables be V = {p₁, p₂, p₃, ...}; however, our theorems below hold also for uncountable sets of propositional variables. The set of formulas is inductively defined by stating that every propositional variable is a formula, and that if A and B are formulas, then (¬A), (A ∧ B), (A ∨ B), (A ⊃ B), etc., are formulas. A truth assignment consists of an assignment of True/False values to the propositional variables, i.e., a truth assignment is a mapping τ : V → {T, F}.
A truth assignment can be extended to have as domain the set of all formulas in the obvious way, according to Table 1; we write τ(A) for the truth value of the formula A induced by the truth assignment τ.
Table 1: Values of a truth assignment

A  B | (¬A) | (A ∧ B) | (A ∨ B) | (A ⊃ B) | (A ↔ B) | (A ⊕ B)
T  T |  F   |    T    |    T    |    T    |    T    |    F
T  F |  F   |    F    |    T    |    F    |    F    |    T
F  T |  T   |    F    |    T    |    T    |    F    |    T
F  F |  T   |    F    |    F    |    T    |    T    |    F
A formula A involving only variables among p₁, ..., p_k defines a k-ary Boolean function f_A, by letting f_A(x₁, ..., x_k) equal the truth value τ(A) where τ(p_i) = x_i for all i. A language is a set of connectives which may be used in the formation of L-formulas. A language L is complete if and only if every Boolean function can be defined by an L-formula. Propositional logic can be formulated with any complete (usually finite) language L; for the time being, we shall use the language ¬, ∧, ∨ and ⊃.

A propositional formula A is said to be a tautology or to be (classically) valid if A is assigned the value T by every truth assignment. We write ⊨ A to denote that A is a tautology. The formula A is satisfiable if there is some truth assignment that gives it value T. If Γ is a set of propositional formulas, then Γ is satisfiable if there is some truth assignment that simultaneously satisfies all members of Γ. We say Γ tautologically implies A, or Γ ⊨ A, if every truth assignment which satisfies Γ also satisfies A.

One of the central problems of propositional logic is to find useful methods for recognizing tautologies; since A is a tautology if and only if ¬A is not satisfiable, this is essentially the same as the problem of finding methods for recognizing satisfiable formulas. Of course, the set of tautologies is decidable, since to verify that a formula A with n distinct propositional variables is a tautology, one need merely check that the 2^n distinct truth assignments to these variables all give A the value T. This brute-force 'method of truth-tables' is not entirely satisfactory; firstly, because it can involve an exorbitant amount of computation, and secondly, because it provides no intuition as to why the formula is, or is not, a tautology. For these reasons, it is often advantageous to prove that A is a tautology instead of using the method of truth-tables. The next three sections discuss three commonly used propositional proof systems.
The so-called Frege proof systems are perhaps the most widely used and are based on modus ponens. The sequent calculus systems provide an elegant proof system which combines the possibility of elegant proofs with the advantage of an extremely useful normal form for proofs. The resolution refutation proof systems are designed to allow efficient computerized search for proofs. Later, we will extend these three systems to first-order logic.
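The brute-force method of truth-tables described above is easy to make concrete. The sketch below is a hypothetical Python illustration, not from the handbook: the tuple-based formula encoding ('not', 'and', 'or', 'imp') and all function names are assumptions made for the example.

```python
from itertools import product

def eval_formula(f, tau):
    """Evaluate formula f under truth assignment tau (dict: variable -> bool).
    Formulas are a variable name (str), ('not', A), or ('and'|'or'|'imp', A, B)."""
    if isinstance(f, str):
        return tau[f]
    if f[0] == 'not':
        return not eval_formula(f[1], tau)
    a, b = eval_formula(f[1], tau), eval_formula(f[2], tau)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[f[0]]

def variables(f):
    """Collect the set of propositional variables occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def is_tautology(f):
    """The method of truth-tables: check all 2^n truth assignments."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, bits)))
               for bits in product([True, False], repeat=len(vs)))
```

For instance, `is_tautology(('or', 'p', ('not', 'p')))` returns True, while `is_tautology(('imp', 'p', 'q'))` returns False; the exponential loop over assignments is exactly the "exorbitant amount of computation" the text warns about.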
1.1. Frege proof systems

The most commonly used propositional proof systems are based on the use of modus ponens as the sole rule of inference. Modus ponens is the inference rule which allows, for arbitrary A and B, the formula B to be inferred from the two hypotheses A ⊃ B and A; this is pictorially represented as

    A    A ⊃ B
    ----------
         B

In addition to this rule of inference, we need logical axioms that allow the inference of 'self-evident' tautologies from no hypotheses. There are many possible choices for sets of axioms: obviously, we wish to have a sufficiently strong set of axioms so that every tautology can be derived from the axioms by use of modus ponens. In addition, we wish to specify the axioms by a finite set of schemes.
1.1.1. Definition. A substitution σ is a mapping from the set of propositional variables to the set of propositional formulas. If A is a propositional formula, then the result of applying σ to A is denoted Aσ and is equal to the formula obtained by simultaneously replacing each variable appearing in A by its image under σ.

An example of a set of axiom schemes over the language ¬, ∧, ∨ and ⊃ is given in the next definition. We adopt conventions for omitting parentheses from descriptions of formulas by specifying that unary operators have the highest precedence, the connectives ∧ and ∨ have second highest precedence, and that ⊃ and ↔ have lowest precedence. All connectives of the same precedence are to be associated from right to left; for example, A ⊃ ¬B ⊃ C is a shorthand representation for the formula (A ⊃ ((¬B) ⊃ C)).

1.1.2. Definition. Consider the following set of axiom schemes:
p₁ ⊃ (p₂ ⊃ p₁)
(p₁ ⊃ p₂) ⊃ (p₁ ⊃ ¬p₂) ⊃ ¬p₁
(p₁ ⊃ p₂) ⊃ (p₁ ⊃ (p₂ ⊃ p₃)) ⊃ (p₁ ⊃ p₃)
¬¬p₁ ⊃ p₁
p₁ ⊃ p₁ ∨ p₂
p₂ ⊃ p₁ ∨ p₂
(p₁ ⊃ p₃) ⊃ (p₂ ⊃ p₃) ⊃ (p₁ ∨ p₂ ⊃ p₃)
p₁ ∧ p₂ ⊃ p₁
p₁ ∧ p₂ ⊃ p₂
p₁ ⊃ p₂ ⊃ p₁ ∧ p₂
The propositional proof system ℱ is defined to have as its axioms every substitution instance of the above formulas and to have modus ponens as its only rule. An ℱ-proof of a formula A is a sequence of formulas, each of which either is an ℱ-axiom or is inferred by modus ponens from two earlier formulas in the proof, such that the final formula in the proof is A. We write ⊢_ℱ A, or just ⊢ A, to say that A has an ℱ-proof. We write Γ ⊢_ℱ A, or just Γ ⊢ A, to say that A has a proof in which each formula either is deduced according to the axioms or inference rule of ℱ or is in Γ. In this case, we say that A is proved from the extra-logical hypotheses Γ; note that Γ may contain formulas which are not tautologies.
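The notions just defined (substitution, axioms as substitution instances of schemes, and proofs as sequences of formulas closed under modus ponens) can be sketched in Python. This is a hypothetical illustration, not the handbook's formulation: the tuple encoding and all names are assumptions, and only two of the ten schemes of Definition 1.1.2 are listed as samples.

```python
def substitute(formula, sigma):
    """Apply a substitution sigma (dict: variable -> formula), simultaneously
    replacing each variable in the formula by its image under sigma (Def. 1.1.1)."""
    if isinstance(formula, str):                      # a propositional variable
        return sigma.get(formula, formula)
    op, *args = formula
    return (op, *(substitute(a, sigma) for a in args))

def match(scheme, formula, binding):
    """Decide whether formula is a substitution instance of scheme, extending
    binding (scheme variable -> formula) consistently as we descend."""
    if isinstance(scheme, str):
        if scheme in binding:
            return binding[scheme] == formula
        binding[scheme] = formula
        return True
    return (isinstance(formula, tuple) and len(scheme) == len(formula)
            and scheme[0] == formula[0]
            and all(match(s, f, binding) for s, f in zip(scheme[1:], formula[1:])))

# Two sample schemes; 'imp' and 'not' encode implication and negation.
# The remaining schemes of Definition 1.1.2 would be listed the same way.
AXIOMS = [
    ('imp', 'p1', ('imp', 'p2', 'p1')),
    ('imp', ('not', ('not', 'p1')), 'p1'),
]

def is_axiom(f):
    return any(match(ax, f, {}) for ax in AXIOMS)

def check_proof(lines, hypotheses=()):
    """Verify a proof: each line must be a hypothesis, an axiom instance, or
    follow from two earlier lines by modus ponens; the last line is proved."""
    for i, f in enumerate(lines):
        ok = (f in hypotheses or is_axiom(f)
              or any(('imp', a, f) in lines[:i] for a in lines[:i]))
        if not ok:
            return False
    return bool(lines)
```

For example, `check_proof(['A', ('imp', 'A', 'B'), 'B'], hypotheses=('A', ('imp', 'A', 'B')))` verifies a single modus ponens step from extra-logical hypotheses.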
1.1.3. Soundness and completeness of ℱ. It is easy to prove that every ℱ-provable formula is a tautology, by noting that all axioms of ℱ are valid and that modus ponens preserves the property of being valid. Similarly, whenever Γ ⊢ A, then Γ tautologically implies A. In other words, ℱ is (implicationally) sound, which means that all provable formulas are valid (or are consequences of the extra-logical hypotheses Γ). Of course, any useful proof system ought to be sound, since the purpose of creating proofs is to establish the validity of a sentence. Remarkably, the system ℱ is also complete in that it can prove any valid formula. Thus the semantic notion of validity and the syntactic notion of provability coincide, and a formula is valid if and only if it is provable in ℱ.
Theorem. The propositional proof system ℱ is complete and is implicationally complete; namely:
(1) If A is a tautology, then ⊢_ℱ A.
(2) If Γ ⊨ A, then Γ ⊢_ℱ A.
The philosophical significance of the completeness theorem is that a finite set of (schematic) axioms and rules of inference are sufficient to establish the validity of any tautology. In hindsight, it is not surprising that this holds, since the method of truthtables already provides an algorithmic way of recognizing tautologies. Indeed, the proof of the completeness theorem given below, can be viewed as showing that the method of truth tables can be formalized within the system ~'. 1.1.4. Proof. We first observe that part (2) of the completeness theorem can be reduced to part (1) by a two step process. Firstly, note that the compactness theorem for propositional logic states that if F ~ A then there is a finite subset F0 of F which also tautologically implies A. A topological proof of the compactness theorem for propositional logic is sketched in 1.1.5 below. Thus F may, without loss of generality, be assumed to be a finite set of formulas, say F  {B1,..., Bk}. Secondly, note that F ~ A implies that B1 D B2 D ... D A is a tautology. So, by part (1), the latter formula has an JCproof, and by k additional modus ponens inferences, F k A. (To simplify notation, we write k instead of ~ .) It remains to prove part (1). We begin by establishing a series of special cases, (a)(k), of the completeness theorem, in order to "bootstrap" the propositional system ~'. We use symbols r r X for arbitrary formulas and II to represent any set of formulas.
(a) e r ~ r Proof: Combine the three axioms (r D r D r D (r D (r D r D r r D (r D r and r D (r D r D r with two uses of modus ponens.
D (r D r
(b) Deduction Theorem: F, r ~ r if and only if F k r D r Proof: The reverse implication is trivial. To prove the forward implication, suppose C1, C2,..., Ck is an ~'proof of r from F, r This means that Ck is r and that each
Introduction to Proof Theory
7
6'/is r is in F, is an axiom, or is inferred by modus ponens. It is straightforward to prove, by induction on i, that F ~ r D Ci for each Ci.
(c) r ~ r ~  r ~ r Proof: By the deduction theorem, it suffices to prove that r D r r F r To prove this, use the two axioms r D (r D 9r uses of modus ponens.
and (r D r
D (r D r
D 9r and three
(d) r 9r F r
Proof: From the axiom r D (~r D r
we have r F ~r D r Thus, by (c) we get r F 9r D r and by (b), r162 F ~~r Finally modus ponens with the axiom r D r gives the desired result. (e) 9 r 1 6 2 1 6 2 1 6 2
Proof: This former follows from (d) and the deduction theorem, and the latter follows from the axiom r D (r D r
(f) r ~r ~ ~(r ~ r Proof: It suffices to prove r ~ r D (r D r theorem, it suffices to prove r r D r F r modus ponens.
Thus, by (c) and the deduction The latter assertion is immediate from
(g) r 1 6 2 r 1 6 2
Proof: Two uses of modus ponens with the axiom r D r D (r A r (h) r F ,(r A r
and r F 9(r A r
Proof: For the first part, it suffices to show F 9r D (r A r suffices to show ~ (r A r similar.
and thus, by (c), it D r which is an axiom. The proof that r F 9(r A r is
(i) CFr162 andCFr162 Proof: r D (r V r
and r D (r V r
are axioms.
(j) ~r ~r e (r v r Proof: It suffices to prove r F 9r D 9(r V r 9r ~ (r V r D r For this, we combine (i) r F r D r by (e), (ii) F r D r by (a), (iii) F (r D r D (r D r D ((r V r D r with two uses of modus ponens.
and this, by (c), follows from
an axiom,
(k) r F ,9r
Proof: By (d), r 9r F r and obviously, r162
F 9,r
from the next lemma. 1.1.4.1. Lemma.
If F, r F r and F, 9r ~ r then F F r
So r F 9,r follows
Proof. By (b) and (c), the two hypotheses imply that Γ ⊢ ¬ψ ⊃ ¬φ and Γ ⊢ ¬ψ ⊃ ¬¬φ. These plus the two axioms (¬ψ ⊃ ¬φ) ⊃ ((¬ψ ⊃ ¬¬φ) ⊃ ¬¬ψ) and ¬¬ψ ⊃ ψ give Γ ⊢ ψ. □

1.1.4.2. Lemma. Let the formula A involve only the propositional variables among p₁, ..., p_n. For 1 ≤ i ≤ n, suppose that B_i is either p_i or ¬p_i. Then, either B₁, ..., B_n ⊢ A or B₁, ..., B_n ⊢ ¬A.
Proof. Define τ to be a truth assignment that makes each B_i true. By the soundness theorem, A (respectively, ¬A) can be proved from the hypotheses B₁, ..., B_n only if τ(A) = T (respectively, τ(A) = F). Lemma 1.1.4.2 asserts that the converse holds too. The lemma is proved by induction on the complexity of A. In the base case, A is just p_i; this case is trivial to prove since B_i is either p_i or ¬p_i. Now suppose A is a formula A₁ ∨ A₂. If τ(A) = T, then we must have τ(A_i) = T for some i ∈ {1, 2}; the induction hypothesis implies that B₁, ..., B_n ⊢ A_i and thus, by (i) above, B₁, ..., B_n ⊢ A. On the other hand, if τ(A) = F, then τ(A₁) = τ(A₂) = F, so the induction hypothesis implies that B₁, ..., B_n ⊢ ¬A_i for both i = 1 and i = 2. From this, (j) implies that B₁, ..., B_n ⊢ ¬A. The cases where A has outermost connective ∧, ⊃ or ¬ are proved similarly. □

We are now ready to complete the proof of the Completeness Theorem 1.1.3. Suppose A is a tautology. We claim that Lemma 1.1.4.2 can be strengthened to B₁, ..., B_k ⊢ A where, as before, each B_i is either p_i or ¬p_i, but now 0 ≤ k ≤ n is permitted. We prove this by induction on k = n, n−1, ..., 1, 0. For k = n, this is just Lemma 1.1.4.2. For the induction step, note that B₁, ..., B_k ⊢ A follows from B₁, ..., B_k, p_{k+1} ⊢ A and B₁, ..., B_k, ¬p_{k+1} ⊢ A by Lemma 1.1.4.1. When k = 0, we have ⊢ A, which proves the Completeness Theorem. Q.E.D. Theorem 1.1.3

1.1.5. It still remains to prove the compactness theorem for propositional logic. This theorem states:

Compactness Theorem. Let Γ be a set of propositional formulas.
(1) Γ is satisfiable if and only if every finite subset of Γ is satisfiable.
(2) Γ ⊨ A if and only if there is a finite subset Γ₀ of Γ such that Γ₀ ⊨ A.

Since Γ ⊨ A is equivalent to Γ ∪ {¬A} being unsatisfiable, (2) is implied by (1).
It is fairly easy to prove the compactness theorem directly, and most introductory books in mathematical logic present such a proof. Here we shall instead give a proof based on the Tychonoff theorem; obviously this connection to topology is the reason for the name 'compactness theorem.'
Proof. Let V be the set of propositional variables used in Γ; the sets Γ and V need not necessarily be countable. Let 2^V denote the set of truth assignments on V and endow 2^V with the product topology by viewing it as the product of |V| copies of the two-element space with the discrete topology. That is to say, the subbasis elements of 2^V are the sets B_{p,i} = {τ : τ(p) = i} for p ∈ V and i ∈ {T, F}. Note that these subbasis elements are both open and closed. Recall that the Tychonoff theorem states that an arbitrary product of compact spaces is compact; in particular, 2^V is compact. (See Munkres [1975] for background material on topology.)

For φ ∈ Γ, define D_φ = {τ ∈ 2^V : τ ⊨ φ}. Since φ only involves finitely many variables, each D_φ is both open and closed. Now Γ is satisfiable if and only if the intersection ⋂_{φ∈Γ} D_φ is nonempty. By the compactness of 2^V, the latter condition is equivalent to the sets ⋂_{φ∈Γ₀} D_φ being nonempty for all finite Γ₀ ⊆ Γ. This, in turn, is equivalent to each finite subset Γ₀ of Γ being satisfiable. □

The compactness theorem for first-order logic is more difficult; a purely model-theoretic proof can be given with ultrafilters (see, e.g., Eklof [1977]). We include a proof-theoretic proof of the compactness theorem for first-order logic for countable languages in section 2.3.7 below.
Most of the commonly used proof systems similar to ~" are based on modus ponens as the only rule of inference; however, some (nonFrege) systems also incorporate a version of the deduction theorem as a rule of inference. In these systems, if B has been inferred from A, then the formula A D B may also be inferred. An example of such a system is the propositional fragment of the natural deduction proof system described in section 2.4.8 below. Other rules of inference that are commonly allowed in propositional proof systems include the substitution rule which allows any instance of r to be inferred from r and the extension rule which permits the introduction of abbreviations for long formulas. These two systems appear to be more powerful than Frege systems in that they seem to allow substantially shorter proofs of certain tautologies. However, whether they actually are significantly more powerful than Frege systems is an open problem. This issues are discussed more fully by Pudls in Chapter VIII. There are several currently active areas of research in the proof theory of propositional logic. Of course, the central open problem is the P versus N P question of whether there exists a polynomial time method of recognizing tautologies. Research on the proof theory of propositional logic can be, roughly speaking, separated into three problem areas. Firstly, the problem of "proofsearch" is the question of
what are the best algorithmic methods for searching for propositional proofs. The proof-search problem is important for artificial intelligence, for automated theorem proving and for logic programming. The most common propositional proof systems used for proof-search algorithms are variations of the resolution system discussed in 1.3 below. A second, related research area is the question of proof lengths. In this area, the central questions concern the minimum lengths of proofs needed for tautologies in particular proof systems. This topic is treated in more depth in Chapter VIII in this volume. A third research area concerns the investigation of fragments of the propositional proof system ℱ. For example, propositional intuitionistic logic is the logic which is axiomatized by the system ℱ without the axiom scheme ¬¬A ⊃ A. Another important example is linear logic. Brief discussions of these two logics can be found in section 3.

1.2. The propositional sequent calculus

The sequent calculus, first introduced by Gentzen [1935] as an extension of his earlier natural deduction proof systems, is arguably the most elegant and flexible system for writing proofs. In this section, the propositional sequent calculus for classical logic is developed; the extension to first-order logic is treated in 2.3 below.

1.2.1. Sequents and cedents. In the Hilbert-style systems, each line in a proof is a formula; however, in sequent calculus proofs, each line in a proof is a sequent: a sequent is written in the form

    A₁, ..., A_k → B₁, ..., B_ℓ
where the symbol → is a new symbol called the sequent arrow (not to be confused with the implication symbol ⊃) and where each A_i and B_j is a formula. The intuitive meaning of the sequent is that the conjunction of the A_i's implies the disjunction of the B_j's. Thus, a sequent is equivalent in meaning to the formula

    ⋀_{i=1}^{k} A_i  ⊃  ⋁_{j=1}^{ℓ} B_j .
The symbols ⋀ and ⋁ represent conjunctions and disjunctions, respectively, of multiple formulas. We adopt the convention that an empty conjunction (say, when k = 0 above) has value "True", and that an empty disjunction (say, when ℓ = 0 above) has value "False". Thus the sequent → A has the same meaning as the formula A, and the empty sequent → is false. A sequent is defined to be valid, or a tautology, if and only if its corresponding formula is. The sequence of formulas A₁, …, A_k is called the antecedent of the sequent displayed above; B₁, …, B_ℓ is called its succedent. They are both referred to as cedents.
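To make this correspondence concrete, validity of a sequent can be checked by brute force over truth assignments. The following sketch is our own illustration, not from the text; the tuple representation of formulas and the function names are ad hoc.

```python
from itertools import product

def ev(f, v):
    """Evaluate a formula: a variable name, or a tuple
    ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g)."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not':
        return not ev(f[1], v)
    if op == 'and':
        return ev(f[1], v) and ev(f[2], v)
    if op == 'or':
        return ev(f[1], v) or ev(f[2], v)
    if op == 'imp':
        return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def sequent_valid(antecedent, succedent, variables):
    """A sequent A1,...,Ak -> B1,...,Bl is valid iff every truth assignment
    making all the Ai true makes some Bj true; this is the meaning of the
    formula /\Ai implies \/Bj, with the empty conjunction read as True and
    the empty disjunction read as False."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(ev(a, v) for a in antecedent) and not any(ev(b, v) for b in succedent):
            return False
    return True

assert sequent_valid([], [('or', 'p', ('not', 'p'))], ['p'])       # -> p v ~p is valid
assert sequent_valid(['p', ('imp', 'p', 'q')], ['q'], ['p', 'q'])  # p, p > q -> q
assert not sequent_valid([], [], ['p'])                            # the empty sequent is false
```

Note how the empty antecedent and empty succedent cases fall out of the conventions just stated: with no antecedent the `all(...)` is vacuously true, and with no succedent the `any(...)` is false, so the empty sequent is falsified by every assignment.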
Introduction to Proof Theory
11
1.2.2. Inferences and proofs. We now define the propositional sequent calculus proof system PK. A sequent calculus proof consists of a rooted tree (or, sometimes, a directed acyclic graph) in which the nodes are sequents. The root of the tree, written at the bottom, is called the end-sequent and is the sequent proved by the proof. The leaves, at the top of the tree, are called initial sequents or axioms. Usually, the only initial sequents allowed are the logical axioms of the form A → A, where we further require that A be atomic. Other than the initial sequents, each sequent in a PK-proof must be inferred by one of the rules of inference given below. A rule of inference is denoted by a figure

    S₁                S₁    S₂
    --        or      --------
    S                    S

indicating that the sequent S may be inferred from S₁ or from the pair S₁ and S₂. The conclusion, S, is called the lower sequent of the inference; each hypothesis is an upper sequent of the inference. The valid rules of inference for PK are as follows; they are essentially schematic, in that A and B denote arbitrary formulas and Γ, Δ, etc. denote arbitrary cedents.

Weak Structural Rules
Exchange:left
    Γ, A, B, Π → Δ
    --------------
    Γ, B, A, Π → Δ

Exchange:right
    Γ → Δ, A, B, Λ
    --------------
    Γ → Δ, B, A, Λ

Contraction:left
    A, A, Γ → Δ
    -----------
    A, Γ → Δ

Contraction:right
    Γ → Δ, A, A
    -----------
    Γ → Δ, A

Weakening:left
    Γ → Δ
    --------
    A, Γ → Δ

Weakening:right
    Γ → Δ
    --------
    Γ → Δ, A
The weak structural rules are also referred to as just weak inference rules. The rest of the rules are called strong inference rules. The structural rules consist of the weak structural rules and the cut rule.

The Cut Rule

    Γ → Δ, A    A, Γ → Δ
    --------------------
    Γ → Δ

The Propositional Rules¹
¬:left
    Γ → Δ, A
    ---------
    ¬A, Γ → Δ

¬:right
    A, Γ → Δ
    ---------
    Γ → Δ, ¬A

∧:left
    A, B, Γ → Δ
    ------------
    A ∧ B, Γ → Δ

∧:right
    Γ → Δ, A    Γ → Δ, B
    --------------------
    Γ → Δ, A ∧ B

∨:left
    A, Γ → Δ    B, Γ → Δ
    --------------------
    A ∨ B, Γ → Δ

∨:right
    Γ → Δ, A, B
    ------------
    Γ → Δ, A ∨ B

⊃:left
    Γ → Δ, A    B, Γ → Δ
    --------------------
    A ⊃ B, Γ → Δ

⊃:right
    A, Γ → Δ, B
    ------------
    Γ → Δ, A ⊃ B
¹We have stated the ∧:left and the ∨:right rules differently than the traditional form. The traditional definitions use the following two ∨:right rules of inference

    Γ → Δ, A                  Γ → Δ, A
    ------------     and      ------------
    Γ → Δ, A ∨ B              Γ → Δ, B ∨ A

and two dual rules of inference for ∧:left. Our method has the advantage of reducing the number of rules of inference, and also simplifying somewhat the upper bounds on cut-free proof length we obtain below.

The above completes the definition of PK. We write PK ⊢ Γ → Δ to denote that the sequent Γ → Δ has a PK-proof. When A is a formula, we write PK ⊢ A to mean that PK ⊢ → A. The cut rule plays a special role in the sequent calculus since, as is shown in section 1.2.8, the system PK is complete even without the cut rule; however, the use of the cut rule can significantly shorten proofs. A proof is said to be cut-free if it does not contain any cut inferences.

1.2.3. Ancestors, descendents and the subformula property. All of the inferences of PK, with the exception of the cut rule, have a principal formula which is, by definition, the formula occurring in the lower sequent of the inference which is not in the cedents Γ or Δ (or Π or Λ). The exchange inferences have two principal formulas. Every inference, except weakenings, has one or more auxiliary formulas, which are the formulas A and B occurring in the upper sequent(s) of the inference. The formulas which occur in the cedents Γ, Δ, Π or Λ are called side formulas of the inference. The two auxiliary formulas of a cut inference are called the cut formulas.

We now define the notions of descendents and ancestors of formulas occurring in a sequent calculus proof. First we define immediate descendents as follows: If C is a side formula in an upper sequent of an inference, say C is the i-th formula of a cedent Γ, Π, Δ or Λ, then C's only immediate descendent is the corresponding occurrence of the same formula in the same position in the same cedent in the lower sequent of the inference. If C is an auxiliary formula of any inference except an exchange or cut inference, then the principal formula of the inference is the immediate descendent of C. For an exchange inference, the immediate descendent of the A or B in the upper sequent is the A or B, respectively, in the lower sequent. The cut formulas of a cut inference do not have immediate descendents.

We say that C is an immediate ancestor of D if and only if D is an immediate descendent of C. Note that the only formulas in a proof that do not have immediate ancestors are the formulas in initial sequents and the principal formulas of weakening inferences. The ancestor relation is defined to be the reflexive, transitive closure of the immediate ancestor relation; thus, C is an ancestor of D if and only if there is a chain of zero or more immediate ancestors from D to C. A direct ancestor of D is an ancestor C of D such that C is the same formula as D. The concepts of descendent and direct descendent are defined similarly, as the converses of the ancestor and direct ancestor relations. A simple, but important, observation is that if C is an ancestor of D, then C is a subformula of D. This immediately gives the following subformula property:
1.2.4. Proposition. (The Subformula Property) If P is a cut-free PK-proof, then every formula occurring in P is a subformula of a formula in the end-sequent of P.

1.2.5. Lengths of proofs. There are a number of ways to measure the length of a sequent calculus proof P; most notably, one can measure either the number of symbols or the number of sequents occurring in P. Furthermore, one can require P to be tree-like or allow it to be dag-like; in the case of dag-like proofs, no sequent needs to be derived, or counted, twice. ('Dag' abbreviates 'directed acyclic graph'; another name for such proofs is 'sequence-like'.) For this chapter, we adopt the following conventions for measuring lengths of sequent calculus proofs: proofs are always presumed to be tree-like, unless we explicitly state otherwise, and we let ||P|| denote the number of strong inferences in a tree-like proof P. The value ||P|| is polynomially related to the number of sequents in P. If P has n sequents then, of course, ||P|| ≤ n. On the other hand, it is not hard to prove that for any tree-like proof P of a sequent Γ → Δ, there is a (still tree-like) proof of an end-sequent Γ′ → Δ′ with at most ||P||² sequents and with Γ′ ⊆ Γ and Δ′ ⊆ Δ. The reason we use ||P||, instead of merely counting the actual number of sequents in P, is that using ||P|| often makes bounds on proof size significantly more elegant to state and prove. Occasionally, we use ||P||_dag to denote the number of strong inferences in a dag-like proof P.

1.2.6. Soundness Theorem. The propositional sequent calculus PK is sound. That is to say, any PK-provable sequent or formula is a tautology.

The soundness theorem is proved by observing that the rules of inference of PK preserve the property of sequents being tautologies. The implicational form of the soundness theorem also holds.
If Σ is a set of sequents, let a Σ-proof be any sequent calculus proof in which sequents from Σ are permitted as initial sequents (in addition to the logical axioms). The implicational soundness theorem states that if a sequent Γ → Δ has a Σ-proof, then Γ → Δ is made true by every truth assignment which satisfies Σ.

1.2.7. The inversion theorem. The inversion theorem is a kind of inverse to the implicational soundness theorem, since it says that, for any inference except a weakening inference, if the conclusion of the inference is valid, then so are all of its hypotheses.

Theorem. Let I be a propositional inference, a cut inference, an exchange inference or a contraction inference. If I's lower sequent is valid, then so are all of I's upper sequents. Likewise, if I's lower sequent is true under a truth assignment τ, then so are all of I's upper sequents.

The inversion theorem is easily proved by checking the eight propositional inference rules; it is obvious for exchange and contraction inferences.
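For a single propositional rule, this pointwise check can be carried out mechanically by enumerating truth assignments. The sketch below is our own illustration, not from the text; it checks one sample instance of the ⊃:left rule, confirming that the lower sequent is true under an assignment exactly when both upper sequents are (which gives both soundness and inversion for this rule).

```python
from itertools import product

# Formulas are represented by their truth functions on an assignment (p, q).
A = lambda p, q: p                               # A is the variable p
B = lambda p, q: (not q) or p                    # B is q > p
imp_AB = lambda p, q: (not A(p, q)) or B(p, q)   # A > B
gamma = [lambda p, q: q]                         # side cedent Gamma: q
delta = [lambda p, q: p and q]                   # side cedent Delta: p ^ q

def seq_true(antecedent, succedent, p, q):
    """A sequent is true under an assignment iff some antecedent formula
    is false or some succedent formula is true."""
    return any(not f(p, q) for f in antecedent) or any(f(p, q) for f in succedent)

for p, q in product([False, True], repeat=2):
    lower = seq_true(gamma + [imp_AB], delta, p, q)   # A > B, Gamma -> Delta
    upper1 = seq_true(gamma, delta + [A], p, q)       # Gamma -> Delta, A
    upper2 = seq_true(gamma + [B], delta, p, q)       # B, Gamma -> Delta
    # Pointwise soundness and inversion: the lower sequent holds under the
    # assignment exactly when both upper sequents do.
    assert lower == (upper1 and upper2)
print("pointwise inversion holds for this instance of >:left")
```

The same loop, with the `lower`/`upper` lines adjusted, checks any of the other seven propositional rules, and trying it on a weakening inference exhibits the failure noted below.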
Note that the inversion theorem can fail for weakening inferences. Most authors define the ∧:left and ∨:right rules of inference differently than we defined them for PK, and the inversion theorem can fail for these alternative formulations (see the footnote on page 11).

1.2.8. The completeness theorem. The completeness theorem for PK states that every valid sequent (tautology) can be proved in the propositional sequent calculus. This, together with the soundness theorem, shows that the PK-provable sequents are precisely the valid sequents.

Theorem. If Γ → Δ is a tautology, then it has a PK-proof in which no cuts appear.
1.2.9. In order to prove Theorem 1.2.8, we prove the following stronger lemma which includes bounds on the size of the PK-proof.

Lemma. Let Γ → Δ be a valid sequent in which there are m occurrences of logical connectives. Then there is a tree-like, cut-free PK-proof P of Γ → Δ containing fewer than 2^m strong inferences.

Proof. The proof is by induction on m. In the base case, m = 0, the sequent Γ → Δ contains no logical connectives and thus every formula in the sequent is a propositional variable. Since the sequent is valid, there must be some variable, p, which occurs both in Γ and in Δ. Thus Γ → Δ can be proved with zero strong inferences from the initial sequent p → p.

The induction step, m > 0, is handled by cases according to which connectives are used as outermost connectives of formulas in the cedents Γ and Δ. First suppose there is a formula of the form (¬A) in Γ. Letting Γ′ be the cedent obtained from Γ by removing the occurrences of ¬A, we can infer Γ → Δ by:

    Γ′ → Δ, A
    ----------
    ¬A, Γ′ → Δ
    ==========
    Γ → Δ

where the double line indicates a series of weak inferences. By the inversion theorem, Γ′ → Δ, A is valid, and hence, since it has at most m − 1 logical connectives, the induction hypothesis implies that it has a cut-free proof with fewer than 2^(m−1) strong inferences. This gives Γ → Δ a cut-free proof with fewer than 2^(m−1) + 1 ≤ 2^m strong inferences.

Second, suppose there is a formula of the form A ∧ B in Γ. Letting Γ′ be the cedent Γ minus the occurrences of A ∧ B, we can infer Γ → Δ by:

    A, B, Γ′ → Δ
    -------------
    A ∧ B, Γ′ → Δ
    =============
    Γ → Δ

By the inversion theorem and the induction hypothesis, A, B, Γ′ → Δ has a cut-free proof with fewer than 2^(m−1) strong inferences. Thus Γ → Δ has a cut-free proof with fewer than 2^m strong inferences. Third, suppose there is a formula of the form A ∧ B
appearing in the succedent Δ. Letting Δ′ be the succedent Δ minus the formula A ∧ B, we can infer Γ → Δ by:

    Γ → Δ′, A    Γ → Δ′, B
    ----------------------
    Γ → Δ′, A ∧ B
    =============
    Γ → Δ

By the inversion theorem, both of the upper sequents above are valid. Furthermore, they each have fewer than m logical connectives, so by the induction hypothesis, they have cut-free proofs with fewer than 2^(m−1) strong inferences. This gives the sequent Γ → Δ a cut-free proof with fewer than 2^m strong inferences.

The remaining cases are when a formula in the sequent Γ → Δ has outermost connective ∨ or ⊃. These are handled with the inversion theorem and the induction hypothesis similarly to the above cases. □

1.2.10. The bounds on the proof size in Lemma 1.2.9 can be improved somewhat by counting only the occurrences of distinct subformulas in Γ → Δ. To make this precise, we need to define the concepts of positively and negatively occurring subformulas. Given a formula A, an occurrence of a subformula B of A, and an occurrence of a logical connective α in A, we say that B is negatively bound by α if either (1) α is a negation sign, ¬, and B is in its scope, or (2) α is an implication sign, ⊃, and B is a subformula of its first argument. Then, B is said to occur negatively (respectively, positively) in A if B is negatively bound by an odd (respectively, even) number of connectives in A. A subformula occurring in a sequent Γ → Δ is said to be positively occurring if it occurs positively in Δ or negatively in Γ; otherwise, it occurs negatively in the sequent.

Lemma. Let Γ → Δ be a valid sequent. Let m′ equal the number of distinct subformulas occurring positively in the sequent and m″ equal the number of distinct subformulas occurring negatively in the sequent. Let m = m′ + m″. Then there is a tree-like, cut-free PK-proof P containing fewer than 2^m strong inferences.

Proof.
(Sketch) Recall that the proof of Lemma 1.2.9 built a proof from the bottom up, by choosing a formula in the end-sequent to eliminate (i.e., to be inferred), thereby reducing the total number of logical connectives, and then appealing to the induction hypothesis. The construction for the proof of the present lemma is exactly the same, except that now care must be taken to reduce the total number of distinct positively or negatively occurring subformulas, instead of just reducing the total number of connectives. This is easily accomplished by always choosing a formula from the end-sequent which contains a maximal number of connectives and which is therefore not a proper subformula of any other subformula in the end-sequent. □

1.2.11. The cut elimination theorem states that if a sequent has a PK-proof, then it has a cut-free proof. This is an immediate consequence of the soundness and completeness theorems, since any PK-provable sequent must be valid, by the soundness theorem, and hence has a cut-free proof, by the completeness theorem.
This is a rather slick method of proving the cut elimination theorem, but unfortunately it does not shed any light on how a given PK-proof can be constructively transformed into a cut-free proof. In section 2.3.7 below, we shall give a step-by-step procedure for converting first-order sequent calculus proofs into cut-free proofs; the same methods work also for propositional sequent calculus proofs. We shall not, however, describe this constructive proof transformation procedure here; instead, we will only state, without proof, the following upper bound on the increase in proof length which can occur when a proof is transformed into a cut-free proof. (A proof can be given using the methods of Sections 2.4.2 and 2.4.3.)

Cut-Elimination Theorem. Suppose P is a (possibly dag-like) PK-proof of Γ → Δ. Then Γ → Δ has a cut-free, tree-like PK-proof with at most 2^||P||_dag strong inferences.

1.2.12. Free-cut elimination. Let Σ be a set of sequents and, as above, define a Σ-proof to be a sequent calculus proof which may contain sequents from Σ as initial sequents, in addition to the logical axioms. If I is a cut inference occurring in a Σ-proof P, then we say I's cut formulas are directly descended from Σ if they have at least one direct ancestor which occurs as a formula in an initial sequent which is in Σ. A cut I is said to be free if neither of I's auxiliary formulas is directly descended from Σ. A proof is free-cut free if and only if it contains no free cuts. (See Definition 2.4.4.1 for a better definition of free cuts.) The cut elimination theorem can be generalized to show that the free-cut free fragment of the sequent calculus is implicationally complete:

Free-cut Elimination Theorem. Let S be a sequent and Σ a set of sequents. If Σ ⊨ S, then there is a free-cut free Σ-proof of S.

We shall not prove this theorem here; instead, we prove the generalization of this for first-order logic in 2.4.4 below.
This theorem is essentially due to Takeuti [1987], based on the cut elimination method of Gentzen [1935].

1.2.13. Some remarks. We have developed the sequent calculus only for classical propositional logic; however, one of the advantages of the sequent calculus is its flexibility in being adapted for non-classical logics. For instance, propositional intuitionistic logic can be formalized by a sequent calculus PJ which is defined exactly like PK except that succedents in the lower sequents of strong inferences are restricted to contain at most one formula. As another example, minimal logic is formalized like PJ, except with the restriction that every succedent contain exactly one formula. Linear logic, relevant logic, modal logics and others can also be formulated elegantly with the sequent calculus.

1.2.14. The Tait calculus. Tait [1968] gave a proof system similar in spirit to the sequent calculus. Tait's proof system incorporates a number of simplifications with regard to the sequent calculus; namely, it uses sets of formulas in place of sequents,
it allows only atomic formulas to be negated, and there are no weak structural rules at all. Since the Tait calculus is often used for analyzing fragments of arithmetic, especially in the framework of infinitary logic, we briefly describe it here.

Formulas are built up from atomic formulas p, from negated atomic formulas ¬p, and with the connectives ∧ and ∨. A negated atomic formula ¬p is usually denoted p̄; and the negation operator is extended to all formulas by defining ¬p̄ to be just p, and inductively defining ¬(A ∨ B) and ¬(A ∧ B) to be (¬A) ∧ (¬B) and (¬A) ∨ (¬B), respectively. Frequently, the Tait calculus is used for infinitary logic. In this case, formulas are defined so that whenever Γ is a set of formulas, then so are ⋁Γ and ⋀Γ. The intended meaning of these formulas is the disjunction, or conjunction, respectively, of all the formulas in Γ.

Each line in a Tait calculus proof is a set Γ of formulas, with the intended meaning of Γ being the disjunction of the formulas in Γ. A Tait calculus proof can be tree-like or dag-like. The initial sets, or logical axioms, of a proof are sets of the form Γ ∪ {p, ¬p}. In the infinitary setting, there are three rules of inference; namely,

    Γ ∪ {A_j}
    --------------------     where j ∈ I,
    Γ ∪ {⋁{A_j : j ∈ I}}

    Γ ∪ {A_j},  for each j ∈ I
    --------------------------     (there are |I| many hypotheses),
    Γ ∪ {⋀{A_j : j ∈ I}}

and the cut rule

    Γ ∪ {A}    Γ ∪ {¬A}
    -------------------
    Γ

In the finitary setting, the same rules of inference may also be used. It is evident that the Tait calculus is practically isomorphic to the sequent calculus. This is because a sequent Γ → Δ may be transformed into the equivalent set of formulas containing the formulas from Δ and the negations of the formulas from Γ. The exchange and contraction rules are superfluous once one works with sets of formulas; the weakening rule of the sequent calculus is replaced by allowing axioms to contain extra side formulas (this, in essence, means that weakenings are pushed up to the initial sequents of the proof). The strong rules of inference for the sequent calculus translate, by this means, to the rules of the Tait calculus.

Recall that we adopted the convention that the length of a sequent calculus proof is equal to the number of strong inferences in the proof. When we work with tree-like proofs, this corresponds exactly to the number of inferences in the corresponding Tait-style proof. The cut elimination theorem for the (finitary) sequent calculus immediately implies the cut elimination theorem for the Tait calculus for finitary logic; this is commonly called the normalization theorem for Tait-style systems. For general infinitary logics, the cut elimination/normalization theorems may not hold; however, López-Escobar [1965] has shown that the cut elimination theorem does hold for infinitary logic with formulas of countably infinite length. Also, Chapters III and IV
of this volume discuss cut elimination in some infinitary logics corresponding to theories of arithmetic.
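The inductive definition of negation used by the Tait calculus, pushing ¬ inward to the atoms by De Morgan duality, is easy to implement. The sketch below is our own illustration with an ad-hoc tuple encoding, not anything from the text.

```python
def neg(f):
    """Complement a Tait-style formula.  Formulas are: a variable 'p', a
    negated variable ('not', 'p'), or connectives ('and', f, g) and
    ('or', f, g).  Negation is only ever applied to atoms, so neg() pushes
    the complement inward by the De Morgan dualities."""
    if isinstance(f, str):
        return ('not', f)
    op = f[0]
    if op == 'not':
        return f[1]   # the complement of a negated atom ~p is just p
    if op == 'or':
        return ('and', neg(f[1]), neg(f[2]))
    if op == 'and':
        return ('or', neg(f[1]), neg(f[2]))
    raise ValueError(op)

# The complement of (p v q) ^ ~r is (~p ^ ~q) v r.
assert neg(('and', ('or', 'p', 'q'), ('not', 'r'))) == \
       ('or', ('and', ('not', 'p'), ('not', 'q')), 'r')

# On every formula, complementation is an involution.
f = ('or', 'p', ('and', ('not', 'q'), 'r'))
assert neg(neg(f)) == f
```

The point of the encoding is that negation is a defined operation on formulas rather than a connective, which is why the Tait calculus needs no ¬-rules.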
1.3. Propositional resolution refutations

The Hilbert-style and sequent calculus proof systems described earlier are quite powerful; however, they have the disadvantage that it has so far proved very difficult to implement computerized procedures which search for propositional Hilbert-style or sequent calculus proofs. Typically, a computerized procedure for proof search will start with a formula A for which a proof is desired, and will then construct possible proofs of A by working backwards from the conclusion A towards initial axioms. When cut-free proofs are being constructed, this is fairly straightforward; but cut-free proofs may be much longer than necessary and may even be too long to be feasibly constructed by a computer. General, non-cut-free, proofs may be quite short; however, the difficulty with proof search arises from the need to determine what formulas make suitable cut formulas. For example, when trying to construct a proof of Γ → Δ that ends with a cut inference, one has to consider all formulas C and try to construct proofs of the sequents Γ → Δ, C and C, Γ → Δ. In practice, it has proved impossible to choose cut formulas C effectively enough to generate general proofs. A similar difficulty arises in trying to construct Hilbert-style proofs, which must end with a modus ponens inference.

Thus, to have a propositional proof system which is amenable to computerized proof search, it is desirable to have a proof system in which (1) proof search is efficient and does not require too many 'arbitrary' choices, and (2) proof lengths are not excessively long. Of course, the latter requirement is intended to reflect the amount of available computer memory and time; thus proofs of many millions of steps might well be acceptable. Indeed, for computerized proof search, having an easy-to-find proof which is millions of steps long may well be preferable to having a hard-to-find proof which has only hundreds of steps.
The principal propositional proof system which meets the above requirements is based on resolution. As we shall see, the expressive power and implicational power of resolution is weaker than that of full propositional logic; in particular, resolution is, in essence, restricted to formulas in conjunctive normal form. However, resolution has the advantage of being amenable to efficient proof search. Propositional and first-order resolution were introduced in the influential work of Robinson [1965b] and Davis and Putnam [1960]. Propositional resolution proof systems are discussed immediately below. A large part of the importance of propositional resolution lies in the fact that it leads to efficient proof methods in first-order logic; first-order resolution is discussed in section 2.6 below.

1.3.1. Definition. A literal is defined to be either a propositional variable p_i or the negation of a propositional variable ¬p_i. The literal ¬p_i is also denoted p̄_i; and if x is the literal p̄_i, then x̄ denotes the unnegated literal p_i. The literal x̄ is called the
complement of x. A positive literal is one which is an unnegated variable; a negative literal is one which is a negated variable. A clause C is a finite set of literals. The intended meaning of C is the disjunction of its members; thus, for τ a truth assignment, τ(C) equals True if and only if τ(x) = True for some x ∈ C. Note that the empty clause, ∅, always has value False. Since clauses that contain both x and x̄ are always true, it is often assumed w.l.o.g. that no clause contains both x and x̄. A clause is defined to be positive (respectively, negative) if it contains only positive (resp., only negative) literals. The empty clause is the only clause which is both positive and negative. A clause which is neither positive nor negative is said to be mixed.

A nonempty set Γ of clauses is used to represent the conjunction of its members. Thus τ(Γ) is True if and only if τ(C) is True for all C ∈ Γ. Obviously, the meaning of Γ is the same as the conjunctive normal form formula consisting of the conjunction of the disjunctions of the clauses in Γ. A set of clauses is said to be satisfiable if there is at least one truth assignment that makes it true. Resolution proofs are used to prove that a set Γ of clauses is unsatisfiable: this is done by using the resolution rule (defined below) to derive the empty clause from Γ. Since the empty clause is unsatisfiable, this is sufficient to show that Γ is unsatisfiable.

1.3.2. Definition. Suppose that C and D are clauses and that x ∈ C and x̄ ∈ D are literals. The resolution rule applied to C and D is the inference

    C    D
    ---------------------
    (C \ {x}) ∪ (D \ {x̄})

The conclusion (C \ {x}) ∪ (D \ {x̄}) is called the resolvent of C and D (with respect to x).² Since the resolvent of C and D is satisfied by any truth assignment that satisfies both C and D, the resolution rule is sound in the following sense: if Γ is satisfiable and B is the resolvent of two clauses in Γ, then Γ ∪ {B} is satisfiable.
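With clauses as sets, the resolution rule is a one-line set operation. The sketch below is our own illustration (not from the text), using a common integer encoding: the literal p_i is the integer i and its complement p̄_i is -i. It also checks the soundness remark on a small example.

```python
def resolve(C, D, x):
    """Resolvent of clauses C and D with respect to the literal x, where
    x is in C and its complement -x is in D.  Clauses are frozensets of
    integer literals: p_i is i, and the complement of a literal is its
    negative."""
    assert x in C and -x in D
    return (C - {x}) | (D - {-x})

def satisfies(assignment, clause):
    # An assignment is modeled as the set of literals it makes true (one per
    # variable).  A clause is true iff it shares a literal with the
    # assignment; in particular the empty clause is never satisfied.
    return bool(clause & assignment)

C = frozenset({1, 2})      # {p1, p2}
D = frozenset({-1, 3})     # {~p1, p3}
R = resolve(C, D, 1)
assert R == frozenset({2, 3})

# Soundness: an assignment satisfying both C and D also satisfies R.
tau = {-1, 2, -3}          # p1 false, p2 true, p3 false
assert satisfies(tau, C) and satisfies(tau, D) and satisfies(tau, R)
```

The encoding makes the convention discussed in the footnote easy to enforce: a clause contains a complementary pair exactly when it contains both i and -i for some i.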
Since the empty clause is not satisfiable, this yields the following definition of the resolution proof system.

Definition. A resolution refutation of Γ is a sequence C₁, C₂, …, C_k of clauses such that each C_i is either in Γ or is inferred from earlier members of the sequence by the resolution rule, and such that C_k is the empty clause.

1.3.3. Resolution is defined to be a refutation procedure which refutes the satisfiability of a set of clauses, but it also functions as a proof procedure for proving the validity of propositional formulas; namely, to prove a formula A, one forms a set Γ_A of clauses such that A is a tautology if and only if Γ_A is unsatisfiable. Then

²Note that x is uniquely determined by C and D if we adopt the (optional) convention that clauses never contain both a literal and its complement.
a resolution proof of A is, by definition, a resolution refutation of Γ_A. There are two principal means of forming Γ_A from A. The first method is to express the negation of A in conjunctive normal form and let Γ_A be the set of clauses which express that conjunctive normal form formula. The principal drawback of this definition of Γ_A is that the conjunctive normal form of ¬A may be exponentially longer than A, and Γ_A may have exponentially many clauses.

The second method is the method of extension, introduced by Tsejtin [1968], which involves introducing new propositional variables in addition to the variables occurring in the formula A. The new propositional variables are called extension variables and allow each distinct subformula B of A to be assigned a literal x_B by the following definition: (1) for propositional variables p appearing in A, x_p is p; (2) for negated subformulas ¬B, x_{¬B} is x̄_B; (3) for any other subformula B, x_B is a new propositional variable. We then define Γ_A to contain the following clauses: (1) the singleton clause {x̄_A}; (2) for each subformula of the form B ∧ C in A, the clauses {x̄_{B∧C}, x_B}, {x̄_{B∧C}, x_C} and {x̄_B, x̄_C, x_{B∧C}}; and (3) for each subformula of the form B ∨ C in A, the clauses
{x_{B∨C}, x̄_B}, {x_{B∨C}, x̄_C} and {x̄_{B∨C}, x_B, x_C}.
If there are additional logical connectives in A, then similar sets of clauses can be used. It is easy to check that the set of three clauses for B ∧ C (respectively, B ∨ C) is satisfied by exactly those truth assignments that assign x_{B∧C} (respectively, x_{B∨C}) the same truth value as x_B ∧ x_C (resp., x_B ∨ x_C). Thus, a truth assignment satisfies Γ_A only if it falsifies A and, conversely, if a truth assignment falsifies A then there is a unique assignment of truth values to the extension variables of A which lets it satisfy Γ_A. The primary advantage of forming Γ_A by the method of extension is that Γ_A is linear-size in the size of A. The disadvantage, of course, is that the introduction of additional variables changes the meaning and structure of A.

1.3.4. Completeness Theorem for Propositional Resolution.
If Γ is an unsatisfiable set of clauses, then there is a resolution refutation of Γ.

Proof. We shall briefly sketch two proofs of this theorem. The first, and usual, proof is based on the Davis-Putnam procedure. First note that, by the Compactness Theorem, we may assume w.l.o.g. that Γ is finite. Therefore, we may use induction on the number of distinct variables appearing in Γ. The base case, where no variables appear in Γ, is trivial, since Γ must then contain just the empty clause. For the induction step, let p be a fixed variable in Γ and define Γ′ to be the set of clauses containing the following clauses:
a. For all clauses C and D in Γ such that p ∈ C and p̄ ∈ D, the resolvent of C and D w.r.t. p is in Γ′, and
b. Every clause C in Γ which contains neither p nor p̄ is in Γ′.
Assuming, without loss of generality, that no clause in Γ contains both p and p̄, it is clear that the variable p does not occur in Γ′. Now, it is not hard to show that Γ′ is satisfiable if and only if Γ is, whence the theorem follows by the induction hypothesis.

The second proof reduces resolution to the free-cut free sequent calculus. For this, if C is a clause, let Λ_C be the cedent containing the variables which occur positively in C and Π_C be the cedent containing the variables which occur negatively in C. Then Π_C → Λ_C is a sequent with no logical symbols which is identical in meaning to C. For example, if C = {p₁, p̄₂, p₃}, then the associated sequent is p₂ → p₁, p₃. Clearly, if C and D are clauses with a resolvent E, then the sequent Π_E → Λ_E is obtained from Π_C → Λ_C and Π_D → Λ_D with a single cut on the resolution variable. Now suppose Γ is unsatisfiable. By the completeness theorem for the free-cut free sequent calculus, there is a free-cut free proof of the empty sequent from the sequents Π_C → Λ_C with C ∈ Γ. Since the proof is free-cut free and no logical symbols appear in any initial sequents, every cut formula in the proof must be atomic. Therefore no logical symbols appear anywhere in the proof and, by identifying the sequents in the free-cut free proof with clauses and replacing each cut inference with the corresponding resolution inference, a resolution refutation of the empty clause is obtained. □

1.3.5. Restricted forms of resolution. One of the principal advantages of resolution is that it is easier for computers to search for resolution refutations than to search for arbitrary Hilbert-style or sequent calculus proofs. The reason for this is that resolution proofs are less powerful and more restricted than Hilbert-style and sequent calculus proofs and, in particular, there are fewer options on how to form resolution proofs.
This explains the paradoxical situation that a less powerful proof system can be preferable to a more powerful system. Thus it makes sense to consider further restrictions on resolution which may reduce the proof search space even more. Of course, there is a tradeoff involved in using more restricted forms of resolution, since one may find that although restricted proofs are easier to search for, they are a lot less plentiful. Often, however, the ease of proof search is more important than the existence of short proofs; in fact, it is sometimes even preferable to use a proof system which is not complete, provided its proofs are easy to find. Although we do not discuss this until section 2.6, the second main advantage of resolution is that propositional refutations can be 'lifted' to first-order refutations of first-order formulas. It is important that the restricted forms of resolution discussed next also apply to first-order resolution refutations.

One example of a restricted form of resolution is implicit in the first proof of the Completeness Theorem 1.3.4, based on the Davis-Putnam procedure; namely, for any ordering of the variables p₁, …, p_m, it can be required that a resolution refutation first performs resolutions with respect to p₁, then resolutions with respect to p₂, etc., concluding with resolutions with respect to p_m. This particular strategy is not especially useful, since it does not reduce the search space sufficiently. We consider next several strategies that have been somewhat more useful.
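The variable-elimination strategy just described, which is the propositional Davis-Putnam procedure from the completeness proof, can be sketched compactly. The following is our own illustration, not from the text, again encoding the literal p_i as the integer i and its complement as -i.

```python
def dp_unsatisfiable(clauses, variables):
    """Davis-Putnam elimination-order test: process each variable in turn,
    replacing the clause set by (a) all resolvents on that variable plus
    (b) the clauses not mentioning it.  Returns True iff the original
    clause set is unsatisfiable (i.e., the empty clause is derived)."""
    # Drop tautological clauses (those containing a literal and its complement).
    clauses = {frozenset(c) for c in clauses if not any(-l in c for l in c)}
    for p in variables:
        pos = [c for c in clauses if p in c]
        neg = [c for c in clauses if -p in c]
        rest = {c for c in clauses if p not in c and -p not in c}
        resolvents = {(c - {p}) | (d - {-p}) for c in pos for d in neg}
        clauses = rest | {r for r in resolvents if not any(-l in r for l in r)}
        if frozenset() in clauses:
            return True
    return frozenset() in clauses

# {p1}, {~p1, p2}, {~p2} is unsatisfiable; dropping {~p2} makes it satisfiable.
assert dp_unsatisfiable([{1}, {-1, 2}, {-2}], [1, 2])
assert not dp_unsatisfiable([{1}, {-1, 2}], [1, 2])
```

Recording which pairs of clauses were resolved at each step would turn this satisfiability test into an explicit resolution refutation in the sense of Definition 1.3.2; as the text notes, however, the method's value is mainly conceptual, since eliminating a variable can square the number of clauses.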
S. Buss
1.3.5.1. Subsumption. A clause C is said to subsume a clause D if and only if C ⊆ D. The subsumption principle states that if two clauses C and D have been derived such that C subsumes D, then D should be discarded and not used further in the refutation. The subsumption principle is supported by the following theorem:

Theorem. If F is unsatisfiable and if C ⊆ D, then F′ = (F \ {D}) ∪ {C} is also unsatisfiable. Furthermore, F′ has a resolution refutation which is no longer than the shortest refutation of F.

1.3.5.2. Positive resolution and hyperresolution. Robinson [1965a] introduced positive resolution and hyperresolution. A positive resolution inference is one in which one of the hypotheses is a positive clause. The completeness of positive resolution is shown by:

Theorem. (Robinson [1965a]) If F is unsatisfiable, then F has a refutation containing only positive resolution inferences.

Proof. (Sketch) It will suffice to show that it is impossible for an unsatisfiable F to be closed under positive resolution inferences and not contain the empty clause. Let A be the set of positive clauses in F; A must be nonempty since F is unsatisfiable. Pick a truth assignment τ that satisfies all clauses in A and assigns the minimum possible number of "true" values. Pick a clause L in F \ A which is falsified by τ and has the minimum number of negative literals; and let p̄ be one of the negative literals in L. Note that L exists since F is unsatisfiable and that τ(p) = True. Pick a clause J ∈ A that contains p and has the rest of its members assigned false by τ; such a clause J exists by the choice of τ. Considering the resolvent of J and L, we obtain a contradiction. □

Positive resolution is nice in that it restricts the kinds of resolution refutations that need to be attempted; however, it is particularly important as the basis for hyperresolution.
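The subsumption principle is also straightforward to sketch in code. The following Python snippet uses our own encoding (clauses as sets of literals) and a helper name of our choosing; it discards every clause that is subsumed by another derived clause:

```python
# Illustrative sketch of the subsumption principle: a clause C subsumes D
# when C is a subset of D, and then D may be discarded.

def remove_subsumed(clauses):
    """Discard every clause that is a proper superset of another clause."""
    clauses = {frozenset(c) for c in clauses}
    return {d for d in clauses if not any(c < d for c in clauses)}

# {p} subsumes {p, q}, so the larger clause is discarded:
kept = remove_subsumed([{"p"}, {"p", "q"}, {"q", "r"}])
assert kept == {frozenset({"p"}), frozenset({"q", "r"})}
```

In a refutation search, such a filter would be applied to the set of derived clauses after each resolution step, in line with the theorem above guaranteeing that no short refutations are lost.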
The basic idea behind hyperresolution is that multiple positive resolution inferences can be combined into a single inference with a positive conclusion. To justify hyperresolution, note that if R is a positive resolution refutation then the inferences in R can be uniquely partitioned into subproofs of the form

    A1   A2
    --------
       B1    A3
       ----------
          B2
           ⋮
          B_{n-1}   A_n
          -------------
              B_n   A_{n+1}
              -------------
                 B_{n+1}

where each of the clauses A1, ..., A_{n+1} is positive (and hence the clauses B1, ..., B_n are not positive). These n + 1 positive resolution inferences are combined into the single hyperresolution inference

    A1   A2   A3   ⋯   A_n   A_{n+1}
    ---------------------------------
                B_{n+1}
(This construction is the definition of hyperresolution inferences.) It follows immediately from the above theorem that hyperresolution is complete. The importance of hyperresolution lies in the fact that one can search for refutations containing only positive resolutions and that, as clauses are derived, only the positive clauses need to be saved for possible future use as hypotheses. Negative resolution is defined similarly to positive resolution and is likewise complete.

1.3.5.3. Semantic resolution. Semantic resolution, independently introduced by Slagle [1967] and Luckham [1970], can be viewed as a generalization of positive resolution. For semantic resolution, one uses a fixed truth assignment (interpretation) τ to restrict the permissible resolution inferences. A resolution inference is said to be τ-supported if one of its hypotheses is given value False by τ. Note that at most one hypothesis can have value False, since the hypotheses contain complementary occurrences of the resolution variable. A resolution refutation is said to be τ-supported if each of its resolution inferences is τ-supported.

If τ_F is the truth assignment which assigns every variable the value False, then a τ_F-supported resolution refutation is definitionally the same as a positive resolution refutation. Conversely, if F is a set of clauses and if τ is any truth assignment, then one can form a set F′ by complementing every variable in F which has τ-value True: clearly, a τ-supported resolution refutation of F is isomorphic to a positive resolution refutation of F′. Thus, Theorem 1.3.5.2 is equivalent to the following Completeness Theorem for semantic resolution:
Theorem. For any τ and F, F is unsatisfiable if and only if F has a τ-supported resolution refutation.
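Concretely, the τ-supported condition is easy to state in code. The following Python sketch uses our own encoding, not the text's (τ as a dict from variables to booleans, literals as (variable, polarity) pairs), and illustrates the correspondence with positive resolution noted above:

```python
# Illustrative sketch: a clause is given value False by tau exactly when
# every one of its literals is false under tau.

def falsified(clause, tau):
    return all(tau[v] != sign for (v, sign) in clause)

def tau_supported(hyp1, hyp2, tau):
    """A resolution inference is tau-supported if a hypothesis is false under tau."""
    return falsified(hyp1, tau) or falsified(hyp2, tau)

# With tau_F assigning False everywhere, a clause is falsified by tau_F exactly
# when all its literals are positive, so tau_F-supported = positive resolution:
tau_F = {"p": False, "q": False}
pos_clause = {("p", True), ("q", True)}   # {p, q}
neg_clause = {("p", False)}               # {~p}
assert falsified(pos_clause, tau_F)
assert not falsified(neg_clause, tau_F)
assert tau_supported(pos_clause, neg_clause, tau_F)
```

Note how the complementary occurrences of the resolution variable guarantee that at most one of the two hypotheses can be falsified by any single τ, as remarked in the text.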
It is possible to define semantic hyperresolution in terms of semantic resolution, just as hyperresolution was defined in terms of positive resolution.

1.3.5.4. Set-of-support resolution. Wos, Robinson and Carson [1965] introduced set-of-support resolution as another principle for guiding a search for resolution refutations. Formally, set of support is defined as follows: if F is a set of clauses and if Π ⊆ F and F \ Π is satisfiable, then Π is a set of support for F; a refutation R of F is said to be supported by Π if every inference in R is derived (possibly indirectly) from at least one clause in Π. (An alternative, almost equivalent, definition would be to require that no two members of F \ Π are resolved together.) The intuitive idea behind set-of-support resolution is that when trying to refute F, one should concentrate on trying to derive a contradiction from the part Π of F which is not known to be consistent. For example, F \ Π might be a database of facts which is presumed to be consistent, and Π a clause which we are trying to refute.
Theorem. If F is unsatisfiable and Π is a set of support for F, then F has a refutation supported by Π.
This theorem is immediate from Theorem 1.3.5.3: let τ be any truth assignment which satisfies F \ Π; then a τ-supported refutation is also supported by Π. The main advantage of set-of-support resolution over semantic resolution is that it does not require knowing or using a satisfying assignment for F \ Π.

1.3.5.5. Unit and input resolution. A unit clause is defined to be a clause containing a single literal; a unit resolution inference is an inference in which at least one of the hypotheses is a unit clause. As a general rule, it is desirable to perform unit resolutions whenever possible. If F contains a unit clause {x}, then by combining unit resolutions with the subsumption principle, one can remove from F every clause which contains x and also every occurrence of x̄ from the rest of the clauses in F. (The situation is a little more difficult when working in first-order logic, however.) This completely eliminates the literal x and reduces the number and sizes of clauses to consider. A unit resolution refutation is a refutation which contains only unit resolutions. Unfortunately, unit resolution is not complete: for example, an unsatisfiable set F with no unit clauses cannot have a unit resolution refutation.

An input resolution refutation of F is defined to be a refutation of F in which every resolution inference has at least one of its hypotheses in F. Obviously, a minimal length input refutation will be treelike. Input resolution is also not complete; in fact, it can refute exactly the same sets as unit resolution:

Theorem.
(Chang [1970]) A set of clauses has a unit refutation if and only if it has an input refutation.

1.3.5.6. Linear resolution. Linear resolution is a generalization of input resolution which has the advantage of being complete: a linear resolution refutation of F is a refutation A1, A2, ..., A_{n-1}, A_n = ∅ such that each A_i is either in F or is obtained by resolution from A_{i-1} and A_j for some j < i - 1. Thus a linear refutation has the same linear structure as an input refutation, but is allowed to reuse intermediate clauses which are not in F.

Theorem.
(Loveland [1970] and Luckham [1970]) If F is unsatisfiable, then F has a linear resolution refutation.

Linear and input resolution both lend themselves well to depth-first proof search strategies. Linear resolution is still complete when used in conjunction with set-of-support resolution.

Further reading. We have only covered some of the basic strategies for propositional resolution proof search. The original paper of Robinson [1965b] still provides an excellent introduction to resolution; this and many other foundational papers on
this topic have been reprinted in Siekmann and Wrightson [1983]. In addition, the textbooks by Chang and Lee [1973], Loveland [1978], and Wos et al. [1992] give a more complete description of various forms of resolution than we have given above.

Horn clauses. A Horn clause is a clause which contains at most one positive literal. Thus a Horn clause must be of the form {p, q̄1, ..., q̄n} or of the form {q̄1, ..., q̄n} with n ≥ 0. If a Horn clause is rewritten as a sequent of atomic variables, it will have at most one variable in the succedent; typically, Horn clauses are written in reverse-sequent format so, for example, the two Horn clauses above would be written as implications

    p ← q1, ..., qn
and
    ← q1, ..., qn.
In this reverse-sequent notation, the antecedent is written after the ← and the commas are interpreted as conjunctions (∧'s). Horn clauses are of particular interest both because they are expressive enough to handle many situations and because deciding the satisfiability of sets of Horn clauses is more feasible than deciding the satisfiability of arbitrary sets of clauses. For these reasons, many logic programming environments such as PROLOG are based partly on Horn clause logic.

In propositional logic, it is an easy matter to decide the satisfiability of a set of Horn clauses; the most straightforward method is to restrict oneself to positive unit resolution. A positive unit inference is a resolution inference in which one of the hypotheses is a unit clause containing a positive literal only. A positive unit refutation is a refutation containing only positive unit resolution inferences.

Theorem. A set of Horn clauses is unsatisfiable if and only if it has a positive unit resolution refutation.

Proof. Let F be an unsatisfiable set of Horn clauses. F must contain at least one positive unit clause {p}, since otherwise the truth assignment that assigned False to all variables would satisfy F. By resolving {p} against all clauses containing p̄, and then discarding all clauses which contain p or p̄, one obtains a smaller unsatisfiable set of Horn clauses. Iterating this yields the desired positive unit refutation. □

Positive unit resolutions are quite adequate in propositional logic; however, they do not lift well to applications in first-order logic and logic programming. For this, a more useful method of searching for refutations is based on combining semantic resolution, linear resolution and set-of-support resolution:

1.3.5.7. Theorem. Henschen and Wos [1974]. Suppose F is an unsatisfiable set of Horn clauses with Π ⊆ F a set of support for F, and suppose that every clause in F \ Π contains a positive literal.
Then F has a refutation which is simultaneously a negative resolution refutation and a linear refutation and which is supported by Π.
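The easy propositional decision procedure for Horn clauses described above (repeatedly resolving away positive unit clauses {p}) is, in effect, forward chaining. Here is a minimal Python sketch, with our own encoding of a Horn clause as a pair (head, body), where head is None for a purely negative clause:

```python
# Illustrative sketch: decide unsatisfiability of a set of propositional
# Horn clauses by iterated positive unit resolution (forward chaining).

def horn_unsat(clauses):
    facts = set()                        # positive unit clauses derived so far
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= facts:            # all negative literals resolved away
                if head is None:
                    return True          # derived the empty clause: unsatisfiable
                if head not in facts:
                    facts.add(head)      # a new positive unit clause {head}
                    changed = True
    return False                         # facts yield a satisfying assignment

# p <- ; q <- p ; <- q   is unsatisfiable:
assert horn_unsat([("p", set()), ("q", {"p"}), (None, {"q"})])
# p <- ; <- q            is satisfiable (set q to False):
assert not horn_unsat([("p", set()), (None, {"q"})])
```

When the procedure returns False, assigning True exactly to the accumulated facts satisfies every clause, mirroring the satisfiability direction of the theorem's proof.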
Note that the condition that every clause in F \ Π contains a positive literal means that the truth assignment τ that assigns True to every variable satisfies F \ Π. Thus a negative resolution refutation is the same as a τ-supported refutation and hence is supported by Π. The theorem is fairly straightforward to prove, and we leave the details to the reader. However, note that since every clause in F \ Π is presumed to contain a positive literal, it is impossible to get rid of all positive literals only by resolving against clauses in F \ Π. Therefore, Π must contain a negative clause C such that there is a linear derivation that begins with C, always resolves against clauses in F \ Π yielding negative clauses only, and ends with the empty clause.

The resolution refutations of Theorem 1.3.5.7, or rather the lifting of these to Horn clauses described in section 2.6.5, can be combined with restrictions on the order in which literals are resolved to give what is commonly called SLD-resolution.

2. Proof theory of first-order logic
2.1. Syntax and semantics

2.1.1. Syntax of first-order logic. First-order logic is a substantial extension of propositional logic, and allows reasoning about individuals using functions and predicates that act on individuals. The symbols allowed in first-order formulas include the propositional connectives, quantifiers, variables, function symbols, constant symbols and relation symbols. We take ¬, ∧, ∨ and ⊃ as the allowed propositional connectives. There is an infinite set of variable symbols; we use x, y, z, ... and a, b, c, ... as metasymbols for variables. The quantifiers are the existential quantifier, (∃x), and the universal quantifier, (∀x), which mean "there exists x" and "for all x". A given first-order language contains a set of function symbols of specified arities, denoted by metasymbols f, g, h, ..., and a set of relation symbols of specified arities, denoted by metasymbols P, Q, R, .... Function symbols of arity zero are called constant symbols. Sometimes the first-order language contains a distinguished two-place relation symbol = for equality.

The formulas of first-order logic are defined as follows. Firstly, terms are built up from function symbols, constant symbols and variables. Thus, any variable x is a term, and if t1, ..., tk are terms and f is k-ary, then f(t1, ..., tk) is a term. Second, atomic formulas are defined to be of the form P(t1, ..., tk) for P a k-ary relation symbol. Finally, formulas are inductively built up from atomic formulas and logical connectives; namely, any atomic formula is a formula, and if A and B are formulas, then so are (¬A), (A ∧ B), (A ∨ B), (A ⊃ B), (∀x)A and (∃x)A. To avoid writing too many parentheses, we adopt the conventions on omitting parentheses in propositional logic with the additional convention that quantifiers bind more tightly than binary propositional connectives. In addition, binary predicate symbols, such as =, are usually written in infix notation.

Γ → Δ also has an LK-proof.

Proof.
Part (2) of the cut-free completeness theorem clearly implies part (1); we shall prove (2) only for the case where Π is countable (and hence the language L is, w.l.o.g., countable). Assume Π logically implies Γ → Δ. The general idea of the argument is to try to build an LK-proof of Γ → Δ from the bottom up, working backwards from Γ → Δ to initial sequents. This is, of course, analogous to the procedure used to prove Theorem 1.2.8; but because of the presence of quantifiers, the process of searching for a proof of Γ → Δ is no longer finite. Thus, we shall have to show the proof-search process eventually terminates; this will be done by showing that if the proof search does not yield a proof, then Γ → Δ is not logically implied by Π.

Since the language is countable, we may enumerate all L-formulas as A1, A2, A3, ... so that every L-formula occurs infinitely often in the enumeration. Likewise, we enumerate all L-terms as t1, t2, t3, ..., with each term repeated infinitely often. We shall attempt to construct a cut-free proof P of Γ → Δ. The construction of P will proceed in stages: initially, P consists of just the sequent Γ → Δ; at each stage, P will be modified (we keep the same name P for the new partially constructed P). A sequent in P is said to be active provided it is a leaf sequent of P and there is no formula that occurs in both its antecedent and its succedent. Note that a non-active leaf sequent is derivable with a cut-free proof. We enumerate all pairs (Ai, tj) by a diagonal enumeration; each stage of the construction of P considers one such pair. Initially, P is the single sequent Γ → Δ. At each stage we do the following:
Loop: Let (Ai, tj) be the next pair in the enumeration.

Step (1): If Ai is in Π, then replace every sequent Γ′ → Δ′ in P with the sequent Γ′, Ai → Δ′.

Step (2): If Ai is atomic, do nothing and proceed to the next stage. Otherwise, we will modify P at the active sequents which contain Ai by doing one of the following:
Case (2a): If Ai is ¬B, then every active sequent in P which contains Ai, say of the form Γ′, ¬B, Γ″ → Δ′, is replaced by the derivation

    Γ′, ¬B, Γ″ → Δ′, B
    -------------------
    Γ′, ¬B, Γ″ → Δ′

and similarly, every active sequent in P of the form Γ′ → Δ′, ¬B, Δ″ is replaced by the derivation

    B, Γ′ → Δ′, ¬B, Δ″
    -------------------
    Γ′ → Δ′, ¬B, Δ″
Case (2b): If Ai is of the form B ∨ C, then every active sequent in P of the form Γ′, B ∨ C, Γ″ → Δ′ is replaced by the derivation

    B, Γ′, B ∨ C, Γ″ → Δ′     C, Γ′, B ∨ C, Γ″ → Δ′
    -------------------------------------------------
    Γ′, B ∨ C, Γ″ → Δ′

And every active sequent in P of the form Γ′ → Δ′, B ∨ C, Δ″ is replaced by the derivation

    Γ′ → Δ′, B ∨ C, Δ″, B, C
    -------------------------
    Γ′ → Δ′, B ∨ C, Δ″

Cases (2c)-(2d): The cases where Ai has outermost connective ⊃ or ∧ are similar (dual) to case (2b), and are omitted here.
Case (2e): If Ai is of the form (∃x)B(x), then every active sequent in P of the form Γ′, (∃x)B(x), Γ″ → Δ′ is replaced by the derivation

    B(c), Γ′, (∃x)B(x), Γ″ → Δ′
    ----------------------------
    Γ′, (∃x)B(x), Γ″ → Δ′

where c is a new free variable, not used in P yet. In addition, any active sequent of the form Γ′ → Δ′, (∃x)B(x), Δ″ is replaced by the derivation

    Γ′ → Δ′, (∃x)B(x), Δ″, B(tj)
    -----------------------------
    Γ′ → Δ′, (∃x)B(x), Δ″

Note that this, and the dual ∀:left case, are the only cases where tj is used. These two cases are also the only two cases where it is really necessary to keep the formula Ai in the new active sequent; we have, however, always kept Ai since it makes our arguments a little simpler.

Case (2f): The case where Ai begins with a universal quantifier is dual to case (2e).

Step (3): If there are no active sequents remaining in P, exit from the loop; otherwise, continue with the next loop iteration.
End loop.

If the algorithm constructing P ever halts, then P gives a cut-free proof of C1, ..., Ck, Γ → Δ for some C1, ..., Ck ∈ Π. This is since each (non-active) initial sequent has a formula A which appears in both its antecedent and succedent and since, by induction on the complexity of A, every sequent A → A is derivable with a cut-free proof.

It remains to show that if the above construction of P never halts, then the sequent Γ → Δ is not logically implied by Π. So suppose the above construction of P never halts and consider the result of applying the entire infinite construction process. From the details of the construction of P, P will be an infinite tree (except in the exceptional case where Γ → Δ contains only atomic formulas and Π is empty,
in which case P is a single sequent). If Π is empty, then each node in the infinite tree P will be a sequent; however, in the general case, each node in the infinite tree will be a generalized sequent of the form Γ′, Π → Δ′ with an infinite number of formulas in its antecedent. (At each stage of the construction of P, the sequents contain only finitely many formulas, but in the limit the antecedents contain every formula from Π, since these are introduced by step (1).) P is a finitely branching tree, so by König's lemma, there is at least one infinite branch π in P, starting at the root and proceeding up through the tree (except that, in the exceptional case, π contains just the endsequent). We use π to construct a structure M and object assignment σ for which Γ → Δ is not true. The universe of M is equal to the set of L-terms. The object assignment σ just maps each variable to itself. The interpretation of a function symbol f is defined so that f^M(r1, ..., rk) is the term f(r1, ..., rk). Finally, the interpretation of a predicate symbol P is defined by letting P^M(r1, ..., rk) hold if and only if the formula P(r1, ..., rk) appears in an antecedent of a sequent contained in the branch π.

To finish the proof of the theorem, it suffices to show that every formula A occurring in an antecedent (respectively, a succedent) along π is true (respectively, false) in M with respect to σ, since this implies that Γ → Δ is not logically implied by Π. This claim is proved by induction on the complexity of A. For A atomic, it is true by definition. Consider the case where A is of the form (∃x)B(x). If A appears in an antecedent, then so does a formula of the form B(c); by the induction hypothesis, B(c) is true in M w.r.t. σ, and hence so is A. If A appears in a succedent, then, for every term t, B(t) eventually appears in a succedent; hence every B(t) is false in M w.r.t. σ, which implies A is also false.
Note that A cannot appear in both an antecedent and a succedent along π, since any sequent with a formula common to both cedents would be non-active and the branch would end there. The rest of the cases, for the other outermost connectives of A, are similar and we omit their proofs. □

2.3.8. There are a number of semantic tableau proof systems, independently due to Beth [1956], Hintikka [1955], Kanger [1957] and Schütte [1965], which are very similar to the first-order cut-free sequent calculus. The proof above, of the completeness of the cut-free sequent calculus, is based on the proofs of the completeness of semantic tableau systems given by these four authors. The original proof, due to Gentzen, was based on the completeness of LK with cut and on a process of eliminating cuts from a proof similar to the construction of the next section.

2.4. Cut elimination. The cut-free completeness theorem, as established above, has a couple of drawbacks. Firstly, we have proved cut-free completeness only for pure LK, and it is desirable to also establish a version of cut-free completeness (called 'free-cut free' completeness) for LKe and for more general systems. Secondly, the proof is completely nonconstructive and gives no bounds on the sizes of cut-free proofs. Of course, the undecidability of validity in first-order logic implies that the size of proofs (cut-free or otherwise) cannot be recursively bounded in terms of the formula being proved; instead, we wish to give an upper bound on the size of a
cut-free LK-proof in terms of the size of a general LK-proof. Accordingly, we shall give a constructive proof of the cut elimination theorem; this proof will give an effective procedure for converting a general LK-proof into a cut-free LK-proof. As part of our analysis, we compute an upper bound on the size of the cut-free proof constructed by this procedure. With some modification, the procedure can also be used to construct free-cut free proofs in LKe; this will be discussed in section 2.4.4.

It is worth noting that the cut elimination theorem proved next, together with the Completeness Theorem 2.2.3 for the Hilbert-style calculus, implies the Cut-free Completeness Theorem 2.3.7. This is because LK can easily simulate Hilbert-style proofs and is thus complete; and then, since any valid sequent has an LK-proof, it also has a cut-free LK-proof.

2.4.1. Definition. The depth, dp(A), of a formula A is defined to equal the height of the tree representation of the formula; that is to say:
• dp(A) = 0, for A atomic,
• dp(A ∧ B) = dp(A ∨ B) = dp(A ⊃ B) = 1 + max{dp(A), dp(B)},
• dp(¬A) = dp((∃x)A) = dp((∀x)A) = 1 + dp(A).
The depth of a cut inference is defined to equal the depth of its cut formula.

Definition. The superexponentiation function 2_i^x, for i, x ≥ 0, is defined inductively by 2_0^x = x and 2_{i+1}^x = 2^(2_i^x). Thus 2_i^x is the number which is expressed in exponential notation as a stack of i many 2's with an x at the top.

2.4.2. Cut-Elimination Theorem. Let P be an LK-proof and suppose every cut formula in P has depth less than or equal to d. Then there is a cut-free LK-proof P* with the same endsequent as P, with size
‖P*‖
Δ, A; however, it is not a valid proof. To fix it up to be a valid proof, we need to do the following. First, an initial sequent in Q′ will be of the form A, Γ → Δ, A; this can be validly derived by using the initial sequent A → A followed by weakenings and exchanges. Second, for 1 ≤ i ≤ k, the i-th ∃:right inference enumerated above will be of the form

    Π_i, Γ → Δ, Λ_i, B(t_i)
    ------------------------
    Π_i, Γ → Δ, Λ_i

in Q. This can be replaced by the following inferences

                                  ⋮ R_i
    Π_i, Γ → Δ, Λ_i, B(t_i)     B(t_i), Γ → Δ
    ------------------------------------------
               Π_i, Γ → Δ, Λ_i

Note that this has replaced the ∃:right inference of Q with a cut of depth d - 1 and some weak inferences. No further changes are needed to Q′ to make it a valid proof. In particular, the eigenvariable conditions still hold, since no free variable in Γ → Δ is used as an eigenvariable in Q. By adding some exchanges and contractions to the end of this proof, the desired proof P* of Γ → Δ is obtained. It is clear that every cut in P* has depth < d. From inspection of the construction of P*, the size of P* can be bounded by
‖P*‖ ≤ ‖Q‖ · (‖R‖ + 1) ≤ ‖P‖².
Case (f): The case where A is of the form (∀x)B(x) is completely dual to case (e), and is omitted here.

Case (g): Finally, consider the case where A is atomic. Form R′ from R by replacing every sequent Π → Λ in R with the sequent Π′, Γ → Δ, Λ, where Π′ is Π minus all occurrences of direct ancestors of A. R′ will end with the sequent Γ, Γ → Δ, Δ and will be valid as a proof, except for its initial sequents. The initial sequents B → B in R, with B not a direct ancestor of the cut formula A, become B, Γ → Δ, B in R′; these are readily inferred from the initial sequent B → B with only weak inferences. On the other hand, the other initial sequents A → A in R become Γ → Δ, A, which is just the endsequent of Q. The desired proof P* of Γ → Δ is thus formed from R′ by adding some weak inferences and adding some copies of the subproof Q to the leaves of R′, and by adding some exchanges and contractions to the end of R′. Since Q and R have only cuts of degree < d (i.e., have no cuts, since d = 0), P* likewise has only cuts of degree < d. Also, since the number of initial sequents in R′ is bounded by ‖R‖ + 1, the size of P* can be bounded by
‖P*‖ ≤ ‖R‖ + ‖Q‖ · (‖R‖ + 1)
k > 0 and terms t1(c⃗), t2(c⃗, y1), t3(c⃗, y1, y2), ..., tk(c⃗, y1, ..., y_{k-1}) such that

T ⊢ (∀y1)[A(t1(c⃗), y1, c⃗) ∨ (∀y2)[A(t2(c⃗, y1), y2, c⃗) ∨ (∀y3)[A(t3(c⃗, y1, y2), y3, c⃗) ∨ ⋯ ∨ (∀yk)[A(tk(c⃗, y1, ..., y_{k-1}), yk, c⃗)] ⋯ ]]]
⁵This corollary is named after the more sophisticated no-counterexample interpretations of Kreisel [1951,1952].
To prove the corollary, note that the only strong ∨-expansions of A are formulas of the form (∃x)(∀y)A(x, y, c⃗) ∨ ⋯ ∨ (∃x)(∀y)A(x, y, c⃗), and apply the previous theorem.

2.5.4. No recursive bounds on number of terms. It is interesting to ask whether it is possible to bound the value of r in Herbrand's Theorem 2.5.1. For this, consider the special case where the theory T is empty, so that we have an LK-proof P of (∃x1, ..., xk)B(c⃗, x⃗) where B is quantifier-free. There are two ways in which one might wish to bound the number r needed for Herbrand's theorem: as a function of the size of P, or alternatively, as a function of the size of the formula (∃x⃗)B. For the first approach, it follows immediately from Theorem 2.4.3 and the proof of Herbrand's theorem that r is bounded by a superexponential function of ‖P‖. For the second approach we shall sketch a proof below that r cannot be recursively bounded as a function of the formula (∃x⃗)B.

To show that r cannot be recursively bounded as a function of (∃x⃗)B, we shall prove that having a recursive bound on r would give a decision procedure for determining if a given existential formula is valid. Since it is well known that validity of existential first-order formulas is undecidable, this implies that r cannot be recursively bounded in terms of the formula size. What we shall show is that, given a formula B as in Theorem 2.5.1 and given an r > 0, it is decidable whether there are terms t_{1,1}, ..., t_{r,k} which make the formula
    ⋁_{i=1}^{r} B(c⃗, t_{i,1}, ..., t_{i,k})        (1)
a tautology.⁶ This will suffice to show that r cannot be recursively bounded.

The quantifier-free formula B is expressible as a Boolean combination C(D1, ..., Dl) where each Di is an atomic formula and C(⋯) is a propositional formula. If the formula (1) is a tautology, it is by virtue of certain formulas Di(c⃗, t_{i,1}, ..., t_{i,k}) being identical. That is to say, there is a finite set X of equalities of the form

    Di(c⃗, t_{i,1}, ..., t_{i,k}) = Di′(c⃗, t_{i′,1}, ..., t_{i′,k})

such that any set of terms t_{1,1}, ..., t_{r,k} which makes all the equalities in X true will make (1) a tautology. But now the question of whether there exist terms t_{1,1}, ..., t_{r,k} which satisfy such a finite set X of equations is easily seen to be a first-order unification problem, as described in section 2.6.1 below. This means that there is an algorithm which can either determine that no choice of terms will satisfy all the equations in X or will find a most general unifier which specifies all possible ways to satisfy the equations of X. Since, for a fixed r > 0, there are only finitely many possible sets X of equalities, we have the following algorithm for determining if there are terms which make (1) a tautology: for each possible set X of equalities, check if it has a solution (i.e., a

⁶This was first proved by Herbrand [1930] by the same argument that we sketch here.
most general unifier), and if so, check if the equalities are sufficient to make (1) a tautology. □
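The most general unifier invoked in the argument above can be computed by the standard unification algorithm. Here is a minimal Python sketch under our own term encoding (variables as strings, compound terms as tuples headed by the function symbol); the helper names are ours:

```python
# Minimal first-order unification sketch with occurs check.
# A term is either a variable (a string) or a tuple (f, arg1, ..., argn);
# constants are 0-ary tuples like ("c",).

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a most general unifier extending s, or None if none exists."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# Example: f(x, g(y)) and f(g(z), x) unify via x -> g(z), y -> z:
mgu = unify(("f", "x", ("g", "y")), ("f", ("g", "z"), "x"))
assert walk("x", mgu) == ("g", "z") and walk("y", mgu) == "z"
# Occurs check: x cannot unify with f(x):
assert unify("x", ("f", "x")) is None
```

As the argument notes, such an algorithm either reports that no choice of terms satisfies the equations, or returns a substitution describing all solutions.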
2.5.5. Interpolation theorem. Suppose we are given two formulas A and B such that A ⊃ B is valid. An interpolant for A and B is a formula C such that A ⊃ C and C ⊃ B are both valid. It is a surprising, and fundamental, fact that it is always possible to find an interpolant C such that C contains only non-logical symbols which occur in both A and B. We shall assume for this section that first-order logic has been augmented to include the logical symbols ⊤ and ⊥. For this, the sequent calculus has two new initial sequents → ⊤ and ⊥ →. We write L(A) to denote the set of non-logical symbols occurring in A plus all free variables occurring in A, i.e., the constant symbols, function symbols, predicate symbols and free variables used in A. For Π a cedent, L(Π) is defined similarly.
Craig's Interpolation Theorem. Craig [1957a].
(a) Let A and B be first-order formulas such that ⊨ A ⊃ B. Then there is a formula C such that L(C) ⊆ L(A) ∩ L(B) and such that ⊨ A ⊃ C and ⊨ C ⊃ B.
(b) Suppose Γ1, Γ2 → Δ1, Δ2 is a valid sequent. Then there is a formula C such that L(C) is a subset of L(Γ1, Δ1) ∩ L(Γ2, Δ2) and such that Γ1 → Δ1, C and C, Γ2 → Δ2 are both valid.

Craig's interpolation theorem can be proved straightforwardly from the cut elimination theorem. We shall outline some of the key points of the proof, but leave a full proof to the reader. First, it is easy to see that part (a) is just a special case of (b), so it suffices to prove (b). To prove (b), we first use the cut elimination theorem to obtain a cut-free proof P of Γ1, Γ2 → Δ1, Δ2. We then prove, by induction on the number of strong inferences in P, that there is a formula C with only the desired non-logical symbols and there are proofs P1 and P2 of Γ1 → Δ1, C and C, Γ2 → Δ2. In fact, the proofs P1 and P2 are also cut-free and have lengths linearly bounded by the length of P.

For an example of how the proof by induction goes, let's make the simplifying assumption that there are no function symbols in our languages, and then assume that the final strong inference of P is an ∃:right inference with principal formula in Δ2. That is to say, suppose P ends with the inference
    Γ1, Γ2 → Δ1, Δ′2, A(t)
    ---------------------------
    Γ1, Γ2 → Δ1, Δ′2, (∃x)A(x)

where Δ2 is the cedent Δ′2, (∃x)A(x). Since we are assuming there are no function symbols, t is just a free variable or a constant symbol. The induction hypothesis states that there is an interpolant C(t) with an appropriate first-order language such
that Γ1 → Δ1, C(t) and C(t), Γ2 → Δ′2, A(t) are LK-provable. The interpolant C* for the endsequent of P is defined as follows: if the symbol t does not appear in the sequent Γ2 → Δ2, then C* is (∃y)C(y); otherwise, if the symbol t does not appear in the sequent Γ1 → Δ1, then C* is (∀y)C(y); and if t appears in both sequents, C* is just C. It can be checked that in all three cases, the sequents Γ1 → Δ1, C* and C*, Γ2 → Δ′2, (∃x)A(x) are LK-provable. Therefore, C* is an interpolant for the endsequent of P; also, it is obvious that the language L(C*) is still appropriate for the endsequent.

Secondly, suppose P ends with the inference
o ~
o
~
r,, A(b) F~, F2~ A~, A'2, (Vx)A(x) with the principal formula still presumed to be in A2. The induction hypothesis states that there is an interpolant C with an appropriate firstorder language such that F1 ~ A1, C(b) and C(b), F2 ~ A2, A(b). Since, by the eigenvariable condition, b does not occur except as indicated; we get immediately LKproofs of the sequents F1 ~ A1, (Vy)C(y) and (Vy)C(y), F2 ~ A2, (Vx)A(x). Therefore, (Vy)C(y) serves as an interpolant for the endsequent of P. There are a number of other cases that must be considered, depending on the type of the final strong inference in P and on whether its principal formula is (an ancestor of) a formula in A1, A2, F1 or F2. These cases are given in full detail in textbooks such as Takeuti [1987] or Girard [1987b]. It remains to consider the case where there are function symbols in the language. The usual method of handling this case is to just reduce it to the case where there are no function symbols by removing function symbols in favor of predicate symbols which define the graphs of the functions. Alternatively, one can carry out directly a proof on induction on the number of strong inferences in P even when function symbols are present. This involves a more careful analysis of the 'flow' of terms in the proofs, but still gives cutfree proofs P1 and P2 with size linear in the size of P. We leave the details of the functionsymbol case to the reader. O t h e r i n t e r p o l a t i o n t h e o r e m s . A useful strengthening of the Craig interpolation theorem is due to Lyndon [1959]. This theorem states that Craig's interpolation theorem may be strengthened by further requiring that every predicate symbol which occurs positively (resp., negatively) in C also occurs positively (resp., negatively) in both A and B. The proof of Lyndon's theorem is identical to the proof sketched above, except that now one keeps track of positive and negative occurrences of predicate symbols. 
Craig [1957b] gives a generalization of the Craig interpolation theorem which applies to interpolants of cedents. Lopez-Escobar [1965] proved that the interpolation theorem holds for some infinitary logics. Barwise [1975] proved that the interpolation theorem holds for a wider class of infinitary logics. Lopez-Escobar's proof was proof-theoretic,
S. Buss
58
based on a sequent calculus formalization of infinitary logic. Barwise's proof was model-theoretic; Feferman [1968] gives a proof-theoretic treatment of these general interpolation theorems, based on the sequent calculus.

2.5.6. Beth's definability theorem

Definition. Let P and P′ be predicate symbols with the same arity. Let Γ(P) be an arbitrary set of first-order sentences not involving P′, and let Γ(P′) be the same set of sentences with every occurrence of P replaced with P′. The set Γ(P) is said to explicitly define the predicate P if there is a formula A(x̄) such that

    Γ(P) ⊨ (∀x̄)(A(x̄) ↔ P(x̄)).

The set Γ(P) is said to implicitly define the predicate P if

    Γ(P) ∪ Γ(P′) ⊨ (∀x̄)(P(x̄) ↔ P′(x̄)).

The Definability Theorem of Beth [1953] states the fundamental fact that the notions of explicit and implicit definability coincide. One way to understand the importance of this is to consider implicit definability of P as equivalent to being able to uniquely characterize P. Thus, Beth's theorem states, loosely speaking, that if a predicate can be uniquely characterized, then it can be explicitly defined by a formula not involving P.

One common, elementary mistake is to confuse implicit definability by a set of sentences Γ(P) with implicit definability in a particular model. For example, consider the theory T of sentences which are true in the standard model (N, 0, S, +) of natural numbers with zero, successor and addition. One might attempt to implicitly define multiplication in terms of zero and addition by letting Γ(M) be the theory

    T ∪ {(∀x)(M(x, 0) = 0), (∀x)(∀y)(M(x, S(y)) = M(x, y) + x)}.

It is true that this uniquely characterizes the multiplication function M(x, y) in the sense that there is only one way to expand (N, 0, S, +) to a model of Γ(M); however, this is not an implicit definition of M, since there are nonstandard models of T which have more than one expansion to a model of Γ(M).

Beth's Definability Theorem.
Γ(P) implicitly defines P if and only if it explicitly defines P.

Proof. Beth's theorem is readily proved from the Craig interpolation theorem as follows. First note that if P is explicitly definable, then it is clearly implicitly definable. For the converse, assume that P is implicitly definable. By compactness, we may assume without loss of generality that Γ(P) is a single sentence. Then we have that Γ(P) ∧ P(c̄) ⊨ Γ(P′) ⊃ P′(c̄), where c̄ is a vector of new constant symbols. By the Craig Interpolation Theorem, there is an interpolant A(c̄) for Γ(P) ∧ P(c̄) and Γ(P′) ⊃ P′(c̄). This interpolant is the desired formula explicitly defining P. □
It is also possible to prove the Craig Interpolation Theorem from the Beth Definability Theorem. In addition, both theorems are equivalent to the model-theoretic Joint Consistency Theorem of Robinson [1956].

2.6. First-order logic and resolution refutations

The importance of the resolution proof method for propositional logic (described in section 1.3) lies in large part in the fact that it also serves as a foundation for theorem-proving in first-order logic. Recall that by introducing Herbrand and Skolem functions, theorem-proving in first-order logic can be reduced to proving Π2-formulas of the form (∀x̄)(∃ȳ)A(x̄, ȳ) with A quantifier-free (see above). Also, by Herbrand's Theorem 2.5.1, the problem of proving (∀x̄)(∃ȳ)A(x̄, ȳ) is reducible to the problem of finding terms t̄1, t̄2, …, t̄k so that ⋁i A(x̄, t̄i) is tautologically valid. Now, given the terms t̄1, …, t̄k, determining tautological validity is 'merely' a problem in propositional logic, and hence is amenable to theorem-proving methods such as propositional resolution. Thus one hopes that if one had a good scheme for choosing the terms t̄1, …, t̄k, then one could have a reasonable method of first-order theorem proving. This latter point is exactly the problem that was solved by Robinson [1965b]; namely, he introduced the resolution proof method and showed that, by using a unification algorithm to select terms, the entire problem of which terms to use could be solved efficiently by using the "most general" possible terms. In essence, this reduces the problem of first-order theorem proving to propositional theorem proving. (Of course, this last statement is not entirely true, for two reasons: firstly, there may be a very large number (not recursively bounded) of terms that are needed, and secondly, it is entirely possible that foreknowledge of what terms are sufficient might help guide the search for a propositional proof.)

2.6.1. Unification.
We shall now describe the unification algorithm for finding most general unifiers. We shall let t denote a term containing function symbols, constant symbols and variables. A substitution, σ, is a partial map from variables to terms; we write xσ to denote σ(x), and when x is not in the domain of σ, we let xσ be x. If σ is a substitution, then tσ denotes the result of simultaneously replacing every variable x in t with xσ. We extend σ to atomic formulas by letting R(t1, …, tk)σ denote R(t1σ, …, tkσ). We use concatenation στ to denote the substitution which is equivalent to an application of σ followed by an application of τ.

Definition. Let A1, …, Ak be atomic formulas. A unifier for the set {A1, …, Ak} is a substitution σ such that A1σ = A2σ = ⋯ = Akσ, where = represents the property of being the identical formula. A substitution is said to be a variable renaming substitution if the substitution maps variables only to terms which are variables.
A unifier σ is said to be a most general unifier if, for every unifier τ for the same set, there is a substitution ρ such that τ = σρ. Note that, up to renaming of variables, a most general unifier must be unique.

Unification Theorem. If {A1, …, Ak} has a unifier, then it has a most general unifier.

Proof. We shall prove the theorem by outlining an efficient algorithm for determining whether a unifier exists and, if so, finding a most general unifier. The algorithm is described as an iterative procedure which, at stage s, has a set Es of equations and a substitution σs. The equations in Es are of the form α = β, where α and β may be formulas or terms. The meaning of such an equation is that the sought-for most general unifier must be a substitution which makes α and β identical. Initially, E0 is the set of k − 1 equations Aj = Aj+1 and σ0 is the identity. Given Es and σs, the algorithm does any one of the following operations to choose Es+1 and σs+1:

(1) If Es is empty, we are done and σs is a most general unifier.

(2) If Es contains an equation of the form

    F(t1, …, ti) = F(t′1, …, t′i),

then σs+1 = σs and Es+1 is obtained from Es by removing this equation and adding the i equations tj = t′j. Here F is permitted to be a function symbol, a constant symbol or a predicate symbol.

(3) Suppose Es contains an equation of the form

    x = t   or   t = x,

with x a variable and t a term. Firstly, if t is equal to x, then this equation is discarded, so Es+1 is Es minus this equation and σs+1 = σs. Secondly, if t is a nontrivial term in which x occurs, then the algorithm halts, outputting that no unifier exists.⁷ Thirdly, if x does not occur in t, then let [x/t] denote the substitution that maps x to t, define Es+1 to be the set of equations s[x/t] = s′[x/t] such that s = s′ is in Es, and define σs+1 = σs[x/t].

We leave it to the reader to prove that this algorithm always halts with a most general unifier if a unifier exists. The fact that the algorithm does halt can be proved by noting that each iteration of the algorithm either reduces the number of distinct variables in the equations, or reduces the sum of the lengths of the terms occurring in the equations. □

The algorithm as given above is relatively efficient; however, the sizes of the terms involved may grow exponentially large. Indeed, there are unification problems where the size of the most general unifier is exponential in the size of the unification problem; for example, the most general unifier for

    {f(x1, x2, …, xk), f(g(x2, x2), g(x3, x3), …, g(xk+1, xk+1))}

maps x1 to a term of height k with 2^k atoms.

⁷ This failure condition is known as the occurs check.
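The iterative procedure above can be transcribed almost directly. The following is a minimal sketch, under an illustrative encoding not taken from the text: a term is either ('var', name) or a tuple whose first entry is a function, constant, or predicate symbol; a substitution is a dictionary from variable names to terms.

```python
# Illustrative encoding (not from the text): a term is ('var', name) or
# (symbol, arg1, ..., argk); constants are nullary tuples like ('a',).

def substitute(term, sub):
    """Apply substitution sub to term, following chains of bindings."""
    if term[0] == 'var':
        t = sub.get(term[1])
        return substitute(t, sub) if t is not None else term
    return (term[0],) + tuple(substitute(a, sub) for a in term[1:])

def occurs(name, term, sub):
    """The occurs check: does variable name appear in term (modulo sub)?"""
    term = substitute(term, sub)
    if term[0] == 'var':
        return term[1] == name
    return any(occurs(name, a, sub) for a in term[1:])

def unify(equations):
    """Return a most general unifier for a list of term pairs, or None."""
    sub = {}
    eqs = list(equations)
    while eqs:
        s, t = eqs.pop()
        s, t = substitute(s, sub), substitute(t, sub)
        if s == t:                               # trivial equation: discard
            continue
        if s[0] != 'var' and t[0] == 'var':
            s, t = t, s                          # normalize t = x to x = t
        if s[0] == 'var':                        # case (3): x = t
            if occurs(s[1], t, sub):
                return None                      # occurs check fails
            sub[s[1]] = t
        elif s[0] == t[0] and len(s) == len(t):  # case (2): F(...) = F(...)
            eqs.extend(zip(s[1:], t[1:]))
        else:
            return None                          # clash of distinct symbols
    return sub
```

For example, unifying f(x, g(x)) with f(a, y) binds x to a and y to g(x), so that applying the answer substitution to y yields g(a). As the text notes, this representation can blow up exponentially; the dag-based algorithm mentioned next avoids that.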
If one is willing to use a dag (directed acyclic graph) representation for terms, then this exponential growth rate does not occur. Paterson and Wegman [1978] have given an efficient linear-time unification algorithm based on representing terms as dags.

2.6.2. Resolution and factoring inferences. We now describe the resolution inference used by Robinson [1965b] for first-order logic. The starting point is Herbrand's theorem: we assume that we are attempting to prove a first-order sentence, which without loss of generality is of the form (∃x̄)A(x̄). (This may be assumed without loss of generality since, if necessary, Herbrand functions may be introduced.) We assume, in addition, that A is in disjunctive normal form. Instead of proving this sentence, we shall instead attempt to refute the sentence (∀x̄)(¬A(x̄)). Since A is in disjunctive normal form, we may view ¬A as a set ΓA of clauses, with the literals in the clauses being atomic formulas or negated atomic formulas. We extend the definition of substitutions to act on sets of clauses in the obvious way, so that {C1, …, Ck}σ is defined to be {C1σ, …, Ckσ}.

When we refute (∀x̄)(¬A), we are showing that there is no structure M which satisfies (∀x̄)(¬A). Consider a clause C in ΓA. The clause C is a set of atomic and negated atomic formulas, and a structure M is said to satisfy C provided it satisfies (∀x̄)(⋁φ∈C φ). Thus, it is immediate that if M satisfies C, and if σ is a substitution, then M also satisfies Cσ. From this, we see that the following version of resolution is sound, in that it preserves the property of being satisfied by a model M: if B and C are clauses, if φ is an atomic formula, and if σ and τ are substitutions, let D be the clause (Bσ ∖ {φ}) ∪ (Cτ ∖ {¬φ}). It is easy to verify that if B and C are satisfied in M, then so is D.
Following Robinson [1965b], we use a restricted form of this inference principle as the sole rule of inference for first-order resolution refutations:

Definition. Let B and C be clauses, and suppose P(s̄1), P(s̄2), …, P(s̄k) are atomic formulas in B and that ¬P(t̄1), ¬P(t̄2), …, ¬P(t̄ℓ) are negated atomic formulas in C. Choose a variable renaming substitution τ so that Cτ has no variables in common with B. Also suppose that the k + ℓ formulas P(s̄i) and P(t̄j)τ have a most general unifier σ. Then the clause D defined by

    (Bσ ∖ {P(s̄1)σ}) ∪ (Cτσ ∖ {¬P(t̄1)τσ})

is defined to be an R-resolvent of B and C.

The reason for using the renaming substitution τ is that the variables in the clauses B and C are implicitly universally quantified; thus, if the same variable occurs in both B and C, we allow that variable to be instantiated in B by a different term than in C when we perform the unification. Applying τ before the unification allows this to happen automatically.

One often views R-resolution as the amalgamation of two distinct operations: first, the factoring operation, which finds a most general unifier of a subset of a clause, and
S. Buss
62
second, the unitary resolution operation, which resolves two clauses with respect to a single literal. Thus, R-resolution consists of (a) choosing subsets of the clauses B and C and factoring them, and then (b) applying resolution with respect to the literal obtained by the factoring.
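Restricted to ground clauses (clauses containing no variables), factoring plays no role and R-resolution behaves exactly like propositional resolution, which is what the completeness proof below exploits. The following is a minimal sketch of saturating a set of ground clauses under resolution, using an illustrative string encoding of ground literals that is not from the text:

```python
# Illustrative encoding (not from the text): a ground clause is a set of
# literals; a literal is a string, prefixed with '~' for negation.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(b, c):
    """Yield all resolvents of ground clauses b and c."""
    for lit in b:
        if negate(lit) in c:
            yield (b - {lit}) | (c - {negate(lit)})

def ground_refutable(clauses):
    """Saturate under resolution; True iff the empty clause is derivable."""
    derived = set(map(frozenset, clauses))
    while True:
        new = set()
        for b in derived:
            for c in derived:
                for d in map(frozenset, resolvents(b, c)):
                    if d not in derived:
                        new.add(d)
        if frozenset() in new:
            return True
        if not new:          # saturated without deriving the empty clause
            return False
        derived |= new
```

Since only finitely many clauses can be built from a finite set of ground literals, the saturation terminates; this naive loop is for illustration only and ignores the search strategies of section 1.3.5.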
Completeness of R-resolution. A set Γ of first-order clauses is unsatisfiable if and only if the empty clause can be derived from Γ by R-resolution.

This theorem is proved by the discussion in the next paragraphs.

2.6.3. Lifting ground resolution to first-order resolution. A ground literal is defined to be a literal in which no variables occur; a ground clause is a set of ground literals. We assume, with no loss of generality, that our first-order language contains a constant symbol and that therefore ground literals exist. Ground literals may independently be assigned truth values⁸ and therefore play the same role that literals played in propositional logic. A ground resolution refutation is a propositional-style refutation involving ground clauses only, with ground literals in place of propositional literals. By the Completeness Theorem 1.3.4 for propositional resolution, a set of ground clauses is unsatisfiable if and only if it has a ground resolution refutation. For sets of ground clauses, R-resolution is identical to propositional-style resolution.

Suppose, however, that Γ is an unsatisfiable set of first-order (not necessarily ground) clauses. Since Γ is unsatisfiable there is, by Herbrand's theorem, a set of substitutions σ1, …, σr so that each Γσi is a set of ground clauses and so that the set Π = ⋃i Γσi of clauses is propositionally unsatisfiable. Therefore there is a ground resolution refutation of Π. To justify the completeness of R-resolution, we shall show that any ground resolution refutation of Π can be 'lifted' to an R-resolution refutation of Γ. In fact, we shall prove the following: if C1, C2, …, Cn = ∅ is a resolution refutation of Π, then there are clauses D1, D2, …, Dn = ∅ which form an R-resolution refutation of Γ, and there are substitutions σ1, σ2, …, σn so that Diσi = Ci. We define Di and σi by induction on i as follows. Firstly, if Ci ∈ Π, then it must be equal to Diσi for some Di ∈ Γ and some substitution σi by the definition of Π.
Secondly, if Ci is inferred from Cj and Ck, with j, k < i, by resolution with respect to the literal P(t̄), then define Ej to be the subset of Dj which is mapped to P(t̄) by σj, and define Ek similarly. Now form the R-resolution inference which factors the subsets Ej and Ek of Dj and Dk and forms the resolvent. This resolvent is Di, and it is straightforward to show that the desired σi exists. That finishes the proof of the Completeness Theorem for R-resolution. It should be noted that the method of proof shows that R-resolution refutations are the shortest possible refutations, even if arbitrary substitutions are allowed for factoring inferences.

Even more importantly, the method by which ground resolution refutations were 'lifted' to R-resolution refutations preserves many of the search strategies that

⁸ We are assuming that the equality sign (=) is not present.
were discussed in section 1.3.5. This means that these search strategies can be used for first-order theorem proving.⁹

2.6.4. Paramodulation. The above discussion of R-resolution assumed that equality was not present in the language. In the case where equality is in the language, one must either add additional initial clauses as axioms that express the equality axioms, or one must add additional inference rules. For the first approach, one could add clauses which express the equality axioms from section 2.2.1; for instance, the third equality axiom can be expressed with the clause

    {x1 ≠ y1, …, xk ≠ yk, ¬P(x̄), P(ȳ)},

and the other equality axioms can similarly be expressed as clauses. More computational efficiency can be obtained with equality clauses of the form
{x ≠ y, ¬A(x), A(y)}, where A(x) indicates an arbitrary literal. For the second approach, the paramodulation inference is used instead of equality clauses; this inference is a little complicated to define, but goes as follows: suppose B and C are clauses with no free variables in common and that r = s is a literal in C; let t be a term appearing somewhere in B and let σ be a most general unifier of r and t (or of s and t); let B′ be the clause which is obtained from Bσ by replacing occurrences of tσ with sσ (or with rσ, respectively), and let C′ be (C ∖ {r = s})σ. Under these circumstances, paramodulation allows B′ ∪ C′ to be inferred from B and C. Paramodulation was introduced and shown to be complete by Robinson and Wos [1969] and Wos and Robinson [1973]; for completeness, paramodulation must be combined with R-resolution, with factoring, and with application of variable renaming substitutions.

2.6.5. Horn clauses. An important special case of first-order resolution is when the clauses are restricted to be Horn clauses. The propositional refutation search strategies described in section 1.3.5.6 still apply; in particular, an unsatisfiable set Γ of Horn clauses always has a linear refutation supported by a negative clause in Γ. In addition, the factoring portion of R-resolution is not necessary in refutations of Γ. A typical use of Horn clause refutations is as follows: a set Δ of Horn clauses is assumed as a 'database' of knowledge, such that every clause in Δ contains a positive literal. A query, which is an atomic formula P(s1, …, sk), is chosen; the object is to determine if there is an instance of P(s̄) which is a logical consequence of Δ. In other words, the object is to determine if (∃x̄)P(s̄) is a consequence of Δ, where x̄ is the vector of variables in P(s̄).
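In the ground (propositional) case, the database-and-query setup just described can be illustrated by forward chaining over Horn clauses. The clause and atom names below are invented for illustration; real PROLOG additionally performs unification on first-order terms and searches depth-first:

```python
# Illustrative ground encoding (not from the text): a Horn clause with a
# positive literal is (head, [body atoms]); a query atom follows from the
# database iff forward chaining eventually derives it.

def horn_consequence(database, query):
    """Return True iff query is derivable from the Horn-clause database."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in database:
            # Fire a clause whose body atoms have all been derived.
            if head not in derived and all(a in derived for a in body):
                derived.add(head)
                changed = True
    return query in derived
```

Refutationally, asking whether the query follows corresponds to adding its negation as a negative clause and searching for a linear refutation, as described next.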
To solve this problem, one forms the clause {¬P(s̄)}

⁹ Historically, it was the desire to find strategies for first-order theorem proving, and the ability to lift results from propositional theorem proving, that motivated the research into search strategies for propositional resolution.
and lets Γ be the set Δ ∪ {¬P(s̄)}; one then searches for a linear refutation of Γ which is supported by {¬P(s̄)}. If successful, such a linear refutation R also yields a substitution σ such that Δ ⊢ P(s̄)σ; and indeed, σ is the most general substitution such that R gives a refutation of Δ ∪ {¬P(s̄)σ}. From this, what one actually has is a proof of (∀ȳ)(P(s̄)σ), where ȳ is the vector of free variables in P(s̄)σ. Note that there may be more than one refutation, and that different refutations can give different substitutions σ, so there is not necessarily a unique most general answer substitution.

What we have described is essentially a pure form of PROLOG, which is a logic programming language based on searching for refutations of Horn clauses, usually in a depth-first search. PROLOG also contains conventions for restricting the order of the proof search procedure.

For further reading. There is an extensive literature on logic programming, automated reasoning and automated theorem proving which we cannot survey here. The paper of Robinson [1965b] still provides a good introduction to the foundations of logic programming; the textbooks of Chang and Lee [1973] and Loveland [1978] provide a more detailed treatment of the subject matter above, and the textbooks of Kowalski [1979] and Clocksin and Mellish [1981] provide good detailed introductions to logic programming and PROLOG. Chapter IX, by G. Jäger and R. Stärk, discusses the proof theory and semantics of extensions of logic programming to non-Horn clauses.

3. Proof theory
for other logics
In the final section of this chapter, we shall briefly discuss two important non-classical logics, intuitionistic logic and linear logic.

3.1. Intuitionistic logic

Intuitionistic logic is a subsystem of classical logic which historically arose out of various attempts to formulate a more constructive foundation for mathematics. For example, in intuitionistic logic, the law of the excluded middle, A ∨ ¬A, does not hold in general; furthermore, it is not possible to intuitionistically prove A ∨ B unless at least one of A or B is already intuitionistically provable. We shall discuss below primarily mathematical aspects of intuitionistic logic, and shall omit philosophical or foundational issues: the books of Troelstra and van Dalen [1988] provide a good introduction to the latter aspects of intuitionistic logic.

The intuitionistic sequent calculus, LJ, is defined similarly to the classical sequent calculus LK, except with the following modifications:

(1) To simplify the exposition, we adopt the convention that negation (¬) is not a propositional symbol. In its place, we include the absurdity symbol ⊥ in the language; ⊥ is a nullary propositional symbol which is intended to always have value False. The two ¬ rules of LK are replaced with the single ⊥:left initial sequent, namely ⊥ →. Henceforth, ¬A is used as an abbreviation for A ⊃ ⊥.
(2) In LJ, the ∨:right rule used in the definition of LK in section 1.2.2 is replaced by the two rules

    Γ → Δ, A                 Γ → Δ, A
    ------------    and    ------------
    Γ → Δ, A ∨ B            Γ → Δ, B ∨ A
(3) Otherwise, LJ is defined like LK, except with the important proviso that at most one formula may appear in the succedent of any sequent. In particular, this means that rules which have the principal formula to the right of the sequent arrow may not have any side formulas to the right of the sequent arrow.

3.1.1. Cut elimination. An important property of intuitionistic logic is that the cut elimination and free-cut elimination theorems still apply:

Theorem.
(1) Let Γ → A be LJ-provable. Then there is a cut-free LJ-proof of Γ → A.
(2) Let S be a set of sequents closed under substitution such that each sequent in S has at most one formula in its succedent. Let LJ_S be LJ augmented with initial sequents from S. If LJ_S ⊢ Γ → A, then there is a free-cut free LJ_S-proof of Γ → A.

The (free) cut elimination theorem for LJ can be proved similarly to the proof-theoretic proofs used above for classical logic.

3.1.2. Constructivism and intuitionism. Intuitionistic logic is intended to provide a 'constructive' subset of classical logic: that is to say, it is designed not to allow non-constructive methods of reasoning. An important example of the constructive aspect of intuitionistic logic is the Brouwer-Heyting-Kolmogorov (BHK) constructive interpretation of logic. In the BHK interpretation, the logical connectives ∨, ∃ and ⊃ have non-classical, constructive meanings. In particular, in order to have a proof of (or knowledge of) a statement (∃x)A(x), it is necessary to have a particular object t and a proof of (or knowledge of) A(t). Likewise, in order to have a proof of A ∨ B, the BHK interpretation requires one to have a proof either of A or of B, along with an indication of which one it is a proof of. In addition, in order to have a proof of A ⊃ B, one must have a method of converting any proof of A into a proof of B. The BHK interpretations of the connectives ∧ and ∀ are similar to the classical interpretation; thus, a proof of A ∧ B or of (∀x)A(x) consists of proofs of both A and B, or of a method of constructing a proof of A(t) for all objects t, respectively. It is not difficult to see that the intuitionistic logic LJ is sound under the BHK interpretation, in that any LJ-provable formula has a proof in the sense of the BHK interpretation.

The BHK interpretation provides the philosophical and historical underpinnings of intuitionistic logic; however, it has the drawback of characterizing what constitutes a valid proof rather than characterizing the meanings of the formulas directly. It is possible to extend the BHK interpretation to give meanings to formulas directly,
e.g., by using realizability; under this approach, an existential quantifier (∃x) might mean "there is a constructive procedure which produces x, and (possibly) produces evidence that x is correct". This approach has the disadvantage of requiring one to pick a notion of 'constructive procedure' in an ad hoc fashion; nonetheless, it can be very fruitful in applications such as intuitionistic theories of arithmetic, where there are natural notions of 'constructive procedure'. For pure intuitionistic logic, there is a very satisfactory model theory based on Kripke models. Kripke models provide a semantics for intuitionistic formulas which is analogous to model theory for classical logic in many ways, including a model-theoretic proof of the cut-free completeness theorem, analogous to the proof of Theorem 2.3.7 given above. For an accessible account of the BHK interpretation and Kripke model semantics for intuitionistic logic, the reader can refer to Troelstra and van Dalen [1988, vol. 1]; Chapter VI contains a thorough discussion of realizability.

In this section we shall give only the following theorem of Harrop, which generalizes the statements that A ∨ B is intuitionistically valid if and only if either A or B is, and that (∃x)A(x) is intuitionistically valid if and only if there is a term t such that A(t) is intuitionistically valid.

3.1.3. Definition. The Harrop formulas are inductively defined by:
(1) Every atomic formula is a Harrop formula.
(2) If B is a Harrop formula and A is an arbitrary formula, then A ⊃ B is a Harrop formula.
(3) If B is a Harrop formula, then (∀x)B is a Harrop formula.
(4) If A and B are Harrop formulas, then so is A ∧ B.

Intuitively, a Harrop formula is a formula in which all existential quantifiers and all disjunctions are 'hidden' inside the left-hand scope of an implication.

Theorem. (Harrop [1960]) Let Γ be a cedent containing only Harrop formulas, and let A and B be arbitrary formulas.
(a) If LJ ⊢ Γ → (∃x)B(x), then there exists a term t such that LJ proves Γ → B(t).
(b) If LJ ⊢ Γ → A ∨ B, then at least one of Γ → A and Γ → B is LJ-provable.

We omit the proof of the theorem, which is readily provable by using induction on the length of cut-free LJ-proofs.

3.1.4. Interpretation into classical logic. The 'negative translation' provides a translation of classical logic into intuitionistic logic; an immediate corollary of the negative translation is a simple, constructive proof of the consistency of classical logic from the consistency of intuitionistic logic. There are a variety of negative translations; the first ones were independently discovered by Kolmogorov, Gödel and Gentzen.
Definition. Let A be a formula. The negative translation, A⁻, of A is inductively defined by:
(1) If A is atomic, then A⁻ is ¬¬A.
(2) (¬B)⁻ is ¬(B⁻).
(3) (B ∧ C)⁻ is (B⁻) ∧ (C⁻).
(4) (B ⊃ C)⁻ is (B⁻) ⊃ (C⁻).
(5) (B ∨ C)⁻ is ¬(¬(B⁻) ∧ ¬(C⁻)).
(6) (∀xB)⁻ is ∀x(B⁻).
(7) (∃xB)⁻ is ¬∀x(¬(B⁻)).

Theorem. Let A be a formula. Then LK ⊢ A if and only if LJ ⊢ A⁻.
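The translation is a straightforward structural recursion on formulas; the following minimal sketch transcribes the seven clauses, using an invented tuple encoding of formulas that is not from the text:

```python
# Illustrative encoding (not from the text): formulas are nested tuples such
# as ('atom', 'P'), ('not', A), ('or', A, B), ('forall', 'x', A).

def neg(a):
    return ('not', a)

def negative_translation(f):
    """Compute the negative translation A-, clause by clause."""
    op = f[0]
    if op == 'atom':
        return neg(neg(f))                                    # clause (1)
    if op == 'not':
        return neg(negative_translation(f[1]))                # clause (2)
    if op in ('and', 'implies'):                              # clauses (3), (4)
        return (op, negative_translation(f[1]), negative_translation(f[2]))
    if op == 'or':                                            # clause (5)
        return neg(('and', neg(negative_translation(f[1])),
                           neg(negative_translation(f[2]))))
    if op == 'forall':                                        # clause (6)
        return (op, f[1], negative_translation(f[2]))
    if op == 'exists':                                        # clause (7)
        return neg(('forall', f[1], neg(negative_translation(f[2]))))
    raise ValueError('unknown connective: ' + str(op))
```

For instance, the translation of an atomic P is ¬¬P, and the translation of (∃x)Q is ¬∀x¬¬¬Q, exhibiting how existential content is rendered negatively.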
Clearly A⁻ is classically equivalent to A; so if A⁻ is intuitionistically valid, then A is classically valid. For the converse, we use the following two lemmas, which immediately imply the theorem.

Lemma. Let A be a formula. Then LJ proves ¬¬A⁻ → A⁻.

Proof. The proof is by induction on the complexity of A. We will do only one of the more difficult cases. Suppose A is B ⊃ C. We need to show that LJ proves ¬¬(B⁻ ⊃ C⁻) → B⁻ ⊃ C⁻, for which it suffices to prove B⁻, ¬¬(B⁻ ⊃ C⁻) → C⁻. By the induction hypothesis, LJ proves ¬¬C⁻ → C⁻, so it will suffice to show that B⁻, ¬¬(B⁻ ⊃ C⁻), ¬C⁻ → is LJ-provable. To do this it suffices to show that B⁻, B⁻ ⊃ C⁻ → C⁻ is LJ-provable, which is easy to prove. □

If Γ is a cedent B1, …, Bk, then Γ⁻ denotes the cedent B1⁻, …, Bk⁻ and ¬Γ⁻ denotes the cedent ¬B1⁻, …, ¬Bk⁻.

Lemma. Suppose that LK proves Γ → Δ. Then LJ proves the sequent Γ⁻, ¬Δ⁻ →.

Proof. The proof of the lemma is by induction on the number of inferences in an LK-proof, possibly containing cuts. We'll do only one case and leave the rest to the reader. Suppose the LK-proof ends with the inference

    B, Γ → Δ, C
    -------------
    Γ → Δ, B ⊃ C

The induction hypothesis is that B⁻, Γ⁻, ¬Δ⁻, ¬C⁻ → is provable in LJ. By the previous lemma, the induction hypothesis implies that B⁻, Γ⁻, ¬Δ⁻ → C⁻ is provable, and from this it follows that Γ⁻, ¬Δ⁻, ¬(B⁻ ⊃ C⁻) → is also LJ-provable, which is what we needed to show. □
One consequence of the proof of the above theorem is a constructive proof that the consistency of intuitionistic logic is equivalent to the consistency of classical logic; thus intuitionistic logic does not provide a better foundation for mathematics from the point of view of sheer consistency. This is, however, of little interest to a constructivist, since the translation of classical logic into intuitionistic logic does not preserve the constructive meaning (under, e.g., the BHK interpretation) of the formula. It is possible to extend the negative translation to theories of arithmetic and to set theory; see, e.g., Troelstra and van Dalen [1988, vol. 1].

3.1.5. Formulas as types. Section 3.1.2 discussed a connection between intuitionistic logic and constructivism. An important strengthening of this connection is the Curry-Howard formulas-as-types isomorphism, which provides a direct correspondence between intuitionistic proofs and λ-terms representing computable functions, under which intuitionistic proofs can be regarded as computable functions. This section will discuss the formulas-as-types isomorphism for intuitionistic propositional logic; our treatment is based on the development by Howard [1980]. Girard [1989] contains further material, including the formulas-as-types isomorphism for first-order intuitionistic logic.

The {⊃}-fragment. We will begin with the fragment of propositional logic in which the only logical connective is ⊃. We work in the sequent calculus, and the only strong rules of inference are the ⊃:left and ⊃:right rules. Firstly, we must define the set of types:
Definition. The types are defined as follows:
(a) Any propositional variable pi is a type.
(b) If σ and τ are types, then (σ → τ) is a type.

If we identify the symbols ⊃ and →, the types are exactly the same as formulas, since ⊃ is the only logical connective allowed. We shall henceforth make this identification of types and formulas without comment.

Secondly, we must define the terms of the λ-calculus; each term t has a unique associated type. We write t^τ to mean that t is a term of type τ.

Definition. For each type τ, there is an infinite set of variables x1^τ, x2^τ, x3^τ, … of type τ. Note that if σ ≠ τ, then xi^σ and xi^τ are distinct variables. The terms are inductively defined by:
(a) Any variable of type τ is a term of type τ.
(b) If s^(σ→τ) and t^σ are terms of the indicated types, then (st) is a term of type τ. This term may also be denoted (st)^τ or even (s^(σ→τ) t^σ)^τ.
(c) If t^τ is a term, then λx^σ.t is a term of type σ → τ. This term can also be denoted (λx^σ.t)^(σ→τ) or (λx^σ.t^τ)^(σ→τ).
Introduction to Proof Theory
Traditionally, a type σ → τ is viewed as a set of mappings from the objects of type σ into the objects of type τ. For intuitionistic logic, it is often useful to think of a type σ as being the set of proofs of the formula σ. Note that under the BHK interpretation this is compatible with the traditional view of types as sets of mappings. The λx operator serves to bind occurrences of x; thus one can define the notions of bound and free variables in terms in the obvious way. A term is closed provided it has no free variables. The computational content of a term is defined by letting (st) represent the application of the function s to the argument t, and letting λx^σ.t represent the mapping that takes objects d of type σ to the object t(x/d). Clearly this gives a constructive computational meaning to terms.

Theorem. Let B be a formula. Then LJ ⊢ B if and only if there is a closed term of type B.
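As a concrete illustration (a Python sketch with our own encodings, not part of the text), closed typed terms can be represented and their types synthesized mechanically; applying this to the combinators K and S discussed next reproduces the types claimed for them.

```python
# A sketch with our own encodings (not from the text): types are propositional
# variable names or ('->', sigma, tau); terms are tagged tuples that carry
# enough type annotations for their type to be synthesized.
def type_of(term):
    tag = term[0]
    if tag == 'var':                  # ('var', name, type)
        return term[2]
    if tag == 'app':                  # ('app', s, t)  -- the application (st)
        f, a = type_of(term[1]), type_of(term[2])
        assert f[0] == '->' and f[1] == a, 'ill-typed application'
        return f[2]                   # s : sigma -> tau and t : sigma give (st) : tau
    if tag == 'lam':                  # ('lam', x, sigma, body)  -- lambda x^sigma. body
        return ('->', term[2], type_of(term[3]))

# K = lambda x^s. lambda y^t. x  has type  s -> (t -> s)
K = ('lam', 'x', 's', ('lam', 'y', 't', ('var', 'x', 's')))
assert type_of(K) == ('->', 's', ('->', 't', 's'))

# S = lambda x. lambda y. lambda z. ((xz)(yz)), with x, y, z typed as in the text
stm = ('->', 's', ('->', 't', 'm'))
S = ('lam', 'x', stm,
     ('lam', 'y', ('->', 's', 't'),
      ('lam', 'z', 's',
       ('app', ('app', ('var', 'x', stm), ('var', 'z', 's')),
               ('app', ('var', 'y', ('->', 's', 't')), ('var', 'z', 's'))))))
assert type_of(S) == ('->', stm, ('->', ('->', 's', 't'), ('->', 's', 'm')))
```

Both synthesized types are exactly the valid formulas σ ⊃ (τ ⊃ σ) and (σ ⊃ (τ ⊃ μ)) ⊃ ((σ ⊃ τ) ⊃ (σ ⊃ μ)).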
An example of this theorem is illustrated by the closed term K defined as λx.λy.x, where x and y are of types σ and τ, respectively; therefore the term K has type σ → (τ → σ), which is a valid formula. A second important example is the closed term S, which is defined by λx.λy.λz.((xz)(yz)) where the types of x, y and z are σ → (τ → μ), σ → τ and σ, respectively; therefore S has type (σ → (τ → μ)) → ((σ → τ) → (σ → μ)), which is a valid formula. The terms K and S are examples of combinators.

The import of this theorem is that a closed term of type B corresponds to a proof of B. We shall briefly sketch the proof of this for the sequent calculus; however, the correspondence between closed terms and natural deduction proofs is even stronger, namely, the closed terms are isomorphic to natural deduction proofs in intuitionistic logic (see Girard [1989]). To prove the theorem, we prove the following lemma:

Lemma. LJ proves the sequent A₁, ..., Aₙ → B if and only if there is a term t^B of the indicated type involving only the free variables x₁^(A₁), ..., xₙ^(Aₙ).

Proof. It is straightforward to prove, by induction on the complexity of t^B, that the desired LJ-proof exists. For the other direction, use induction on the length of the LJ-proof to prove that the term t^B exists. We will consider only the case where the last inference of the LJ-proof is

    Γ → A     B, Γ → C
    -------------------  ⊃:left
       A ⊃ B, Γ → C

The induction hypotheses give terms r^A(x^Γ) and s^C(x^B, x^Γ), where x^Γ actually represents a vector of variables, one for each formula in Γ. The desired term t^C(x^(A⊃B), x^Γ) is defined to be s^C((x^(A⊃B) r^A(x^Γ)), x^Γ). □

The {⊃, ⊥}-fragment. The above treatment of the formulas-as-types isomorphism allowed only the connective ⊃. The formulas-as-types paradigm can be extended to allow also the symbol ⊥, and thereby negation, by making the following changes.
S. Buss
Firstly, enlarge the definition of types by adding a clause specifying that 0 is a type. Identifying 0 with ⊥ makes types identical to formulas. The type 0 corresponds to the empty set; this is consistent with the BHK interpretation since ⊥ has no proof. Secondly, enlarge the definition of terms by adding a new clause stating that, for every type σ, there is a term f^(0→σ) of type 0 → σ. Now the Theorem and Lemma of section 3.1.5 still hold verbatim for formulas with connectives {⊃, ⊥} and with the new definitions of types and terms.

The {⊃, ⊥, ∧}-fragment. To further expand the formulas-as-types paradigm to intuitionistic logic with connectives ⊃, ⊥ and ∧, we must make the following modifications to the definitions of terms and types. Firstly, add to the definition of types a clause stating that if σ and τ are types, then (σ × τ) is a type. By identifying × with ∧, we still have the types identified with the formulas. Consistent with the BHK interpretation, the type (σ × τ) may be viewed as the cross product of its constituent types. Secondly, add to the definition of terms the following two clauses: (i) if s^σ and t^τ are terms, then (s, t) is a term of type σ × τ, and (ii) if s^(σ×τ) is a term, then (π₁s)^σ and (π₂s)^τ are terms. The former term uses the pairing function; the latter terms use the projection functions. Again, the Theorem and Lemma above still hold with these new definitions of types and terms.

The {⊃, ⊥, ∧, ∨}-fragment. To incorporate also the connective ∨ into the formulas-as-types isomorphism we expand the definitions of types and terms as follows. Firstly, add to the definition of types that if σ and τ are types, then (σ + τ) is a type. To identify formulas with types, the symbol + is identified with ∨. The type σ + τ is viewed as the disjoint union of σ and τ, consistent with the BHK interpretation.
Secondly, add to the definitions of terms the following two clauses:
• If s^σ and t^τ are terms of the indicated types, then (ι₁^(σ→(σ+τ)) s) and (ι₂^(τ→(σ+τ)) t) are terms of type σ + τ.
• For all types σ, τ and μ, there is a constant symbol d^((σ→μ)→((τ→μ)→((σ+τ)→μ))) of the indicated type. In other words, if s^(σ→μ) and t^(τ→μ) are terms, then (dst) is a term of type (σ + τ) → μ.

Once again, the Theorem and Corollary of section 3.1.5 hold with these enlarged definitions for types and terms.

3.2. Linear logic

Linear logic was first introduced by Girard [1987a] as a refinement of classical logic, which uses additional connectives and in which weakening and contraction are not allowed. Linear logic is best understood as a 'resource logic', in that proofs in linear logic must use each formula exactly once. In this point of view, assumptions in a proof are viewed as resources; each resource must be used once and only once during the proof. Thus, loosely speaking, an implication A ⊸ B can be proved in
linear logic only if A is exactly what is needed to prove B; thus if the assumption A is either too weak or too strong, it is not possible to give a linear logic proof of 'if A then B'. As an example of this, A → B being provable in linear logic does not generally imply that A, A → B is provable in linear logic; this is because A → B asserts that one use of the resource A gives B, whereas A, A → B asserts that two uses of the resource A give B.

We shall introduce only the propositional fragment of linear logic, since the main features of linear logic are already present in its propositional fragment. We are particularly interested in giving an intuitive understanding of linear logic; accordingly, we shall frequently make vague or even not-quite-true assertions, sometimes, but not always, preceded by phrases such as "loosely speaking" or "intuitively", etc.

Linear logic will be formalized as a sequent calculus.¹⁰ The initial logical sequents are just A → A; the weakening and contraction structural rules are not allowed, and the only structural rules are the exchange rule and the cut rule. Since contraction is not permitted, this prompts one to reformulate the rules, such as ∧:right and ∨:left, which contain implicit contraction of side formulas. To begin with conjunction, linear logic has two different conjunction symbols, denoted ⊗ and &: the multiplicative connective ⊗ does not allow contractions on side formulas, whereas the additive conjunction & does. The rules of inference for the two varieties of conjunction are:

    A, B, Γ → Δ                     Γ₁ → Δ₁, A     Γ₂ → Δ₂, B
    ------------  ⊗:left            --------------------------  ⊗:right
    A ⊗ B, Γ → Δ                    Γ₁, Γ₂ → Δ₁, Δ₂, A ⊗ B

and

    A, Γ → Δ               B, Γ → Δ               Γ → Δ, A     Γ → Δ, B
    ------------  &:left   ------------  &:left   ---------------------  &:right
    A & B, Γ → Δ           A & B, Γ → Δ           Γ → Δ, A & B
The obvious question now arises of what the difference is in the meanings of the two different conjunctions ⊗ and &. For this, one should think in terms of resource usage. Consider first the ⊗:left inference: the upper sequent contains both A and B, so that resources for both A and B are available. The lower sequent contains A ⊗ B in their stead; the obvious intuition is that the resource (for) A ⊗ B is equivalent to having resources for both A and B. Thus, loosely speaking, we translate ⊗ as meaning "both". Now consider the &:left inference; here the lower sequent has the principal formula A & B in place of either A or B, depending on which form of the inference is used. This means that a resource A & B is equivalent to having a resource for A or B, whichever is desired. So we can translate & as "whichever". The translations "both" and "whichever" also make sense for the other inferences for the conjunctions. Consider the ⊗:right inference and, since we have not yet discussed the disjunctions, assume Δ₁ and Δ₂ are empty. In this case, the upper hypotheses say that from a resource Γ₁ one can obtain A, and from a resource Γ₂

¹⁰The customary convention for linear logic is to use "⊢" as the symbol for the sequent connective; we shall continue to use the symbol "→" however.
one can obtain B. The conclusion then says that from (the disjoint union of) both these resources, one obtains both A and B. Now consider a &:right inference

    Γ → Δ, A     Γ → Δ, B
    ---------------------  &:right
    Γ → Δ, A & B

For the lower sequent, consider an agent who has the responsibility of providing (outputting) a resource A & B using exactly the resource Γ. To provide the resource A & B, the agent must be prepared to provide either a resource A or a resource B, whichever one is needed; in either case, there is an upper sequent which says the agent can accomplish this.

Linear logic also has two different forms of disjunction: the multiplicative disjunction ⅋ and the additive disjunction ⊕. The former connective is dual to ⊗ and the latter is dual to &. Accordingly the rules of inference for the disjunctions are:

    Γ → Δ, A, B                  A, Γ₁ → Δ₁     B, Γ₂ → Δ₂
    ------------  ⅋:right        --------------------------  ⅋:left
    Γ → Δ, A ⅋ B                 A ⅋ B, Γ₁, Γ₂ → Δ₁, Δ₂

and

    A, Γ → Δ     B, Γ → Δ             Γ → Δ, A                  Γ → Δ, B
    ---------------------  ⊕:left     ------------  ⊕:right     ------------  ⊕:right
    A ⊕ B, Γ → Δ                      Γ → Δ, A ⊕ B              Γ → Δ, A ⊕ B

By examining these inferences, we can see that the multiplicative disjunction, ⅋, is a kind of 'indeterminate OR' or 'unspecific OR', in that a resource for A ⅋ B consists of a resource either for A or for B, but without a specification of which one it is a resource for. On the other hand, the additive disjunction, ⊕, is a 'determinate OR' or a 'specific OR', so that a resource for A ⊕ B is either a resource for A or a resource for B, together with a specification of which one it is a resource for. To give a picturesque example, consider a military general who has to counter an invasion which will come either on the western front or on the eastern front. If the connective 'or' is the multiplicative or, then the general needs sufficient resources to counter both threats, whereas if the 'or' is an additive disjunction then the general needs only enough resources to counter either threat. In the latter case, the general is told ahead of time where the invasion will occur and has the ability to redeploy resources so as to counter the actual invasion; whereas in the former case, the general must separately deploy resources sufficient to counter the attack from the west and resources sufficient to counter the attack from the east. From the rules of inference for disjunction, it is clear that the commas in the succedent of a sequent are intended to be interpreted as the unspecific or, ⅋.

The linear implication symbol, ⊸, is defined by letting A ⊸ B be an abbreviation for A^⊥ ⅋ B, where A^⊥ is defined below. Linear logic also has two nullary connectives, 1 and ⊤, for truth, and two nullary connectives, 0 and ⊥, for falsity. Associated to these connectives are the initial sequents → 1 and 0, Γ → Δ, and the one rule of inference:

    Γ → Δ
    ---------
    1, Γ → Δ
The intuitive idea for the multiplicatives is that 1 has the null resource and that there is no resource for ⊥. For the additive connectives, ⊤ has an 'arbitrary' resource, which means that anything is a resource for ⊤, whereas any resource for 0 is a 'wild card' resource which may be used for any purpose.

In addition, linear logic has a negation operation A^⊥, which is involutive in that (A^⊥)^⊥ is the same as the formula A. For each propositional variable p, p^⊥ is the negation of p. The operation of negation is inductively defined by letting (A ⊗ B)^⊥ be A^⊥ ⅋ B^⊥, (A & B)^⊥ be A^⊥ ⊕ B^⊥, 1^⊥ be ⊥, and ⊤^⊥ be 0. This plus the involution property defines the operation A^⊥ for all formulas A. The two rules of inference associated with negation are

    A, Γ → Δ                  Γ → Δ, A
    ----------  ⊥:right       ----------  ⊥:left
    Γ → Δ, A^⊥                A^⊥, Γ → Δ
Definition. The multiplicative/additive fragment of propositional linear logic is denoted MALL and contains all the logical connectives, initial sequents and rules of inference discussed above.

The absence of weakening and contraction rules makes the linear logic sequent calculus particularly well behaved; in particular, the cut elimination theorem for MALL is quite easy to prove. In fact, if P is a MALL proof, then there is a shorter MALL proof P* with the same endsequent which contains no cut inferences. In addition, the number of strong inferences in P* is bounded by the number of connectives in the endsequent. Bellin [1990] contains an in-depth discussion of MALL and related systems.

As an exercise, consider the distributive laws A ⊗ (B ⊕ C) → (A ⊗ B) ⊕ (A ⊗ C) and (A ⊗ B) ⊕ (A ⊗ C) → A ⊗ (B ⊕ C). The reader should check that these are valid under the intuitive interpretation of the connectives given above; and, as expected, they can be proved in MALL. Similar reasoning shows that & is distributive over ⅋. On the other hand, ⊕ is not distributive over ⅋; this can be seen intuitively by considering the meanings of the connectives, or formally by using the cut elimination theorem.

MALL is not the full propositional linear logic. Linear logic (LL) has, in addition, two modalities ! and ? which allow contraction and weakening to be used in certain situations. The negation operator is extended to LL by defining (!A)^⊥ to be ?(A^⊥). The four rules of inference for ! are:

    Γ → Δ                          A, Γ → Δ
    ----------  !:weakening₀       ----------  !:weakening₁
    !A, Γ → Δ                      !A, Γ → Δ

    !A, !A, Γ → Δ                  !Γ → ?Δ, A
    --------------  !:contraction  ------------
    !A, Γ → Δ                      !Γ → ?Δ, !A

where in the last inference, !Γ (respectively, ?Δ) represents a cedent containing only formulas beginning with the ! symbol (respectively, the ? symbol). The dual rules for ? are obtained from these using the negation operation. One should intuitively think of a resource for !A as consisting of zero or more resources
for A. The nature of full linear logic with the modalities is quite different from that of either MALL or propositional classical logic; in particular, Lincoln et al. [1992] show that LL is undecidable.

The above development of the intuitive meanings of the connectives in linear logic has not been as rigorous as one would desire; in particular, it would be nice to have a completeness theorem for linear logic based on these intuitive meanings. This has been achieved for some fragments of linear logic by Blass [1992] and Abramsky and Jagadeesan [1994], but has not yet been attained for the full propositional linear logic. In addition to the references above, more information on linear logic may be found in Troelstra [1992], although his notation is different from the standard notation we have used.

Acknowledgements. We are grateful to S. Cook, E. Gertz, C. Gutiérrez, A. Jonasson, C. Lautemann, A. Maciel, R. Parikh, C. Pollett, R. Stärk, J. Torán and S. Wainer for reading and suggesting corrections to preliminary versions of this chapter. Preparation of this article was partially supported by NSF grant DMS-9503247 and by cooperative research grant INT-9600919/ME-103 of the NSF and the Czech Republic Ministry of Education.
References

S. Abramsky and R. Jagadeesan
[1994] Games and full completeness for multiplicative linear logic, Journal of Symbolic Logic, 59, pp. 543-574.

J. Barwise
[1975] Admissible Sets and Structures: An Approach to Definability Theory, Springer-Verlag, Berlin.
[1977] Handbook of Mathematical Logic, North-Holland, Amsterdam.

G. Bellin
[1990] Mechanizing Proof Theory: Resource-Aware Logics and Proof-Transformations to Extract Explicit Information, PhD thesis, Stanford University.

E. W. Beth
[1953] On Padoa's method in the theory of definition, Indagationes Mathematicae, 15, pp. 330-339.
[1956] Semantic entailment and formal derivability, Indagationes Mathematicae, 19, pp. 357-388.

A. Blass
[1992] A game semantics for linear logic, Annals of Pure and Applied Logic, 56, pp. 183-220.

S. R. Buss
[1986] Bounded Arithmetic, Bibliopolis, Napoli. Revision of 1985 Princeton University Ph.D. thesis.

C.-L. Chang
[1970] The unit proof and the input proof in theorem proving, J. Assoc. Comput. Mach., 17, pp. 698-707. Reprinted in: Siekmann and Wrightson [1983, vol. 2].

C.-L. Chang and R. C.-T. Lee
[1973] Symbolic Logic and Mechanical Theorem Proving, Academic Press, New York.
W. F. Clocksin and C. S. Mellish
[1981] Programming in Prolog, North-Holland, Amsterdam, 4th ed.

W. Craig
[1957a] Linear reasoning. A new form of the Herbrand-Gentzen theorem, Journal of Symbolic Logic, 22, pp. 250-268.
[1957b] Three uses of the Herbrand-Gentzen theorem in relating model theory and proof theory, Journal of Symbolic Logic, 22, pp. 269-285.

M. Davis and H. Putnam
[1960] A computing procedure for quantification theory, J. Assoc. Comput. Mach., 7, pp. 201-215. Reprinted in: Siekmann and Wrightson [1983, vol. 1].

P. C. Eklof
[1977] Ultraproducts for algebraists, in: Barwise [1977], pp. 105-137.

S. Feferman
[1968] Lectures on proof theory, in: Proceedings of the Summer School in Logic, Leeds, 1967, M. H. Löb, ed., Lecture Notes in Mathematics #70, Springer-Verlag, Berlin, pp. 1-107.

G. Frege
[1879] Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, Halle. English translation in: van Heijenoort [1967], pp. 1-82.

G. Gentzen
[1935] Untersuchungen über das logische Schliessen, Mathematische Zeitschrift, 39, pp. 176-210, 405-431. English translation in: Gentzen [1969], pp. 68-131.
[1969] Collected Papers of Gerhard Gentzen, North-Holland, Amsterdam. Edited by M. E. Szabo.

J.-Y. Girard
[1987a] Linear logic, Theoretical Computer Science, 50, pp. 1-102.
[1987b] Proof Theory and Logical Complexity, vol. I, Bibliopolis, Napoli.
[1989] Proofs and Types, Cambridge Tracts in Theoretical Computer Science #7, Cambridge University Press. Translation and appendices by P. Taylor and Y. Lafont.

K. Gödel
[1930] Die Vollständigkeit der Axiome des logischen Funktionenkalküls, Monatshefte für Mathematik und Physik, 37, pp. 349-360.

R. Harrop
[1960] Concerning formulas of the types A → B ∨ C, A → (Ex)B(x) in intuitionistic formal systems, Journal of Symbolic Logic, 25, pp. 27-32.

J. van Heijenoort
[1967] From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press.

L. Henkin
[1949] The completeness of the first-order functional calculus, Journal of Symbolic Logic, 14, pp. 159-166.

L. Henschen and L. Wos
[1974] Unit refutations and Horn sets, J. Assoc. Comput. Mach., 21, pp. 590-605.

J. Herbrand
[1930] Recherches sur la théorie de la démonstration, PhD thesis, University of Paris. English translation in Herbrand [1971] and translation of chapter 5 in van Heijenoort [1967], pp. 525-581.
[1971] Logical Writings, D. Reidel, Dordrecht, Holland. Edited by W. Goldfarb.

D. Hilbert and W. Ackermann
[1928] Grundzüge der theoretischen Logik, Springer-Verlag, Berlin.

D. Hilbert and P. Bernays
[1934-39] Grundlagen der Mathematik, I & II, Springer, Berlin.

J. Hintikka
[1955] Form and content in quantification theory, Two Papers on Symbolic Logic, Acta Philosophica Fennica, 8, pp. 7-55.

W. A. Howard
[1980] The formulas-as-types notion of construction, in: To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, J. P. Seldin and J. R. Hindley, eds., Academic Press, New York, pp. 479-491.

S. Kanger
[1957] Provability in Logic, Almqvist & Wiksell, Stockholm.

S. C. Kleene
[1952] Introduction to Metamathematics, Wolters-Noordhoff, Groningen and North-Holland, Amsterdam.

R. Kowalski
[1979] Logic for Problem Solving, North-Holland, Amsterdam.

J. Krajíček, P. Pudlák, and G. Takeuti
[1991] Bounded arithmetic and the polynomial hierarchy, Annals of Pure and Applied Logic, 52, pp. 143-153.

G. Kreisel
[1951] On the interpretation of non-finitist proofs, part I, Journal of Symbolic Logic, 16, pp. 241-267.
[1952] On the interpretation of non-finitist proofs, part II: Interpretation of number theory, applications, Journal of Symbolic Logic, 17, pp. 43-58.

P. Lincoln, J. C. Mitchell, A. Scedrov, and N. Shankar
[1992] Decision problems for propositional linear logic, Annals of Pure and Applied Logic, 56, pp. 239-311.

E. G. K. Lopez-Escobar
[1965] An interpolation theorem for denumerably long formulas, Fundamenta Mathematicae, 57, pp. 253-272.

D. W. Loveland
[1970] A linear format for resolution, in: Symp. on Automatic Demonstration, Lecture Notes in Mathematics #125, Springer-Verlag, Berlin, pp. 147-162.
[1978] Automated Theorem Proving: A Logical Basis, North-Holland, Amsterdam.

D. Luckham
[1970] Refinement theorems in resolution theory, in: Symp. on Automatic Demonstration, Lecture Notes in Mathematics #125, Springer-Verlag, Berlin, pp. 163-190.

R. C. Lyndon
[1959] An interpolation theorem in the predicate calculus, Pacific Journal of Mathematics, 9, pp. 129-142.

E. Mendelson
[1987] Introduction to Mathematical Logic, Wadsworth & Brooks/Cole, Monterey.

J. R. Munkres
[1975] Topology: A First Course, Prentice-Hall, Englewood Cliffs, New Jersey.
M. S. Paterson and M. N. Wegman
[1978] Linear unification, J. Comput. System Sci., 16, pp. 158-167.

G. Peano
[1889] Arithmetices Principia, nova methodo exposita, Turin. English translation in: van Heijenoort [1967], pp. 83-97.

D. Prawitz
[1965] Natural Deduction: A Proof-Theoretical Study, Almqvist & Wiksell, Stockholm.

A. Robinson
[1956] A result on consistency and its application to the theory of definability, Indagationes Mathematicae, 18, pp. 47-58.

G. Robinson and L. Wos
[1969] Paramodulation and theorem-proving in first-order theories with equality, in: Machine Intelligence 4, pp. 135-150.

J. A. Robinson
[1965a] Automatic deduction with hyper-resolution, International Journal of Computer Mathematics, 1, pp. 227-234. Reprinted in: Siekmann and Wrightson [1983, vol. 1].
[1965b] A machine-oriented logic based on the resolution principle, J. Assoc. Comput. Mach., 12, pp. 23-41. Reprinted in: Siekmann and Wrightson [1983, vol. 1].

K. Schütte
[1965] Ein System des verknüpfenden Schliessens, Archiv für Mathematische Logik und Grundlagenforschung, 2, pp. 55-67.

J. Siekmann and G. Wrightson
[1983] Automation of Reasoning, vol. 1 & 2, Springer-Verlag, Berlin.

J. R. Slagle
[1967] Automatic theorem proving with renamable and semantic resolution, J. Assoc. Comput. Mach., 14, pp. 687-697. Reprinted in: Siekmann and Wrightson [1983, vol. 1].

W. W. Tait
[1968] Normal derivability in classical logic, in: The Syntax and Semantics of Infinitary Languages, Lecture Notes in Mathematics #72, J. Barwise, ed., Springer-Verlag, Berlin, pp. 204-236.

G. Takeuti
[1987] Proof Theory, North-Holland, Amsterdam, 2nd ed.

A. S. Troelstra
[1992] Lectures on Linear Logic, Center for the Study of Language and Information, Stanford.

A. S. Troelstra and D. van Dalen
[1988] Constructivism in Mathematics: An Introduction, vol. I & II, North-Holland, Amsterdam.

G. S. Tsejtin
[1968] On the complexity of derivation in propositional logic, Studies in Constructive Mathematics and Mathematical Logic, 2, pp. 115-125. Reprinted in: Siekmann and Wrightson [1983, vol. 2].

A. N. Whitehead and B. Russell
[1910] Principia Mathematica, vol. 1, Cambridge University Press.

L. Wos, R. Overbeek, E. Lusk, and J. Boyle
[1992] Automated Reasoning: Introduction and Applications, McGraw-Hill, New York, 2nd ed.
L. Wos and G. Robinson
[1973] Maximal models and refutation completeness: Semi-decision procedures in automatic theorem proving, in: Word Problems: Decision Problems and the Burnside Problem in Group Theory, W. W. Boone, F. B. Cannonito, and R. C. Lyndon, eds., North-Holland, Amsterdam, pp. 609-639.

L. Wos, G. Robinson, and D. F. Carson
[1965] Efficiency and completeness of the set of support strategy in theorem proving, J. Assoc. Comput. Mach., 12, pp. 201-215. Reprinted in: Siekmann and Wrightson [1983, vol. 1].
CHAPTER II
First-Order Proof Theory of Arithmetic

Samuel R. Buss

Departments of Mathematics and Computer Science, University of California, San Diego, La Jolla, CA 92093-0112, USA
Contents

1. Fragments of arithmetic .......................................... 81
   1.1. Very weak fragments of arithmetic ........................... 82
   1.2. Strong fragments of arithmetic .............................. 83
   1.3. Fragments of bounded arithmetic ............................. 97
   1.4. Sequent calculus formulations of arithmetic ................. 109
2. Gödel incompleteness ............................................. 112
   2.1. Arithmetization of metamathematics .......................... 113
   2.2. The Gödel incompleteness theorems ........................... 118
3. On the strengths of fragments of arithmetic ...................... 122
   3.1. Witnessing theorems ......................................... 122
   3.2. Witnessing theorem for S¹₂ .................................. 127
   3.3. Witnessing theorems and conservation results for Tⁱ₂ ........ 130
   3.4. Relationships between BΣₙ and IΣₙ ........................... 134
4. Strong incompleteness theorems for IΔ₀ + exp ..................... 137
References .......................................................... 143

HANDBOOK OF PROOF THEORY
Edited by S. R. Buss
1998 Elsevier Science B.V. All rights reserved
This chapter discusses the proof-theoretic foundations of the first-order theory of the nonnegative integers. This first-order theory of numbers, also called 'first-order arithmetic', consists of the first-order sentences which are true about the integers. The study of first-order arithmetic is important for several reasons. Firstly, in the study of the foundations of mathematics, arithmetic and set theory are two of the most important first-order theories; indeed, the usual foundational development of mathematical structures begins with the integers as fundamental and from these constructs such structures as the rationals and the reals. Secondly, the proof theory of arithmetic is highly developed and serves as a basis for proof-theoretic investigations of many stronger theories. Thirdly, there are intimate connections between subtheories of arithmetic and computational complexity; these connections go back to Gödel's discovery that the numeralwise representable functions of arithmetic theories are exactly the recursive functions, and are recently of great interest because some weak theories of arithmetic have very close connections to feasible computational classes.

Because, by Gödel's incompleteness theorem, the theory of numbers is not recursive, there is no good proof theory for the complete theory of numbers; therefore, proof theorists consider axiomatizable subtheories (called fragments) of first-order arithmetic. These fragments range in strength from the very weak theories R and Q up to the very strong theory of Peano arithmetic (PA).

The outline of this chapter is as follows. Firstly, we shall introduce the most important fragments of arithmetic and discuss their relative strengths and the bootstrapping process. Secondly, we give an overview of the incompleteness theorems. Thirdly, section 3 discusses the topics of what functions are provably total in various fragments of arithmetic and of the relative strengths of different fragments of arithmetic. Finally, we conclude with a proof of a theorem of J. Paris and A. Wilkie which improves Gödel's incompleteness theorem by showing that IΔ₀ + exp cannot prove the consistency of Q.

The main prerequisite for reading this chapter is knowledge of the sequent calculus and cut-elimination, as contained in Chapter I of this volume. The proof theory of arithmetic is a major subfield of logic and this chapter necessarily omits many important and central topics in the proof theory of arithmetic; the most notable omission is theories stronger than Peano arithmetic. Our emphasis has instead been on weak fragments of arithmetic and on finitary proof theory, especially on applications of the cut-elimination theorem. The articles of Fairtlough-Wainer, Pohlers, Troelstra and Avigad-Feferman in this volume also discuss the proof theory of arithmetic.

There are a number of book-length treatments of the proof theory and model theory of arithmetic. Takeuti [1987], Girard [1987] and Schütte [1977] discuss the classical proof theory of arithmetic, Buss [1986] discusses the proof theory of bounded arithmetic, and Kaye [1991] and Hájek and Pudlák [1993] treat the model theory of arithmetic. The last reference gives an in-depth and modern treatment both of classical fragments of Peano arithmetic and of bounded arithmetic.
Proof Theory of Arithmetic

1. Fragments of arithmetic
This section introduces the most commonly used axiomatizations for fragments of arithmetic. These axiomatizations are organized into the categories of 'strong fragments', 'weak fragments' and 'very weak fragments'. The line between strong and weak fragments is somewhat arbitrarily drawn between those theories which can prove the arithmetized version of the cut-elimination theorem and those which cannot; in practice, this is equivalent to whether the theory can prove that the superexponential function i ↦ 2ᵢ is total. The very weak theories are theories which do not admit any induction axioms.

Non-logical symbols for arithmetic. We will be working exclusively with first-order theories of arithmetic: these have all the usual first-order symbols, including propositional connectives and quantifiers and the equality symbol (=). In addition, they have non-logical symbols specific to arithmetic. These will always include the constant symbol 0, the unary successor function S, the binary function symbols + and · for addition and multiplication, and the binary predicate symbol ≤ for 'less than or equal to'.¹ Very often, terms are abbreviated by omitting parentheses around the arguments of the successor function, and we write St instead of S(t). In addition, for n ≥ 0 an integer, we write Sⁿt to denote the term with n applications of S to t.

For weak theories of arithmetic, especially for bounded arithmetic, it is common to include further non-logical symbols. These include a unary function ⌊½x⌋ for division by two, a unary function |x| which is defined by

    |n| = ⌈log₂(n + 1)⌉,

and Nelson's binary function # (pronounced 'smash') which we define by

    m # n = 2^(|m|·|n|).

It is easy to check that |n| is equal to the number of bits in the binary representation of n.
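On concrete integers these functions are easy to experiment with; in the sketch below the helper names length and smash are our own, and we use the fact that |n| coincides with Python's int.bit_length:

```python
# Sketch (hypothetical helper names): the length and smash functions.
def length(n):          # |n| = ceil(log2(n+1)) -- the number of bits of n; |0| = 0
    return n.bit_length()

def smash(m, n):        # m # n = 2^(|m| * |n|)
    return 2 ** (length(m) * length(n))

print(length(7), length(8))   # 3 4
print(smash(2, 3))            # |2| = |3| = 2, so 2 # 3 = 2^4 = 16
```
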
An alternative to the # function is the unary function ω₁, which is defined by ω₁(n) = n^⌊log₂ n⌋ and has growth rate similar to #. The importance of the use of the ω₁ function and the # function lies mainly in their growth rate. In this regard, they are essentially equivalent since ω₁(n) ≈ n # n and m # n = O(ω₁(max{m, n})). Both of these functions are generalizable to faster growing functions by defining ωₙ(x) = x^(ω_(n-1)(|x|)) and x #_(n+1) y = 2^(|x| #_n |y|), where #₂ is #. It is easy to check that the growth rates of ωₙ and #_(n+1) are equivalent in the sense that any term involving one of the function symbols can be bounded by a term involving the other function symbol.

¹Many authors use < instead of ≤.

... p > 0 and show that these numbers are Δ₀-definable. In fact, the set {mₚ}ₚ is definable by the formula

    LenBit(1, x) = 0 ∧ LenBit(2, x) = 1 ∧
        (∀2ⁱ ≤ x)(2 < 2ⁱ ⊃ [LenBit(2ⁱ, x) = 1 ↔ (2ⁱ is a power of 4 ∧ LenBit(√(2ⁱ), x) = 1)])
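The bit pattern that this formula picks out can be checked numerically. In the sketch below (our own construction), lenbit is a plain Python stand-in for LenBit, and m is the number whose binary expansion has 1's exactly at the bit positions 2^q:

```python
# Sketch (hypothetical helper): LenBit(2^i, x) reads off bit i of x,
# where the first argument is the power of two 2^i.
def lenbit(u, x):
    return (x // u) % 2

P = 4
m = sum(2 ** (2 ** q) for q in range(P + 1))   # 1's exactly at positions 2^q

# the defining clauses: bit 0 clear, bit 1 set, and bit i set exactly when
# 2^i is a power of 4 (i.e. i is even) and bit i/2 is set
assert lenbit(1, m) == 0 and lenbit(2, m) == 1
for i in range(2, 2 ** P + 1):
    is_pow4 = i % 2 == 0
    assert (lenbit(2 ** i, m) == 1) == (is_pow4 and lenbit(2 ** (i // 2), m) == 1)
```
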
As an i m m e d i a t e corollary we get a A0formula defining the the numbers of the form x  22P; namely, they are the powers of two for which LenBit(x, mp)  1 holds for some mp < 2x. Now the general idea of defining 2Y is to express y in binary notation as y = 2pl + 2p2 + . . . + 2pk with distinct values pj, and thus define x  1I~122p~. To carry this out, we define an extraction function Ext(u, x) which will be applied when u is of a n u m b e r of the form 22p . Formally we define
Ext(u, x) = ⌊x/u⌋ mod u.
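Viewed over the integers, Ext simply reads off a block of bits of x. A quick Python check (illustrative only):

```python
def Ext(u, x):
    # Ext(u, x) = floor(x/u) mod u
    return (x // u) % u

# When u = 2^(2^p), Ext(u, x) is the block of bits of x in positions
# 2^p through 2^(p+1) - 1 (bit positions counted from 0 at the low end).
for x in range(512):
    for p in (0, 1, 2):
        u = 2 ** (2 ** p)
        assert Ext(u, x) == (x >> (2 ** p)) & (u - 1)
```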
Note that when u = 2^{2^p}, then Ext(u, x) returns the number with binary expansion equal to the 2^p-th bit through the (2^{p+1} − 1)-th bit of x. We will think of x as coding the sequence of numbers Ext(2^{2^p}, x) for p = 0, 1, 2, …. We also define Ext⁺(u, x) as Ext(u², x); this is of course the number which succeeds Ext(u, x) in the sequence of numbers coded by x. We are now ready to Δ₀-define x = 2^i. We define it with a formula φ(x, i) which states that there are numbers a, b, c, d such that:
(1) a is of the form 2^{2^p} and a > x.
(2) Ext(2, b) = 1, Ext(2, c) = 0 and Ext(2, d) = 1.
(3) For all u of the form 2^{2^p} such that a > u², Ext⁺(u, b) = 2 · Ext(u, b).
(4) For all u of the form 2^{2^p} such that a > u², either
    (a) Ext⁺(u, c) = Ext(u, c) and Ext⁺(u, d) = Ext(u, d), or
    (b) Ext⁺(u, c) = Ext(u, c) + Ext(u, b) and Ext⁺(u, d) = Ext(u, d) · Ext(u, a).
(5) There is a u ≤ a of the form 2^{2^p} such that Ext(u, c) = i and Ext(u, d) = x.
S. Buss
92
Obviously this is a Δ₀-formula; we leave it to the reader the nontrivial task of checking that IΔ₀ can prove simple facts about this definition of 2^i, including (1) the fact that if φ(x, i) and φ(x, j) both hold, then i = j, (2) the fact that 2^i · 2^j = 2^{i+j}, (3) the fact that if x is a power of two, then x = 2^i for some i < x, and (4) that if φ(x, i) and φ(y, i), then x = y.

(k) Length function. The length function is |x| = ⌈log₂(x + 1)⌉ and can be Δ₀-defined in IΔ₀ as the value i such that y = 2^i is the least power of two greater than x. Note that |0| = 0 and, for x > 0, |x| is the number of bits in the binary representation of x. The reader should check that IΔ₀ can prove elementary facts about the |x| function, including that |x| ≤ x and that 36 < x ⊃ |x|² < x.

(l) The Bit(i, x) function is definable as LenBit(2^i, x). This is equivalent to a Δ₀-definition, since when 2^i > x, Bit(i, x) = 0. Bit(i, x) is the i-th bit of the binary representation of x; by convention, the lowest order bit is bit number 0.

(m) Sequence coding. Sequences will be coded in the base 4 representation used by Buss [1986]; many prior works have used similar encodings. A number x is viewed as a bit string in which pairs of bits code one of the three symbols comma, "0" or "1". The i-th symbol from the end is coded by the two bits Bit(2i + 1, x) and Bit(2i, x). This is best illustrated by an example: consider the sequence ⟨3, 0, 4⟩. Firstly, a comma is prepended to the sequence and the entries are written in base two, preserving the commas, as the string ",11,,100"; leading zeros are optionally dropped in this process. Secondly, each symbol in the string is replaced by a two-bit encoding, replacing each "1" with "11", each "0" with "10", and each comma with "01". This yields "0111110101111010" in our example. Thirdly, the result is interpreted as the binary representation of a number; in our example it is the integer 32122. This then is a Gödel number of the sequence ⟨3, 0, 4⟩.
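The three steps of the encoding, and the inverse decoding via the bit pairs Bit(2i + 1, x), Bit(2i, x), can be checked mechanically. A Python sketch (the function names are ours, not the text's):

```python
def encode(seq):
    # Step 1: prepend a comma to each entry; entries written in binary,
    # with 0 becoming the empty string once leading zeros are dropped.
    s = "".join("," + (bin(n)[2:] if n > 0 else "") for n in seq)
    # Step 2: two-bit code for each symbol: "1"->"11", "0"->"10", ","->"01".
    bits = "".join({"1": "11", "0": "10", ",": "01"}[ch] for ch in s)
    # Step 3: read the result as a binary numeral.
    return int(bits, 2)

def decode(x):
    # Symbol i (counting from the low-order end) is the bit pair
    # Bit(2i+1, x), Bit(2i, x).
    symbols = []
    while x > 0:
        symbols.append({0b11: "1", 0b10: "0", 0b01: ","}[x & 3])
        x >>= 2
    s = "".join(reversed(symbols))  # symbols were read right to left
    return [int("0" + part, 2) for part in s.split(",")[1:]]

assert encode([3, 0, 4]) == 32122
assert decode(32122) == [3, 0, 4]
```

This also makes the non-uniqueness remark below concrete: padding an entry with leading zeros would give a different Gödel number for the same sequence.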
This scheme for encoding sequences has the advantage of being relatively efficient from an information-theoretic point of view and of making it easy to manipulate sequences. It does have the minor drawbacks that not every number is the Gödel number of a sequence and that Gödel numbers of sequences are non-unique, since it is allowable that elements of the sequence be coded with excess leading zeros. Towards arithmetizing Gödel numbers, we define predicates Comma(i, x) and
Digit(i, x) by

Comma(i, x)  ⟺  Bit(2i + 1, x) = 0 ∧ Bit(2i, x) = 1
Digit(i, x)  =  2 · (1 − Bit(2i + 1, x)) + Bit(2i, x).
Note Digit equals zero or one for encodings of "0" and "1" and equals 2 or 3 otherwise. It is now fairly simple to recognize and extract values from a sequence's Gödel number. We define IsEntry(i, j, x) as

(i = 0 ∨ Comma(i − 1, x)) ∧ Comma(j, x) ∧ (∀k < j)(k ≥ i ⊃ Digit(k, x) ≤ 1)

which states that the i-th through (j − 1)-st symbols coded by x code an entry in
the sequence. And we define Entry(i, j, x) = y by
|y| ≤ … .

(1) T^i_2 proves Σ^b_i-IND and T^i_2 ⊇ S^i_2.
(2) S^i_2 proves Σ^b_i-LIND, Π^b_i-PIND and Π^b_i-LIND.

1.3.4. Polynomial time computable functions in S^1_2

The last section discussed the fact that Σ^b_1-definable functions and Δ^b_1-defined predicates can be introduced into theories of bounded arithmetic and used freely in induction axioms. Of particular importance is the fact that these include all polynomial time computable functions and predicates. A function or predicate is said to be polynomial time computable provided there exists a Turing machine M and a polynomial p(n), such that M computes the function or recognizes the predicate, and such that M runs in time ≤ p(n) for all inputs of length n. The inputs and outputs for M are integers coded in binary notation; thus the length of an input is proportional to the total length of its binary representation. For our purposes, it is convenient to use an alternative definition of the polynomial time computable functions; the equivalence of this definition is due to Cobham [1965].

Definition. The polynomial time functions on ℕ are inductively defined by
(1) The following functions are polynomial time:
  • The nullary constant function 0.
  • The successor function x ↦ S(x).
  • The doubling function x ↦ 2x.
  • The conditional function Cond(x, y, z), equal to y if x = 0, and to z otherwise.
(2) The projection functions are polynomial time functions and the composition of polynomial time functions is a polynomial time function.
(3) If g is an (n − 1)-ary polynomial time function, h is an (n + 1)-ary polynomial time function and p is a polynomial, then the following function f, defined by limited iteration on notation from g and h, is also polynomial time:
f(0, ȳ) = g(ȳ)
f(z, ȳ) = h(z, ȳ, f(⌊z/2⌋, ȳ))   for z > 0,

provided |f(z, ȳ)| ≤ p(|z|, |ȳ|) for all z, ȳ.

…

For i ≥ 1, S^i_2 ⊇ S^1_2 and also, by Theorem 1.3.5 below, T^i_2 ⊇ S^1_2. Therefore, the above theorem, combined with Theorem 1.3.3.3, gives:

1.3.4.2. Theorem. (Buss [1986]) Let i ≥ 1. The theories S^i_2 and T^i_2 can introduce symbols for polynomial time computable functions and predicates and use them freely in induction axioms.

We shall show later (Theorem 3.2) that the converse to Theorem 1.3.4.1 also holds and that S^1_2 can Σ^b_1-define and Δ^b_1-define precisely the polynomial time computable functions and predicates, respectively.
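Limited iteration on notation is easy to mimic in code: each recursive step halves z, so the recursion depth is |z|, and the length bound keeps every intermediate value polynomially small. A Python sketch (our illustration; the bit-counting example and all names are ours):

```python
def limited_rec_on_notation(g, h, bound, z, ys):
    # f(0, ys) = g(ys);  f(z, ys) = h(z, ys, f(floor(z/2), ys)) for z > 0,
    # subject to the length bound |f(z, ys)| <= bound(|z|, |ys|).
    if z == 0:
        val = g(ys)
    else:
        val = h(z, ys, limited_rec_on_notation(g, h, bound, z // 2, ys))
    assert val.bit_length() <= bound(z.bit_length(), [y.bit_length() for y in ys])
    return val

# Example: counting the 1-bits of z by recursion on notation.
def count_ones(z):
    return limited_rec_on_notation(
        lambda ys: 0,                  # g: base case value
        lambda z, ys, r: r + (z & 1),  # h: add the low-order bit of z
        lambda lz, lys: lz,            # length bound: |f(z)| <= |z| suffices here
        z, [])

assert count_ones(0b101101) == 4
```

The assertion inside the function plays the role of the proviso |f(z, ȳ)| ≤ p(|z|, |ȳ|): if it ever failed, f would not be defined by *limited* iteration.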
1.3.5. Relating S^i_2 and T^i_2

It is clear that S^i_2 ⊇ S^1_2 and T^i_2 ⊇ T^1_2, for i ≥ 1. In addition we have the following relationships among these theories:
Theorem. (Buss [1986]) Let i ≥ 1.
(1) T^i_2 ⊢ S^i_2.
(2) S^{i+1}_2 ⊢ T^i_2.
It is however open whether the theories
S^1_2 ⊆ T^1_2 ⊆ S^2_2 ⊆ T^2_2 ⊆ S^3_2 ⊆ ⋯ are distinct.

Proof. A proof of (1) can be found in Buss [1986, sect. 2.6]: this proof mostly involves bootstrapping of T^i_2, and we shall not present it here. The proof of (2) uses a divide-and-conquer method. Fix i ≥ 1 and fix a Σ^b_i-formula A(x); we must prove that S^{i+1}_2 proves the IND axiom for A. We argue informally inside S^{i+1}_2, assuming (∀x)(A(x) ⊃ A(x + 1)). Let B(x, z) be the formula (∀w ≤ x)(∀y ≤ z + 1)(A(w ∸ y) ⊃ A(w)). Clearly B is equivalent to a Π^b_{i+1}-formula. By the definition of B, it follows that
(∀x)(∀z)(B(x, ⌊½z⌋) ⊃ B(x, z))

and hence by Π^b_{i+1}-PIND on B(x, z) with respect to z,

(∀x)(B(x, 0) ⊃ B(x, x)).

Now, (∀x)B(x, 0) holds as it is equivalent to the assumption (∀x)(A(x) ⊃ A(x + 1)), and therefore (∀x)B(x, x) holds. Finally, (∀x)B(x, x) immediately implies (∀x)(A(0) ⊃ A(x)); this completes the proof of the IND axiom for A. The theorem immediately implies the following corollary:

Corollary.
(Buss [1986]) S₂ = T₂.
In the proof of the above theorem, it would suffice for A(x) to be Δ^b_{i+1} with respect to S^{i+1}_2. Therefore, S^{i+1}_2 proves Δ^b_{i+1}-IND.

1.3.6. Polynomial hierarchy functions in bounded arithmetic

The polynomial time hierarchy is a hierarchy of bounded alternation polynomial time computability; the base classes are the class P = Δ^p_1 of polynomial time recognizable predicates, the class FP = □^p_1 of polynomial time computable functions, the class NP = Σ^p_1 of predicates computable in nondeterministic polynomial time, the class coNP = Π^p_1 of complements of NP predicates, etc. More generally, Δ^p_i, □^p_i, Σ^p_i and Π^p_i are defined as follows:
Definition. The classes Δ^p_1 and □^p_1 have already been defined. Further define, by induction on i,
(1) Σ^p_i is the class of predicates R(x⃗) definable by

R(x⃗) ⟺ (∃y, |y| ≤ p(|x⃗|)) Q(x⃗, y),

where Q is a Π^p_{i−1} predicate and p is a polynomial.
Theorem. Let i ≥ 1.
(a) Every □^p_i-function is Σ^b_i-definable in S^i_2.
(b) Every Δ^p_i-predicate is Δ^b_i-definable in S^i_2.

Proof. The proof proceeds by induction on i. The base case has already been done as Theorem 1.3.4.1. Part (b) is implied by (a), so it suffices to prove (a). To prove the inductive step, we must show the following three things (and show they are provable in S^i_2):
(1) If f(x⃗, y) is a □^p_{i−1}-function, then the characteristic function χ(x⃗) of (∃y ≤ …

… For i > 1, the function symbols for □^p_i cannot be used freely in induction axioms (modulo some open questions). Since the notation T^{i−1}_2(□^p_i) is so atrocious, it is sometimes denoted PV_i instead. Krajíček, Pudlák and Takeuti [1991] prove that PV_i can be axiomatized by purely universal axioms: to see the main idea of the universal axiomatization, note that if A is Δ^b_i, then PV_i proves A is equivalent to a quantifier-free formula via Skolemization, and thus induction on A(x, c⃗) can be obtained from the universal formula

(∀c⃗)(∀t)[(A(0, c⃗) ∧ ¬A(t, c⃗)) ⊃ (A(f_A(t, c⃗) ∸ 1, c⃗) ∧ ¬A(f_A(t, c⃗), c⃗))]

where f_A is computed by a binary search procedure which asks Δ^b_i queries to find a value b for which A(b − 1, c⃗) is true and A(b, c⃗) is false. Of course, this f_A is a □^p_i-function and therefore is a symbol in the language of PV_i.

1.3.8. More axiomatizations of bounded arithmetic

For any theory T in which the Gödel β function is present or is Σ^b_1-definable, in particular, for any theory T ⊇ S^1_2, there are two further possible axiomatizations that are useful for bounded arithmetic:⁷

⁷The original definition of a theory of this type was the definition of the equational theory PV of polynomial time functions by Cook [1975]. T^0_2(□^p_1) can also be defined as the conservative extension of PV to first-order logic.
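The binary search behind f_A is the standard one: given that A holds at 0 and fails at t, repeatedly halve the interval, querying A at the midpoint, until a boundary point b with A(b − 1) true and A(b) false is found. A Python sketch (illustrative; the test predicate is invented for the demonstration):

```python
def find_boundary(A, t):
    # Assumes A(0) is true and A(t) is false; returns b such that
    # A(b - 1) is true and A(b) is false, using about log2(t) queries to A.
    lo, hi = 0, t          # invariant: A(lo) true, A(hi) false
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if A(mid):
            lo = mid
        else:
            hi = mid
    return hi

# Test predicate: A(x) says "x < 37"; the boundary value is b = 37.
assert find_boundary(lambda x: x < 37, 1000) == 37
```

The logarithmic number of queries is what keeps f_A within □^p_i.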
Definition. Let Φ be a set of formulas. The Φ-replacement axioms are the formulas

(∀x ≤ |s|)(∃y ≤ t)A(x, y) ⊃ (∃w)(∀x ≤ |s|)(A(x, β(x + 1, w)) ∧ β(x + 1, w) ≤ t)

for all formulas A ∈ Φ and all appropriate (semi)terms s and t. As usual A may have other free variables in addition to x that serve as parameters. The strong Φ-replacement axioms are similarly defined to be the formulas

(∃w)(∀x ≤ |s|) …

… and over the base theory S^1_2). Figure 1 shows these and other relationships among the axiomatizations of bounded arithmetic.

1.4. Sequent calculus formulations of arithmetic

This section discusses the proof theory of theories of arithmetic in the setting of the sequent calculus: this will be an essential tool for our analysis of the proof-theoretic strengths of fragments of arithmetic and of their interrelationships. The sequent calculus used for arithmetic is based on the system LKe described in Chapter I of this volume; LKe will be enlarged with additional rules of inference for induction, minimization, etc., and, for theories of bounded arithmetic, LKe is enlarged to include inference rules for bounded quantifiers.

1.4.1. Definition. LKB (or LKBe) is the sequent calculus LK (respectively, LKe) extended as follows: First, the language of first-order arithmetic is expanded to allow bounded quantifiers as a basic part of the syntax. Second, the following new rules of inference are allowed:

Bounded quantifier rules
… ⊃ ¬RPrf_T(v, ⌜φ⌝). Hence, S^1_2 proves ¬RThm_T(⌜φ⌝). By the choice of φ and since T ⊇ S^1_2, this implies T ⊢ φ, which contradicts the consistency of T. Q.E.D.

2.2.3. The second incompleteness theorem

Gödel's second incompleteness theorem improves on his first incompleteness theorem by giving an example of a true formula with an intuitive meaning which is not provable by a decidable, consistent theory T. This formula is the formula Con_T, which expresses the consistency of T. Note, however, that unlike the formula φ in the first incompleteness theorem, Con_T is not necessarily independent of T, since there are consistent theories that prove their own inconsistency. An example of such a theory is T + ¬Con_T.

Gödel's Second Incompleteness Theorem. Let T be a decidable, consistent theory and suppose T ⊇ Q. Then T ⊬ Con_T.
Proof. As usual, we assume T ⊇ S^1_2. Let φ be the formula from the proof of Gödel's First Incompleteness Theorem which is S^1_2-provably equivalent to ¬Thm_T(⌜φ⌝). Recall that we proved T ⊬ φ. We shall prove that S^1_2 proves Con_T ⊃ φ, which will suffice to show that T ⊬ Con_T, since T ⊇ S^1_2. By choice of φ, S^1_2 proves ¬φ ⊃ Thm_T(⌜φ⌝). Also, by the formula 5 in section 2.1.2, S^1_2 proves Thm_T(⌜φ⌝) ⊃ Thm_T(⌜Thm_T(⌜φ⌝)⌝). Also, by choice of φ and by formula 1 of section 2.1.2, S^1_2 proves

Thm_T(⌜Thm_T(⌜φ⌝)⌝) ⊃ Thm_T(⌜¬φ⌝).

Putting these together shows that S^1_2 proves that ¬φ ⊃ [Thm_T(⌜φ⌝) ∧ Thm_T(⌜¬φ⌝)]. From whence ¬φ ⊃ ¬Con_T is easily proved. Therefore, S^1_2 proves Con_T ⊃ φ. □
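Writing □A as a shorthand for Thm_T(⌜A⌝) — the shorthand is ours, not the text's — the chain of implications just carried out inside S^1_2 can be displayed compactly:

```latex
\begin{align*}
\neg\varphi &\;\supset\; \Box\varphi
    && \text{by choice of } \varphi \\
\Box\varphi &\;\supset\; \Box\Box\varphi
    && \text{formula 5 of section 2.1.2} \\
\Box\Box\varphi &\;\supset\; \Box\neg\varphi
    && \text{choice of } \varphi \text{ and formula 1 of section 2.1.2} \\
\neg\varphi &\;\supset\; \Box\varphi \wedge \Box\neg\varphi
    \;\supset\; \neg\mathrm{Con}_T
    && \text{combining the above}
\end{align*}
```

Taking the contrapositive of the last line gives Con_T ⊃ φ.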
¹⁴This is a good example (see Feferman [1960]) of an extensional definition which is not an intensional definition. For consistent theories T, RPrf_T and RThm_T provide an extensionally correct definition of provability, since RPrf_T(n, ⌜A⌝) has the correct truth value for all particular n and ⌜A⌝. However, they are not intensionally correct, since, in general, T cannot prove that RThm_T(⌜A⌝) and RThm_T(⌜A ⊃ B⌝) implies RThm_T(⌜B⌝).
The formula φ not only implies Con_T, but is actually S^1_2-equivalent to Con_T. For this, note that since φ implies ¬Thm_T(⌜φ⌝), it can be proved in S^1_2 that φ implies ¬Thm_T(⌜0 = 1⌝). (Since if a contradiction is provable, then every formula is provable.)

2.2.4. Löb's theorem

The self-referential formula constructed for the proof of the First and Second Incompleteness Theorems asserted "I am not provable". A related problem would be to consider formulas which assert "I am provable". As the next theorem shows, such formulas are necessarily provable. In fact, if a formula is implied by its provability, then the formula is already provable. This gives a strengthening of the Second Incompleteness Theorem, which implies that, in order to prove a formula A, one is not substantially helped by assuming that A is provable. More precisely, the assumption Thm_T(⌜A⌝) will not significantly aid a theory T in proving A.

Löb's Theorem. Let T ⊇ Q be an axiomatizable theory and A be any sentence. If T proves Thm_T(⌜A⌝) ⊃ A, then T proves A.

Proof. As usual, we assume T ⊇ S^1_2. Let T′ be the axiomatizable theory T ∪ {¬A}. The proof of Löb's Theorem uses the fact that T′ is consistent if and only if T ⊬ A; and furthermore, that S^1_2 proves Con(T′) is equivalent to ¬Thm_T(⌜A⌝). From these considerations, the proof is almost immediate from the second incompleteness theorem. Namely, since T proves ¬A ⊃ ¬Thm_T(⌜A⌝) by choice of A, T also proves ¬A ⊃ Con(T′). Therefore, by the Deduction Theorem, T′ ⊢ Con(T′), so by Gödel's Second Incompleteness Theorem, T′ is inconsistent, i.e., T ⊢ A.

2.2.5. Further reading

The above material gives only an introduction to the incompleteness theorems.
Other significant aspects of incompleteness include: (1) the strength of reflection principles, which state that the provability of a formula implies the truth of the formula, see, e.g., Smoryński [1977]; (2) provability and interpretability logics, for which see Boolos [1993], Lindström [1997], and Chapter VII of this handbook; and (3) concrete, combinatorial examples of independence statements, such as the Ramsey theorems shown by Paris and Harrington [1977] to be independent of Peano arithmetic.

3. On the strengths of fragments of arithmetic
3.1. Witnessing theorems

In section 1.2.10, it was shown that every primitive recursive function is Σ₁-definable by the theory IΣ₁. We shall next establish the converse, which implies that the Σ₁-definable functions of IΣ₁ are precisely the primitive recursive functions. The principal method of proof is the 'witnessing theorem method': IΣ₁ provides the simplest and most natural application of the witnessing method.
3.1.1. Theorem. (Parsons [1970], Mints [1973] and Takeuti [1987]). Every Σ₁-definable function of IΣ₁ is primitive recursive.
Parsons' proof of this theorem was based on the Gödel Dialectica theorem, and a similar proof is given by Avigad and Feferman in Chapter V in this volume. Takeuti's proof was based on a Gentzen-style assignment of ordinals to proofs. Mints's proof was essentially the same as the witness function proof presented next, except his proof was presented with a functional language.

3.1.2. The Witness predicate for Σ₁-formulas

For each Σ₁-formula A(b⃗), we define a Δ₀-formula Witness_A(w, b⃗) which states that w is a witness for the truth of A.

Definition. Let A(b⃗) be a formula of the form (∃x₁, …, x_k)B(x₁, …, x_k, b⃗), where B is a Δ₀-formula. Then the formula Witness_A(w, b⃗) is defined to be the formula

B(β(1, w), …, β(k, w), b⃗).

If Δ = Δ′, A is a succedent, then Witness^∨_Δ(w, c⃗) is defined to be

Witness_A(β(1, w), c⃗) ∨ Witness^∨_{Δ′}(β(2, w), c⃗).

Dually, if Γ = A, Γ′ is an antecedent, then Witness^∧_Γ is defined similarly as

Witness_A(β(1, w), c⃗) ∧ Witness^∧_{Γ′}(β(2, w), c⃗).

Note the different conventions on ordering disjunctions and conjunctions; these are not intrinsically important, but merely reflect the conventions of the sequent calculus, which are that active formulas of strong inferences are at the beginning of an antecedent and at the end of a succedent. It is, of course, obvious that Witness_A is a Δ₀-formula, and that IΔ₀ can prove

A(b⃗) ↔ (∃w) Witness_A(w, b⃗).

3.1.3. Proof.
(Sketch of the proof of Theorem 3.1.1.)
Suppose IΣ₁ proves
(∀x)(∃y)A(x, y), where A ∈ Σ₁. Then there is a sequent calculus proof P in the theory IΣ₁ of the sequent → (∃y)A(c, y). We must prove that there is a primitive recursive function f such that A(n, f(n)) is true, in the standard integers, for all n ≥ 0. In fact, we shall prove more than this: we will prove that there is a primitive recursive function f, with a Σ₁-definition in IΣ₁, such that IΣ₁ proves (∀x)A(x, f(x)). This will be a corollary to the next lemma.
Witnessing Lemma for IΣ₁. Let Γ and Δ be cedents of Σ₁-formulas and suppose IΣ₁ proves the sequent Γ → Δ. Then there is a function h such that the following hold:
(1) h is Σ₁-defined by IΣ₁ and is primitive recursive, and
(2) IΣ₁ proves Witness^∧_Γ(w, c⃗) → Witness^∨_Δ(h(w, c⃗), c⃗).
Note that Theorem 3.1.1 is an immediate corollary to the lemma, since we may take Γ to be the empty sequent, Δ to be the sequent containing just (∃y)A(c, y), and let f(x) = β(1, β(1, h(x))), where h is the function guaranteed to exist by the lemma. This is because h(x) will be a sequence of length one witnessing the cedent (∃y)A, so its first and only element is a witness for the formula (∃y)A, and the first element of that is a value for y that makes A true.

It remains to prove the Witnessing Lemma. For this, we know by the Cut Elimination Theorem 1.4.2 that there is a free-cut free proof P of the sequent Γ → Δ in the theory IΣ₁; in this proof, every formula in every sequent can be assumed to be a Σ₁-formula. Therefore, we may prove the Witnessing Lemma by induction on the number of steps in the proof P. The base case is where there are zero inferences in the proof P and so Γ → Δ is an initial sequent. Since the initial sequents allowed in an IΣ₁ proof contain only atomic formulas, the Witnessing Lemma is trivial for this case. For the induction step, the argument splits into cases, depending on the final inference of the proof. There are a large number of cases, one for each inference rule of the sequent calculus; for brevity, we present only three cases below and leave the rest for the reader.

For the first case, suppose the final inference of the proof P is an ∃:right inference, namely,
    Γ → Δ, A(t)
    ------------------
    Γ → Δ, (∃x)A(x)

Let c⃗ be the free variables in the upper sequent. The induction hypothesis gives a Σ₁-defined, primitive recursive function g(w, c⃗) such that IΣ₁ proves
Witness^∧_Γ(w, c⃗) → Witness^∨_{Δ, A(t)}(g(w, c⃗), c⃗).

In order for Witness^∨_{Δ, A(t)}(g(w, c⃗), c⃗) to hold, either β(2, g(w, c⃗)) witnesses ∨Δ or β(1, g(w, c⃗)) witnesses A(t). So let h(w, c⃗) be Σ₁-defined by
h(w, c⃗) = ⟨⟨t(c⃗)⟩ * β(1, g(w, c⃗)), β(2, g(w, c⃗))⟩,

where * denotes sequence concatenation. It is immediate from the definition of Witness that
Witness^∧_Γ(w, c⃗) → Witness^∨_{Δ, (∃x)A(x)}(h(w, c⃗), c⃗).

For the second case, suppose the final inference of the proof P is an ∃:left inference, namely,
    A(b), Γ → Δ
    ------------------
    (∃x)A(x), Γ → Δ

where b is an eigenvariable which occurs only as indicated. The induction hypothesis gives us a Σ₁-defined, primitive recursive function g(w, c⃗, b) such that IΣ₁ proves
Witness^∧_{A(b), Γ}(w, c⃗) → Witness^∨_Δ(g(w, c⃗, b), c⃗).

Let tail(w) be the Σ₁-defined function such that tail(⟨w₀, w₁, …, w_n⟩) = ⟨w₁, …, w_n⟩. Letting h(w, c⃗) be the function g(⟨tail(β(1, w)), β(2, w)⟩, c⃗, β(1, β(1, w))), it is easy to check that h satisfies the desired conditions of the Witnessing Lemma.

For the third case, suppose the final inference of P is a Σ₁-IND inference:
    A(b), Γ → Δ, A(Sb)
    ----------------------
    A(0), Γ → Δ, A(t)

where b is the eigenvariable and does not occur in the lower sequent. The induction hypothesis gives a Σ₁-defined, primitive recursive function g(w, c⃗, b) such that IΣ₁ proves
Witness^∧_{A(b), Γ}(w, c⃗, b) → Witness^∨_{Δ, A(Sb)}(g(w, c⃗, b), c⃗, b).

Let k(c⃗, v, w) be defined as

k(c⃗, v, w) = v, if Witness^∨_Δ(v, c⃗), and w otherwise.
Since Witness is a Δ₀-predicate, k is Σ₁-defined by IΣ₁. Now define the primitive recursive function f(w, c⃗, b) by
f(w, c⃗, 0) = ⟨β(1, w), 0⟩
f(w, c⃗, b + 1) = ⟨β(1, g(⟨β(1, f(w, c⃗, b)), β(2, w)⟩, c⃗, b)),
                  k(c⃗, β(2, f(w, c⃗, b)), β(2, g(⟨β(1, f(w, c⃗, b)), β(2, w)⟩, c⃗, b)))⟩
By Theorem 1.2.10, f is Σ₁-definable by IΣ₁, and since f may be used in induction formulas, IΣ₁ can prove

Witness^∧_{A(0), Γ}(w, c⃗) → Witness^∨_{Δ, A(b)}(f(w, c⃗, b), c⃗, b),

using Σ₁-IND with respect to b. Setting h(w, c⃗) = f(w, c⃗, t) establishes the desired conditions of the Witnessing Lemma. Q.E.D. Witnessing Lemma and Theorem 3.1.1.
3.1.4. Corollary. The Δ₁-definable predicates of IΣ₁ are precisely the primitive recursive predicates.
Proof. Corollary 1.2.10 already established that every primitive recursive predicate is Δ₁-definable by IΣ₁. For the converse, suppose A(c) and B(c) are Σ₁-formulas such that IΣ₁ proves (∀x)(A(x) ↔ ¬B(x)). Then the characteristic function of the predicate A(c) is Σ₁-definable in IΣ₁ since IΣ₁ can prove

(∀x)(∃!y)[(A(x) ∧ y = 0) ∨ (B(x) ∧ y = 1)].

By Theorem 3.1.1, this characteristic function is primitive recursive, hence so is the predicate A(c).
(Vx)(3!y)[(A(x) A y = O) V (B(x) A y = 1)]. By Theorem 3.1.1, this characteristic function is primitive recursive, hence so is the predicate A(c). 3.1.5. Total functions o f / E n . Theorem 1.2.1 provided a characterization of the Eldefinable functions o f / E l as being precisely the primitive recursive functions. It is also possible to characterize the Eydefinable functions o f / E n for n > 1 in terms of computational complexity; however, the n > 1 situation is substantially more complicated. This problem of characterizing the provably total functions of fragments of Peano arithmetic is classically one of the central problems of proof theory; and a number of important and elegant methods are available to solve it. Space prohibits us from explaining these methods, so we instead mention only a few references. The first method of analyzing the strength of fragments of Peano is based on Gentzen's assignment of ordinals to proofs; Gentzen [1936,1938] used Cantor normal form to represent ordinals less than e0 and gave a constructive method of assigning ordinals to proofs in such a way that allowed cuts and inductions to be removed from PAproofs of sentences. This can then be used to characterize the primitive recursive functions of fragments of Peano arithmetic in terms of recursion on ordinals less than e0. The textbooks of Takeuti [1987] and Girard [1987] contain descriptions of this approach. A second version of this method is based on the infinitary proof systems of Tait: Chapter III of this volume describes this for Peano arithmetic, and Chapter IV describes extensions of this ordinal assignment method to much stronger secondorder theories of arithmetic. The books of Schiitte [1977] and Pohlers [1980] also describe ordinal assignments and infinitary proofs for strong theories of arithmetic. A further use of ordinal notations is to characterize natural theories of arithmetic in terms of transfinite induction. 
A second approach to analyzing the computational strength of theories of arithmetic is based on model-theoretic constructions; see Paris and Harrington [1977], Ketonen and Solovay [1981], Sommer [1990], and Avigad and Sommer [1997]. A third method is based on the Dialectica interpretation of Gödel [1958] and on Howard's [1970] assignment of ordinals to terms that arise in the Dialectica interpretation. Chapter V of this volume discusses the Dialectica interpretation. A fourth method, due to Ackermann [1941], uses an ordinal analysis of ε-calculus proofs. More recently, Buss [1994] has given a characterization of the provably total functions of the theories IΣₙ based on an extension of the witness function method used above.
3.2. Witnessing theorem for S^i_2

Theorem 1.3.4.1 stated that every polynomial time function and every polynomial time predicate is Σ^b_1-definable or Δ^b_1-definable (respectively) by S^1_2. More generally, Theorem 1.3.6 stated that every □^p_i-function and every Δ^p_i-predicate is Σ^b_i-definable or Δ^b_i-definable by S^i_2. The next theorem states the converse; this gives a precise characterization of the Σ^b_1-definable functions of S^1_2 and of the Σ^b_i-definable functions of S^i_2 in terms of their complexity in the polynomial hierarchy. The most interesting case is probably the base case i = 1, where S^1_2 is seen to have proof-theoretic strength that corresponds precisely to polynomial time.

Theorem. (Buss [1986])
(1) Every Σ^b_1-definable function of S^1_2 is polynomial time computable.
(2) Let i ≥ 1. Every Σ^b_i-definable function of S^i_2 is in the i-th level, □^p_i, of the polynomial hierarchy.

Corollary. (Buss [1986])
(1) Every Δ^b_1-definable predicate of S^1_2 is polynomial time.
(2) Let i ≥ 1. Every Δ^b_i-definable predicate of S^i_2 is in the i-th level, Δ^p_i, of the polynomial hierarchy.
The corollary follows from the theorem by exactly the same argument as was used to prove Corollary 3.1.4 from Theorem 3.1.1. To prove the theorem, we shall use a witnessing argument analogous to the one used for IΣ₁ above. First, we need a revised form of the Witness predicate; unlike the usual definition of the Witness predicate for bounded arithmetic formulas, we define the Witness predicate only for prenex formulas, since this provides some substantial simplification. This simplification is obtained without loss of generality since every Σ^b_i-formula is logically equivalent to a Σ^b_i-formula in prenex form.

3.2.1. Definition. Fix i ≥ 1. Let A(c⃗) be a Σ^b_i-formula which is in prenex form. Then Witness^i_A(w, c⃗) is defined by induction on the complexity of A as follows:
(1) If A is a Π^b_{i−1}-formula, then Witness^i_A(w, c⃗) is just the formula A(c⃗).
(2) If A(c⃗) is not in Π^b_{i−1} and is of the form (∃x ≤ …

Witnessing Lemma. … Let i ≥ 1. Let Γ → Δ be a sequent of formulas in Σ^b_i, in prenex form, and suppose S^i_2 proves Γ → Δ. Let c⃗ include all free variables in the sequent. Then there is a □^p_i-function h(w, c⃗) which is Σ^b_i-defined in S^i_2 such that S^i_2 proves
Witness^i_{∧Γ}(w, c⃗) → Witness^i_{∨Δ}(h(w, c⃗), c⃗).
The proof of the Witnessing Lemma is by induction on the number of sequents in a free-cut free proof P of Γ → Δ. Since every Σ^b_i-formula is equivalent to a Σ^b_i-formula in prenex form, we may assume w.l.o.g. that every induction formula in the free-cut free proof P is a prenex form Σ^b_i-formula. Then, by the subformula property, every formula appearing anywhere in the proof is also a Σ^b_i-formula in prenex form. The base case of the induction proof is when Γ → Δ is an initial sequent; in this case,
every formula in the sequent is atomic, so the Witnessing Lemma trivially holds. The induction step splits into cases depending on the final inference of the proof. The structural inferences and the propositional inferences are essentially trivial, the latter because of our assumption that all formulas are in prenex form. So it remains to consider the quantifier inferences and the induction inferences. The cases where the final inference of P is an ∃≤:right inference or an ∃≤ … Since f is defined by limited recursion on notation from g, and since g is Σ^b_i-defined by S^i_2, f is also Σ^b_i-defined by S^i_2. Therefore, f may be used in induction formulas and S^i_2 can prove
using Σ^b_i-PIND with respect to b. Setting h(w, c⃗) = f(w, c⃗, t) establishes the desired conditions of the Witnessing Lemma. Finally, we consider the inferences involving bounded universal quantifiers. The cases where the principal formula of the inference is a Π^b_{i−1}-formula are essentially trivial, since such formulas do not require a witness value, i.e., they are their own witnesses. This includes any inference where the principal connective is a non-sharply
bounded universal quantifier. A ∀≤:left inference where the principal formula is in Σ^b_i must have a sharply bounded universal quantifier as its principal connective; this case of the Witnessing Lemma is fairly simple and we leave it to the reader. Finally, we consider the case where the last inference of P is a ∀≤:right inference

    b ≤ |t|, Γ → Δ, A(b)
    ------------------------
    Γ → Δ, (∀x ≤ |t|)A(x)

where A ∈ Σ^b_i \ Π^b_{i−1} and where the eigenvariable b occurs only as indicated. The induction hypothesis gives a Σ^b_i-defined, □^p_i-function g such that S^i_2 proves
Witness^i_{∧Γ}(w, c⃗) ∧ b ≤ |t| ⊃
  Witness^i_{A(b)}(β(1, g(w, c⃗, b)), c⃗, b) ∨ Witness^i_{∨Δ}(β(2, g(w, c⃗, b)), c⃗).

Let f₁(w, c⃗) be defined to equal β(2, g(w, c⃗, b)) for the least value b ≤ |t| such that this value witnesses ∨Δ, or, if there is no such value of b ≤ |t|, let f₁(w, c⃗) = 0. Since the predicate Witness^i_{∨Δ}(β(2, g(w, c⃗, b)), c⃗) is Δ^b_i and is Δ^b_i-defined by S^i_2, the function f₁ is in □^p_i and is Σ^b_i-defined by S^i_2. Also, let f₂(w, c⃗) equal
g(w, g, 0)), ~(1, g(w, if, 1)), ~(1, g(w, g, 2 ) ) , . . . , ~(1, g(w, g, Itl))>.
It is easy to verify that f2 also is in []~ and is ~definable by S~. Now let h(w, c') equal (f2((O,w>,c),fl((O,w),c)>. It is easy to check that h satisfies the desired conditions of the Witnessing Lemma. Q.E.D. Witnessing Lemma and Theorem 3.2. 3.3. W i t n e s s i n g t h e o r e m s a n d c o n s e r v a t i o n results for T~ This section takes up the question of the definable functions of T~. For these theories, there are three witnessing theorems, one for each of the E~definable, b b the E~+ldefinable and the Ei+2definable functions. In addition, there is a close connection between S~+1 and T~; namely, the former theory is conservative over the latter. We'll present most of these results without proof, leaving the reader to look up the original sources. The results stated in this section will apply to T~ for i _ 0; however, for i  0, the bootstrapping process does not allow us to introduce many simple functions. Therefore, when i = 0, instead of T~ we must use the theory PV1  TO([]~) as defined in section 1.3.7. To avoid continually treating i = 0 as a special case, we let T ~ denote PV1 for the rest of this section. b 3.3.1. The ~i+ldefinable functions of T~
T h e o r e m . (Buss [1990]) Let (1) T~ can Eib+ldefine every
i >_ O. []iP+lfunction.
Proof Theory of Arithmetic
131
(2) Conversely, every Eib+ldefinable function of T~ is a []iP+lfunction. (3) S~ +1 i8 Y]iblconservative over T~. (4) S~+1 is conservative over T~ + ~ib+lreplacement with respect to Boolean combinations of Ei+ l b formulas. We'll just state some of the main ideas of the proof of this theorem, and let the reader refer to Buss [1990] or Kraji~ek [1995] for the details. Part (1) with i  0 is trivial because of the temporary convention that T~ denotes PV1. To prove part (1) for i > 0, one shows that T~ can "Qidefine" every []iP+l formula, where Qidefinability is a strong form of Eib+ldefinability. The general idea of a Qidefinable function is that it is computed by a Turing machine with a E~oracle such that every "yes" answer of the oracle must be supported by a witness. In the correct computation, the sequence of "yes/no" answers is maximum in a lexicographical ordering, and thus T~ can prove that the correct computation exists using E~MAX axioms (which can be derived similarly to minimization axioms). This proof of (1) is reminiscent of the theorem of Krentel [1988] that MINSAT is complete for ~ . Part (2) of the theorem is immediate from Theorem 3.2 and the fact that T~ C_ S~+1 . Part (3) is based on the following strengthening of the Witnessing Theorem 3.2.2 for S~+1" W i t n e s s i n g L e m m a for S~+1 . Let i > 1. Let F ~ A be a sequent of formulas in Eib+l in prenex form, and suppose si2+1 proves F ~ A; let ~ include all free variables in the sequent. Then there is a []i+1 P function h(w, ~ which is Qi defined in T~2 such that T~ proves
c), The proof of this Witnessing Lemma is almost exactly the same as the proof of the Witnessing Lemma in section 3.2.2; the only difference is that the witnessing functions are now proved to be Qidefinable in T~. In fact, (1) implies the necessary functions are Qidefined by T~ since we already know they are Eib+ldefined by S~+1 . So the main new aspect is showing that T~ can prove that the witnessing functions work. Part (3) of the theorem is an immediate consequence of the Witnessing Lemma. Part (4) can also be obtained from the Witnessing Lemma using the fact that b T~9+ Ei+lreplacement can prove that A(~ is equivalent to (3w) Witness~+l(w, c)for any A E E~b+l. b 3.3.2. T h e Ei+2definable functions of T~
The Σ^b_{i+2}-definable functions of T^i_2 can be characterized by the following theorem, due to Krajíček, Pudlák and Takeuti [1991].

KPT Witnessing Theorem. Let i ≥ 0. Suppose T^i_2 proves

(∀x)(∃y)(∀z ≤ t)A(y, x, z),

where A is a Σ^b_i-formula and t is a term involving only x. Then there is a k > 0 and there are Σ^b_{i+1}-definable function symbols f1(x), f2(x, z1), ..., fk(x, z1, ..., z_{k-1}) such that T^i_2 proves

(∀x)(∀z1 ≤ t)[A(f1(x), x, z1) ∨ (∀z2 ≤ t)[A(f2(x, z1), x, z2) ∨ (∀z3 ≤ t)[A(f3(x, z1, z2), x, z3) ∨ ··· ∨ (∀zk ≤ t)[A(fk(x, z1, ..., z_{k-1}), x, zk)] ···]]]
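The nested disjunction has a natural game reading: a "student" proposes the candidate witness f1(x), a "teacher" answers a failed candidate with a counterexample z1, the student answers with f2(x, z1), and so on; provability means some fixed number k of rounds always suffices. The following Python sketch is purely our own illustration — the predicate A, the score function g and the candidate functions are invented toy examples, not from the text — evaluating the disjunction by brute force.

```python
def interactive_witness(A, fs, x, t):
    """Evaluate the nested KPT disjunction by brute force.

    fs = [f1, ..., fk], where fs[i] takes x and the counterexamples
    z1..zi seen so far.  The candidates win iff for every z1 <= t either
    A(f1(x), x, z1) holds, or f2(x, z1) survives every z2 <= t, and so on."""
    def wins(i, cs):
        if i == len(fs):
            return False                    # candidates exhausted
        y = fs[i](x, *cs)
        return all(A(y, x, z) or wins(i + 1, cs + (z,)) for z in range(t + 1))
    return wins(0, ())

# Toy instance: A(y, x, z) says y scores at least as well as z under
# g(v) = min(v, 2).  Since g takes only three values, the strategy
# "start at 0, then echo the teacher's counterexample" needs at most
# three candidates, uniformly in the bound t -- a fixed k, as in the theorem.
g = lambda v: min(v, 2)
A = lambda y, x, z: g(z) <= g(y)
f1 = lambda x: 0
f2 = lambda x, z1: z1
f3 = lambda x, z1, z2: z2

assert interactive_witness(A, [f1, f2, f3], 0, 10)    # three rounds suffice
assert not interactive_witness(A, [f1, f2], 0, 10)    # two do not
```

The point of the toy example is only to make the quantifier structure concrete: the number of candidate functions is fixed in advance, while the teacher's counterexamples may depend on everything played so far.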
Conversely, whenever the above formula is provable, then T^i_2 can also prove (∀x)(∃y)(∀z ≤ t)A(y, x, z). The variables x, y and z could just as well have been vectors of variables, since the replacement axioms and sequence coding can be used to combine adjacent like quantifiers. Also, the first half of the theorem holds even if t involves both x and y.

The proof of the KPT Witnessing Theorem is now quite simple: by the discussion in section 1.3.7, we can replace each T^i_2 by its conservative, universally axiomatized extension PV_{i+1}, and now the theorem is an immediate corollary of the corollary to the generalized Herbrand's theorem in section 2.5.3 of Chapter I.

3.3.2.1. Applications to the polynomial hierarchy. The above theorem has had a very important application in showing an equivalence between the collapse of the hierarchy of theories of bounded arithmetic and the (provable) collapse of the polynomial time hierarchy. This equivalence was first proved by Krajíček, Pudlák and Takeuti [1991]; we state two improvements to their results. (We continue the convention that T^0_2 denotes PV1.)

Theorem. (Buss [1995], Zambella [1996]) Let i ≥ 0. If T^i_2 = S^{i+1}_2, then
(1) T^i_2 = S_2, and therefore S_2 is finitely axiomatized, and
(2) T^i_2 proves the polynomial hierarchy collapses, and in fact,
(2.a) T^i_2 proves that every Σ^b_{i+3}-formula is equivalent to a Boolean combination of Σ^b_{i+2}-formulas, and
(2.b) T^i_2 proves the polynomial time hierarchy collapses to Σ^p_{i+1}/poly.

Corollary. S_2 is finitely axiomatized if and only if S_2 proves the polynomial hierarchy collapses.

Let g(x) be a Σ^b_1-definable function of T^i_2 such that for each m > 0 there is an n > 0 so that T^i_2 ⊢ (∀x)(x > n ⊃ g(x) > m) (for example, g(n) = |n| or g(n) = ||n||, etc.). Let g-IND denote the axioms

A(0) ∧ (∀x)(A(x) ⊃ A(x + 1)) ⊃ (∀x)A(g(x)).

4.3.3. Theorem. For all n and all k ≥ 0, there is an S^1_2-proof P of (∃x) superexp(k, n, x) with size polynomially bounded in terms of |n| and k. In addition, P involves only Σ_{2k+1}-formulas.
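Reading superexp(k, n, x) as asserting x = superexp(k, n), the function in question is an iterated exponential stack. The following Python sketch is our own illustration of that reading (the text only ever uses the graph of the function, since its values grow far too fast to be denoted by terms of bounded arithmetic):

```python
def superexp(k: int, n: int) -> int:
    """A stack of k 2's with n on top:
    superexp(0, n) = n and superexp(k+1, n) = 2 ** superexp(k, n)."""
    x = n
    for _ in range(k):
        x = 2 ** x
    return x

print(superexp(1, 3))             # 8
print(superexp(2, 3))             # 256
print(len(str(superexp(3, 3))))   # 78 -- 2**256 already has 78 digits
```

The theorem is thus a statement about proof size only: the proofs P stay polynomial in |n| and k even though the witnesses x they assert to exist are non-elementary in k.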
Proof. The proof is based on using formulas that define inductive cuts. The particular ones we need are formulas J_i(x) and K_i(x) defined as:

J_0(x) ⇔ 0 = 0   (always true)
K_0(x) ⇔ (∃y)(2^x = y)
J_{i+1}(x) ⇔ (∀z)(K_i(z) ⊃ K_i(z + x))
K_{i+1}(x) ⇔ (∃y)(2^x = y ∧ J_{i+1}(y))

Lemma.
(a) S^1_2 ⊢ J_k(0)
(b) S^1_2 ⊢ J_k(x) ⊃ J_k(x + 1)
(c) S^1_2 ⊢ J_k(x) ∧ u ≤ x ⊃ J_k(u)
(d) S^1_2 ⊢ J_k(x) ⊃ J_k(x + x)
(e) S^1_2 ⊢ K_k(0)
(f) S^1_2 ⊢ K_k(x) ∧ u ≤ x ⊃ K_k(u)
(g) S^1_2 ⊢ K_k(x) ⊃ K_k(x + 1)
(h) S^1_2 ⊢ K_k(x) ⊃ (∃z) superexp(k + 1, x, z).

Parts (a)-(g) are proved simultaneously by induction on k. Part (h) is likewise proved using induction on k. Moreover, it is easy to verify that the S^1_2-proofs of formulas (a)-(g) are polynomial size in k, and involve only Σ_{2k+1}-formulas. By using (d) and (c) of the lemma, it is straightforward now to give an S^1_2-proof of J_k(n) of size polynomial in |n| and k; from this, (e) and (h) give the desired proof P of (∃z) superexp(k, n, z).

4.3.4. Lemma.
Suppose φ ∈ Σ_1 and S_2 + exp ⊢ (∀x)φ(x). Then there is a k > 0 such that

S^1_2 ⊢ (∀x)(S_2 ⊢_{Σ_k} φ(x̄)),

i.e., S^1_2 proves that every numerical instance of φ has an S_2-proof involving only Σ_k-formulas. Lemma 4.3.4 is proved from Lemma 4.3.2 by formalizing the argument of Lemma 4.3.3 in S^1_2.

4.3.5. Lemma. Let φ be a ∀Π^b_1-formula, which is without loss of generality of the form (∀y)ψ(x, y) where ψ ∈ Π^b_1. Then there is a term t such that S_2 proves that any false instance ¬ψ(x, y) yields an S_2-refutation of φ of Gödel number at most t(x, y); in particular, S_2 proves ¬φ ⊃ (S_2 ⊢ ¬φ). This lemma is a special case of Theorem 2.1.2.

4.3.6. Lemma.
Let φ be a ∀Π^b_1-sentence such that S_2 + exp ⊢ φ. Then there is a k > 0 such that

S^1_2 ⊢ ¬φ ⊃ ¬Con_{Σ_k}(S_2).

Proof. Without loss of generality, φ is of the form (∀x)ψ(x) with ψ a Π^b_1-formula. By Lemma 4.3.4, S_2 proves (∀x)(S_2 ⊢_{Σ_k} ψ(x̄)). On the other hand, Lemma 4.3.5 implies that S_2 proves

¬φ ⊃ (S_2 ⊢ ¬φ).

These two facts suffice to prove Lemma 4.3.6.

4.3.7. Lemma. Let k > 0. Then S_2 + exp proves Con_{Σ_k}(S_2).
Proof. (Sketch). The proof has two main steps:
(1) Firstly, one shows that S_2 + exp proves BdCon(S_2) ⊃ Con_{Σ_k}(S_2). This is done by formalizing the following argument: (a) Assume that P is a Σ_k-proof of 0 = 1 in the theory S_2. (b) By using sequence encoding to collapse adjacent like quantifiers, we may assume w.l.o.g. that each formula in P has at most k + 1 unbounded quantifiers. (c) By applying the process used to prove the Cut-Elimination Theorem 2.4.2 of Chapter I, there is a bounded S_2-proof of 0 = 1 of size at most a stack of iterated exponentials of height 2k + 4 in ||P||. Since only finitely many iterations of exponentiation are needed, the last step can be formalized in S_2 + exp.

(2) Secondly, one shows that S_2 + exp can prove the bounded consistency of S_2. The general idea is that if there is a bounded S_2-proof P of 0 = 1, then there is a fixed value ℓ so that all variables appearing in P can be implicitly bounded by L = 2^{ℓ·size(P)}, where size(P) is the number of symbols in P. (In fact, ℓ = 3 works.) Once all variables are bounded by L, a truth definition can be given based on the fact that 2^{L·size(P)} exists. With this truth definition, S_2 + exp can prove that every sequent in the S_2-proof is valid.

4.3.8. Corollary. The theory S_2 + exp is conservative over the theory S_2 ∪ {Con_{Σ_k}(S_2) : k > 0} with respect to ∀Π^b_1-consequences.

Proof. The fact that the first theory includes the second theory is immediate from Lemma 4.3.7. The conservativity is immediate from Lemma 4.3.6. Incidentally, since S_2 is globally interpretable in Q, we also have that the theories S_2 + {Con_{Σ_k}(S_2) : k ≥ 0} and S_2 + {Con_{Σ_k}(Q) : k ≥ 0} are equivalent.

4.3.9. Theorem. S_2 ∪ {Con_{Σ_k}(S_2) : k > 0} ⊬ Con(S_2).
It is an immediate consequence of Theorem 4.3.9 and Corollary 4.3.8 that S_2 + exp ⊬ Con(S_2), which is the main result we are trying to establish. So it remains to prove Theorem 4.3.9:

Proof. Let k > 0 be fixed. Use Gödel's Diagonal Lemma to choose a sentence φ_k such that S_2 proves

φ_k ↔ (S_2 + Con_{Σ_k}(S_2) ⊢ ¬φ_k).

Now φ_k is certainly false, since otherwise S_2 + Con_{Σ_k}(S_2) proves ¬φ_k, which would be false if φ_k were true. Furthermore, S_2 + exp can formalize the previous sentence since, as sketched above in part (2) of the proof of Lemma 4.3.7, S_2 + exp can prove the validity of every formula appearing in a bounded proof in the theory S_2 + Con_{Σ_k}(S_2). Therefore, S_2 + exp proves ¬φ_k. Since ¬φ_k is a ∀Π^b_1-sentence, Corollary 4.3.8 implies that there is some m > 0 such that S_2 + Con_{Σ_m}(S_2) ⊢ ¬φ_k. On the other hand, the fact that φ_k is false implies that S_2 + Con_{Σ_k}(S_2) does not prove ¬φ_k; therefore S_2 + Con_{Σ_k}(S_2) also cannot prove Con_{Σ_m}(S_2), since combining such a proof with the derivation above would yield a proof of ¬φ_k.

Since this argument works for arbitrary k, the Compactness Theorem implies that S_2 + {Con_{Σ_k}(S_2) : k > 0} cannot prove Con(S_2).
The proofs above actually proved something slightly stronger than Theorem 4.3.9; namely,

4.3.10. Theorem. There is an m = O(k) such that S_2 + Con_{Σ_k}(S_2) ⊬ Con_{Σ_m}(S_2).

We conjecture that m = k + 1 also works, but do not have a proof at hand.

4.3.11. A related result, which was stated as an open problem by Wilkie and Paris [1987] and was later proved by Hájek and Pudlák [1993, Coro. 5.34], is the fact that there is a ∀Π^b_1-sentence φ such that S^1_2 + exp ⊢ φ but S^k_2 ⊬ φ for all k > 0.

Acknowledgements. We are grateful to J. Avigad, C. Pollett, and J. Krajíček for suggesting corrections to preliminary versions of this chapter. Preparation of this article was partially supported by NSF grant DMS-9503247 and by cooperative research grant INT-9600919/ME-103 of the NSF and the Czech Republic Ministry of Education.
References

W. ACKERMANN
[1941] Zur Widerspruchsfreiheit der Zahlentheorie, Mathematische Annalen, 117, pp. 162-194.

J. AVIGAD AND R. SOMMER
[1997] A model-theoretic approach to ordinal analysis, Bulletin of Symbolic Logic, 3, pp. 17-52.

J. BARWISE
[1977] Handbook of Mathematical Logic, North-Holland, Amsterdam.

J. H. BENNETT
[1962] On Spectra, PhD thesis, Princeton University.

G. BOOLOS
[1989] A new proof of the Gödel incompleteness theorem, Notices of the American Mathematical Society, 36, pp. 388-390.
[1993] The Logic of Provability, Cambridge University Press.

S. R. BUSS
[1986] Bounded Arithmetic, Bibliopolis, Napoli. Revision of 1985 Princeton University Ph.D. thesis.
[1990] Axiomatizations and conservation results for fragments of bounded arithmetic, in: Logic and Computation, proceedings of a Workshop held at Carnegie-Mellon University, 1987, W. Sieg, ed., vol. 106 of Contemporary Mathematics, American Mathematical Society, Providence, Rhode Island, pp. 57-84.
[1992] A note on bootstrapping intuitionistic bounded arithmetic, in: Proof Theory: A selection of papers from the Leeds Proof Theory Programme 1990, P. Aczel, H. Simmons, and S. S. Wainer, eds., Cambridge University Press, pp. 149-169.
[1994] The witness function method and fragments of Peano arithmetic, in: Proceedings of the Ninth International Congress on Logic, Methodology and Philosophy of Science, Uppsala, Sweden, August 7-14, 1991, D. Prawitz, B. Skyrms, and D. Westerståhl, eds., Elsevier, North-Holland, Amsterdam, pp. 29-68.
[1995] Relating the bounded arithmetic and polynomial-time hierarchies, Annals of Pure and Applied Logic, 75, pp. 67-77.
[1997] Bounded arithmetic and propositional proof complexity, in: Logic of Computation, H. Schwichtenberg, ed., Springer-Verlag, Berlin, pp. 67-121.

S. R. BUSS AND A. IGNJATOVIĆ
[1995] Unprovability of consistency statements in fragments of bounded arithmetic, Annals of Pure and Applied Logic, 74, pp. 221-244.

S. R. BUSS AND J. KRAJÍČEK
[1994] An application of Boolean complexity to separation problems in bounded arithmetic, Proceedings of the London Mathematical Society, 69, pp. 1-21.

G. J. CHAITIN
[1974] Information-theoretic limitations of formal systems, J. Assoc. Comput. Mach., 21, pp. 403-424.

P. CLOTE
[1985] Partition relations in arithmetic, in: Methods in Mathematical Logic, C. A. Di Prisco, ed., Lecture Notes in Mathematics #1130, Springer-Verlag, Berlin, pp. 32-68.

A. COBHAM
[1965] The intrinsic computational difficulty of functions, in: Logic, Methodology and Philosophy of Science, proceedings of the second International Congress, held in Jerusalem, 1964, Y. Bar-Hillel, ed., North-Holland, Amsterdam.

S. A. COOK
[1975] Feasibly constructive proofs and the propositional calculus, in: Proceedings of the Seventh Annual ACM Symposium on Theory of Computing, Association for Computing Machinery, New York, pp. 83-97.

S. A. COOK AND A. URQUHART
[1993] Functional interpretations of feasibly constructive arithmetic, Annals of Pure and Applied Logic, 63, pp. 103-200.

S. FEFERMAN
[1960] Arithmetization of metamathematics in a general setting, Fundamenta Mathematicae, 49, pp. 35-92.

H. GAIFMAN AND C. DIMITRACOPOULOS
[1982] Fragments of Peano's arithmetic and the MRDP theorem, in: Logic and Algorithmic: An International Symposium held in honour of Ernst Specker, Monographie #30 de L'Enseignement Mathématique, pp. 187-206.

G. GENTZEN
[1936] Die Widerspruchsfreiheit der reinen Zahlentheorie, Mathematische Annalen, 112, pp. 493-565. English translation in: Gentzen [1969], pp. 132-213.
[1938] Neue Fassung des Widerspruchsfreiheitsbeweises für die reine Zahlentheorie, Forschungen zur Logik und zur Grundlegung der exakten Wissenschaften, New Series, 4, pp. 19-44. English translation in: Gentzen [1969], pp. 252-286.
[1969] Collected Papers of Gerhard Gentzen, North-Holland, Amsterdam. Edited by M. E. Szabo.

J.-Y. GIRARD
[1987] Proof Theory and Logical Complexity, vol. I, Bibliopolis, Napoli.
K. GÖDEL
[1958] Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes, Dialectica, 12, pp. 280-287.

P. HÁJEK AND P. PUDLÁK
[1993] Metamathematics of First-order Arithmetic, Perspectives in Mathematical Logic, Springer-Verlag, Berlin.

D. HILBERT AND P. BERNAYS
[1934-39] Grundlagen der Mathematik, I & II, Springer, Berlin.

W. A. HOWARD
[1970] Assignment of ordinals to terms for primitive recursive functionals of finite type, in: Intuitionism and Proof Theory: Proceedings of the Summer Conference at Buffalo N.Y. 1968, A. Kino, J. Myhill, and R. E. Vesley, eds., North-Holland, Amsterdam, pp. 443-458.

D. S. JOHNSON, C. H. PAPADIMITRIOU, AND M. YANNAKAKIS
[1988] How easy is local search?, Journal of Computer and System Sciences, 37, pp. 79-100.

R. W. KAYE
[1991] Models of Peano Arithmetic, Oxford Logic Guides #15, Oxford University Press.
[1993] Using Herbrand-type theorems to separate strong fragments of arithmetic, in: Arithmetic, Proof Theory and Computational Complexity, P. Clote and J. Krajíček, eds., Clarendon Press (Oxford University Press), Oxford.

C. F. KENT AND B. R. HODGSON
[1982] An arithmetic characterization of NP, Theoretical Computer Science, 21, pp. 255-267.

J. KETONEN AND R. M. SOLOVAY
[1981] Rapidly growing Ramsey functions, Annals of Mathematics, 113, pp. 267-314.

J. KRAJÍČEK
[1995] Bounded Arithmetic, Propositional Calculus and Complexity Theory, Cambridge University Press.

J. KRAJÍČEK, P. PUDLÁK, AND G. TAKEUTI
[1991] Bounded arithmetic and the polynomial hierarchy, Annals of Pure and Applied Logic, 52, pp. 143-153.

M. W. KRENTEL
[1988] The complexity of optimization problems, Journal of Computer and System Sciences, 36, pp. 490-509.

H. LESSAN
[1978] Models of Arithmetic, PhD thesis, Manchester University.

P. LINDSTRÖM
[1997] Aspects of Incompleteness, Lecture Notes in Logic #10, Springer-Verlag, Berlin.

R. J. LIPTON
[1978] Model theoretic aspects of computational complexity, in: Proceedings of the 19th Annual Symposium on Foundations of Computer Science, IEEE Computer Society, Piscataway, New Jersey, pp. 193-200.

M. H. LÖB
[1955] Solution of a problem of Leon Henkin, Journal of Symbolic Logic, 20, pp. 115-118.

E. MENDELSON
[1987] Introduction to Mathematical Logic, Wadsworth & Brooks/Cole, Monterey.
G. E. MINTS
[1973] Quantifier-free and one-quantifier systems, Journal of Soviet Mathematics, 1, pp. 71-84.

E. NELSON
[1986] Predicative Arithmetic, Princeton University Press.

V. A. NEPOMNJAŠČII
[1970] Rudimentary predicates and Turing calculations, Kibernetika, 6, pp. 29-35. English translation in Cybernetics 8 (1972) 43-50.

R. PARIKH
[1971] Existence and feasibility in arithmetic, Journal of Symbolic Logic, 36, pp. 494-508.

J. B. PARIS AND C. DIMITRACOPOULOS
[1982] Truth definitions for Δ_0 formulae, in: Logic and Algorithmic, Monographie no 30 de L'Enseignement Mathématique, University of Geneva, pp. 317-329.

J. B. PARIS AND L. HARRINGTON
[1977] A mathematical incompleteness in Peano arithmetic, in: Handbook of Mathematical Logic, North-Holland, Amsterdam, pp. 1133-1142.

J. B. PARIS AND L. A. S. KIRBY
[1978] Σ_n-collection schemes in arithmetic, in: Logic Colloquium '77, North-Holland, Amsterdam, pp. 199-210.

C. PARSONS
[1970] On a number-theoretic choice schema and its relation to induction, in: Intuitionism and Proof Theory: Proceedings of the Summer Conference at Buffalo N.Y. 1968, A. Kino, J. Myhill, and R. E. Vesley, eds., North-Holland, Amsterdam, pp. 459-473.
[1972] On n-quantifier induction, Journal of Symbolic Logic, 37, pp. 466-482.

W. POHLERS
[1989] Proof Theory: An Introduction, Lecture Notes in Mathematics #1407, Springer-Verlag, Berlin.

P. PUDLÁK
[1983] Some prime elements in the lattice of interpretability types, Transactions of the American Mathematical Society, 280, pp. 255-275.
[1990] A note on bounded arithmetic, Fundamenta Mathematicae, 136, pp. 85-89.

A. A. RAZBOROV
[1994] On provably disjoint NP-pairs, Tech. Rep. RS-94-36, Basic Research in Computer Science Center, Aarhus, Denmark, November. http://www.brics.dk/index.html.
[1995] Unprovability of lower bounds on the circuit size in certain fragments of bounded arithmetic, Izvestiya of the RAN, 59, pp. 201-224.

A. A. RAZBOROV AND S. RUDICH
[1994] Natural proofs, in: Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, Association for Computing Machinery, New York, pp. 204-213.

J. B. ROSSER
[1936] Extensions of some theorems of Gödel and Church, Journal of Symbolic Logic, 1, pp. 87-91.

K. SCHÜTTE
[1977] Proof Theory, Grundlehren der mathematischen Wissenschaften #225, Springer-Verlag, Berlin.

W. SIEG
[1985] Fragments of arithmetic, Annals of Pure and Applied Logic, 28, pp. 33-71.
C. SMORYŃSKI
[1977] The incompleteness theorems, in: Barwise [1977], pp. 821-865.

R. M. SMULLYAN
[1992] Gödel's Incompleteness Theorems, Oxford Logic Guides #19, Oxford University Press.

R. M. SOLOVAY
[1976] Letter to P. Hájek. Unpublished.

R. SOMMER
[1990] Transfinite Induction and Hierarchies Generated by Transfinite Recursion within Peano Arithmetic, PhD thesis, U.C. Berkeley.

L. J. STOCKMEYER
[1976] The polynomial-time hierarchy, Theoretical Computer Science, 3, pp. 1-22.

G. TAKEUTI
[1987] Proof Theory, North-Holland, Amsterdam, 2nd ed.
[1990] Some relations among systems for bounded arithmetic, in: Mathematical Logic, Proceedings of the Heyting 1988 Summer School, P. P. Petkov, ed., Plenum Press, New York, pp. 139-154.

A. TARSKI, A. MOSTOWSKI, AND R. M. ROBINSON
[1953] Undecidable Theories, North-Holland, Amsterdam.

A. J. WILKIE AND J. B. PARIS
[1987] On the scheme of induction for bounded arithmetic formulas, Annals of Pure and Applied Logic, 35, pp. 261-302.

C. WRATHALL
[1976] Complete sets and the polynomial-time hierarchy, Theoretical Computer Science, 3, pp. 23-33.

D. ZAMBELLA
[1996] Notes on polynomially bounded arithmetic, Journal of Symbolic Logic, 61, pp. 942-966.
CHAPTER III
Hierarchies of Provably Recursive Functions

Matt Fairtlough
Department of Computer Science, University of Sheffield, Sheffield S1 4DP, England

Stanley S. Wainer¹
Department of Pure Mathematics, University of Leeds, Leeds LS2 9JT, England
Contents
1. Introduction ......................................... 150
2. Structured ordinals and associated hierarchies ....... 153
3. Complete ω-arithmetic ................................ 164
4. Provably recursive functions of PA ................... 175
5. Independence results for PA .......................... 190
6. The "true" ordinal of PA ............................. 193
7. Theories with transfinite induction .................. 199
References .............................................. 203
¹ The second author thanks the Department of Philosophy at Carnegie Mellon University for generous hospitality and the opportunity to teach some of this material, during his year as a Fulbright Scholar 1992-93.

HANDBOOK OF PROOF THEORY
Edited by S. R. Buss
1998 Elsevier Science B.V. All rights reserved
150
M. Fairtlough and S. Wainer
1. Introduction

Since the recursive functions are of fundamental importance in logic and computer science, it is a natural pure-mathematical exercise to attempt to classify them in some way according to their logical and computational complexity. We hope to convince the reader that this is also an interesting and a useful thing to do: interesting because it brings to bear, in a clear and simple context, some of the most basic techniques of proof theory such as cut-elimination and ordinal assignments; and useful because it brings out deep theoretical connections with program-verification, program complexity and finite combinatorics. One might wonder why this branch of recursive function theory should most appropriately be viewed in a proof-theoretic light, but this is simply because the underlying concerns are of an intensional character, to do with computations or derivations of functions according to given programs rather than merely their definitions in extenso as sets of ordered pairs. The proof-theoretic connection is immediately observable by considering the most basic recursive operation of all, namely composition: given functions f and g, define h := f ∘ g by the rule

(g(x) = y) ⊃ (f(y) = z) ⊃ (h(x) = z).
Then the usual quantifier rules of logic yield

∀x.∃y.(g(x) = y) ⊃ ∀y.∃z.(f(y) = z) ⊃ ∀x.∃z.(h(x) = z)
and so the totality/termination of h follows from that of g and f respectively by means of two applications of Cut. As we shall see, cut-elimination then yields a "direct" proof from which the complexity of h can be read off. It is the relationship between computational complexity on the one hand, and logical complexity (of termination proofs) on the other, which forms our principal theme here. Put simply, a program satisfies a specification

∀input. ∃output. Spec(input, output)
if for each input x it computes an output y such that Spec(x, y) holds. Mere knowledge that the specification is true tells us only that there exists a while-program satisfying it. But to gain information about the possible structure and complexity of such a program we need to know why the specification is true; in other words, we need to be given a proof. Thus our primary interest will be with those (recursively enumerable) classes of functions which are "verifiably computable" in given subsystems of arithmetic and analysis whose proof-theoretic strength is well-understood. This is not to say that the problem of classifying all recursive functions "in one go" is uninteresting: far from it. The known general results of Feferman [1962] in that direction, on completeness and incompleteness of hierarchies generated along paths in Kleene's O, raise further deep questions which remain unanswered, e.g. "what is a natural well-ordering?"
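The notion of a program meeting such a specification is easy to make concrete; the following Python sketch is our own illustration (the integer-square-root specification is an invented example), checking ∀x ∃y Spec(x, y) over a finite sample of inputs:

```python
import math

def satisfies(program, spec, inputs):
    """Check, over a finite sample of inputs, that `program` meets the
    specification  forall x exists y. Spec(x, y),  by computing
    y = program(x) and testing Spec(x, y)."""
    return all(spec(x, program(x)) for x in inputs)

# Example spec: y is the integer square root of x.
spec = lambda x, y: y * y <= x < (y + 1) * (y + 1)
assert satisfies(math.isqrt, spec, range(1000))
```

Of course such testing only samples the specification; the chapter's point is that a *proof* of the specification carries quantitative information about every program extracted from it.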
Provably Recursive Functions
151
Our aim then is to find uniform scales against which we can measure the computational complexity of functions verifiably computable in "known" theories. By "complexity" we mean "complexity in the large", as measured by the rates of growth of resource-bounding functions, irrespective of whether they be polynomial, exponential or much worse. We do not wish to place prior restrictions on their size, but rather to have the means of comparing one with another. How might this be achieved? What form should a "subrecursive scale" take? To answer this we need first to ask what kind of features of recursive definitions we are actually trying to measure and compare.

Suppose given a number-theoretic program of some kind, together with an operational semantics determining for each number n a space C(n) consisting of all computations and subcomputations of the program, starting on inputs ≤ n. The subcomputation relation induces a tree structure on C(n) and we will assume further that it has been linearly ordered by a suitable Kleene-Brouwer ordering ≺_n. Then if n1 < n2 there will be an order-preserving embedding C(n1, n2) : (C(n1), ≺_{n1}) → (C(n2), ≺_{n2}) such that C(n2, n3) ∘ C(n1, n2) = C(n1, n3) whenever n1 < n2 < n3. Thus C is a functor from the category ℕ = {0 < 1 < 2 < ...} into the category of linear orderings with order-preserving maps, and the idea is that C abstracts the uniform computational structure from the given program.

Now C has a direct limit. This limit will be a countable linear ordering γ = (X, ≺) together with maps g(n) : C(n) → X satisfying n1 < n2 ⟹ g(n1) = g(n2) ∘ C(n1, n2), and it will be initial among all such objects. Therefore γ can be represented as the union of a nested, increasing sequence of suborderings γ[n] = (Image g(n), ≺). Let us now make the further assumption that the given program always terminates. Then computations are finite and the subcomputation relation is well-founded. So each C(n), and hence each γ[n], will be finite and γ will be a well-ordering.
In other words, the computational structure of terminating programs can be described abstractly in terms of countable well-orderings γ, presented as the unions of ascending chains of finite suborderings γ[0] ⊆ γ[1] ⊆ γ[2] ⊆ .... This is our basis for a uniform theory of "ordinal assignments" and as we shall see, it applies equally well to proofs as to computations, thus providing a link between the two.

There is still more useful structure to be extracted from the above presentation of a well-ordering γ = (X, ≺): given an arbitrary point α ∈ X ∪ {γ} and any number n, define P_n(α), the "n-predecessor" of α, to be the topmost element of γ[n] ∩ {β : β ≺ α} if this is non-empty, and 0 (the least element) otherwise. Then these predecessor functions provide a uniform method of generating hierarchies of number-theoretic functions and functionals, by transfinite recursion: Let J : ℕ^ℕ → ℕ^ℕ be any given operator taking unary functions to unary functions. Then for each α ≼ γ define the α-th iterate of J,

J^α : ℕ^ℕ → ℕ^ℕ

as follows:

J^α(g) := g                           if α = 0
J^α(g) := λn. J(J^{P_n(α)}(g))(n)     if α ≠ 0
Two simple examples spring immediately to mind, but the reader should not be deceived by their apparent simplicity, since they turn out to be fundamental tools of the subrecursive hierarchy trade. Both are obtained as iterates of the composition operator J(g, h) := g ∘ h, the first with g fixed and h varying, the second with h fixed and g varying.

I. Fix g ∈ ℕ^ℕ and define J_g(h) = g ∘ h. Then for every h ∈ ℕ^ℕ and every n ∈ ℕ,

J_g^α(h)(n) = g^k(h(n))

where k is the length of the descending chain α ≻ P_n(α) ≻ P_n P_n(α) ≻ ··· ≻ 0. In particular, with g the successor function and h constantly zero, we obtain for each α ≼ γ the functions

G_α(n) := least k. (P_n^k(α) = 0)

constituting the so-called Slow-Growing Hierarchy. Notice that G_α(n) measures the size of γ[n].

II. Fix h ∈ ℕ^ℕ and define J_h(g) = g ∘ h. Then for every g ∈ ℕ^ℕ and every n ∈ ℕ,

J_h^α(g)(n) = g(h^k(n))

where k is the length of the descending chain α ≻ P_n(α) ≻ P_{h(n)} P_n(α) ≻ ··· ≻ 0. In particular, with h the successor and g the identity function, we obtain for every α ≼ γ the functions

H_α(n) := least m. (P_{m-1} P_{m-2} ··· P_{n+1} P_n(α) = 0)

constituting the Hardy Hierarchy (so called because Hardy [1904] was the first to make use of them, in "exhibiting" a set of reals with cardinality ℵ_1). As we shall see however, this hierarchy also contains, embedded within it, the Fast-Growing Hierarchies B_α and F_α which provide crucial links between proof theory on the one hand and recursive function theory on the other.

This paper is about these hierarchies, their interconnections, and their relationships with proof theory as displayed in "subrecursive classification theorems" of the following form:

Given an arithmetical theory T with proof-theoretic ordinal ||T||, then the
provably recursive functions of T are exactly those functions computable within complexity-bounds H_α for α ≺ ||T||.
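For small structured ordinals the hierarchies G_α and H_α just defined are directly computable. The following Python sketch is our own illustration: it codes ordinals below ω^ω in Cantor normal form as coefficient tuples and uses the standard fundamental sequences ω^{j+1}[n] = ω^j · n (a particular choice of sequences; the text works with arbitrary assigned ones).

```python
def fund(alpha, n):
    """One step down from alpha at stage n: the predecessor for a successor
    ordinal, the n-th fundamental-sequence member alpha[n] for a limit.
    Ordinals below omega**omega are tuples (c0, c1, c2, ...) denoting
    c0 + c1*omega + c2*omega**2 + ..., with omega**(j+1)[n] = omega**j * n."""
    a = list(alpha)
    if a[0] > 0:                                   # successor: drop one
        a[0] -= 1
    else:                                          # limit: step to alpha[n]
        j = next(i for i, c in enumerate(a) if c > 0)
        a[j] -= 1
        a[j - 1] = n
    while a and a[-1] == 0:                        # normalize
        a.pop()
    return tuple(a)

def G(alpha, n):
    """Slow-growing: G_0(n)=0, G_{a+1}(n)=G_a(n)+1, G_lam(n)=G_{lam[n]}(n)."""
    k = 0
    while alpha:
        if alpha[0] > 0:
            k += 1
        alpha = fund(alpha, n)
    return k

def H(alpha, n):
    """Hardy: H_0(n)=n, H_{a+1}(n)=H_a(n+1), H_lam(n)=H_{lam[n]}(n)."""
    while alpha:
        step = alpha[0] > 0
        alpha = fund(alpha, n)
        if step:
            n += 1
    return n

OMEGA, OMEGA2 = (0, 1), (0, 0, 1)
print(G(OMEGA, 7), H(OMEGA, 7))     # 7 14  (G_omega(n) = n, H_omega(n) = 2n)
print(G(OMEGA2, 3), H(OMEGA2, 3))   # 9 24  (G_{omega^2}(n) = n^2, H_{omega^2}(n) = 2^n * n)
```

Already at ω² the gap between the two hierarchies is visible: G_{ω²} is polynomial while H_{ω²} is exponential, a first instance of the fast-versus-slow-growing contrast studied in section 6.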
We shall concentrate on the cases where T is Peano Arithmetic (PA), a fragment of it, or an extension of it by some axiom of transfinite induction. Pohlers, in this volume, provides an ordinal analysis for many richer theories, but once this is done for a theory T its subrecursive classification may then be reduced to that of a corresponding theory of transfinite induction over order types ≼ ||T||. Thus the two papers together supply the methods one needs to classify the provably recursive functions of a wide spectrum of theories. Throughout this paper we shall only consider theories based on classical logic. It is known by e.g. Friedman [1978] that in general, the provably recursive functions will be the same whether the underlying logic is intuitionistic or classical. We view proof theory as an attempt to analyse the truth definition and to classify truths. Consequently we here use proof theory to measure the complexity of recursive functions according to the logical complexity of the judgements that they terminate.

We begin by developing in section 2 a general inductive class Ω_s of ordinal presentations, together with their arithmetic and associated function hierarchies. Section 3 presents arithmetic truth as a cut-free infinitary calculus in the style of Tait [1968], with an assignment of ordinal bounds based on Buchholz [1987] but modified and simplified here following Fairtlough [1991]. In this presentation the slow-growing hierarchy arises naturally as the class of bounding functions for the truth of existential statements. Addition of the Cut rule then yields an infinitary proof theory for which the fast-growing hierarchies now supply the bounding functions. In section 4 we read off classification theorems for PA and its fragments by embedding them into the infinitary framework. These results are subsequently extended in section 7 to theories of transfinite induction.
Section 5 gives an application to a well-known mathematical independence result for PA due originally to Kirby and Paris [1982]. However the proof given here is due to Cichon [1983] and displays a clear connection with the Hardy hierarchy. Section 6 reduces fast- to slow-growing by computing a map α ↦ α⁺ such that B_α = G_{α⁺}. This amounts to a complete cut-elimination, since proofs of Π⁰₂-sentences are now reduced to their cut-free truth-definitions. The result is a new and more subtle ordinal assignment to theories T which was first discovered by Girard [1981]. Only the initial stages of this reduction are needed here (those appropriate for PA) but they already serve to illustrate the subtlety and power of ordinal assignments in providing uniform measurements of proof-theoretic complexity.
2. Structured ordinals and associated hierarchies
A "structured" countable ordinal is one for which an arbitrary but fixed "fundamental sequence" has been assigned to each limit below it. The most convenient way to introduce such ordinals is in two stages. First the set ~ of abstract "treeordinals" is defined inductively, in a manner reminiscent of Kleene's O but without any conditions (effectivity or otherwise) being placed on sequences. Then the subset ~s of "structured treeordinals" is obtained by requiring that fundamental
sequences mesh together in an appropriate way, as in Schmidt [1976] and Ketonen and Solovay [1981]. It will be intuitively clear from their uniform construction that the particular tree-ordinals named and used in what follows are all recursive ones.

2.1. Definition. The set Ω of countable tree-ordinals α, β, γ, ..., λ, ... is generated inductively according to the rules:

• 0 ∈ Ω
• α ∈ Ω ⟹ α + 1 := α ∪ {α} ∈ Ω
• λ_x ∈ Ω (x ∈ ℕ) ⟹ λ := ⟨λ_x⟩_{x∈ℕ} ∈ Ω
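Definition 2.1 translates almost verbatim into a datatype; the following Python sketch is our own rendering (the names and the branch_length function are ours), with ω = sup⟨0, 1, 2, ...⟩ as the first limit:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Ord:
    """A tree-ordinal in the sense of Definition 2.1: exactly one of
    - zero        (pred is None, seq is None),
    - a successor (pred is the ordinal it succeeds),
    - a limit     (seq is an N-indexed sequence of tree-ordinals)."""
    pred: Optional["Ord"] = None
    seq: Optional[Callable[[int], "Ord"]] = None

ZERO = Ord()
def succ(a): return Ord(pred=a)
def limit(f): return Ord(seq=f)

def from_nat(k):
    a = ZERO
    for _ in range(k):
        a = succ(a)
    return a

OMEGA = limit(from_nat)          # omega = sup <0, 1, 2, ...>

def branch_length(a, n):
    """Length of the descending chain obtained by taking predecessors at
    successors and the n-th sequence member at limits."""
    if a.seq is not None:
        return branch_length(a.seq(n), n)
    if a.pred is not None:
        return 1 + branch_length(a.pred, n)
    return 0

assert branch_length(OMEGA, 5) == 5
assert branch_length(succ(OMEGA), 3) == 4
```

Note that nothing constrains the sequences here, exactly as in the definition; the "structured" subset Ω_s of Note 2.2 onwards imposes the meshing conditions on them.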
2.2. Note. λ will always denote a "limit" λ = ⟨λ_x⟩_{x∈ℕ}. We usually write, more suggestively, λ = sup λ_x. The subtree ordering ≺ is the transitive closure of the rules α ≺ α + 1 and λ_x ≺ λ.

We have β ∈ α[k], so β[n'] ⊆ γ[n'] for every n' ≥ k. Therefore by weakening n : N ⊢^γ Δ', ¬C (if necessary) to k : N ⊢^γ Δ', ¬C and applying the induction hypothesis, we obtain k : N ⊢^{γ+β} Δ', Δ, and hence the final rule can be reapplied to obtain n : N ⊢^{γ+α} Δ', Δ as required.
2. If C = E(...) is the "main" formula of n : N ⊢^γ Δ, C, then this means E(...) is true and we have an axiom. Consequently ¬C = ¬E(...) is false. Now n : N ⊢^γ Δ' follows from n : N ⊢^γ Δ', ¬C by Lemma 3.9, and Weakening then gives n : N ⊢^{γ+α} Δ', Δ.

3. If C = (B_0 ∨ B_1) is the main formula, the proof is similar to the next case.

4. If C = ∃xB(x) is the main formula, then the last rule applied in obtaining n : N ⊢^γ Δ, C is an ∃-rule, and we may assume that the two premises are, for some m, n : N ⊢^{β_0} m : N and n : N ⊢^{β_1} Δ, B(m), C, where β_0, β_1 ∈ α[n] ⊆ γ[n]. Now apply the induction hypothesis to the right-hand premise so as to obtain
~'~" N H7mt~l i I, A, B(m) and apply Vinversion and Weakening to the assumption n 9N F~ A', Vx~B(x) to obtain max(n, m)" N t~ A', A,~B(m) (.) Then one NCut with n" N t~~ max(n, m) 9N yields
n" N H7~~1A I, A,~B(m) provided ]~I[T~]~ O. For if 5 e j31[n] we may weaken the ordinal bound in (.) from "), to "y+ 5. Then the NCut applies since both/30 and y+ 5 lie in ~ +/31[n]. Thus if ~l[n] r o we can apply an LCut with cutformula B(m) of height r to obtain n 9N 7" ~ + ~ A ~ A as required. On the other hand if j31[n]  o then n : N ~ 1 A, B(m), C is an axiom and n" N F ~1 A', A, B(m) is too. Thus either n" N ~ A', A is already an axiom, or else ~B(m) is a false atom in which case we may use 3.9 and (.) to obtain max(n, m ) " N F~ A', A, from which we may deduce n " N ~~+" A', A by an NCut as before. 3.11. C u t  E l i m i n a t i o n T h e o r e m . If n" N ~rabl A then n" N 12~ A.
(Gentzen [1936] and Schütte [1977].) Hence n : N ⊢^α_r Δ implies n : N ⊢^{α*}_0 Δ, where α* = exp₂ʳ(α) is a stack of r 2's topped by α.
Proof. We proceed by induction on the derivation of n : N ⊢^α_{r+1} Δ. If the final rule applied is anything other than an L-Cut of rank r + 1, apply the induction hypothesis to the premises and then reapply the same rule, using the fact that if β ∈ α[n] then 2^β ∈ 2^α[n]. If on the other hand the final rule is an L-Cut with premises
n : N ⊢^{β₀}_{r+1} Δ, ¬C   and   n : N ⊢^{β₁}_{r+1} Δ, C
where |C| = r + 1, then the induction hypothesis and possibly a Weakening gives
n : N ⊢^{2^β}_r Δ, ¬C   and   n : N ⊢^{2^β}_r Δ, C
where β is the greatest of β₀, β₁ in α[n]. Now the Cut-Reduction Lemma 3.10 applies immediately, with γ = α = 2^β, Δ' = Δ and one of C, ¬C of the required form. Hence n : N ⊢^{2^β+2^β}_r Δ, and since 2^β + 2^β[n'] ⊆ 2^α[n'] when n' ≥ n, the desired result follows by a Weakening.

3.12. Note. It may later be convenient to replace the exponential 2^α (in 3.11) by ω^α. The above proof still works in that case, but with the proviso that the declared parameter n should be non-zero, since only then do we have ω^β + ω^β[n'] ⊆ ω^α[n'] for all n' ≥ n, whenever β ∈ α[n].

3.13. Definitions.
A Σ⁰₁ formula is one of the form
∃z₁ ... ∃z_k B(z₁, ..., z_k)
where B is "bounded" in the sense that all quantifiers occurring in it are bounded quantifiers ∃y(y < t ∧ ...), ∀y(y ≮ t ∨ ...) with t either a variable or a closed term. Note that this restriction is not a severe one, for if the quantifier bound t were an open term then we could rewrite ∃y(y < t ∧ ...) as ∃z(z = t ∧ ∃y(y < z ∧ ...)) and ∀y(y ≮ t ∨ ...) as ∃z(z = t ∧ ∀y(y ≮ z ∨ ...)).
A closed Σ⁰₁ formula as above is said to be true at m ∈ N if there are m₁, ..., m_k, all less than m, such that B(m₁, ..., m_k) is true (in the standard model). A set Δ of closed Σ⁰₁ formulas is true at m if at least one A_i ∈ Δ is true at m.

3.14. Note. If a set Δ of closed Σ⁰₁ formulas is true at m then it is automatically true at any greater m'. This property is called Σ⁰₁-persistence. Also if Δ contains a true formula all of whose quantifiers are bounded then Δ is true at any m.
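The "true at m" relation can be simulated directly: a closed Σ⁰₁ formula holds at m just when some tuple of witnesses, all below m, satisfies its bounded matrix. A minimal Python sketch (the predicate B stands in for the bounded matrix; all names are illustrative, not from the text):

```python
from itertools import product

def true_at(m, B, k):
    # Exists z1 ... zk . B(z1,...,zk) is "true at m" when some
    # witnesses, all less than m, satisfy the bounded matrix B.
    return any(B(*ws) for ws in product(range(m), repeat=k))

# Example matrix: B(z1, z2) iff z1 + z2 = 5.
B = lambda z1, z2: z1 + z2 == 5

# Persistence (Note 3.14): once true at some m, true at every larger m.
print(true_at(3, B, 2), true_at(4, B, 2), true_at(9, B, 2))   # → False True True
```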
3.15. Bounding Lemma for ⊢. 1. n : N ⊢^α m : N if and only if m ≤ B_α(n). [...]

... ≥ m + 2^{m+1}, and by 2.29, f is computable within time or space bounded by some fixed iterate of B_α, say B_α^i ≤ B_{α+i}. But for any reasonable model of computation there is a bounded "computation formula" C_f(x̄, y, z) which expresses the fact that f(x̄) is computable within resource z and with output-value y. Since we know that our f is computable within resource bounded by B_{α+i}, it follows that the formula ∃y∃z.C_f(m̄, y, z) is true at B_{α+i}(max m̄), for all inputs m̄. Therefore by Bounding Lemma 3.15 (3), we have for j = |C_f| + 2,
max m̄ : N ⊢^{α+i+j}_0 ∃y∃z.C_f(m̄, y, z)
and then by k applications of the ∀-rule, ⊢^{α+i+j+k}_0 ∀x̄∃y∃z.C_f(x̄, y, z). Since γ is a limit and α ≺ γ we also have α + i + j + k ≺ γ, and hence f is γ-recursive. The second equality was previously established at the end of section 2.
2. This follows exactly the same lines as (1) but using Bounding Lemma 3.16 for ⊢₀. The closure conditions imposed on γ ensure that if f ∈ E(G_α) for some α ≺ γ we may assume 2^ω ≼ α, so that G_α(m) ≥ 2^{m+1}, and then find α' ≺ γ such that G_{α'} bounds a fixed iterate of G_α. Hence f will be computable within resource bounded by G_{α'}, and for each input m̄ the Σ⁰₁ formula ∃y∃z.C_f(m̄, y, z) will be true at G_{α'}(max m̄), and so max m̄ : N ⊢₀^{α'} ∃y∃z.C_f(m̄, y, z), and f is therefore γ-definable.

3.20. Example. First notice that for each integer k ≥ 0, B_{α+k} is just B_α iterated 2^k times, so E(B_α) = E(B_{α+k}). However B_{α+ω}(m) = B_{α+m+1}(m), and so B_{α+ω} eventually dominates every function in E(B_α) if ω ≼ α. Thus B_{α+ω} ∉ E(B_α). In particular then, the functions B_{ω·k} with k = 1, 2, ... play the role of the Ackermann/Grzegorczyk functions in such a way that E(B_{ω·k}) = Grzegorczyk's E^{k+2}. Therefore for each k ≥ 2 we have by 3.19 (1), since ω·k is q-space representable for some polynomial q,
REC(ω·k) = E^{k+1}
and hence REC(ω²) = PRIMITIVE RECURSIVE. Note that, on the other hand, E₀(ω²) = E₃.
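The finite levels of the fast-growing hierarchy, which play the same role as the B_{ω·k} here, can be sketched directly. The conventions below (F_0(n) = n+1, F_{k+1}(n) = F_k iterated n+1 times) are the standard ones and are an assumption of this sketch, to be checked against section 2 of the chapter:

```python
def F(k, n):
    # Finite levels of the fast-growing hierarchy (assumed conventions:
    # F_0(n) = n + 1, F_{k+1}(n) = F_k^{n+1}(n)).
    if k == 0:
        return n + 1
    for _ in range(n + 1):
        n = F(k - 1, n)
    return n

# F_1(n) = 2n + 1, and F_2 already grows exponentially:
print(F(1, 5), F(2, 3))   # → 11 63
```

Each class E(F_k) then sits at a fixed Grzegorczyk level, mirroring the classification REC(ω·k) = E^{k+1} above.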
3.21. Example. It should be fairly clear that, for limits γ satisfying mild representability and closure conditions, the γ-recursive functions are exactly those defined by nested recursions over initial segments of γ. For any such recursion f, a computation formula C_f would be definable, and an informal termination proof by transfinite induction could be translated into a formal proof within the system. Conversely, the Hierarchy Theorem would show that any γ-recursive function is elementary in some B_β for β ≺ γ, and so definable by nested recursions, because the B_β functions are.

3.22. Note. From the example on primitive recursion, we see that part 1 of the Hierarchy Theorem applies to any γ which is q-space representable for some primitive recursive q and such that ω² ≼ γ. Similarly, part 2 of the Hierarchy Theorem applies to any γ which is q-space representable for some primitive recursive q, provided γ is at least as big as the first primitive recursively closed ordinal and also satisfies the stated compositionality requirement. These conditions will indeed be satisfied by any γ to which we later apply the Hierarchy Theorem, but we leave the reader to convince him or herself of this fact. In particular, it is quite easy to see that ε₀ is polynomial-space representable. See e.g. Sommer [1992].

4. Provably recursive functions of PA
Here we characterize the provably recursive functions of Peano Arithmetic (PA) and its fragments:
PROVREC(PA) = REC(ε₀)
PROVREC(Σ⁰ₙ-IND) = REC((ε₀)ₙ)   if n ≥ 2
PROVREC(Σ⁰₁-IND) = REC(ω²)
These results go back to Kreisel [1952] for PA, Parsons [1966] for the fragments, and to Wainer [1970] and Schwichtenberg [1971] for their corresponding subrecursive hierarchy classifications.

4.1. Peano Arithmetic (PA). Our version of PA is formalized classically in a Tait-style calculus PA ⊢ Δ where, as before, Δ is a finite set of formulas built out of atoms E(...), ¬E(...) using ∨, ∧, ∃, ∀; but now the formulas may contain free variables. It is sometimes convenient to display the free variables occurring in Δ by
LAx:   ⊢ Δ, E(t₁, ..., t_k), ¬E(t₁, ..., t_k)
∨:     from ⊢ Δ, B_i infer ⊢ Δ, (B₀ ∨ B₁)   (i = 0 or i = 1)
∧:     from ⊢ Δ, B₀ and ⊢ Δ, B₁ infer ⊢ Δ, (B₀ ∧ B₁)
∃:     from ⊢ Δ, B(t) infer ⊢ Δ, ∃xB(x)
∀:     from ⊢ Δ, B(y) infer ⊢ Δ, ∀xB(x)   (y not free in Δ)
L-Cut: from ⊢ Δ, ¬C and ⊢ Δ, C infer ⊢ Δ
Figure 2: Logical axioms and rules of PA writing A ( x l , . . . ,Xk) etc. The special atoms x 9N do not occur in the language of PA. 1. The logical axioms and rules are as in figure 2. 2. The principle of induction is formulated here as a rule (but see Note 4.3 below)"
Ind: from ⊢ Δ, B(0) and ⊢ Δ, ¬B(x), B(x + 1) infer ⊢ Δ, B(t)
where the variable x is not free in Δ, and t is any term.
3. To get the theory off the ground, we also need to add certain arithmetical axioms defining the atomic relations between basic terms. Among these will be the constant 0, the successor function +1 and the equality and inequality relations E₀(x, y) = (x = y) and ¬E₀(x, y) = (x ≠ y), E₁(x, y) = (x < y) and ¬E₁(x, y) = (x ≮ y). Their defining axioms will include all substitution instances of the axioms in figure 3. Instead of adding terms for addition and multiplication as in more usual formulations of PA, we choose to add a pairing function p and its inverses u and v satisfying the axioms in figure 4, so that by iterating p we can then encode finite lists directly as
⟨x₀, ..., x_{k−1}⟩ = p(...p(p(0, x₀), x₁), ..., x_{k−1}).
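The chapter only fixes p, u, v axiomatically (figure 4); any effective pairing with total inverses will serve. A sketch using the Cantor pairing function, which is one concrete choice and not necessarily the one intended by the axioms:

```python
import math

def p(x, y):
    # Cantor pairing: a bijection N x N -> N.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # invert the Cantor pairing
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def u(z): return unpair(z)[0]   # first inverse
def v(z): return unpair(z)[1]   # second inverse

def encode(xs):
    # <x0,...,x_{k-1}> = p(...p(p(0,x0),x1)...,x_{k-1})
    c = 0
    for x in xs:
        c = p(c, x)
    return c

def decode(c, k):
    # recover a k-element list from its code
    xs = []
    for _ in range(k):
        c, x = unpair(c)
        xs.append(x)
    return list(reversed(xs))
```

For instance u(p(3, 5)) = 3, v(p(3, 5)) = 5, and decode(encode([2, 7, 1]), 3) recovers [2, 7, 1].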
Figure 3: Arithmetical axioms for equality and inequality (all substitution instances):
⊢ Δ, x + 1 ≠ 0
⊢ Δ, x + 1 ≠ y + 1, x = y
⊢ Δ, x = x
⊢ Δ, x ≠ y, y = x
⊢ Δ, x ≠ y, y ≠ z, x = z
⊢ Δ, x < x + 1
⊢ Δ, x ≮ y + 1, x < y, x = y
⊢ Δ, x ≮ x
⊢ Δ, x ≮ y, y ≮ z, x < z

[...]

... ≥ B_{α+1}(m) ≥ e(m). Hence m : N ⊢^{ω·2} e(m) : N by Bounding Lemma 3.15 (1). Therefore by an N-cut, m : N ⊢^{γ''+1} Δ, B. We can now reapply the rule in question to obtain n : N ⊢^{γ''} Δ, B as desired. Step 2 now follows by putting γ = 2^{ω·d}, so that γ'' = ω^{d+1} + ω^d·3 + 2. This completes 4.9.
4.10. Theorem. REC(ω²) ⊆ PROVREC(Σ⁰₁-IND).
Proof. By 3.20, B_{ω·k} is primitive recursive for each k ∈ N, and so by the Hierarchy Theorem every function in REC(ω²) is primitive recursive. We therefore only need show that every primitive recursive function is provably recursive in Σ⁰₁-IND. This is done by assigning to each primitive recursive definition of a function f a bounded formula C_f(x̄, y, z) with the intuitive meaning: "z is a sequence code which describes the step-by-step computation of f(x̄), ending with output y". The formula ∃y∃z.C_f(x̄, y, z) then Σ⁰₁-defines f(x̄), and we merely have to show it to be provable in Σ⁰₁-IND.
1. If f is defined by one of the initial schemes:
f(x̄) = 0   or   f(x̄) = x_i + 1   or   f(x̄) = x_i
then take C_f(x̄, y, z) to be the conjunction of z = 0 (the empty sequence) with
y = 0   or   y = x_i + 1   or   y = x_i
respectively. Then in each case we have
C_f(x̄, 0, 0)   or   C_f(x̄, x_i + 1, 0)   or   C_f(x̄, x_i, 0)
provable immediately by identity axioms, hence ⊢ ∃y∃z.C_f(x̄, y, z) by the ∃-rule and hence ⊢ ∀x̄∃y∃z.C_f(x̄, y, z) by the ∀-rule.
2. Suppose f is defined from g₀, g₁ and g₂ by the substitution scheme f(x̄) = g₀(g₁(x̄), g₂(x̄)), and assume inductively that g₀, g₁ and g₂ have already been assigned "computation formulas" C₀, C₁ and C₂ so that in Σ⁰₁-IND,
⊢ ∀x̄∃y∃z.C_i(x̄, y, z). Then take C_f(x̄, y, z) to be the formula
lh(z) = 3 ∧ (z)₀ ≠ 0 ∧ (z)₁ ≠ 0 ∧ (z)₂ ≠ 0 ∧ y = u((z)₀)
∧ C₁(x̄, u((z)₁), v((z)₁)) ∧ C₂(x̄, u((z)₂), v((z)₂)) ∧ C₀(u((z)₁), u((z)₂), u((z)₀), v((z)₀)).
Now from the arithmetical axioms in PA it's easy to see that we can derive
⊢ ¬C₁(x̄, y₁, z₁), ¬C₂(x̄, y₂, z₂), ¬C₀(y₁, y₂, y₀, z₀), C_f(x̄, y₀, ⟨p(y₀, z₀), p(y₁, z₁), p(y₂, z₂)⟩).
Then by applying the quantifier rules in the correct order we obtain
⊢ ∃x̄∀y∀z.¬C₁(x̄, y, z), ∃x̄∀y∀z.¬C₂(x̄, y, z), ∃ȳ∀y∀z.¬C₀(ȳ, y, z), ∀x̄∃y∃z.C_f(x̄, y, z),
and from this by three successive cuts, ⊢ ∀x̄∃y∃z.C_f(x̄, y, z).
3. Suppose f is defined from g₀ and g₁ by primitive recursion:
f(x̄, 0) = g₀(x̄),   f(x̄, w + 1) = g₁(x̄, w, f(x̄, w)).
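The sequence code z that the formula C_f below describes is just the list of successive values f(x̄, 0), ..., f(x̄, w). A hypothetical Python mirror of that trace (the function and variable names are illustrative only, not part of the formal development):

```python
def pr_trace(g0, g1, xs, w):
    # Evaluate the primitive recursion f(xs,0)=g0(xs),
    # f(xs,i+1)=g1(xs,i,f(xs,i)), returning the final value together
    # with the list of intermediate values -- the content of the code z.
    vals = [g0(xs)]
    for i in range(w):
        vals.append(g1(xs, i, vals[-1]))
    return vals[-1], vals

# Addition x + w as a primitive recursion over w:
value, trace = pr_trace(lambda xs: xs[0],
                        lambda xs, i, prev: prev + 1,
                        [5], 3)
print(value, trace)   # → 8 [5, 6, 7, 8]
```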
Assume g₀ and g₁ have already been assigned formulas C₀, C₁ such that in Σ⁰₁-IND we have
⊢ ∀x̄∃y∃z.C₀(x̄, y, z)   and   ⊢ ∀x̄∀w∀w'∃y∃z.C₁(x̄, w, w', y, z).
Then take C_f(x̄, w, y, z) to be the formula
lh(z) = w + 1 ∧ ∀i < w + 1.((z)_i ≠ 0) ∧ y = u((z)_w)
∧ C₀(x̄, u((z)₀), v((z)₀)) ∧ ∀i < w.C₁(x̄, i, u((z)_i), u((z)_{i+1}), v((z)_{i+1})).
Using the arithmetical axioms we obtain ⊢ ¬C₀(x̄, y, z), C_f(x̄, 0, u((t)₀), t) by choosing the term t = ⟨p(y, z)⟩, and
⊢ ¬C_f(x̄, w, y, z), ¬C₁(x̄, w, y, y', z'), C_f(x̄, w + 1, y', t')
by choosing t' = p(z, p(y', z')). Thus by appropriate quantifier rules,
⊢ ∃x̄∀y∀z.¬C₀(x̄, y, z), ∃y∃z.C_f(x̄, 0, y, z)
and
⊢ ∀y∀z.¬C_f(x̄, w, y, z), ∃x̄∃w∃w'∀y∀z.¬C₁(x̄, w, w', y, z), ∃y∃z.C_f(x̄, w + 1, y, z)
and hence by applying cut to each one:
⊢ ∃y∃z.C_f(x̄, 0, y, z)
and
⊢ ∀y∀z.¬C_f(x̄, w, y, z), ∃y∃z.C_f(x̄, w + 1, y, z).
Then by Σ⁰₁-IND we obtain
⊢ ∃y∃z.C_f(x̄, w, y, z), and by ∀-rules
⊢ ∀x̄∀w∃y∃z.C_f(x̄, w, y, z). This completes the proof, since every primitive recursive function is definable by a sequence of the above schemes.

4.11. Theorem. REC((ε₀)ₙ) ⊆ PROVREC(Π⁰ₙ-IND)   (n ≥ 2)
Proof. By the Hierarchy Theorem and the Comparisons Lemma it suffices to show that for every α ≺ (ε₀)ₙ the fast-growing F_α is provably recursive in Π⁰ₙ-IND. For if f ∈ E(B_α) then f ∈ E(F_α), since B_α ≤ F_α on all non-zero arguments. Thus f is primitive recursively definable from F_α and so, by the proof of 4.10, f will also be provably recursive by some further Σ⁰₁ induction. We need to construct an appropriate "computation formula" C_F for the F functions. To do this we must first define a coding of tree-ordinals α, β, γ, ..., λ as numbers a = ⌜α⌝, b = ⌜β⌝, c = ⌜γ⌝, ... respectively, so that the basic ordinal operations can be simulated by the arithmetical term-structure of PA. This is easy to do because each α ≺ (ε₀)ₙ can be represented in Cantor normal form thus
α = ω^{α₀}·m₀ + ω^{α₁}·m₁ + ... + ω^{α_{k−1}}·m_{k−1}
where α_{k−1} ≺ ... ≺ α₁ ≺ α₀ ≺ exp_ω^{n−1}(1) and m₀, ..., m_{k−1} are positive integers. So if α_i is coded as the number a_i we can take as code for α:
a = ⟨p(a₀, m₀), p(a₁, m₁), ..., p(a_{k−1}, m_{k−1})⟩.
The ordinal 0 will be coded as the empty sequence 0. Succ(a) then stands for the formula (u(v(a)) = 0 ∧ v(v(a)) ≠ 0), and Lim(a) stands for the formula (u(v(a)) ≠ 0 ∧ v(v(a)) ≠ 0).
We can then define elementary number-theoretic functions
1. (a, b) ↦ a ⊕ b
2. (b, n) ↦ ω^b·(n + 1)
3. (a, x) ↦ l(a, x)
such that, if a encodes α and b encodes β, then
1. a ⊕ b encodes α + β (provided the lowest exponent of ω in α is no smaller than the highest exponent of ω in β)
2. ω^b·(n + 1) encodes ω^β·(n + 1)
3. l(a, x) encodes α_x, if α is a limit.
Since these functions are primitive recursive, they are provably recursive in Σ⁰₁-IND by the foregoing theorem, and we may therefore use them freely to form new terms, in such a way as to render the equation
(a ⊕ ω^b·(u + 1)) ⊕ ω^b = a ⊕ ω^b·(u + 2)
provable in Σ⁰₁-IND. We sometimes write ω^a for ω^a·1.
Now we can easily define a computation formula C_H(a, x, y, z) for the Hardy functions H_α(x) = y, by formalizing the statement: z is a sequence ⟨p(a₀, x₀), ..., p(a_{k−1}, x_{k−1})⟩ where a₀ = 0, x₀ = y, a_{k−1} = a, x_{k−1} = x, and for each i < k − 1: if Succ(a_{i+1}) then a_i ⊕ ⌜1⌝ = a_{i+1} and x_i = x_{i+1} + 1, and if Lim(a_{i+1}) then a_i = l(a_{i+1}, x_{i+1}) and x_i = x_{i+1}.
But since F_α = H_{ω^α} we can therefore take as the computation formula for F:
C_F(a, x, y, z) = C_H(ω^a, x, y, z). The following are derivable in Σ⁰₁-IND
for all terms a and b:
1. ⊢ ∃y∃z.C_H(⌜1⌝, x, y, z), since ⊢ C_H(⌜1⌝, x, x+1, p(p(0, p(0, x+1)), p(⌜1⌝, x))).
2. ⊢ ¬∀x∃y∃z.C_H(a, x, y, z), ¬∃y∃z.C_H(b, x, y, z), ∃y∃z.C_H(a ⊕ b, x, y, z). For if z' = ⟨p(0, y), ..., p(b, x)⟩ codes the computation of H_β(x) = y and z'' = ⟨p(0, y'), ..., p(a, y)⟩ codes the computation of H_α(y) = y', then z = ⟨p(0, y'), ..., p(a, y), ..., p(a ⊕ b, x)⟩ codes the computation of H_{α+β}(x) = H_α(H_β(x)) = y'. Clearly z is primitive recursively (and so provably recursively in Σ⁰₁-IND) definable from z' and z''. Therefore in Σ⁰₁-IND we have ⊢ ¬C_H(a, y, y', z''), ¬C_H(b, x, y, z'), ∃z.C_H(a ⊕ b, x, y', z), and (2) follows by the quantifier rules.
3. ⊢ ¬Lim(a), ¬∃y∃z.C_H(l(a, x), x, y, z), ∃y∃z.C_H(a, x, y, z), since ⊢ ¬Lim(a), ¬C_H(l(a, x), x, y, z), C_H(a, x, y, p(z, p(a, x))).
4. ⊢ ∃y∃z.C_F(0, x, y, z) by (1), since ω⁰ = 1.
5. ⊢ ¬∀x∃y∃z.C_F(a, x, y, z), ∃y∃z.C_F(a ⊕ ⌜1⌝, x, y, z). First substitute ω^a for a and ω^a·(u + 1) for b throughout the derivation of (2) to obtain
⊢ ¬∀x∃y∃z.C_H(ω^a, x, y, z), ¬∃y∃z.C_H(ω^a·(u + 1), x, y, z), ∃y∃z.C_H(ω^a·(u + 2), x, y, z)
using ⊢ ω^a ⊕ ω^a·(u + 1) = ω^a·(u + 2). Since the base case
⊢ ¬∀x∃y∃z.C_H(ω^a, x, y, z), ∃y∃z.C_H(ω^a·1, x, y, z)
follows by logic, we can now apply Σ⁰₁-IND over u to obtain
⊢ ¬∀x∃y∃z.C_H(ω^a, x, y, z), ∃y∃z.C_H(ω^a·(x + 1), x, y, z).
Hence by (3) with a replaced by ω^{a ⊕ ⌜1⌝},
⊢ ¬∀x∃y∃z.C_H(ω^a, x, y, z), ∃y∃z.C_H(ω^{a ⊕ ⌜1⌝}, x, y, z)
which is what was required, since C_F(a, x, y, z) is C_H(ω^a, x, y, z).
6. ⊢ ¬Lim(a), ¬∃y∃z.C_F(l(a, x), x, y, z), ∃y∃z.C_F(a, x, y, z), again by (3) with a replaced by ω^a.
Now in order to prove F_α totally defined we clearly need the principle of transfinite induction up to α. This can be formalized, and appropriate instances of it proved, in PA, using a neat method of Gentzen [1943]. See also Schütte [1977], Feferman [1968] and Takeuti [1987]. Let
TI(a, A) ≡ Prog(A) → ∀c.(c ≺ a → A(c))
where Prog(A), meaning "A is progressive", is the formula
∀d(A(0) ∧ (A(d) → A(d ⊕ ⌜1⌝)) ∧ ((Lim(d) ∧ ∀x.A(l(d, x))) → A(d)))
and where (c ≺ a) is a Σ⁰₁ formula defining the partial ordering γ ≺ α on tree-ordinals, i.e. "there is a sequence z = ⟨p(c₀, z₀), ..., p(c_k, z_k)⟩ with c₀ = c, c_k = a, and such that for each i < k, if Succ(c_{i+1}) then c_i ⊕ ⌜1⌝ = c_{i+1}, and if Lim(c_{i+1}) then c_i = l(c_{i+1}, z_{i+1})".
For any formula A(c) with a distinguished free variable c define A'(b) to be the formula ∀a(A(a) → A(a ⊕ ω^b)), and let A^{(n+1)} ≡ (A^{(n)})'. Note that if A is Π⁰_m then A' is Π⁰_{m+1} when brought to prenex form. We have the following three lemmas:
7. If A is a Π⁰_m formula then Π⁰_m-IND ⊢ ¬Prog(A), Prog(A').
We argue informally. First assume Prog(A). The first conjunct of Prog(A') is A'(0), which follows straight from A(a) → A(a ⊕ ⌜1⌝). The second conjunct is A'(d) → A'(d ⊕ ⌜1⌝). To prove this assume A'(d) and A(a). Then A(a ⊕ ω^{d ⊕ ⌜1⌝}) comes about by a Π⁰_m induction on the formula A(a ⊕ ω^d·(u + 1)). The base case A(a ⊕ ω^d) comes from A'(d) and A(a); the induction step comes from A'(d) with a instantiated by a ⊕ ω^d·(u + 1). Hence ∀x.A(a ⊕ ω^d·(x + 1)), and then A(a ⊕ ω^{d ⊕ ⌜1⌝}) follows from Prog(A). Therefore we have proved A'(d), A(a) → A(a ⊕ ω^{d ⊕ ⌜1⌝}), and hence by ∀-introduction and ∨-introduction, A'(d) → A'(d ⊕ ⌜1⌝). The third conjunct of Prog(A') is (Lim(d) ∧ ∀x.A'(l(d, x))) → A'(d). But if Lim(d) and ∀x.A'(l(d, x)) then for any a, A(a) → ∀x.A(a ⊕ ω^{l(d,x)}), so by Prog(A) yet again, A(a) → A(a ⊕ ω^d). Hence (Lim(d) ∧ ∀x.A'(l(d, x))) → A'(d). Putting these cases together gives Prog(A').
8. If A is a Π⁰_m formula then Π⁰_m-IND ⊢ ¬TI(b, A'), TI(ω^b, A).
For, assuming TI(b, A') and Prog(A), we have by (7) Prog(A') and therefore ∀c ≺ b.∀a.(A(a) → A(a ⊕ ω^c)). Using this we can then prove ∀d ≺ ω^b.A(d) as follows. Suppose d ≺ ω^b. Recall that for fixed d and i, in(d, i) is the initial part of the sequence d, with length i, so if i < lh(d) then in(d, i) codes ω^{d₀}·m₀ ⊕ ... ⊕ ω^{d_{i−1}}·m_{i−1}, where d₀, ..., d_{i−1} ≺ b. However, in the formal theory we can actually prove (i < lh(d) → ...) [...]
9. If A is a Π⁰₂ formula then, for each fixed k ∈ N, Π⁰ₙ-IND ⊢ TI(⌜exp_ω^{n−1}(k)⌝, A)   (n ≥ 2).
This is proved by iterating (8) above. Note that A' is Π⁰₃, A'' is Π⁰₄, ..., A^{(n−2)} is Π⁰ₙ. For any fixed k ∈ N we have in Π⁰ₙ-IND ⊢ TI(⌜k⌝, A^{(n−2)}), hence ⊢ TI(ω^{⌜k⌝}, A^{(n−3)}), hence ⊢ TI(ω^{ω^{⌜k⌝}}, A^{(n−4)}), etc., until finally ⊢ TI(a, A) where a = ⌜exp_ω^{n−1}(k)⌝.
Now we can complete the proof of the theorem by choosing A to be the formula
∀x∃y∃z.C_F(a, x, y, z). By (4), (5) and (6) this formula is progressive. So by (9), Π⁰ₙ-IND ⊢ ∀a ≺ ⌜exp_ω^{n−1}(k)⌝.∀x∃y∃z.C_F(a, x, y, z) for every fixed k ∈ N. Therefore if a is a fixed code for a tree-ordinal α ≺ (ε₀)ₙ, the formula ∀x∃y∃z.C_F(a, x, y, z) defining F_α is provable in Π⁰ₙ-IND.
Putting 3.19(1), 3.20, 4.9, 4.10 and 4.11 together, and recalling 2.23 and 4.3, we obtain our main subrecursive classification for arithmetic.
4.12. Classifications.
1. The primitive recursive functions can be classified as
PROVREC(Σ⁰₁-IND) = REC(ω²) = ⋃_{α ≺ ω²} E(B_α) = ⋃_{α ≺ ω} E(F_α) = ⋃_{α ≺ ω^ω} E(H_α).
2. The multiply-recursive functions of Péter [1967] can be classified as
PROVREC(Σ⁰₂-IND) = REC(ω^ω) = ⋃_{α ≺ ω^ω} E(B_α) = ⋃_{α ≺ ω^ω} E(F_α) = ⋃_{α ≺ ω^{ω^ω}} E(H_α).
3. More generally, for n ≥ 2, and recalling (ε₀)₁ = ω and (ε₀)_{n+1} = ω^{(ε₀)ₙ}:
PROVREC(Σ⁰ₙ-IND) = PROVREC(Π⁰ₙ-IND) = REC((ε₀)ₙ) = ⋃_{α ≺ (ε₀)ₙ} E(B_α) = ⋃_{α ≺ (ε₀)ₙ} E(F_α) = ⋃_{α ≺ (ε₀)_{n+1}} E(H_α).
4. And finally,
PROVREC(PA) = REC(ε₀) = ⋃_{α ≺ ε₀} E(B_α) = ⋃_{α ≺ ε₀} E(F_α) = ⋃_{α ≺ ε₀} E(H_α).
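The identity F_α = H_{ω^α} underlying these classifications can be tested numerically at small levels. A sketch for ordinals below ω^ω, assuming the fundamental-sequence conventions recalled in the C_H definition above (H_0(x) = x, H_{α+1}(x) = H_α(x+1), H_λ(x) = H_{λ_x}(x), with (ω^{β+1})_x = ω^β·(x+1)); the list representation is our own:

```python
def H(a, x):
    # Hardy function on codes of ordinals below omega^omega.
    # a is a list of (exponent, coefficient) pairs with natural-number
    # exponents in strictly decreasing order, denoting sum of w^e * m.
    a = list(a)
    while a:
        e, m = a.pop()
        if m > 1:
            a.append((e, m - 1))
        if e == 0:
            x += 1                    # successor step: H_{b+1}(x) = H_b(x+1)
        else:
            a.append((e - 1, x + 1))  # limit step: (w^e)_x = w^{e-1}*(x+1)
    return x

def F(k, x):
    return H([(k, 1)], x)   # F_k = H_{w^k}

print(F(1, 5), F(2, 3))   # → 11 63
```

These values agree with the direct recursive definition F_0(x) = x+1, F_{k+1}(x) = F_k^{x+1}(x).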
4.13. Remarks. Kreisel [1952], using ideas from Ackermann [1940], was first to show that the provably recursive functions of PA are definable by transfinite recursions over order-types ≺ ε₀. The refinements of this result, showing the provably recursive functions of Π⁰ₙ-IND to be those definable by recursion over order-types ≺ (ε₀)ₙ, are due to Parsons [1966]. The result that the provably recursive functions of Σ⁰₁-IND are exactly the primitive recursive functions is due variously to Parsons [1970], Mints [1973] and Takeuti [1987]. The corresponding subrecursive classifications in terms of the fast-growing F_α's follow from Grzegorczyk [1953] for α ≺ ω; from Robbin [1965] for α ≺ ω^ω; and from Löb and Wainer [1970], Wainer [1970] and Schwichtenberg [1971] for α ≺ ε₀. See also Constable [1971], Buchholz and Wainer [1987] and Rose [1984].
The H_α's first appeared in a subrecursive context in Wainer [1972], though their definition and first use (exhibiting a set of reals with cardinality ℵ₁) appears very early in Hardy [1904]. The development of the REC(α) classes and corresponding B_α-functions stems from Fairtlough [1991]. See also Fairtlough and Wainer [1992]. Many other significant contributions to the theory of provably recursive functions have been (and are still being) made; see for instance Buchholz [1987], Buchholz, Cichon and Weiermann [1994], Buss [1994], Friedman and Sheard [1995], Girard [1981], Leivant [1995], Ratajczyk [1993], Schwichtenberg [1977], Sieg [1985,1991], Tucker and Zucker [1992] and Weiermann [1996]. Furthermore there are non-standard model-theoretic methods running parallel to the proof-theoretic ones employed here; see for example Paris [1980], Hajek and Pudlak [1991], Avigad and Sommer [1997] and Sommer [1995]. The field is now quite broad, and we can only apologise to those many contributors whom we have not mentioned.

5. Independence results for PA
From 4.12(4) we see immediately that B_{ε₀}, F_{ε₀} and H_{ε₀} eventually dominate all the provably recursive functions of PA, and so they cannot themselves be provably recursive in PA. Consequently
PA ⊬ ∀n∀x∃y∃z.C_H(⌜(ε₀)ₙ⌝, x, y, z).
In other words, the proof of 4.11 shows that although for any fixed n we have PA ⊢ TI((ε₀)ₙ, A), there are Π⁰₂ instances of A for which PA ⊬ TI(ε₀, A).
Another way to see this would be to note that, again for an appropriate formula A,
PA + TI(ε₀, A) ⊢ "PA is consistent".
For, by Gentzen, if a false atom were derivable in PA, then by the Embedding and Cut-Elimination theorems it would be derivable in the infinitary system with cut-rank 0 and ordinal bound α ≺ ε₀. By induction up to ε₀ this is impossible, since an infinitary rank-0 proof of an atom cannot use logic. Formalisation of this argument would lead to a proof of TI(ε₀, A) ⊢ Con(PA) in Primitive Recursive Arithmetic. Of course, Gödel's second theorem tells us that there is no proof of Con(PA) inside PA, hence no proof of TI(ε₀, A). Thus ε₀ is the least upper bound of the "provable ordinals" of PA. See Gentzen [1943]. These may be termed "logic" independence results. The question then was whether they might be equivalent to other independence results of a more clear
mathematical character. This was shown to be the case by Paris and Harrington [1977], who proved the independence (from PA) of a certain finite version of Ramsey's theorem, and by Ketonen and Solovay [1981], who then gave a "rate-of-growth" analysis of it in terms of the fast-growing hierarchy. In this section we treat another, simpler independence result of Kirby and Paris [1982] concerning so-called "Goodstein sequences", but the proof we give is due to Cichon [1983], since it is directly related to the H-functions.

5.1. Definition. Given numbers a, x > 1 with a < exp_{x+1}^x(1), the "complete base-(x+1) form" of a is
a = (x+1)^{a₁}·m₁ + (x+1)^{a₂}·m₂ + ... + (x+1)^{a_k}·m_k
wherein a₁ > a₂ > ... > a_k, and the exponents a₁, ..., a_k etc. are again written in complete base-(x+1) form. Let g(x, a) be the number which results by writing a − 1 in complete base-(x+1) form, and then increasing the base in this expression from (x+1) to (x+2), leaving all coefficients m_i fixed. The Goodstein sequence on (x, a) is then the sequence of numbers {a_i}_{i ≥ x} where a_x = a and a_{x+j+1} = g(x+j, a_{x+j}).
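The base-change operation g of Definition 5.1 is easy to simulate: rewrite a − 1 in hereditary base-(x+1) form and bump the base. A minimal sketch (names are illustrative):

```python
def bump(a, b):
    # Rewrite a in complete (hereditary) base-b form and replace
    # every occurrence of the base b by b + 1.
    result, e = 0, 0
    while a > 0:
        m = a % b
        if m:
            result += m * (b + 1) ** bump(e, b)
        a //= b
        e += 1
    return result

def g(x, a):
    # g(x, a): write a - 1 in complete base-(x+1) form, bump base to x+2.
    return bump(a - 1, x + 1)

def goodstein(x, a):
    # The Goodstein sequence a_x, a_{x+1}, ... on (x, a), down to 0.
    seq, j = [a], 0
    while seq[-1] > 0:
        seq.append(g(x + j, seq[-1]))
        j += 1
    return seq

print(goodstein(2, 4))   # → [4, 4, 3, 2, 1, 0]
```

Despite the ever-growing base, the sequence always reaches 0; on (2, 4) it terminates at index y = 7.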
5.2. Definition. Given a number a written in complete base-(x+1) form, let ord_x(a) be the tree-ordinal obtained by replacing the base (x+1) throughout by ω.

5.3. Lemma. ord_x(a − 1) = P_x(ord_x(a)) for a > 0.
Proof. The proof is by induction on a. If a = 1 then ord_x(a − 1) = 0 = P_x(1). Suppose a > 1 and that the complete base-(x+1) form of a is
a = (x+1)^{a₁}·m₁ + (x+1)^{a₂}·m₂ + ... + (x+1)^{a_k}·m_k.
If a_k = 0 then ord_x(a − 1) = ord_x(a) − 1 = P_x(ord_x(a)). If a_k > 0 then let
b = (x+1)^{a₁}·m₁ + (x+1)^{a₂}·m₂ + ... + (x+1)^{a_k}·(m_k − 1).
Then
a − 1 = b + (x+1)^{a_k − 1}·x + (x+1)^{a_k − 2}·x + ... + (x+1)⁰·x.
Let α = ord_x(a), β = ord_x(b) and α_k = ord_x(a_k). Then by the induction hypothesis we have
ord_x(a − 1) = β + ω^{P_x(α_k)}·x + ω^{P_x²(α_k)}·x + ... + x.
Therefore by the properties of the function P_x we obtain
ord_x(a − 1) = P_x(β + ω^{α_k}) = P_x(ord_x(a)).

5.4. Lemma. g(x, a) = G_{x+1}(P_x(ord_x(a))).
Proof. By the definitions, note that
g(x, a) = G_{x+1}(ord_x(a − 1)),
since G_{x+1} replaces base ω by (x+2), as in 2.11. The result then follows from 5.3.

5.5. Lemma. Let a_x, a_{x+1}, a_{x+2}, ... be the Goodstein sequence on (x, a). Then for each j:
1. ord_{x+j}(a_{x+j}) = P_{x+j−1} P_{x+j−2} ··· P_x(ord_x(a)).
2. a_{x+j} = G_{x+j}(ord_{x+j}(a_{x+j})).
Proof. 1. By induction on j. The base case is trivial, and for the induction step we have by 5.3,
ord_{x+j+1}(g(x+j, a_{x+j})) = ord_{x+j}(a_{x+j} − 1) = P_{x+j}(ord_{x+j}(a_{x+j})).
Hence ord_{x+j+1}(a_{x+j+1}) = P_{x+j}(ord_{x+j}(a_{x+j})), and the result follows immediately from the induction hypothesis.
2. This is immediate by iterating 5.4.

5.6. Theorem. (Cichon [1983]) Let {a_i}_{i ≥ x} be the Goodstein sequence on (x, a). Then there is a y such that a_y = 0, and the least such y is given by y = H_{ord_x(a)}(x).
Proof. By 5.5, ord_{x+j+1}(a_{x+j+1}) ≺ ord_{x+j}(a_{x+j}) if ord_{x+j}(a_{x+j}) ≠ 0. By well-foundedness there must be a first stage k at which ord_{x+k}(a_{x+k}) = 0, and hence a_{x+k} = 0. By Theorem 2.19 we can express this k as
k = least k.(P_{x+k−1} P_{x+k−2} ··· P_x(ord_x(a)) = 0) = D_{ord_x(a)}(succ)(x)
and therefore x + k = H_{ord_x(a)}(x).

5.7. Theorem. (Kirby and Paris [1982]) Let Good(a, x, y) be a Σ⁰₁ formula of arithmetic expressing the fact that the Goodstein sequence on (x, a) terminates at y, i.e. a_y = 0. Then ∀a∀x∃y.Good(a, x, y) is true by 5.6, but not provable in PA.
Proof. If it were a theorem of PA, the function h(a, x) = least y.Good(a, x, y) would be provably recursive in PA. For each x, set a(x) = exp_{x+1}^x(1). Then a(x) is primitive recursive, and so the function h(a(x), x) would also be provably recursive in PA. However, by 5.6 and since ord_x(a(x)) = (ε₀)_x, we have h(a(x), x) = H_{ε₀}(x). This contradicts 4.12(4).
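Theorem 5.6 can be checked numerically for tiny cases. The sketch below recomputes the base-change map of 5.1, codes ord_x(a) as a nested Cantor-normal-form tuple, and evaluates the Hardy function with the fundamental sequences used earlier in this chapter ((ω^{β+1})_x = ω^β·(x+1), so ω_x = x+1); the representation and helper names are our own, not the chapter's:

```python
def bump(a, b):
    # hereditary base-b representation of a, with the base replaced by b+1
    result, e = 0, 0
    while a > 0:
        m = a % b
        if m:
            result += m * (b + 1) ** bump(e, b)
        a //= b
        e += 1
    return result

def goodstein_y(x, a):
    # least y with a_y = 0 in the Goodstein sequence on (x, a)
    i = x
    while a > 0:
        a = bump(a - 1, i + 1)
        i += 1
    return i

def ordx(a, b):
    # ord_{b-1}(a): nested CNF code, a tuple of (exponent, coeff) pairs
    terms, e = [], 0
    while a > 0:
        m = a % b
        if m:
            terms.append((ordx(e, b), m))
        a //= b
        e += 1
    return tuple(reversed(terms))   # exponents in decreasing order

def is_limit(o):
    return bool(o) and o[-1][0] != ()

def pred(o):
    # predecessor of a successor code (last term is ((), k))
    o = list(o)
    _, k = o[-1]
    o[-1:] = [((), k - 1)] if k > 1 else []
    return tuple(o)

def fund(o, x):
    # x-th member of the fundamental sequence of a limit code o
    o = list(o)
    e, m = o[-1]
    o[-1:] = [(e, m - 1)] if m > 1 else []
    if is_limit(e):
        o.append((fund(e, x), 1))     # (w^lambda)_x = w^(lambda_x)
    else:
        o.append((pred(e), x + 1))    # (w^(f+1))_x = w^f * (x+1)
    return tuple(o)

def H(o, x):
    # Hardy: H_0(x)=x, H_{b+1}(x)=H_b(x+1), H_lambda(x)=H_{lambda_x}(x)
    while o:
        if is_limit(o):
            o = fund(o, x)
        else:
            o, x = pred(o), x + 1
    return x

# Theorem 5.6 on (x, a) = (2, 3) and (2, 4):
print(goodstein_y(2, 3), H(ordx(3, 3), 2))   # → 5 5
print(goodstein_y(2, 4), H(ordx(4, 3), 2))   # → 7 7
```

Here ord₂(3) = ω and ord₂(4) = ω + 1, so the termination indices 5 and 7 are exactly H_ω(2) and H_{ω+1}(2).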
6. The "true" ordinal of PA
Section 4 characterizes the provably recursive functions of PA in terms of ε₀-recursiveness but, recalling Definitions 3.17, it still remains to characterize them in terms of γ-definability. We shall now "compute" the appropriate γ by appealing to the Hierarchy Theorems 3.19 and finding an ordinal map α ↦ α⁺ such that, for α ≺ ε₀ and even much larger α's, B_α = G_{α⁺}. We then have that PROVREC(PA) = REC(ε₀) coincides with the class of ε₀⁺-definable functions. For related results and an alternative treatment in terms of Gödel's system T of primitive recursive functionals, see Schwichtenberg and Wainer [1995]. ε₀⁺ is the proof-theoretic ordinal of the theory of one inductive definition and is usually referred to as the Bachmann-Howard ordinal (see Howard [1970]). Girard [1981] was the first to give a detailed analysis of the relationship between the fast-growing and the slow-growing hierarchies and, once the correct result was known, many others gave more direct and simpler analyses. We shall follow the treatment in Cichon and Wainer [1983] and, more generally, Wainer [1989]. The main point is that, in order to describe the map α ↦ α⁺, one needs to make use of "higher number classes" of uncountable tree-ordinals. However, since we are only concerned here with "small" α's below ε₀, we only need to go to the "next" number class over Ω.
N and f~l rules:
~.
Then the set f~2 is generated
(a~) E f~2 (a~) E ~2
Note: we sometimes write a = sup a~ or a = S U P a e according to whether a = (a~) or a = (ae) in f~2. 6.2. D e f i n i t i o n . The (wellfounded) "subtree" partial ordering < on f~2 is defined as the transitive closure of the rules 9
a. ~0. Since prooftheoretic ordinal analysis seeks to compute for a given theory T, the least upper bound T of its "provable ordinals" (see Pohlers in this volume), the results here will then immediately give a classification of the provably recursive functions of T viz. PrtovREc(T) = R E C ( T ) =
U E(F~).
Of course, ‖T‖ must be shown to satisfy the conditions of our Hierarchy Theorem, or something like them, and this often requires some checking! See Buchholz, Cichon and Weiermann [1994] for related work involving similar kinds of conditions. See also Weiermann [1996] for an alternative treatment of PA and transfinite induction in terms of "ordinal majorisation" relations. We shall assume henceforth that γ = sup γ_x > ω is a structured, countable tree-ordinal which is "primitive recursively representable", i.e. q-space representable for some primitive recursive q. The representability of γ ensures that it will be possible to code the well-ordering relation α ≺ β, for α, β ≺ γ, by a Σ⁰₁ formula of arithmetic "a ≺ b" (just as was done for ε₀ in the proof of Theorem 4.11 and for quite general systems of ordinal notations in Sommer [1992]). There will be primitive recursive functions a ↦ a ⊕ ⌜1⌝ representing the successor, and (a, x) ↦ l(a, x) such that if a encodes α then l(a, x) encodes α_x when α is a limit, and l(a, x) = 0
otherwise. For simplicity we shall assume that PA is extended to include them as new terms, also denoted a ⊕ ⌜1⌝ and l(a, x), with appropriate defining axioms for them. We shall also assume that 0 is the least element of the well-ordering, and that the value of a ⊕ ⌜1⌝ is always numerically larger than a. The top element of the well-ordering, representing γ itself, will be denoted by c, and we shall write Lim(a) for "l(a, 1) ≠ 0".
1 A , ~ B ( a ) , B ( a ( g r l 7)
1 A ,  L i m ( a ) , 3 x  , B ( l ( a , x ) ) , B ( a )
1 A, Va(a ~ c V B(a))
with the restriction that a is not free in A. The embedding of PA + TI(7) into wArithmetic follows the same lines as the embedding of PA in 4.7. Applications of the TI(3') rule are dealt with as follows:
Suppose the three premises of the TI(),) rule have been embedded into wArithmetic with a fixed ordinal bound 5 E f~s, so that 16 A, B(0) and for every a E N,
7.2. L e m m a .
a: N
t6
A ,   , B ( a ) , B ( a (9 r17)
a: N 16 A ,  L i m ( a ) , 3 x ~ B ( l ( a , x ) ) , B ( a ) . Furthermore suppose that 5 is so chosen that for all a, n E N, 1. the term l(a, n  1) has numerical value bounded by B6(max(a, n)), 2. the cardinality of 5In] is at least [B I, the height of the induction formula. Then for every a ~ 7, if a is the number encoding a we have a : N 16+~.,,+3 A, B(a). P r o o f . Proceed by induction on a ~ y (for notational simplicity we shall suppress the side formulas A). The case a = 0 is immediate and the successor case from a to a + 1 is also straightforward, since from the assumption a : N 16 ,S(a), S(a (9 r17) and the induction hypothesis a : N 16+5.~+a B(a), we obtain a : N 16+5.~+4 B(a(gF17) by Cut and hence a (9 r17 : N 16+5.(~+z)+a B(a (9 r17) by Weakening. Now suppose a = sup ax and choose any n E N. Then letting m denote the numerical value of l ( a , n  1), we have m : N 16+5.~,.1+a B ( l ( a , n  1)) by the induction hypothesis, and max(a, n) : N 1~ m : N by the Bounding Lemma 3.15 and condition 1. Therefore by an NCut we obtain for every n, max(a,n) : N 16+5.~,1+4 B ( l ( a , n  1)).
Provably Recursive Functions
201
The structuredness of α gives δ + 5·α_{n−1} + 4 ∈ δ + 5·α[max(a, n)] and so an application of the ∀-rule yields a : N ⊢^{δ+5·α} ∀xB(l(a, x − 1)). The reason for the second condition on δ is that it ensures (we leave the reader to check it)

a : N ⊢^{δ+5·α} ∃x¬B(l(a, x − 1)), ∀xB(l(a, x)).

Hence by Cut we obtain a : N ⊢^{δ+5·α+1} ∀xB(l(a, x)). Therefore from the assumption

a : N ⊢^δ ¬Lim(a), ∃x¬B(l(a, x)), B(a)

and since ⊢ Lim(a) is an axiom, we obtain by two further Cuts, a : N ⊢^{δ+5·α+3} B(a), and this completes the proof.

Now in order to prove the Embedding Theorem for PA + TI(γ) there is one further crucial requirement to be placed on γ. Clearly if γ is coded as a number-theoretic wellordering there must be a "norm" function with the property that whenever α ≺ γ is encoded by the number a, we have α ∈ γ[norm(a)]. See Buchholz, Cichon and Weiermann [1994]. For "standard" codings of proof-theoretic ordinals this norm function will often be just the identity, but we shall merely require that it be primitive recursive.

7.3. Embedding of PA + TI(γ). Suppose γ is primitive recursively representable, with a primitive recursive norm as above. Suppose PA + TI(γ) ⊢ A(t₁(x̄), ..., t_k(x̄)). Then there is a number d, measuring the "size" of this proof, such that for every assignment of numbers n̄ to the variables x̄, we have

max n̄ : N ⊢^{5·γ·d} A(m₁, ..., m_k)

where m₁, ..., m_k are the numerical values of the terms t₁(n̄), ..., t_k(n̄). Furthermore this infinitary derivation has finite cut-rank.

Proof. All the cases of PA-rules carry over straightforwardly just as in 4.8, but with ω now replaced by 5·γ. The only case we need worry about is the application of the TI(γ) rule. Assume inductively that its premises are all embedded in ω-Arithmetic with ordinal bound δ = 5·γ·d. Assume also that d is chosen large enough so that conditions 1 and 2 of the previous Lemma are satisfied by δ. Then what we need to prove is

⊢^{5·γ·(d+2)} A, ∀a(a ⊀ c ∨ B(a))
M. Fairtlough and S. Wainer
202
(we suppress the parameters occurring in the side-formulas A since they play no active part in this case). Now the previous lemma gives, for every α ≺ γ with code a,

a : N ⊢^{δ+5·α+3} A, B(a).

Let M(a, n) be a formula expressing the relation "norm(a) = n", and recall that γ itself is coded by the top element c in its number-theoretic wellordering. Then for a sufficiently large d we have, for all a, n ∈ N,

max(n, a) : N ⊢^{β_a} A, ¬M(a, n) ∨ a ⊀ c ∨ B(a)

where β_a = δ + 5·α + 4 if a ≺ c and norm(a) = n, and β_a = δ otherwise. Hence β_a ∈ δ + 5·γ[max(n, a)] for every a and so by the ∀-rule,

n : N ⊢^{δ+5·γ} A, ∀a(¬M(a, n) ∨ a ⊀ c ∨ B(a))

for every n, so by the ∀-rule again,

⊢^{δ+5·γ+1} A, ∀x∀a(¬M(a, x) ∨ a ⊀ c ∨ B(a)).

The desired result ⊢^{5·γ·(d+2)} A, ∀a(a ⊀ c ∨ B(a)) will then follow by a Cut if we can derive

⊢^{δ+5·γ+1} ∃x∃a(M(a, x) ∧ a ≺ c ∧ ¬B(a)), ∀a(a ⊀ c ∨ B(a)).

This is done as follows: for each a ∈ N with norm(a) = m we have (again for a large enough d) ⊢^δ M(a, m) and ⊢^δ a ≺ c, a ⊀ c and ⊢^δ ¬B(a), B(a). Hence ⊢^{δ+3} (M(a, m) ∧ a ≺ c ∧ ¬B(a)), (a ⊀ c ∨ B(a)). But since the norm function is primitive recursive and γ ≻ ω, we can assume that the chosen d is large enough to ensure a : N ⊢^δ m : N by the Bounding Lemma 3.15. Hence

a : N ⊢^{δ+5} ∃x∃a(M(a, x) ∧ a ≺ c ∧ ¬B(a)), (a ⊀ c ∨ B(a))

follows by two applications of the ∃-rule, and a final application of the ∀-rule yields

⊢^{δ+5·γ+1} ∃x∃a(M(a, x) ∧ a ≺ c ∧ ¬B(a)), ∀a(a ⊀ c ∨ B(a)).

This completes the proof.
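The norm function required in 7.3 can be illustrated on the tuple codes from the earlier sketch. The following choice is a hypothetical toy norm (the node count of a code, not the norm attached to any particular standard coding); it is evidently primitive recursive in the code:

```python
# A toy "norm" for Cantor-normal-form codes: a code is a tuple of
# exponent codes, and we count all nodes of this finite tree.

def norm(a):
    return len(a) + sum(norm(e) for e in a)

ZERO = ()
ONE = (ZERO,)
OMEGA = (ONE,)

# The norm grows with the complexity of the notation:
assert norm(ZERO) == 0
assert norm(ONE) == 1
assert norm(OMEGA) == 2
assert norm((OMEGA,)) == 3     # code of w^w
```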
7.4. Classification Theorem. Suppose γ = ω^α is primitive recursively representable with a primitive recursive norm. Recall e(α) := sup_n exp_n(1 + α + 1). Then

PROVREC(PA + TI(γ)) = REC(e(α)) = ⋃_{β ≺ e(α)} E(F_β).
Proof. For the first containment ⊆ suppose f is provably recursive in PA + TI(γ). Then for an appropriate computation formula C_f we have PA + TI(γ) ⊢ ∀x∃y∃z C_f(x, y, z). The Embedding Theorem gives (for appropriate r, d ∈ N) a derivation of cut-rank r with

⊢^{5·γ·d} ∀x∃y∃z C_f(x, y, z)

and, with only minor modifications to the proof, the 5 could be replaced by ω thus:

⊢^{ω·γ·d} ∀x∃y∃z C_f(x, y, z).

By the Cut-Elimination Theorem 3.11 we then obtain

⊢^δ ∀x∃y∃z C_f(x, y, z)

with cut-rank 0, where δ = exp_r(ω·γ·d) ≺ exp_{r+1}(1 + α + 1) ≺ e(α), and hence f ∈ REC(e(α)). The second containment follows from the Hierarchy Theorem 3.19 since B_β ≤ F_β. For the third containment, suppose f ∈ E(F_β) for some β ≺ e(α). Then f will be provably recursive in PA + TI(γ) if F_β is, and for some fixed m ∈ N we have β ≺ exp_m(1 + α + 1). But in PA + TI(γ) we can prove TI(α) and hence TI(1 + α + 1). Then by iterating part 8 of the proof of Theorem 4.11 we can prove transfinite induction up to exp_m(1 + α + 1) and hence up to β. As before F_β is then provably recursive in PA + TI(γ), and this completes the proof.

As a concluding example, let ID_n be the theory of an n-times iterated inductive definition and let τ_n denote its proof-theoretic ordinal (see Buchholz et al. [1981] for detailed analyses of these and other theories). Then the τ_n's have structured tree-ordinal representations and by Wainer [1989] we have τ_n⁺ = τ_{n+1}. Therefore PROVREC(ID_n) = REC(τ_n) and, letting τ = sup τ_n, PROVREC(Π¹₁-CA₀) = REC(τ); by the relation τ_n⁺ = τ_{n+1} these classes admit equivalent descriptions in terms of the slow growing hierarchy. A more direct analysis of general ID-theories in terms of the slow growing hierarchy is given by Arai [1991].

References

W. ACKERMANN
[1940] Zur Widerspruchsfreiheit der Zahlentheorie, Mathematische Annalen, 117, pp. 162-194.
T. ARAI
[1991] A slow growing analogue to Buchholz' proof, Annals of Pure and Applied Logic, 54, pp. 101-120.

J. AVIGAD AND R. SOMMER
[1997] A model-theoretic approach to ordinal analysis, Bulletin of Symbolic Logic, 3, pp. 17-52.

W. BUCHHOLZ
[1987] An independence result for (Π¹₁-CA)+(BI), Annals of Pure and Applied Logic, 33, pp. 131-155.

W. BUCHHOLZ, E. A. CICHON, AND A. WEIERMANN
[1994] A uniform approach to fundamental sequences and subrecursive hierarchies, Mathematical Logic Quarterly, 40, pp. 273-286.

W. BUCHHOLZ, S. FEFERMAN, W. POHLERS, AND W. SIEG
[1981] Iterated Inductive Definitions and Subsystems of Analysis, Lecture Notes in Mathematics #897, Springer-Verlag, Berlin.

W. BUCHHOLZ AND S. S. WAINER
[1987] Provably computable functions and the fast growing hierarchy, in: Logic and Combinatorics, S. G. Simpson, ed., vol. 65 of Contemporary Mathematics, American Mathematical Society, Providence, R.I., pp. 179-198.

S. R. BUSS
[1994] The witness function method and provably recursive functions of Peano Arithmetic, in: Proceedings of the 9th International Congress of Logic, Methodology and Philosophy of Science, D. Prawitz, B. Skyrms, and D. Westerstahl, eds., North-Holland, Amsterdam.

E. A. CICHON
[1983] A short proof of two recently discovered independence results using recursion theoretic methods, Proceedings of the American Mathematical Society, 87, pp. 704-706.

E. A. CICHON AND S. S. WAINER
[1983] The slow growing and Grzegorczyk hierarchies, Journal of Symbolic Logic, 48, pp. 399-408.

R. L. CONSTABLE
[1971] Subrecursive programming languages III, the multiple recursive functions, in: Proceedings of the 21st International Symposium on Computers and Automata, Brooklyn Polytechnic Institute, NY, pp. 393-410.

N. J. CUTLAND
[1981] Computability, Cambridge University Press.

M. V. H. FAIRTLOUGH
[1991] Ordinal Complexity of Recursive Programs and their Termination Proofs, PhD thesis, Leeds University Department of Pure Mathematics.

M. V. H. FAIRTLOUGH AND S. S. WAINER
[1992] Ordinal complexity of recursive definitions, Information and Computation, 99, pp. 123-153.

S. FEFERMAN
[1962] Classification of recursive functions by means of hierarchies, Transactions of the American Mathematical Society, 104, pp. 101-122.
[1968] Systems of predicative analysis II, Journal of Symbolic Logic, 33, pp. 193-220.

H. M. FRIEDMAN
[1978] Classically and intuitionistically provably recursive functions, in: Proc. Higher Set Theory, Oberwolfach, G. H. Müller and D. S. Scott, eds., Lecture Notes in Mathematics #669, Springer-Verlag, Berlin, pp. 21-27.
H. M. FRIEDMAN AND M. SHEARD
[1995] Elementary descent recursion and proof theory, Annals of Pure and Applied Logic, 71, pp. 1-45.

G. GENTZEN
[1936] Die Widerspruchsfreiheit der reinen Zahlentheorie, Mathematische Annalen, 112, pp. 493-565.
[1943] Beweisbarkeit und Unbeweisbarkeit von Anfangsfällen der transfiniten Induktion in der reinen Zahlentheorie, Mathematische Annalen, 119, pp. 140-161.

J.-Y. GIRARD
[1981] Π¹₂-logic, part I, Annals of Mathematical Logic, 21, pp. 75-219.
[1987] Proof Theory and Logical Complexity, Bibliopolis, Naples.

A. GRZEGORCZYK
[1953] Some classes of recursive functions, Rozprawy Matematyczne IV, Warsaw.

P. HÁJEK AND P. PUDLÁK
[1991] The Metamathematics of First Order Arithmetic, Perspectives in Mathematical Logic, Springer-Verlag, Berlin.

G. H. HARDY
[1904] A theorem concerning the infinite cardinal numbers, Quarterly Journal of Mathematics, 35, pp. 87-94.

W. A. HOWARD
[1970] A system of abstract constructive ordinals, Journal of Symbolic Logic, 37, pp. 355-374.

N. KADOTA
[1993] On Wainer's notation for a minimal subrecursive inaccessible ordinal, Mathematical Logic Quarterly, 39, pp. 217-227.

J. KETONEN AND R. M. SOLOVAY
[1981] Rapidly growing Ramsey functions, Annals of Mathematics, 113, pp. 267-314.

L. A. S. KIRBY AND J. B. PARIS
[1982] Accessible independence results for Peano Arithmetic, Bulletin of the London Mathematical Society, 14, pp. 285-293.

G. KREISEL
[1952] On the interpretation of non-finitist proofs II, Journal of Symbolic Logic, 17, pp. 43-58.

D. LEIVANT
[1995] Intrinsic theories and computational complexity, in: Logic and Computational Complexity, International Workshop LCC'94, D. Leivant, ed., Lecture Notes in Computer Science #960, Springer-Verlag, Berlin, pp. 177-194.

M. H. LÖB AND S. S. WAINER
[1970] Hierarchies of number theoretic functions I and II, Archiv für mathematische Logik und Grundlagenforschung, 13, pp. 39-51 and 97-113. Correction in vol. 14, pp. 198-199.

G. E. MINTS
[1973] Quantifier-free and one quantifier systems, Journal of Soviet Mathematics, 1, pp. 71-84.

J. B. PARIS
[1980] A hierarchy of cuts in models of arithmetic, in: Model Theory of Algebra and Arithmetic, L. Pacholski et al., eds., Lecture Notes in Mathematics #834, Springer-Verlag, Berlin, pp. 312-337.

J. B. PARIS AND L. HARRINGTON
[1977] A mathematical incompleteness in Peano Arithmetic, in: Handbook of Mathematical Logic, J. Barwise, ed., North-Holland, Amsterdam, pp. 1133-1142.
C. PARSONS
[1966] Ordinal recursion in partial systems of number theory (abstract), Notices of the American Mathematical Society, 13, pp. 857-858.
[1970] On a number-theoretic choice schema and its relation to induction, in: Intuitionism and Proof Theory: proceedings of the summer conference at Buffalo N.Y. 1968, North-Holland, Amsterdam, pp. 459-473.

R. PÉTER
[1967] Recursive Functions, Academic Press, New York, 3rd ed.

Z. RATAJCZYK
[1993] Subsystems of true arithmetic and hierarchies of functions, Annals of Pure and Applied Logic, 64, pp. 95-152.

J. ROBBIN
[1965] Subrecursive Hierarchies, PhD thesis, Princeton University.

H. E. ROSE
[1984] Subrecursion: Functions and Hierarchies, vol. 9 of Oxford Logic Guides, Clarendon Press, Oxford.

U. SCHMERL
[1982] Number theory and the Bachmann-Howard ordinal, in: Logic Colloquium '81, J. Stern, ed., North-Holland, Amsterdam, pp. 287-298.

D. SCHMIDT
[1976] Built-up systems of fundamental sequences and hierarchies of number-theoretic functions, Archiv für mathematische Logik und Grundlagenforschung, 18, pp. 47-53.

K. SCHÜTTE
[1977] Proof Theory, Springer-Verlag, Berlin. Translation by J. N. Crossley.

H. SCHWICHTENBERG
[1971] Eine Klassifikation der ε₀-rekursiven Funktionen, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 17, pp. 61-74.
[1977] Proof theory: Some applications of cut-elimination, in: Handbook of Mathematical Logic, J. Barwise, ed., North-Holland, Amsterdam, pp. 867-895.

H. SCHWICHTENBERG AND S. S. WAINER
[1995] Ordinal bounds for programs, in: Feasible Mathematics II, P. Clote and J. B. Remmel, eds., vol. 13 of Progress in Computer Science and Applied Logic, Birkhäuser, Boston, pp. 387-406.

W. SIEG
[1985] Fragments of arithmetic, Annals of Pure and Applied Logic, 28, pp. 33-71.
[1991] Herbrand analyses, Archive for Mathematical Logic, 30, pp. 409-441.

R. SOMMER
[1992] Ordinal arithmetic in IΔ₀, in: Arithmetic, Proof Theory and Computational Complexity, P. Clote and J. Krajíček, eds., Oxford University Press.
[1995] Transfinite induction within Peano Arithmetic, Annals of Pure and Applied Logic, 76, pp. 231-289.

W. W. TAIT
[1968] Normal derivability in classical logic, in: The Syntax and Semantics of Infinitary Languages, J. Barwise, ed., Lecture Notes in Mathematics #72, Springer-Verlag, Berlin, pp. 204-236.

G. TAKEUTI
[1987] Proof Theory, North-Holland, Amsterdam, 2nd ed.
J. V. TUCKER AND J. I. ZUCKER
[1992] Provable computable selection functions on abstract structures, in: Proof Theory, P. H. G. Aczel, H. Simmons, and S. S. Wainer, eds., Cambridge University Press, pp. 277-306.

S. S. WAINER
[1970] A classification of the ordinal recursive functions, Archiv für mathematische Logik und Grundlagenforschung, 13, pp. 136-153.
[1972] Ordinal recursion, and a refinement of the extended Grzegorczyk hierarchy, Journal of Symbolic Logic, 37, pp. 281-292.
[1989] Slow growing versus fast growing, Journal of Symbolic Logic, 54, pp. 608-614.

A. WEIERMANN
[1996] How to characterise provably total functions by local predicativity, Journal of Symbolic Logic, 61, pp. 52-69.
CHAPTER IV
Subsystems of Set Theory and Second Order Number Theory

Wolfram Pohlers

Institut für mathematische Logik und Grundlagenforschung, Westfälische Wilhelms-Universität, D-48149 Münster, Germany
Contents
1. Preliminaries ..... 210
   1.1. Ordinals ..... 210
   1.2. Partial models for axiom systems of set theory ..... 215
   1.3. Connections to subsystems of second order number theory ..... 219
   1.4. Methods ..... 230
2. First order number theory ..... 231
   2.1. Peano arithmetic ..... 231
   2.2. Peano arithmetic with additional transfinite induction ..... 261
3. Impredicative systems ..... 266
   3.1. Some remarks on predicativity and impredicativity ..... 267
   3.2. Axiom systems for number theory ..... 268
   3.3. Axiom systems for set theory ..... 279
   3.4. Ordinal analysis for set-theoretic axiom systems ..... 294
References ..... 333

HANDBOOK OF PROOF THEORY
Edited by S. R. Buss
© 1998 Elsevier Science B.V. All rights reserved
210
W. Pohlers
1. Preliminaries

The aim of the following contribution is to present a sample of ordinal analyses of subsystems of Set Theory and Second Order Number Theory.¹ But before we start the presentation of results we think we should enter a general discussion about the type of results we are going to obtain. We want to keep close to Hilbert's program and try to give consistency proofs for axiom systems as constructively as possible, being well aware of all the obstacles to this enterprise resulting from Gödel's Theorems. The emphasis will be on impredicative subsystems of Set Theory and Second Order Number Theory. Here we opt for a seemingly unconventional approach. We will try to construct partial models of theories within the constructible sets. The hierarchy L of constructible sets is determined by the ordinal line. Therefore special care will be given to notations for ordinals. In the Preliminaries we will introduce our concept of ordinal analysis. First we introduce some basic facts on ordinals, then introduce our concept of partial models and finally show how this is connected to the more conventional approach to ordinal analysis.
1.1. Ordinals

Ordinals will play the crucial role for all that follows. Therefore we start with a short introduction to ordinals as we will use them in the following contribution. Ordinals are regarded in their set theoretic sense, i.e., an ordinal is a hereditarily transitive set. Assuming that the membership relation ∈ is a wellfounded relation, this entails that every ordinal is wellfounded with respect to the membership relation and every ordinal is the set of its ∈-predecessors. For the reader who is not so familiar with set theory we give a brief sketch of the theory of ordinals as far as it will be needed for this paper. Besides transfinite recursion (which may be regarded as a generalization of primitive recursion) all we need from Set Theory are the facts (On1)-(On3) below. They may be viewed as axioms for the theory. To make the article not too long we will not give proofs here. Detailed information how to prove the results of this section from (On1)-(On3) can be found in Pohlers [1989]. A linear order relation ≺ wellorders its field iff there are no infinite ≺-descending sequences. A class M is transitive iff a ∈ M ⇒ a ⊆ M.

(On1) The class On of ordinals is a non void transitive class, which is wellordered by the membership relation ∈.

We define α < β as α ∈ On ∧ β ∈ On ∧ α ∈ β. In general we use lower case Greek letters as syntactical variables for ordinals. The wellfoundedness of ∈ on the class On implies the principle of transfinite induction

(∀ξ ∈ On)[(∀η < ξ)F(η) ⇒ F(ξ)] ⇒ (∀ξ ∈ On)F(ξ)
¹I am indebted to Dr. Arnold Beckmann for proof-reading. He not only detected a series of errors in the first versions but also made many valuable suggestions.
and transfinite recursion which, for a given function g, allows the definition of a function f satisfying the recursion equation f(η) = g({f(ξ) | ξ < η}).

(On2) The class On of ordinals is unbounded, i.e., (∀ξ ∈ On)(∃η ∈ On)[ξ < η].

The cardinality |M| of a set M is the least ordinal α such that M can be mapped bijectively onto α. An ordinal α is a cardinal if |α| = α.

(On3) If M ⊆ On and |M| ∈ On then M is bounded in On, i.e., there is an α ∈ On such that M ⊆ α.

For every ordinal α we have by (On1) and (On2) a least ordinal α′ which is bigger than α. We call α′ the successor of α. There are three types of ordinals:
• the least ordinal 0,
• successor ordinals, i.e., ordinals of the form α′,
• ordinals which are neither 0 nor successor ordinals. Such ordinals are called limit ordinals. We denote the class of limit ordinals by Lim.

Considering these three types of ordinals we reformulate transfinite induction and recursion as follows:

Transfinite induction: If F(0) and (∀α ∈ On)[F(α) ⇒ F(α′)] as well as (∀ξ < λ)F(ξ) ⇒ F(λ) for λ ∈ Lim, then (∀ξ ∈ On)F(ξ).

Transfinite recursion: For given a ∈ On and functions g, h there is a function f satisfying the recursion equations
f(0) = a
f(ξ′) = g(f(ξ))
f(λ) = h({f(η) | η < λ}) for λ ∈ Lim.

A (κ-)normal function is the enumerating function of a class which is club (closed and unbounded) in On or in κ, i.e., a continuous, strictly increasing function f: On → On or f: κ → κ, respectively.
For M ⊆ On (M ⊆ κ) the enumerating function en_M is a (κ-)normal function iff M is club (in κ). Extending their primitive recursive definitions continuously into the transfinite we obtain the basic arithmetical functions +, · and exponentiation for all ordinals. The ordinal sum, for example, satisfies the recursion equations

α + 0 = α
α + β′ = (α + β)′
α + λ = sup_{ξ<λ}(α + ξ) for λ ∈ Lim.

It is easy to see that the function λξ. α + ξ is the enumerating function of the class {ξ ∈ On | α ≤ ξ} which is club in all regular κ > α. Hence λξ. α + ξ is a κ-normal function for all regular κ > α. We define

ℍ := {α ∈ On | α ≠ 0 ∧ (∀ξ < α)(∀η < α)[ξ + η < α]}

and call the ordinals in ℍ additively indecomposable. Then ℍ is club (in any regular ordinal > ω), 1 := 0′ ∈ ℍ, ω ∈ ℍ and ω ∩ ℍ = {1}. Hence en_ℍ(0) = 1 and en_ℍ(1) = ω, which are the first two examples of the fact that (∀ξ ∈ On)[en_ℍ(ξ) = ω^ξ]. Thus λξ. ω^ξ is a (κ-)normal function (for all regular κ bigger than ω). We have ℍ ⊆ Lim ∪ {1} and obtain

α ∈ ℍ iff (∀ξ < α)[ξ + α = α].   (1)
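The recursion equations for + and the absorption property (1) can be checked mechanically on Cantor-normal-form codes. The following Python sketch is an illustration only (codes are nested tuples of exponents, an assumption of this example, not the chapter's notation); addition drops the components absorbed by the larger summand:

```python
# Ordinal arithmetic on Cantor normal forms: an ordinal is a
# non-increasing tuple of exponents, each exponent again such a tuple.

ZERO, ONE = (), ((),)
OMEGA = (ONE,)

def cmp_ord(a, b):
    """Compare two CNF codes; returns -1, 0 or 1."""
    for x, y in zip(a, b):
        c = cmp_ord(x, y)
        if c:
            return c
    return (len(a) > len(b)) - (len(a) < len(b))

def add(a, b):
    """a + b: components of a whose exponent lies below the leading
    exponent of b are absorbed, as in the recursion equations above."""
    if not b:
        return a
    keep = tuple(e for e in a if cmp_ord(e, b[0]) >= 0)
    return keep + b

# 1 + w = w: absorption into the additively indecomposable w ...
assert add(ONE, OMEGA) == OMEGA
# ... but w + 1 is genuinely larger than w:
assert add(OMEGA, ONE) == (ONE, ZERO)
```

This makes property (1) concrete: adding any smaller ordinal on the left of an additively indecomposable ordinal leaves it unchanged.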
Thus for a finite set {α₁, ..., αₙ} ⊆ ℍ we get

α₁ + ... + αₙ = α_{k₁} + ... + α_{k_m}

for {k₁, ..., k_m} ⊆ {1, ..., n} such that kᵢ < kᵢ₊₁ and α_{kᵢ} ≥ α_{kᵢ₊₁}. By induction on α we thus obtain ordinals {α₁, ..., αₙ} ⊆ ℍ such that for α ≠ 0 we have

α = α₁ + ... + αₙ and α₁ ≥ ... ≥ αₙ.   (2)

This is obvious for α ∈ ℍ and immediate from the induction hypothesis and the above remark if α = ξ + η for ξ, η < α. It follows by induction on n that the ordinals α₁, ..., αₙ in (2) are uniquely determined. We therefore define an additive normal form

α =_NF α₁ + ... + αₙ :⇔ α = α₁ + ... + αₙ, {α₁, ..., αₙ} ⊆ ℍ and α₁ ≥ ... ≥ αₙ.

We call {α₁, ..., αₙ} the set of additive components of α if α =_NF α₁ + ... + αₙ. We use the additive components to define the symmetric sum of ordinals α =_NF α₁ + ... + αₙ and β =_NF α_{n+1} + ... + α_m by

α # β := α_{π(1)} + ... + α_{π(m)}

where π is a permutation of the numbers {1, ..., m} such that 1 ≤ i < j ≤ m ⇒ α_{π(i)} ≥ α_{π(j)}. In contrast to the "ordinary ordinal sum" the symmetric sum does not cancel additive components. By definition we have α # β = β # α.
Since ) ~ . w r is a normal function we have a _< w ~ for all ordinals a. We call a an e  n u m b e r if w ~ = a and define r
"  min { a I w" = a } .
more generally let ) ~ . Q enumerate the fixed points of ) ~ . w r . If we put :=
Z) :=
we obtain ~o "= sup expn(w, 0). n(og
For 0 < a < Co we have a < w ~ and obtain by the Cantor Normal Form T h e o r e m uniquely determined ordinals a l , . . . , an < a such t h a t a =NF w al + "'" + w an.
214
W. Pohlers
For a class M c On we define its derivative M' := {~ e On[ enM(~) = ~}. The derivative f ' of a function f is defined by f ' := enFix(/), where Fix(f) :  {r
f(r
= r
Thus f ' enumerates the fixedpoints of f. If M is club (in some regular ~) then M ' is also club (in ~). Thus if f is a normal function f' is a normal function, too. If {M, I t e I} is a collections of classes club (in some regular ~) and III e O n ([I[ e ~) then ~,e, M~ is also club (in ~). These facts give raise to a hierarchy of club classes. We define c
(o) : = H := :=
cr(r
for
e Lira.
If we put ~ :  encr(~), then all 9~a are normal functions and we have by definition
a < fl => ~oa(~P~7) = ~ZT.
(3)
The function ~ is commonly called Veblen function. From (3) we obtain immediately
(PC~I~I ~~ (~cx2~ ig
O/1 < C~2 and ~1 ~_~(~Ot2~ or al = a2 and 131 < ~2 or a2 < al and r < f12.
(4)
We define the Veblen normal form for ordinals ~ d / b y a =NF ~
: r
a = ~
and 7"1< a.
Then a = N F (P~IT]I and a = N F (P~2~2 ~ r " r and 771 = ?72. Since ~ < a and < ~ E Cr(a) implies ~ r we call Cr(a) the class of acritical ordinals.. If a is itself acritical then ~, r / < a =v qoer/< a. Therefore we define the class SC of strongly critical ordinals by SC := {a e On I ~ e C~(a) }. The class SC is club (in all regular ordinals ~ > w). Its enumerating function is denoted by A~. F~. Regarding that by (4) A~. ~,0 is order preserving one easily proves sc =
=
If we define γ₀ := 0 and γ_{n+1} := φ_{γₙ}0 then we obtain Γ₀ = sup_{n<ω} γₙ.
For every α < Γ₀ there are uniquely determined ordinals ξ₁, ..., ξₙ < α and η₁, ..., ηₙ < α such that

α =_NF φ_{ξ₁}η₁ + ... + φ_{ξₙ}ηₙ and ηᵢ < φ_{ξᵢ}ηᵢ for i ∈ {1, ..., n}.   (5)
This is all we need to know about ordinals for the moment. We will have to come back to the theory later.

1.2. Partial models for axiom systems of set theory

Let ℒ(∈) denote a language of Set Theory, i.e., a first order language which contains a symbol ∈ for the membership relation. Later we will add a unary relation symbol Ad. We will distinguish between restricted quantifiers (∀x ∈ a) and (∃x ∈ a) and unrestricted quantifiers of the form (∀x) and (∃x). Recall the Levy hierarchy of ℒ-formulas. We say that a formula is Δ₀ if it contains only restricted quantifiers. A formula F of ℒ is a Πₙ-formula if F = (∀x₁)(∃x₂)...(Qₙxₙ)G(x₁, ..., xₙ), where G(x₁, ..., xₙ) is a Δ₀-formula and ∀∃...(Qₙ) an alternating string of unrestricted quantifiers. Dually, a formula is Σₙ if ¬F is logically equivalent to a Πₙ-formula. A formula F(x₁, ..., xₙ) is a Δₙ-formula of an ℒ-structure 𝔄 iff there are a Πₙ-formula F_Π(x₁, ..., xₙ) and a Σₙ-formula F_Σ(x₁, ..., xₙ) such that

𝔄 ⊨ (∀x̄)(F(x̄) ↔ F_Π(x̄)) ∧ (∀x̄)(F(x̄) ↔ F_Σ(x̄)).
(6)
We call F a Δₙ-formula of a theory Ax iff Ax proves the formula in (6). The class of Σ-formulas is the smallest class which contains the Δ₀-formulas and is closed under the positive boolean operations ∨ and ∧, restricted quantification and unrestricted existential quantification. The class of Π-formulas is the dual of the class of Σ-formulas. A formula F(x₁, ..., xₙ) is Δ in a structure 𝔄 if there are a Σ-formula F_Σ(x₁, ..., xₙ) and a Π-formula F_Π(x₁, ..., xₙ) such that 𝔄 ⊨ (∀x̄)[F(x̄) ↔ F_Σ(x̄)] and 𝔄 ⊨ (∀x̄)[F(x̄) ↔ F_Π(x̄)]. It is Δ for a theory Ax iff it is Δ for all models of Ax. Recall that the constructible hierarchy L is the union of its stages L_α given by the following definition.
Definition.
L0  @
La+1 = Def(L~)
L~ = U{L~I f < ,~} for A E Lim, where a E Def(La) iff there is an /~(E)formula F(x, yl,...,yn) with no other free variables and a tuple (bl,...,bn) of elements of L~ such that
a  {seL=l L, k
F(s,b~,...,bn)}.
216
W. Pohlers
The formula F z is obtained from F by restricting all unrestricted quantifiers in F to z, i.e., replacing (Vx) by (Vx e z) and (3x) by (3x e z), respectively. The formula F z is obviously A0. We say that a formula G is II~ or E,~ if G
FL~
for a YInformula F or Enformula F, respectively. Let Ax be an axiom system formulated in the language s
We define
[IAx[Ioo := min {hi L~ ~ Ax}
(7)
and call I[Ax[Ioo the c~norm of Ax. By LbwenheimSkolem downwards and the Condensation Lemma for the constructible hierarchy we know that [[Ax[Ioo is a countable ordinal which may be quite big. Much too big for our purpose as we will see in a moment. To obtain smaller ordinals which may be characteristic for Ax we regard partial models for Ax. For any complexity class $" of/:sentences we define ConT(Ax) := {FI F e $" A A x e  F }
(8)
I[Axl[~ := min {a[ L~ ~ Con~(Ax)}.
(9)
and
The complexity of the sentences in the axiom systems which we are going to study will never exceed 113. Therefore we always have IIAxll
= IIAxlloo.
We will only consider axiom systems which comprise the following basis theory K P whose axioms are: The ontological axioms of Extensionality and Foundation (Ext)
(Vu)(Vv)[u = v ~ (Vx e u)(x e v) A (Vx e v)(x e u)]
(Found)
(Vu)[(3x e u)(x e u) + (3x e u)(Vz e x ) ~ ( z e u)].
The closure axioms of Pairing and Union (Pair)
(Vu)(Vv)(3w)(Vx)[x e w ~ x = u V x = v]
(Union)
(Vu)(3w)(Vx)[x e w ~ (3y e u)(x e y)]
The set existence axiom schemes of absolute Separation and Collection (A0Sep) (Vg)(Va)(3z)(Vx)[x e z ~ x e a A F ( x , ~*)] (A0Col) (WT)(Vu)[(Vx e u ) ( 3 y ) F ( x , y, g) + (3z)(Vx e u)(3y e z ) F ( x , y, ~7)]. In both schemes we allow only A0formulas F(x, g) or F ( x , y, g) of the language s respectively. This requirement guarantees the absoluteness of the sets whose existence is postulated. We use the abbreviations which are common in Set Theory. E.g. a = {x e u[ F(x)} stands for (Vx e a)[x e u A F(x)] A (Vx e u)[F(x) ~ x e a];
Set Theory and Second Order Number Theory
217
{ x 9 I F(x)} 9 b s t a n d s f o r (3y)[y = { x 9 I F(x)} A y 9 b], etc. Lower case Greek letters are supposed to range over ordinals. Adding the ontological axiom of Infinity (Inf)
(3u)[0 e
^ (w e
u
{z} e u)]
we obtain the theory K P w  and adding the Foundation Scheme (FOUND) (3x)F(x) ~ (3x)[F(x) A (Vy e x)~F(y)] we obtain the theories K P or K P w respectively. If we restrict the formula F(x) in (FOUND) to a complexity class $" we talk about S'Foundation, denoted by (gvFOUND). An ordinal a is called admissible if L~ ~ K P . The theories K P and K P w are profoundly studied in Barwise [1975]. They prove 2recursion a very important t h e o r e m  and Ereflection, i.e., the scheme
F + (3x)F ~ for Eformulas F. As a consequence both theories prove the equivalence of any Eformula to a Elformula. Recall that a partial function f: L~ >p L~ is called apartial recursive if its graph is Edefinable over L~ i.e., if we have a Eformula F(x, y, ~ without further free variables and a tuple g of elements of L~ such that f (a) "~ b iff L,~ ~ F[a, b, c] for all a, b E L~. Admissible ordinals are important for generalized recursion theory because L~ is closed under all arecursive functions if a is admissible. Obviously w is an admissible ordinal and it is a folklore result that w~K, i.e., the least countable ordinal which cannot be represented by a recursive wellordering on the natural numbers, is the next admissible ordinal. Therefore L~cK ~ K P w and, since there are no further admissibles between w and w~K, even IIKP~I[o~ w~K. Hence IIgPwtlH2 < w~K as well as iigPwll~l < w~K. For most impredicative theories Ax, however, we have w~K < [[Ax[l~,. Therefore we need to introduce the following ordinals. 1.2.2. Definition.
Let A x be a theory which contains K P  . Then we define
iiAxilr.~ := min {a I (VF)[F is a Elsentence A A x e  F T'~ =~
L~ ~ F]}
and analogously IHAxlHH~ "= min {at (VF)[F is a II2sentence A A x e  F L~ =~
L~ ~ El}
The notation A x ~ F ~'~ has to be read with the necessary care. It anticipates that L~ can be defined in some way from the axioms in Ax. We need Erecursion to
218
W. Pohlers
define the function a ~ L~. To prove the Erecursion theorem it suffices to have K P  + ElFOUND. So for theories extending K P  + ElFOUND we need only a description of a in order to characterize La. We say that A x believes that ~ is admissible iff L~ is definable in A x and
Ax
(10)
holds for all sentences G E K P  . rule for a complexity class ~" iff A x ~ a
~
We say that A x proves the L or L~reflection
A x ~ (3~)(3u)[u = L~ ^ a ~]
or
A x e  a '
Axe
=
^
respectively, for all formulas G E .T. The following lemma is a first easy observation about partial models. 1.2.3. L e m m a . Let a E (w, IIAxII~] be an ordinal such that A x believes that a is admissible and A x proves the L~reflection rule for Elformulas. Then Ilhxll,~ 
IIAxll r. P r o o f . We obviously have IIAxII~ ~ IIAxllr~ and need only to show the converse inequality. Thus put a :  []hxIl~.?, let (Vx)(3y)F(x, y) be a II2sentence such that h x ~ (Vx E L~)(3y E L~)F(x,y) and choose some a E L,. We have to show that L~ ~ (3y)F(a, y). Since a is obviously a limit ordinal there is a ~ < a such that a E L~. By definition of a there is a Elsentence G such that A x ~ G L~ but LZ ~: G. Since A x proves L~reflection we obtain A x ~(3y E L~)(3u E L ~ ) ( u  L~ A G u) and since A x believes that a is admissible by A0collection relativized to L~ also h x ~ v E L~ + (3z E L~)(Vx E v)(3y E z)F(x, y). Choosing v  u we thus get A x ~ (3~/E L~)(3u E L~)(3z E L~)[u = L~ A G ~ A (Vx E u)(3y E z)F(x, y)]. Since A x believes that ~ is admissible this is equivalent to a Elsentence relativized to L~. Hence L~ ~ (37)(3u)(3z)[u = L~ A G ~ A (Vx e u)(3y e z)F(x, y)]. Because u = L~ is absolute for L~ we finally get G L~ and (Vx E L~)(3y E L ~ ) f ( x , y) for some 7 < a. Because ofL~ ~ G w e h a v e f l < 7. Hence a E L~ C_ L~ and it follows L~ ~ (3y)f(a, y)as desired. O If K P w  + ElFOUND c_ A x then w < [[Ax[l~ =" a and A x proves the Lreflection rule for Eformulas. Interpreting the provable sentences of A x it makes no difference if we think that every unrestricted quantifier is restricted by L~. Since K P  c_ A x this has the same effect as if A x believes that a is admissible. Therefore we obtain as a corollary of Lemma 1.2.3 1.2.4. Corollary.
If KPω⁻ + Σ₁-FOUND ⊆ Ax then ‖Ax‖_{Σ₁} = ‖Ax‖_{Π₂}.
Another observation is that adding true Π₁-sentences does not increase the Σ₁-ordinal of an axiom system.
219
Set Theory and Second Order Number Theory
1.2.5. Theorem. Let G be a true Π₁-sentence. Then ‖Ax + G‖_{Σ₁} = ‖Ax‖_{Σ₁}.

Proof. Let G ≡ H^L for a Π₁-sentence H. Assume that Ax + G ⊢ F^L for a Σ₁-sentence F. Then Ax ⊢ (H → F)^L, and H → F is Σ₁. For α := ‖Ax‖_{Σ₁} we thus have L_α ⊨ H → F. Since G is true, H holds in L; from the downwards persistency of Π₁-sentences we get L_α ⊨ H, which in turn entails L_α ⊨ F. Hence ‖Ax + G‖_{Σ₁} ≤ ‖Ax‖_{Σ₁}. But the converse inequality holds trivially, and we have ‖Ax + G‖_{Σ₁} = ‖Ax‖_{Σ₁}. ∎
We introduce the following notation.
1.2.6. Definition. ‖Ax‖^α := ‖Ax‖^α_{Σ₁}.

Because of Lemma 1.2.3 we get ‖Ax‖^α = ‖Ax‖^α_{Π₂} for theories Ax satisfying the hypotheses of the lemma. We call the computation of the ordinal ‖Ax‖^α an α-ordinal analysis for Ax. It will turn out that ‖Ax‖^{ω₁^{CK}} will be the most important ordinal. In Section 2.1.4 we will see that there is also something like an ω-ordinal analysis, which gives a characterization of the Skolem functions of the provable Π₂-sentences of an axiom system Ax in terms of a subrecursive hierarchy.

1.3. Connections to subsystems of second order number theory
Let ℒ₂ be the language of Second Order Arithmetic. We assume that ℒ₂ contains a constant 0 for 0 and constants for all primitive recursive functions and predicates. We restrict the language to unary predicate variables and talk about set variables. This means no real restriction since we have a primitive recursive coding machinery. We use capital Latin letters as syntactical variables for sets and write t ∈ X instead of X(t). We assume familiarity with the complexity classes of the arithmetical and analytical hierarchies. Since all primitive recursive functions and predicates have Δ₀-definitions in KPω⁻ + Σ₁-FOUND, we may regard ℒ₂ as a sublanguage of ℒ(∈) by restricting all first order quantifiers to ω and replacing all second order quantifiers (∀X) and (∃X) by (∀X ⊆ ω) and (∃X ⊆ ω), respectively. We may therefore transfer the notions of the arithmetical and analytical hierarchies to the language ℒ(∈). Whenever we talk of Π⁰ₙ-, Σ⁰ₙ-, Π¹₁-, … sentences in the language of Set Theory without further comment, we think of a translation of the corresponding ℒ₂-sentences. One of the basic facts for the things to come is the ω-Completeness Theorem for Π¹₁-sentences. We will use the ω-Completeness Theorem to introduce the notion of truth complexity for Π¹₁-sentences. The value t^N of a closed term t and the truth value of an atomic sentence in the standard structure N are primitive recursively computable. Since there are symbols for all primitive recursive functions and predicates, we obtain the diagram of N,

    D(N) := {A | A is an atomic sentence and N ⊨ A},
W. Pohlers
220
as a recursive set. For arithmetical sentences which are not atomic, the truth definition is given inductively by
    N ⊨ A₁ and N ⊨ A₂  ⟹  N ⊨ A₁ ∧ A₂
    N ⊨ A_i for some i ∈ {1, 2}  ⟹  N ⊨ A₁ ∨ A₂
    N ⊨ A(n̲) for all n ∈ ℕ  ⟹  N ⊨ (∀x)A(x)
    N ⊨ A(n̲) for some n ∈ ℕ  ⟹  N ⊨ (∃x)A(x).

To extend this truth definition to Π¹₁-sentences we introduce an infinitary calculus. For technical reasons we opt for a one-sided sequent calculus à la Tait. First we fix the language of the Tait calculus. The non-logical symbols of the Tait-language of ℒ₂ are:
• The constant 0 as well as constants for all primitive recursive functions and relations.

The logical symbols comprise:

• Bound number variables, denoted by x, y, z, x₁, …, and set variables, denoted by X, Y, Z, X₁, …
• The logical connectives ∧, ∨ and the quantifiers ∀, ∃.
• The membership symbol ∈ and its negation ∉.

Terms are built up from 0 and function symbols in the familiar way. We use S as a symbol for the successor function. Terms of the shape S(S(⋯S(0)⋯)) with n occurrences of S are numerals and
will be denoted by n̲. Atomic formulas are t ∈ X, t ∉ X and R(t₁, …, tₙ), where t, t₁, …, tₙ are terms, X is a set variable and R is a symbol for an n-ary relation. From the atomic formulas we obtain the formulas of ℒ₂ in the familiar way. Notice that we do not have free number variables in the language. The negation symbol is not a basic symbol of the Tait-language. We define the negation of a formula by de Morgan's laws.
    ¬(t ∈ X) := (t ∉ X);   ¬(t ∉ X) := (t ∈ X);
    ¬(Rt₁…tₙ) := (R̄t₁…tₙ), where R̄ is a symbol for the complement of R;
    ¬(A ∧ B) := (¬A ∨ ¬B);   ¬(A ∨ B) := (¬A ∧ ¬B);
    ¬(∀x)A(x) := (∃x)¬A(x);   ¬(∃x)A(x) := (∀x)¬A(x).

It is obvious that we have

    ¬¬A = A.                                                    (11)
The semantics for the Tait-language is straightforward. We easily check that N ⊨ (¬A)[S₁, …, Sₙ] iff N ⊭ A[S₁, …, Sₙ] for any assignment of sets S₁, …, Sₙ to the set variables occurring in A. We use capital Greek letters Δ, Γ, Λ, Δ₁, … as syntactical variables for finite sets of ℒ₂-formulas.
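The inductive truth definition above can be executed directly for sentences whose quantifiers can be checked over a finite search range. The following toy evaluator is only an illustrative sketch, not part of the text: formulas are nested tuples, and — purely as an assumption made here so that the search terminates — the quantifier clauses are cut off at a finite bound instead of ranging over all of ℕ.

```python
def holds(phi, bound=100):
    """Evaluate a formula over N following the inductive truth definition.
    Formulas: ('atom', f) with f a thunk returning a bool,
    ('and', A, B), ('or', A, B), ('all', F), ('ex', F), where F maps a
    number n to a formula.  The definition's quantifier clauses range over
    all of N; here they are cut off at `bound` so evaluation terminates."""
    tag = phi[0]
    if tag == 'atom':
        return phi[1]()
    if tag == 'and':
        return holds(phi[1], bound) and holds(phi[2], bound)
    if tag == 'or':
        return holds(phi[1], bound) or holds(phi[2], bound)
    if tag == 'all':
        return all(holds(phi[1](n), bound) for n in range(bound))
    if tag == 'ex':
        return any(holds(phi[1](n), bound) for n in range(bound))
    raise ValueError(tag)

# (forall x)(exists y)[y = x], checked up to the bound:
sample = ('all', lambda x: ('ex', lambda y: ('atom', lambda: y == x)))
```

The clause-by-clause shape of `holds` mirrors the four implications displayed above; the cut-off is what the infinitary ω-rule below removes.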
1.3.1. Definition. We define ⊨^α Δ inductively by the following clauses:

(AxM) If Δ ∩ D(N) ≠ ∅ then ⊨^α Δ for all ordinals α.
(AxL) If t^N = s^N then ⊨^α Δ, s ∉ X, t ∈ X for all ordinals α.
(∧) If ⊨^{α_i} Δ, A_i and α_i < α for i = 1, 2, then ⊨^α Δ, A₁ ∧ A₂.
(∨) If ⊨^{α_i} Δ, A_i and α_i < α for some i ∈ {1, 2}, then ⊨^α Δ, A₁ ∨ A₂.
(∀) If ⊨^{α_i} Δ, A(i̲) and α_i < α for all i ∈ ℕ, then ⊨^α Δ, (∀x)A(x).
(∃) If ⊨^{α₀} Δ, A(i̲) and α₀ < α for some i ∈ ℕ, then ⊨^α Δ, (∃x)A(x).
The relation ⊨^α Δ is to be read as: there is an infinite proof tree of ⋁Δ whose depth is bounded by the ordinal α. It is obvious from the definition that we have

    ⊨^α Δ,  α ≤ β,  Δ ⊆ Γ  ⟹  ⊨^β Γ,                           (12)

and a simple induction on α shows

    ⊨^α F₁, …, Fₙ  ⟹  N ⊨ (F₁ ∨ ⋯ ∨ Fₙ)[S₁, …, Sₘ]

for all assignments S₁, …, Sₘ to the set parameters occurring in (F₁ ∨ ⋯ ∨ Fₙ). We thus have

    ⊨^α F(X)  ⟹  N ⊨ (∀X)F(X)                                  (13)

for all Π¹₁-sentences (∀X)F(X). If F is a true arithmetical sentence, i.e., if F does not contain set parameters, we obtain by a simple induction on the complexity rk(F) of the formula F (which can be taken to be the number of logical symbols occurring in F)

    ⊨^{rk(F)} F.                                                (14)
This shows that ⊨^α F is indeed an extension of the truth definition for arithmetical sentences. However, to prove that ⊨^α F is also complete for Π¹₁-sentences, i.e., the opposite direction in (13), needs harder work. We follow Schütte's proof via search trees. Let Seq be the set of (codes for) finite sequences of natural numbers. For s, t ∈ Seq we denote by s ⊆ t that s is an initial segment of t. A tree is a set of finite number-sequences which is closed under initial segments. The elements of a tree are called nodes. A set of nodes in a tree is a thread if it is linearly ordered by ⊆. A maximal thread in a tree is a path. A binary relation ≺ ⊆ ω × ω is wellfounded iff there are no infinite ≺-descending chains. For wellfounded ≺ ⊆ ω × ω we define

    otyp_≺(n) := sup {otyp_≺(m) + 1 | m ≺ n}   if n ∈ field(≺), and otyp_≺(n) := 0 otherwise,

and

    otyp(≺) := sup {otyp_≺(n) + 1 | n ∈ field(≺)}.
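For a finite well-founded relation the recursion defining otyp can simply be run; the following short sketch (an illustration of ours, not the text's own apparatus) computes both otyp_≺(n) and otyp(≺), with the supremum of the empty set taken to be 0.

```python
def otyp_point(rel, n, memo=None):
    """otyp(n) = sup { otyp(m) + 1 : m < n }, computed by memoized
    recursion over a finite well-founded relation `rel`, given as a set
    of pairs (m, k) meaning m < k."""
    if memo is None:
        memo = {}
    if n not in memo:
        memo[n] = max((otyp_point(rel, m, memo) + 1
                       for (m, k) in rel if k == n), default=0)
    return memo[n]

def otyp(rel):
    """otyp(R) = sup { otyp(n) + 1 : n in field(R) }."""
    field = {x for pair in rel for x in pair}
    return max((otyp_point(rel, n) + 1 for n in field), default=0)

# 0 < 1 < 2 as a strict linear order on {0, 1, 2}:
chain = {(0, 1), (1, 2), (0, 2)}
```

On the three-element chain the order type is 3, as expected; on the empty relation it is 0.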
(∧) If ⊢^{m_i} Δ, A_i and m_i < m for all i ∈ {1, 2}, then ⊢^m Δ, A₁ ∧ A₂.
(∃) If ⊢^{m₀} Δ, A(t), then ⊢^m Δ, (∃x)A(x) for all m > m₀.
(∀) If ⊢^{m₀} Δ, A(u) and u is not free in Δ, (∀x)A(x), then ⊢^m Δ, (∀x)A(x) for all m > m₀.
The identity axioms are the following formulas:

    (∀x)[x = x]
    (∀x)(∀y)[x = y → y = x]
    (∀x)(∀y)(∀z)[x = y ∧ y = z → x = z]
    (∀x₁)⋯(∀xₙ)(∀y₁)⋯(∀yₙ)[x₁ = y₁ ∧ ⋯ ∧ xₙ = yₙ → f(x₁, …, xₙ) = f(y₁, …, yₙ)]
    (∀x₁)⋯(∀xₙ)(∀y₁)⋯(∀yₙ)[x₁ = y₁ ∧ ⋯ ∧ xₙ = yₙ → (R(x₁, …, xₙ) → R(y₁, …, yₙ))]
    (∀x)(∀y)[x = y → (x ∈ X → y ∈ X)]
Due to Gentzen's Hauptsatz we have the following theorem:

2.1.2.2. Theorem. Let Δ be a finite set of formulas such that ⋁Δ is valid in the sense of first order predicate logic. Then there are finitely many identity axioms I₁, …, Iₙ and an m < ω such that ⊢^m ¬I₁, …, ¬Iₙ, Δ.
Let ū be a list containing all number variables which occur free in Δ. An easy induction on m, using the fact

    ⊨^α Δ(n̲) and t^N = n  ⟹  ⊨^α Δ(t),                          (30)

shows

    ⊢^m Δ(ū)  ⟹  ⊨^m Δ(n̄)                                      (31)

for every tuple n̄ of numerals. We have

    (∀x)(∀y)[x = y → (x ∈ X → y ∈ X)]
and all the other identity and defining axioms for primitive recursive functions and relations are true arithmetical sentences. Thus, using also (16), we have

    tc(F) < ω                                                    (32)

for all mathematical and identity axioms except induction. What really needs checking is the truth complexity of the scheme of Mathematical Induction. Here we need the following lemma.

2.1.2.3. Tautology Lemma. For every ℒ-formula F we have ⊨^{2·rk(F)} Δ, ¬F, F.
The proof is immediate by induction on rk(F). The truth complexity for all instances of Mathematical Induction follows from the Induction Lemma.
2.1.2.4. Induction Lemma. For any natural number n and any ℒ-formula F(n̲) we have

    ⊨^{2·[rk(F(n̲))+n]} ¬F(0), ¬(∀x)[F(x) → F(S(x))], F(n̲).       (33)

The proof by induction on n is very similar to that of (20). For n = 0 we get (33) as an instance of the Tautology Lemma. For the induction step we have

    ⊨^{2·[rk(F(n̲))+n]} ¬F(0), ¬(∀x)[F(x) → F(S(x))], F(n̲)        (i)

by the induction hypothesis and obtain

    ⊨^{2·rk(F(S(n̲)))} ¬F(0), ¬(∀x)[F(x) → F(S(x))], ¬F(S(n̲)), F(S(n̲))   (ii)

by the Tautology Lemma. From (i) and (ii) we get

    ⊨^{2·[rk(F(n̲))+n]+1} ¬F(0), ¬(∀x)[F(x) → F(S(x))], F(n̲) ∧ ¬F(S(n̲)), F(S(n̲)).   (iii)

By a clause (∃) we finally obtain

    ⊨^{2·[rk(F(n̲))+n]+2} ¬F(0), ¬(∀x)[F(x) → F(S(x))], F(S(n̲)).   ∎
By Lemma 2.1.2.4 we have tc(G) ≤ ω + 4 for all instances G of the Mathematical Induction Scheme. Together with (32) we get

    tc(F) ≤ ω + 4                                                (34)

for all identity and non-logical axioms of NT. If NT ⊢ F then there are ℒ-sentences {F₁, …, Fₙ} and a natural number m such that

    ⊢^m ¬F₁, …, ¬Fₙ, F                                           (35)

and for all i ∈ {1, …, n} the formula F_i is either an axiom of NT or an identity axiom. For every ℒ-sentence F such that NT ⊢ F we thus get by (35) and (31) ℒ-sentences F₁, …, Fₙ such that

    ⊨^m ¬F₁, …, ¬Fₙ, F                                           (36)

and by (34)

    ⊨^{ω+4} F_i                                                  (37)

for all i ∈ {1, …, n}. As sketched in Section 1.4, the problem of linking (37) and (36) will be solved by introducing a semiformal calculus.

2.1.2.5. Definition. We define ⊢^α_ρ Δ for a finite set Δ of ℒ-formulas which contain at most free set-variables inductively by the following clauses:

(AxM) If Δ ∩ D(N) ≠ ∅ then ⊢^α_ρ Δ for all ordinals α and ρ.
(AxL) If t^N = s^N then ⊢^α_ρ Δ, s ∉ X, t ∈ X for all ordinals α and ρ.
(∧) If ⊢^{α_i}_ρ Δ, A_i and α_i < α for i = 1, 2, then ⊢^α_ρ Δ, A₁ ∧ A₂.
(∨) If ⊢^{α_i}_ρ Δ, A_i and α_i < α for some i ∈ {1, 2}, then ⊢^α_ρ Δ, A₁ ∨ A₂.
(∀) If ⊢^{α_i}_ρ Δ, A(i̲) and α_i < α for all i ∈ ℕ, then ⊢^α_ρ Δ, (∀x)A(x).
(∃) If ⊢^{α₀}_ρ Δ, A(i̲) and α₀ < α for some i ∈ ℕ, then ⊢^α_ρ Δ, (∃x)A(x).
(cut) If ⊢^{α_i}_ρ Δ, F and ⊢^{α_i}_ρ Δ, ¬F with α_i < α for i ∈ {1, 2} and rk(F) < ρ, then ⊢^α_ρ Δ.
We have to read ⊢^α_ρ Δ as: "There is an infinitary derivation of Δ of height ≤ α whose cut formulas are all of rank < ρ." We call F the main formula of a clause in the definition of ⊢^α_ρ Δ, F if F is responsible for Δ, F being an axiom in (AxM) or (AxL), or if the logical symbol introduced in the clause belongs to F. Thus (cut) possesses no main formula. From the definition we get immediately

    ⊨^α Δ  ⟹  ⊢^α_0 Δ.                                           (38)

From (31) and (38) we have

    ⊢^m Δ(ū)  ⟹  ⊢^m_0 Δ(n̄)

for every tuple n̄ of numbers. Another immediate property is

    ⊢^α_ρ Δ, α ≤ β, ρ ≤ σ  ⟹  ⊢^β_σ Δ.
If the last clause was not a cut of rank ≥ β, we obtain the claim from the induction hypotheses and the fact that the functions φ_ρ are order preserving. Therefore assume that the last inference is

    ⊢^{α₁}_{β+ω^ρ} Δ, F   and   ⊢^{α₂}_{β+ω^ρ} Δ, ¬F   ⟹   ⊢^α_{β+ω^ρ} Δ

such that β ≤ rk(F) < β + ω^ρ. But then there is an ordinal τ such that rk(F) = β + τ which, writing τ in Cantor normal form, means rk(F) = β + ω^{a₁} + ⋯ + ω^{aₙ} < β + ω^ρ. Hence a₁ < ρ and, putting a := a₁, we get rk(F) < β + ω^a·(n+1). By the side induction hypothesis we have ⊢^{φ_ρ α₁}_{β+ω^a·(n+1)} Δ, F and ⊢^{φ_ρ α₂}_{β+ω^a·(n+1)} Δ, ¬F. By a cut it follows that

    ⊢^{φ_ρ α₁ + φ_ρ α₂}_{β+ω^a·(n+1)} Δ.

If we define φ_a^{(0)} α := α and φ_a^{(n+1)} α := φ_a(φ_a^{(n)} α), then we obtain from a < ρ by (n+1)-fold application of the main induction hypothesis

    ⊢^{φ_a^{(n+1)}(φ_ρ α₁ + φ_ρ α₂)}_β Δ.

Finally we show φ_a^{(n)}(φ_ρ α₁ + φ_ρ α₂) < φ_ρ α by induction on n. For n = 0 we have φ_a^{(0)}(φ_ρ α₁ + φ_ρ α₂) = φ_ρ α₁ + φ_ρ α₂ < φ_ρ α, since α_i < α and φ_ρ α ∈ Cr(0). For the induction step we have φ_a^{(n+1)}(φ_ρ α₁ + φ_ρ α₂) = φ_a(φ_a^{(n)}(φ_ρ α₁ + φ_ρ α₂)) < φ_ρ α, since a < ρ and φ_a^{(n)}(φ_ρ α₁ + φ_ρ α₂) < φ_ρ α by the induction hypothesis. Hence ⊢^{φ_ρ α}_β Δ. ∎

By the Predicative Elimination Lemma we obtain the function which computes an upper bound for the height of the cut free derivation.

2.1.2.10. Elimination Theorem. Let ⊢^α_ρ Δ such that ρ =_{NF} ω^{ρ₁} + ⋯ + ω^{ρₙ}. Then ⊢^{φ_{ρₙ}(⋯(φ_{ρ₁} α)⋯)}_0 Δ.
Back to the axiom system NT. If NT ⊢ F then we obtain by (36), (37), and (38)

    ⊢^{ω+4+n}_m F   for m := max{rk(F₁), …, rk(Fₙ)} + 1 < ω.

By the Elimination Theorem (or even m-fold application of the Elimination Lemma) we obtain

    ⊢^{exp_m(ω, ω+4+n)}_0 F.

Hence tc(F) ≤ exp_m(ω, ω+4+n) < ε₀ and we have

2.1.2.11. Theorem. spec_{Π¹₁}(NT) ⊆ ε₀.
2.1.3. Lower bounds for spec_{Π¹₁}(NT)

We want to show that the bound given in Theorem 2.1.2.11 is the best possible one. By Theorem 1.3.10 it suffices to have Theorem 2.1.3.1 below.

2.1.3.1. Theorem. For every ordinal ξ < ε₀ there is a primitive recursive wellorder ≺ on the natural numbers of order type ξ such that NT ⊢ TI(≺, X).
The first step in proving Theorem 2.1.3.1 is to represent ordinals below ε₀ by primitive recursive wellorders. This is done by an arithmetization. We simultaneously define a set On ⊆ ℕ and a relation a ≺ b for a, b ∈ On, together with an evaluation map |·| from On into the ordinals, such that On and ≺ become primitive recursive and a ≺ b ⟺ |a| < |b|. We put

• 0 ∈ On and |0| = 0;
• if z₁, …, zₙ ∈ On and z₁ ⪰ ⋯ ⪰ zₙ, then ⟨z₁, …, zₙ⟩ ∈ On and |⟨z₁, …, zₙ⟩| = ω^{|z₁|} + ⋯ + ω^{|zₙ|}.
… a k₀ such that t(n̄)^N < P(k) < W_{ω·k+m}[z](0). Applying an inference (∀) to (i) we therefore get
The other cases follow immediately from the induction hypothesis.
O
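The arithmetization of ordinals below ε₀ sketched above — a notation is a descending tuple of notations, denoting ω^{|z₁|} + ⋯ + ω^{|zₙ|} — can be rendered very concretely. The following encoding is a toy illustration of ours (not the text's official coding): nested Python tuples play the role of the codes, with the empty tuple denoting 0, and the comparison realizes the primitive recursive order ≺.

```python
def less(a, b):
    """The order induced on Cantor-normal-form notations:
    a notation is a tuple (z1, ..., zn) of notations with
    z1 >= ... >= zn, denoting omega^|z1| + ... + omega^|zn|;
    () denotes 0.  less(a, b) holds iff |a| < |b|."""
    # lexicographic comparison of the (descending) exponent lists,
    # a shorter list losing ties
    for x, y in zip(a, b):
        if less(x, y):
            return True
        if less(y, x):
            return False
    return len(a) < len(b)

ZERO = ()
ONE = (ZERO,)          # omega^0 = 1
OMEGA = ((ZERO,),)     # omega^1 = omega
```

For example 1 ≺ ω, and ω ≺ ω + 1 is witnessed by `less(OMEGA, OMEGA + (ZERO,))`; the relation is clearly primitive recursive in the codes.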
By (96), (97), (99) and (100) we finally obtain

    NT ⊢ F  ⟹  (∃k)(∃m)(∃r)[W_{ω·k} ⊢^m_r F]                    (102)

for arithmetical sentences F. This yields a Π⁰₂-analysis as follows: If

    NT ⊢ (∀x)(∃y)F(x, y)                                         (i)

for a Π⁰₂-sentence (∀x)(∃y)F(x, y), then

    W_{ω·k} ⊢^m_r (∀x)(∃y)F(x, y)                                (ii)

by (102). Defining λ_n := (λξ.exp(ω, ξ+1))^{(n)} and ω̂_r(ξ, η) recursively by ω̂₀(ξ, η) := η and ω̂_{r+1}(ξ, η) := exp(ω, ω̂_r(ξ, η)+1), we get from (ii) by r-fold application of the Elimination Lemma (Lemma 2.1.5.8)

    W_{ω̂_r(ω·k, m)} ⊢^{λ_r(m)}_0 (∀x)(∃y)F(x, y).               (iii)

Putting α := ω̂_r(ω·k, m) < ε₀, we obtain from (iii), using the Inversion and Witnessing Lemmas (Lemmas 2.1.5.5 and 2.1.5.3),

    (∀x ∈ ℕ)(∃y ≤ W_α(x))F(x, y)                                 (iv)

which shows |(∀x)(∃y)F(x, y)|_{Π⁰₂} ≤ α.

Let

    ItAd(α, f) := Fun(f) ∧ α ∈ On ∧ dom(f) = α ∧ (∀κ < α)[Ad(f(κ)) ∧ (∀ξ < κ)(f(ξ) ∈ f(κ)) ∧ (∀x ∈ f(κ))(Ad(x) → (∃ξ < κ)(x = f(ξ)))]
express that there are at least α many admissibles. We define KPI_ν := KPI + (∀ξ < ν)(∃f)ItAd(ξ, f). If ≺ is a wellordering we put KPI_≺ := KPI + WO(≺) + … . The map ξ ↦ g↾ξ is thus Σ-definable in u_ρ, and because of α ∈ u_ρ we get {⟨ξ, g↾ξ⟩ | ξ < α} ∈ u_ρ by Σ-replacement relativized to u_ρ. This proves (iv) in case that ξ ∈ Lim. If ξ = ζ + 1 then ζ = otyp_≺(z) for some z ∈ field(≺), and dom(h_z) = u_ζ as well as g↾ζ are elements of u_ρ. By Σ-recursion relativized to u_ρ we obtain h_z ∈ u_ρ. Therefore g↾ξ = g↾ζ ∪ {⟨ζ, ⋃rng(h_z)⟩} ∈ u_ρ. This proves (iv). By (iv) and the definition of S we obtain S↾ξ ∈ u_ρ and S_y = ⋃rng(h_y). For η ∈ u_ρ we have h_y(η) = {x ∈ ω | A(…)}. If for an operator ℋ on sets of ordinals we have, putting
    f(α₁, …, αₙ) := ω^{α₁} # ⋯ # ω^{αₙ}

(# denoting the natural sum), that {α₁, …, αₙ} ⊆ ℋ(X) implies f(α₁, …, αₙ) ∈ ℋ(X), we call it Cantorian closed.
A set M ⊆ On and an operator ℋ induce a new operator ℋ[M] by

    ℋ[M](X) := ℋ(M ∪ X).                                        (148)

For an ℒ(∈)-formula F let

    par(F) := {α | L_α occurs in F}.                             (149)

If Θ is a set of ℒ(∈)-formulas and ℋ an operator, we define ℋ[Θ] := ℋ[par(Θ)].

3.4.3.2. Definition. An operator ℋ is acceptable if it satisfies the following conditions:

    ω ⊆ ℋ(∅),
    ℋ is Cantorian closed,
    (∀X ∈ Pow(On))[X ⊆ ℋ(X)],                                   (150)
    (∀X ∈ Pow(On))(∀Y ∈ Pow(On))[X ⊆ ℋ(Y) → ℋ(X) ⊆ ℋ(Y)].

3.4.3.3. Definition. Let ℋ: Pow(On) → Pow(On) be an operator. For a finite set Δ of ℒ(∈)-sentences we define ℋ ⊢^α_ρ Δ iff par(Δ) ∪ {α} ⊆ ℋ(∅) and one of the following conditions is satisfied:

(⋀)
There is a sentence F ∈ Δ of ⋀-type such that ℋ[par(G)] ⊢^{α_G}_ρ Δ, G and α_G < α for all G ∈ d(F).

(⋁) There is a sentence F ∈ Δ of ⋁-type such that ℋ ⊢^{α_G}_ρ Δ, G and α_G < α for some G ∈ d(F).

(Ref) There is a Π₂-sentence F^{L_κ} such that (∃z ∈ L_κ)[z ≠ 0 ∧ F^z] ∈ Δ, ℋ ⊢^{α₀}_ρ Δ, F^{L_κ} and α₀, α₀ + 1 < α.

(cut) There is a sentence A such that rk(A) < ρ, ℋ ⊢^{α₀}_ρ Δ, A and ℋ ⊢^{α₀}_ρ Δ, ¬A for some α₀ < α.
We say that Δ is operator controlled derivable if there are an acceptable operator ℋ and ordinals α and ρ such that ℋ ⊢^α_ρ Δ. From now on we will only regard acceptable operators, without mentioning it explicitly. The following properties of operator-controlled derivations are immediate consequences of Definition 3.4.3.3:

    If ℋ ⊢^α_ρ Δ, ℋ ⊆ ℋ′, Δ ⊆ Γ, α ≤ β ∈ ℋ′, ρ ≤ σ and par(Γ) ⊆ ℋ′,
    then ℋ′ ⊢^β_σ Γ.                                             (151)

If ℋ ⊢^α_ρ Δ, (∃x ∈ L_δ)F(x) and δ … we obtain ℋ₀ ⊢ F^{L_Ω} for Ω := ω₁^{CK} by Theorem 3.4.6.6. By Lemma 3.4.6.7, (200) and the Foundation Theorem (Theorem 3.4.6.9) we therefore obtain ℋ₀ ⊢^{Ω·2+ω}_{Ω+n} F^{L_Ω} by cuts. Applying the Predicative Elimination Lemma
(Lemma 3.4.3.6) we then obtain ℋ₀ ⊢^{φ_{n-1}(Ω·2+ω)}_{Ω+1} F^{L_Ω}. Assuming that F is a Σ₁-formula and putting α := φ_{n-1}(Ω·2+ω) < ε_{Ω+1}, we get by the Collapsing Theorem (Theorem 3.4.5.3) ℋ_{α+1} ⊢^{ψα}_{ψα} F^{L_{ψα}}. Finally, applying Theorem 3.4.2.2 we obtain F^{L_{ψα}}. Therefore we have shown

3.4.6.10. Theorem. For Ω := ω₁^{CK} we have ‖KPω‖ ≤ ψ(ε_{Ω+1}).
To obtain also an ordinal analysis for the theories KPI and KPi we need controlling operators for the axioms (Ad1), (Ad2), (Ad3) and (Lim). Let ℋ be an acceptable operator which is closed under regular successors and Λ an ordinal satisfying (∀ξ …

… φ(x) → φ(x′), conclude φ(t) for arbitrary formulas φ and terms t in the language. Note that if one has included products σ × τ among the finite types, one also needs to add basic terms denoting pairing and projection operations of the appropriate types, together with their defining equations as well. In later sections we will consider variants of T which allow for more elaborate types (e.g. "transfinite types") and functionals. For the time being, T is strong enough to interpret HA, as we now show.⁷

⁷In one treatment of equality in T, these are only for terms of type 0; cf. section 2.5 below.
346
J. Avigad and S. Feferman
2.3. The Dialectica interpretation

To each formula φ in the language of arithmetic we associate its Dialectica (or D) interpretation φ^D, which is a formula of the form

    φ^D = ∃x ∀y φ_D,

where φ_D is a (quantifier-free) formula in the language of T. Here the free variables of φ_D consist of those free in φ, together with the sequences of variables (possibly empty) x and y. However, the free variables of φ are generally suppressed, and we also write φ_D(x, y) for φ_D. If one or more free variables z of φ are exhibited, as φ(z), then we write φ_D(x, y, z) for φ_D. The associations (·)^D and (·)_D are defined inductively as follows, where

    φ^D = ∃x ∀y φ_D   and   ψ^D = ∃u ∀v ψ_D.
1. For φ an atomic formula, x and y are both empty and φ^D = φ_D = φ.
2. (φ ∧ ψ)^D = ∃x, u ∀y, v (φ_D ∧ ψ_D).
3. (φ ∨ ψ)^D = ∃z, x, u ∀y, v ((z = 0 ∧ φ_D) ∨ (z = 1 ∧ ψ_D)).
4. (∀z φ(z))^D = ∃X ∀z, y φ_D(X(z), y, z).
5. (∃z φ(z))^D = ∃z, x ∀y φ_D(x, y, z).
6. (φ → ψ)^D = ∃U, Y ∀x, v (φ_D(x, Y(x, v)) → ψ_D(U(x), v)).
The case of → has been put last here because it requires special explanation below. Since we have defined ¬φ to be φ → ⊥, from 6 we obtain

7. (¬φ)^D = ∃Y ∀x ¬φ_D(x, Y(x)).

In clause 1 we assume the obvious identification of the symbols + and × of HA with the terms that represent addition and multiplication in T. The definition of φ^D for atomic φ, of (φ ∧ ψ)^D, and of (∃z φ(z))^D needs no comment. The definition of (φ ∨ ψ)^D is also clear on a constructive reading: the new parameter z tells which disjunct is being established, according as to whether z = 0 or z = 1. The definition of (∀z φ(z))^D is obtained by prefixing a universal quantifier to φ^D to obtain
    ∀z ∃x ∀y φ_D(x, y, z)

and then "skolemizing" the existentially quantified variable. The motivation behind the definition of (φ → ψ)^D given by Gödel is as follows: from a witness x to the hypothesis φ^D one should be able to obtain a witness u to the conclusion ψ^D, such that from a counterexample v to the conclusion one should be able to find a counterexample y to the hypothesis. In short, one uses equivalences (i)–(vi) below to bring quantifiers to the front, and then skolemizes the existential variables:

    (∃x ∀y φ_D(x, y) → ∃u ∀v ψ_D(u, v))
    ↔ ∀x (∀y φ_D(x, y) → ∃u ∀v ψ_D(u, v))                        (i)
    ↔ ∀x ∃u (∀y φ_D(x, y) → ∀v ψ_D(u, v))                        (ii)
    ↔ ∀x ∃u ∀v (∀y φ_D(x, y) → ψ_D(u, v))                        (iii)
    ↔ ∀x ∃u ∀v ∃y (φ_D(x, y) → ψ_D(u, v))                        (iv)
    ↔ ∀x ∃u ∃Y ∀v (φ_D(x, Y(v)) → ψ_D(u, v))                     (v)
    ↔ ∃U, Y ∀x, v (φ_D(x, Y(x, v)) → ψ_D(U(x), v)).              (vi)
Gödel's Functional Interpretation
347
The definitions of (φ ∧ ψ)^D, (φ ∨ ψ)^D, and (∃z φ(z))^D are all justified from a constructive as well as classical point of view. The definition of (∀z φ(z))^D is justified by an application of the axiom of choice,
(AC)    ∀x ∃y φ(x, y) → ∃Y ∀x φ(x, Y(x)),
for arbitrary formulas φ. (AC) is usually accepted classically in set theory; it is also accepted by many constructivists, since the reading of the hypothesis is that one has a constructive proof of ∀x ∃y φ(x, y), and such a proof must provide a means Y of constructively associating with each x a solution y = Y(x) of φ(x, y). The analysis of (φ → ψ)^D is more delicate. While equivalences (i)–(vi) above are all classically justified, only (i), (iii), (v) and (vi) are acceptable from a constructive point of view, the first two by logic and the latter two by (AC). Equivalence (ii), namely,
(IP′)    (∀y θ → ∃u ∀v η) → ∃u (∀y θ → ∀v η),
is a special case of a principle called independence of premise. Though this is valid in classical logic, it is not generally accepted constructively, since the constructive reading of the hypothesis (θ → ∃u η) is that we have a constructive means of turning any proof of the premise θ into a proof of η together with a witness for the existential quantifier applied to η. In general, the choice of such a u will then depend on the proof of θ, while (IP′) tells us that u can be chosen independently of any proof of that premise. Equivalence (iv) can be justified by a generalization of Markov's principle, namely
(MP′)    ¬∀y θ → ∃y ¬θ,
in which θ is assumed to be quantifier-free. (Assuming that the law of excluded middle holds for ψ_D, argue thus: if ψ_D is true, then (iv) is justified; and if ψ_D is false, apply (MP′).) The problem with (MP′) is that there is no evident way to choose constructively a witness y to ¬θ from a proof that ∀y θ leads to a contradiction. However, if y ranges over the natural numbers, one can search for such a y, given that one accepts its existence. In this case (MP′) boils down to the usual form of Markov's principle
(MP)    ∀x (¬¬∃y φ(x, y) → ∃y φ(x, y)),
which is accepted in the Russian school of constructivism for φ quantifier-free. While the reasoning leading to the form of the D-interpretation is not fully constructive, it can still be used as a tool in constructive metamathematics and to derive constructive information. In section 3.1 we'll see that the D-interpretation verifies the three principles (AC), (IP′), and (MP′) just discussed, and hence allows one to use the D-interpretation to extract constructive information from non-constructive proofs.

2.4. Verifying the axioms of arithmetic

Gödel's main result is as follows.
2.4.1. Theorem. Suppose φ is a formula in the language of arithmetic, and HA proves φ. Then there is a sequence of terms t such that T proves φ_D(t, y).

We express this by saying that HA is D-interpreted in T. Combining Theorem 2.4.1 with Corollary 2.1.2 we obtain
2.4.2. Corollary. Suppose φ is a formula in the language of arithmetic such that PA proves φ. Then there is a sequence of terms t such that T proves (φ^N)_D(t, y).
In short, PA is ND-interpreted in T. One proves Theorem 2.4.1 by induction on the length of the proof in HA. One only has to verify that the claim holds true when φ is an axiom of HA, and that it is maintained under the rules of inference. We begin by considering the axioms and rules of intuitionistic logic, listed in section 2.1. For most of these the verification is routine, and we only address a few key examples. For example, consider the rule "from φ → ψ and φ conclude ψ." Given a term a such that T proves φ_D(a, y), and terms b and c such that T proves φ_D(x, b(x, v)) → ψ_D(c(x), v), we want a term d such that T proves ψ_D(d, v). By substituting b(a, v) for y in the first hypothesis and a for x in the second, we see that taking d = c(a) works; so, in a sense, modus ponens corresponds to functional application. The reader can verify that, similarly, the rule "from φ → ψ and ψ → θ conclude φ → θ" corresponds to the composition of functions. Handling the axiom φ → φ ∧ φ requires a bit more work. If the interpretation of the hypothesis is given by ∃x ∀y φ_D(x, y), the interpretation of the conclusion is

    ∃x₁, x₂ ∀y₁, y₂ (φ_D(x₁, y₁) ∧ φ_D(x₂, y₂)).
According to the clause for implication, we need to provide terms c₁(x) and c₂(x) taking a witness x for the hypothesis to witnesses x₁ and x₂ for the conclusion; for this purpose, we can simply take c₁(x) and c₂(x) to be x. But we also need a functional d(x, y₁, y₂) that will take y₁ and y₂ witnessing the failure of φ_D(x, y₁) ∧ φ_D(x, y₂) to a value d representing the failure of φ_D(x, d). This functional must effectively determine which of φ_D(x, y₁) and φ_D(x, y₂) is false. We need the following
If ~ is a formula of arithmetic, there is a term tv such that T
proves t~(x, y) = 0 ~ ~D(X, y). The lemma is proved by induction on the size of ~PD; the key instance occurs when ~D is simply an atomic formula sl  s2, for which case the required t can be obtained from an application of primitive recursion. Another instance of primitive recursion yields a functional Cond such that T proves
Cond(w, u, v) = ~ u ifw = 0 ( v otherwise.
Ghdel 's Functional Interpretation
349
Taking d  Cond(tv(x, yl), yl, y2) above then suffices to complete the proof for ~o ~ ~oA ~o. Aside from the two instances just described, the recursors R are not otherwise used to verify the logical axioms. Their primary purpose is to interpret the induction rule, "from ~a(0) and ~a(u) + ~(u') conclude ~(u)." Inductively we are given terms a, b, and c so that T proves ~aD(0, a, y) and
v.(u,
b(u,z,
v.(u', c(u,
y,).
We want a term d such that ~aD(U, d(u), y). Using the recursors we can define d using primitive recursion, so that d(0)

a
d(u') = c(u, d(u)). This yields
~aD(O, d(O), y) and
~aD(U, d(u), b(u, d(u), y)) + ~D(U', d(u'), y). The following lemma will then allow us to conclude ~D(U, d(u), y), as desired. 2.4.4. L e m m a .
From r
y) and r
b(u, y) ) + r
y) one can prove, in T,
r
The idea is to work backwards: if "  " denotes truncated (or "cut off") subtraction, note that r y) follows from  1,
 1, y ) ) ,
which in turn follows from r
 2, b(u  2, b(u  1, y))),
and so on, until the first argument is equal to 0. More formally, one defines in T a function e(y, z) by e(y, O) = y and e(y, z') = b ( u  z', e(y, z)), and then uses induction to prove that r e(y, u  w)) holds for every w less than or equal to u. For details, see Spector [1962] or Troelstra [1973]. This leaves only the quantifierfree axioms regarding 0, Sc, +, and x in HA, which follow immediately from their counterparts in T. Once more we emphasize that the recursors of T are only essentially needed for the nonlogical axioms. Roughly speaking, it is the combinatory completeness of T that allows us to verify the axioms of intuitionistic predicate logic, whereas the recursors are the functional analogue of induction. This observation allows one to generalize the Dinterpretation to other theories, as discussed in section 3.3 below.
J. A vigad and S. Fe.ferman
350
2.5. E q u a l i t y at h i g h e r types We return to the question as to how equality at higher types is to be treated in T. There are two basic choices, the intensional formulation taken in GSdel [1958] and the (weakly) extensional formulation of Spector [1962]. In GSdel's formulation, we have a decidable equality relation =~ at each type, i.e. all formulas s =~ t with s, t of type a are taken to be atomic, and the law of excluded middle is accepted for these. Suppressing type subscripts where there is no ambiguity, the equality axioms for T in this version are, as usual,
1. 8   8 2.
= t ^
+
The axioms for K, S and R at each type may be read as they stand. In Spector's formulation, the atomic formulas are equations between terms of type 0 only, and the law of excluded middle is accepted only for these. For a = (al + ... + an + 0), an equation s =~ t between terms of type a is regarded as an abbreviation for =
where the x_i are fresh variables of type σ_i for i = 1, …, n. In particular, the axioms for K, S and R are to be read as such abbreviations. Now in this version, the axioms 1, 2 are taken only for terms s, t of type 0; denote these by 1₀ and 2₀ respectively. For s, t of type σ ≠ 0 we must further adjoin a rule

2′. From s(x₁, …, xₙ) = t(x₁, …, xₙ) and φ[s/x] infer φ[t/x].

An alternative suggested by Gödel in a footnote to his revision [1972] of [1958], and made explicit by Troelstra [1990], is to follow Spector in taking only equations at type 0 as basic, but to assume as axioms, besides 1₀ and 2₀, just the following consequences of 2′ for K, S, and R:

    s[K(u, v)/x] = s[u/x]
    s[S(u, v, w)/x] = s[u(w, v(w))/x]
    s[R(u, v, 0)/x] = s[u/x]
    s[R(u, v, w′)/x] = s[v(w, R(u, v, w))/x],

where s[x] is a term of type 0 and u, v, w are terms of appropriate type. Note that the resulting theory is contained in both Gödel's and Spector's versions. Most of the results for the functional interpretation are insensitive to which of these three formulations of T (and of other systems like T to be considered below) is taken. When it is necessary to make a distinction, as for example in the next section when considering extensions of T by adjunction of quantifiers, and in section 4.1 when considering models of T, we shall use WE-T to denote Spector's version and T₀ to denote the version, just described, elicited by Troelstra, while reserving T itself for Gödel's version. However, one further system must be considered along with the latter. It is implicit in the idea of intensional equality at higher types that we have an effective procedure to decide whether any two objects of the same type are identical. This is made explicit by adjoining a constant E_σ for the characteristic function of =_σ at each type σ, with axiom:

    E_σ(x, y) = 0 ↔ x =_σ y.
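The combinator equations for K, S and R can be checked pointwise at type 0. The quick sketch below is our own rendering (with S read as in the equation s[S(u, v, w)/x] = s[u(w, v(w))/x] above, and functions taken in the applied, multi-argument style of the text):

```python
def K(u, v):
    # K(u, v) = u
    return u

def S(u, v, w):
    # S(u, v, w) = u(w, v(w))
    return u(w, v(w))

def R(u, v, w):
    # R(u, v, 0) = u and R(u, v, w + 1) = v(w, R(u, v, w))
    acc = u
    for n in range(w):
        acc = v(n, acc)
    return acc
```

For example, R(0, λn r. r + 2, m) computes 2·m — the doubling function obtained by primitive recursion, exactly the style of definition the recursor axioms license.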
The system T with these additional constants and axioms is then denoted I-T.

3. Consequences and benefits of the interpretation
3.1. Higher type arithmetic

Before turning to some of the immediate consequences of the Dialectica interpretation, we pause to consider some easy generalizations. First of all, note that the D-translation applies equally well if the source language is also typed. In other words, if we define HA^ω to be a version of T with quantifiers ranging over each finite type, with the axioms and rules of intuitionistic predicate logic and induction for all formulas in the new language, then we can apply the D-translation to this theory as well. A problem, however, arises in verifying the interpretation of the axiom φ → φ ∧ φ, and this depends on how equality at higher types is treated. For if φ is an equation s =_σ t, in order to extend the argument for Lemma 2.4.3 we shall need to have a characteristic function E_σ for =_σ. Otherwise we shall have to limit ourselves to a formulation of HA^ω using only equations of type 0. Thus we are led to consider three versions of HA^ω corresponding to the three versions I-T, WE-T and T₀ of section 2.5, denoted respectively I-HA^ω, WE-HA^ω, and HA₀^ω. To be more precise, in WE-HA^ω, given the availability of quantification at higher types, we may take equality there to be introduced by definition in terms of equality at lower types, so as to satisfy the equivalences
y
Vz
y(z))
for each a, T. What is a new option now is that a fully extensional version E  H A w can be formulated as follows. In this theory one takes the symbol  ~ to be basic at each type, and adopts the full axioms 1, 2 of section 2.5 as in the intensional version. What makes the difference is that the preceding equivalences are now taken as axioms in the fully extensional theory. Now, with little or no modification, the proof sketched in section 2.4 carries over to show that for H any one of the three systems I  H A w, W E  H A W, or HA~, the system H is Dinterpreted in the corresponding quantifierfree subsystem, i.e. I  T , W E  T , or To, resp. This does not hold for the system E  H A W, as shown by Howard [1973]. In particular, no functional of that system satisfies the Dinterpretation of Vu, x, y (x =1 Y ~ u(x) =o u(y)) where x, y are of type 1 (i.e. 0 + 0) and u is of type 2 (i.e. 1 ~ 0). However, E  H A W may be formally interpreted in HA~ preserving all formulas all of whose variables are of type 0 or 1, by relativizing the quantifiers to the hereditarily extensional objects in each type; we may then apply the Dinterpretation to HA~ as above. This is the route taken by Luckhardt [1973]; cf. also Feferman [1977,section 4.4.2] for a brief outline of the details involved. Clearly HA~ c I  H A w and HA"d C W E  H A W C E  H A W. In the following we shall take it that HA W is any one of the three systems I  H A W, W E  H A W, and HA~ for which the Dinterpretation works directly as described above,
J. Avigad and S. Feferman
and T is now taken for the corresponding quantifier-free subsystem; however, the reader is free to settle on any one of these three pairs as the preferred one according to taste.⁸

Let us return to the principles (AC), (IP′), and (MP′) which figured in the motivation for the definition of φ_D in section 2.3. These make use of the quantified variables of arbitrary type and are thus formulated in the language of HA^ω. Given the discussion in section 2.3 it shouldn't be surprising that, in fact, we have

3.1.1. Theorem. Over intuitionistic logic, the schemata (AC) + (IP′) + (MP′) and φ ↔ φ_D are equivalent.

Henceforth we'll use HA# to denote the theory HA^ω together with either of these two schemata. The previous discussion tells us that

3.1.2. Theorem.
HA# is D-interpreted in T.
What, now, can be said about classical theories of higher types? Take PA^ω to be the classical extension of HA^ω, obtained simply by adjoining the law of excluded middle for all formulas. It is N-interpreted in HA^ω just as PA is N-interpreted in HA. By the preceding theorem, it is natural to consider the system PA# which is defined to consist of PA^ω together with all instances of φ ↔ φ_ND for φ in the language of PA^ω. It then follows immediately that

3.1.3. Theorem.
PA# is ND-interpreted in T.
Consider now the principles (AC), (IP′), and (MP′) on the classical side. Of these, (IP′) and (MP′) follow from classical logic, but (AC) is not derivable from PA#. Moreover, (AC) is not in general preserved under the ND-interpretation. For, the N-interpretation of

∀x ∃y φ(x, y) → ∃Y ∀x φ(x, Y(x))

is equivalent to

∀x ¬¬∃y φ^N(x, y) → ¬¬∃Y ∀x φ^N(x, Y(x)),

or, equivalently,

∀x ¬∀y ¬φ^N(x, y) → ¬∀Y ¬∀x φ^N(x, Y(x)),

which cannot in general be proved in HA#. It is, however, provable for the special case (QF-AC) of the axiom of choice with quantifier-free matrix φ. For, in that case,

∀x ¬∀y ¬φ(x, y) → ∀x ∃y φ(x, y)
⁸But note that Troelstra [1990, p. 351] says that "WE-HA^ω as an intermediate possibility [between I-HA^ω and HA₀^ω] is not very attractive: the deduction theorem does not hold for this theory." Yet another alternative for dealing with the problem of verifying the axioms φ → φ ∧ φ without restriction on atomic formulas, but without additional E_σ functionals, makes use of a variant of the D-interpretation due to Diller and Nahm [1974], described in Troelstra [1973, pp. 243–245]. We shall not go into that variant in this chapter.
is provable in HA# using (MP′). Then from (AC) in that system we infer ∃Y ∀x φ(x, Y(x)), which implies intuitionistically its double negation. The conclusion is that (QF-AC) is ND-interpreted in T. The following observation by Kreisel [1959, p. 120] then gives the proper analogue to Theorem 3.1.1:

3.1.4. Theorem. Over classical logic, the schemata (QF-AC) and φ ↔ φ_ND are equivalent.
3.1.5. Proof. In the forward direction, one need only consider negative formulas φ, i.e. those which do not contain disjunction or existence symbols. For such formulas it is easily proved by induction on φ that φ_ND has the form ∃X ∀y ψ(X(y), y), where ψ is quantifier-free and where φ ↔ ∃X ∀y ψ(X(y), y) ↔ ∀y ∃x ψ(x, y). The reverse direction is immediate. □

Thus PA# can equally well be thought of as PA^ω + (QF-AC). When analyzing classical or intuitionistic theories of first- or second-order arithmetic, it often turns out to be useful to embed them in fragments or extensions of PA# and HA# respectively; cf. the discussion in section 3.3 below.

3.2. Some consequences of the D-interpretation

Since the D-interpretations of HA and HA#, as well as the ND-interpretations of PA and PA#, are purely syntactic, they can be formalized in a weak theory of arithmetic. This yields the following theorem, which is of foundational importance.

3.2.1. Theorem. Let S be any of the theories HA, HA#, PA, or PA#. Then a weak base theory proves Con(T) → Con(S).

Of course, the interpretations yield far more information than just the relative consistency of the theories involved. Recall that a formula θ(x₁, x₂, ..., xₙ) in the language of arithmetic is Δ₀ if all its quantifiers are bounded, in which case it defines a primitive recursive relation on the natural numbers. The characteristic function of this relation can be represented by a term t in the language of T, such that PA# or HA# proves θ(x̄) ↔ t(x̄) = 0. Though it is somewhat an abuse of notation, when we say below that "T proves θ(x̄)" for such a θ, we mean that it proves t(x̄) = 0.

3.2.2. Theorem. Let S be any of the theories above, and suppose S proves the Π₂⁰ formula

∀x ∃y θ(x, y),

where θ is Δ₀. Then there is a term f such that T proves θ(x, f(x)).
3.2.3. Proof. By embedding HA and PA in HA# and PA# respectively and proving the equivalence just discussed, we can assume that θ is quantifier-free. In that case the D-interpretation of ∀x ∃y θ(x, y) is ∃Y ∀x θ(x, Y(x)), and its ND-interpretation is ∃Y ∀x ¬¬θ(x, Y(x)). The conclusion then follows from Theorem 3.1.2 in the case of HA#, and Theorem 3.1.3 in the case of PA#. □

Now suppose h is a recursive function whose graph is defined by a Σ₁⁰ formula φ(x, y) in the standard model. We say that the theory S proves h to be total if it proves ∀x ∃y φ(x, y). In section 2.2 we axiomatized the primitive recursive functionals of finite type, without addressing the issue of what exactly the terms denote. Deferring this discussion to section 4 below, for now let us assume that at least the closed type 1 terms denote functions. As a corollary to Theorem 3.2.2 we have

3.2.4. Corollary. Every provably total recursive function of HA, HA#, PA, or PA# is denoted by a term of T.

In fact, in section 4.1 we will see that there are models of T that can be formalized in the language of arithmetic, yielding an interpretation of T in HA. This yields the following result, which is interesting in that it makes no mention of T at all:
3.2.5. Corollary. PA, and hence HA + (MP), is conservative over HA for Π₂⁰ sentences.

When it comes to PA, Theorem 3.2.2 is sharp in the following sense. Consider the Π₃⁰ sentence "for every x there exists a y, such that either y is a halting computation for the Turing machine with index x, or Turing machine x doesn't halt." Though this statement is provable in PA, any function returning such a y for every x cannot be recursive since it solves the halting problem. Later we'll see that, on the other hand, the functions represented by terms of T are recursive, so that the analogue of Theorem 3.2.2 does not hold for Π₃⁰ formulas.

Nonetheless, one can extract a different kind of constructive information from PA-proofs of complex formulas. Suppose PA (or PA#) proves a formula φ given by

∀x₁ ∃y₁ ∀x₂ ∃y₂ ... ∀xₙ ∃yₙ θ(x₁, x₂, ..., xₙ, y₁, y₂, ..., yₙ)

where θ is quantifier-free.
The N-interpretation of this formula intuitionistically implies

¬∃x₁ ∀y₁ ∃x₂ ∀y₂ ... ∃xₙ ∀yₙ ¬θ(x₁, x₂, ..., xₙ, y₁, y₂, ..., yₙ),

whose D-interpretation is the same as that of

∀X₁, X₂, ..., Xₙ ∃y₁, y₂, ..., yₙ θ(X₁, X₂(y₁), ..., Xₙ(y₁, ..., yₙ₋₁), y₁, ..., yₙ).

As a result we have
355
Suppose PA proves ~p as above 9 Then there are terms tx =
F~ (Xx,
X2,
. . . , X~)
tn = Fn(X1, X2, . . . , Xn) such that T proves o(x~,
x~(t,),
. . . , x,(t~,
t~, . . . , t,_x),
t~, t~, . . . , t,).
The observation that the functionals F 1 , F 2 , . . . , F n can be taken to be recursive in their arguments is known as Kreisel's "nocounterexample interpretation" (cf. Kreisel [1951,1959]). To make sense of the name, think of the functions X 1 , X 2 , . . . , X n as trying to provide counterexamples to the truth of ~p, by making 0(X1, X2(Yl),..., X n ( y l , . . . , Yn1), Y l , . . . , Yn) false for any given values of Yl, Y2,..., Y~; in which case F1, F 2 , . . . , F~ effectively provide witnesses that foil the purported counterexample. 3.3. Benefits of t h e D  i n t e r p r e t a t i o n In general, the Dinterpretation is a powerful tool when applied to the reduction of an intuitionistic theory I to a functional theory F. As we have seen, the very form of the Dinterpretation automatically brings a number of benefits 9 Sifice little more than the combinatorial completeness of F is necessary to interpret the logical axioms of I, one only has to worry about interpreting the nonlogical axioms of I and tailor the functionals of F accordingly 9 As an added bonus, I can often be embedded in a highertype analogue I ~ to which we can add the schemata (AC), (IP'), (MR') at no extra cost. Taken together (A C), (IP'), and (MR') prove the scheme ~ ~ ~D, a fact that is often useful in practice since it allows one to pull facts about the Dinterpretation back to the theory being interpreted. This observation will be employed in sections 6 and 7 below. If one is trying to analyze a classical theory C by reducing it to I via a doublenegation interpretation, one has, of course, to ensure that I is strong enough to prove the doublynegated axioms of C. Once again, though, the choice of a Dinterpretation for I has some advantages: C can often also be embedded in a highertype analogue C ~ in a natural way, and the fact that Markov's principle is verified in the interpretation guarantees that one ultimately obtains Skolem terms for provable 17~ sentences. 
This in turn yields a characterization of C's provably total recursive functions. These advantages of the Dinterpretation are summed up in Feferman [1993]: Applied to intuitionistic systems it takes care of the underlying logic once and for all, verifies the Axiom of Choice AC in all types, and interprets various forms of induction by suitably related forms of recursion. This
J. A vigad and S. Feferman
356
then leads for such systems to a perspicuous mathematical characterization of the provably recursive functions and functionals. For application to classical systems, one must first apply the negative translation (again taken care of once and for all). Since the Dinterpretation verifies Markov's principle even at higher types ... at least the provably recursive functions and functionals are preserved, as well as QFAC in all types and induction schemata. The main disadvantage, though, comes with the analysis of other statements whose negative translation may lead to a complicated Dinterpretation; special tricks may have to be employed to handle these. A further distinguishing feature of the Dinterpretation is its nice behavior with respect to modus ponens. In contrast to cutelimination, which entails a global (and computationally infeasible) transformation of proofs, the Dinterpretation extracts constructive information through a purely local procedure: when proofs of ~ and + r are combined to yield a proof of r witnessing terms for the antecedents of this last inference are combined to yield a witnessing term for the conclusion. As a result of this modularity, the interpretation of a theorem can be readily obtained from the interpretations of the lemmata used in its proof. The process of applying the Dinterpretation to specific classical theorems can sometimes be used to obtain appropriate constructivizations thereof, or to uncover additional numerical information that is implicit in their classical proofs; cf., for example, Bishop [1970] or Kohlenbach [1993]. 4. M o d e l s
of T, type structures,
and normalizability
In section 2.2 we presented the set of terms of the theory T without a discussion of what these terms denote. In this section we exhibit several kinds of functional and term models. 4.1. F u n c t i o n a l m o d e l s The most obvious model of T considered in either the intensional or extensional sense is the full (settheoretic) hierarchy of functionals of finite type, in which the objects of type 0 are the natural numbers, and each type a + T represents the set of all functions from the objects of type a to those of type T. The denotations of 0, Sc, K, S, and R are then apparent, as well as the denotation of terms built up through the application of these constants. The equality relation in T is taken to denote true (extensional) equality in this model. By the primitive recursive functionals in the settheoretic sense we mean those denoted by closed terms of T. One can obtain a "smaller" type structure in which the elements of each type are indices for recursive functions, as follows. Let ~ denote a standard enumeration of the recursive functions, say, using Kleene's universal predicate. Define M0  N, and M~_~ = {e I Vx e M~ 3y e M~ ( ~ ( x ) $= y)}.
Gbdel 's Functional Interpretation
357
One can associate recursive indices to constants of T in a natural way, and interpret equality as equality of indices. The resulting model A/i of I  T (and of IHA ~) is known as the hereditarily recursive operations (though "hereditarily total recursive operations" might be more accurate) and is usually denoted by HRO. Equality is not extensional in this model, since many different indices can represent the same functional. If instead one wants a model of WE T (and of WEHA ~ or even EHA~), one can consider the hereditarily effective operations Af, usually denoted by HEO. For this model, sets N~ and the equality relation =~ are defined inductively as follows: set No  N and =0 the usual equality relation for natural numbers,
w e
y e
y
and e =~.r f  Vz e g~ (~pe(z) = r ~l(z)). Both HEO and HRO can be formalized in HA in the sense that the sets M~ (resp. N~) and the equality relations =~ are defined by formulas in the language of arithmetic, HA proves each axiom of T true in the interpretation, and the natural numbers in the model correspond to the natural numbers of HA. Notice, however, that the complexity of the formulas defining M~ and N~ grow with the level of a, so that HA (or PA) cannot prove the consistency of T outright. By generalizing the notion of a continuous function to higher types, Kleene and Kreisel independently obtained further models of T (cf. also Troelstra [1973]); these will be of significance in section 6. All of the models described in this subsection contain more than just the primitive recursive functionals (or the indices for such), and hence model extensions of T as well. 9 In contrast, a "minimal" model of T, which only contains objects denoted by terms, is provided by the term model, which we now describe. 4.2. N o r m a l i z a t i o n a n d t h e t e r m m o d e l The defining equations for the typed combinators K, S, and R in section 2.2 describe a symmetric equality relation. From a computational point of view, it is often more useful to think of these defining equations as describing a directed relation, in which terms on the lefthand sides of the equations are "reduced" to more basic ones on the right. For example, the defining equations of the theory T yield the following reduction rules: 1. K(s, t) t> s 2. S(r, s, t) ~ r(t)(s(t)) 3. R(s,t, 0) ~, s 9Some, like the recursion theoretic models, have generalizations to transfinite types; see, e.g. Beeson [1982]. Categorytheoretic methods have also been used to construct models of functional theories, though we will not discuss such models here.
J. A vigad and S. Fe.ferman
358
4. R ( s , t , x ' ) ~ t ( x , R ( s , t , x ) ) . If one uses lambda terms instead of combinators, the first two clauses can be replaced by the reduction rule ()~x.s)(t)~ t[s/x]. If one wants to include product types, corresponding reduction rules for the pairing and projection operations can be defined as well. If s and t are terms, we say that s reduces to t in one step, written s + t, if t can be obtained by replacing some subterm u of s by a v such that u ~ v. We say that s reduces to t if t can be obtained from s by a finite sequence of onestep reductions. In other words, the reducibility relation +* is the reflexivetransitive closure of +. Such a reducibility relation is an example of a rewrite system (cf. Dershowitz and Jouannaud [1990]). The following terminology is standard. 4.2.1. D e f i n i t i o n . 1. If s is a term and u is a subterm of s, then u is a redex of s if a reduction rule can be applied to u; i.e. there is some v such that u ~, v.
2. If s has a redex, then s is reducible. Otherwise, s is irreducible, or in normal form. 3. A term is normalizable if it can be reduced to one in normal form. A system of reduction rules is normalizing if every term is normalizible. 4. A term s is strongly normalizable if there are no infinite (onestep) reduction sequences beginning with s; that is, every such sequence eventually leads to a term in normal form. A system of reduction rules is strongly normalizing if every term is strongly normalizable. 5. A system of reduction rules is confluent, or has the ChurchRosser property, if whenever s +* u and s +* v then there is a term t such that u +* t and
v ~* t. If a system of rules is confluent and s is any term, then s has at most one normal form. Furthermore, if the system is also normalizing, then s has exactly one normal form. Identifying closed terms with their normal forms then provides a term model for the defining equations corresponding to the reduction rules. Under this very simple semantics, one can think of each closed term representing nothing more than the "program" it computes, when applied to other closed terms in normal form. Strong normalizability implies that the "programming language" is insensitive to its implementation, in the sense that every reduction sequence terminates regardless of the order in which the reductions are performed. In all the functional theories we consider in this chapter (together with the associated reduction relations), the only closed irreducible terms of type 0 are in fact numerals. In that case, we can identify type 0 objects of the corresponding term model with natural numbers. Showing that the reduction relation associated with a functional theory is confluent and normalizing then has two benefits: 9 it implies that the functional theory is consistent, that is, it cannot prove 0  1; and
G6del 's Functional Interpretation
359
9 it justifies the intuition that the functional theory describes "computable" entities. 4.3. S t r o n g n o r m a l i z a t i o n for T Given the reduction rules corresponding to the theory T described above, one can apply brute but judicious combinatorial force to verify the following 4.3.1. L e m m a . The reduction relation +* associated with T is confluent. Furthermore, any closed irreducible term of type 0 is a numeral. The "shortest known proof" of this, due to Tait and MartinLSf, can be found in Hindley and Seldin [1986,appendix 1]. Assuming we can prove that the relation +* is also normalizing, the resulting term model will satisfy the axioms of T: the uniqueness of normal forms implies that terms on either side of an equality axiom have the same interpretation under any instantiation of the variables; and the fact that the objects of type 0 in the term model are numerals reduces induction in T to induction in the metatheory. Proving that +* is normalizing, however, is bound to be tricky. Because it implies the consistency of Peano arithmetic, the proof must somehow go beyond the capabilities of that theory. W. Tait developed an elegant and flexible technique for proving normalization, using appropriate "convertibility" predicates (cf. Tait [1967] and Tait [1971]). 4.3.2. Definition. denoted by Red~:
For each type a, we define the set of reducible terms of type a,
1. If t is a term of type 0, then t is in Redo if and only if t is normalizing. 2. If t is a term of type a + T, then t is in Red~_.r if and only if whenever s in Red~, t(s) is in Red~. Writing the second clause symbolically, we have that t is in Red~_,~ if and only if Ys (s E Red~ + t(s) e Redr). Notice that the quantifier complexity of the firstorder formula expressing "t E Redp" grows with the complexity of p. The normalization proof proceeds in two steps: 1. One shows, by induction on terms, that every t of type a is in Red~. 2. One shows, by induction on the type a, that every t in Red~ is normalizing. The only aspect of the proof that cannot be carried out in a weak base theory is the verification of clause 2, when t is the recursor R~: at this point the argument requires induction on a formula involving the predicate Red~. As a result we have
J. A vigad and S. Feferman
360 4.3.3. T h e o r e m .
T is normalizing. Moreover, for each term t of T, PA proves
that t is normalizable. This does not mean that Peano arithmetic proves that "for every term t, t is normalizable," since for various t the corresponding PAproof can grow increasingly complex. The latter result, however, follows from the soundness of PA. Another approach to proving normalization involves assigning ordinals (or notations representing ordinals) to terms in such a way that each onestep reduction leads to a decrease in the associated ordinal. For terms of T, this task is carried out by Howard [1970] using notations below the ordinal e0. Via a formal treatment of the term model, this yields, as a byproduct, an ordinal analysis: over a weak base theory the assertion that "there are no infinite descending sequences of ordinal notations beneath ~0" implies the consistency of PA. 4.4. I n f i n i t e l y long t e r m s Another term model for T which is of special interest was provided by Tait [1965]. This uses infinitely long terms to replace the recursors, and thus produces a system of terms which is closer in character to the ordinary typed Acalculus. The closure conditions on terms are as follows: 1. There are infinitely many variables x ~, y~, z~,.., of each type T. 2. 0 is a constant of type 0. 3. Sc is a constant of type (0 + 0). 4. If s is a term of type a and t is a term of type (a + T) then t(s) is a term of type T. 5. If t is a term of type T then Ax~.t is a term of type (a + T). 6. If t,~ (n = 0, 1, 2 , . . . ) is a sequence of terms of type T then (tn} is a term of type (0 + T). Write n for Sc"(O). Then we translate each term t of T into a term t + of this system of infinite terms by taking t + = t for t a variable, 0 or Sc, K + = AxAy.x, S + = AxAyAz.x(z, y(z)) and R +  AfAgAx.(tn} where to  f and tn+l  g(n, tn), and by requiring (.)+ to preserve application. 
Each term t of the infinite system is assigned an ordinal [t[ as length in a natural way, with [tl = 1 for t a variable or constant, [Ax.t] = [t[ + 1, [t(s)[ : Is[ + It I and, finally, [(tn)[ = supn tm
G6del 's Functional Interpretation
361
3. ((tn)(r))(s)~ (tn(s))(r), when r is not a numeral and (t~)(r) is not of type O. The relation ~* is then the least reflexive and transitive relation which extends the relation and preserves application. As before, a term t is said to be in normal form if whenever t +* u then t is identical with u.
For each term t of T we can find a term t ~ in normal form such that t + +* t ~ and It~ < ~o.
4.4.1. T h e o r e m .
The idea of Tait's proof of Theorem 4.4.1 is very much the same as that for the cutelimination theorem for the extension of Gentzen's classical propositional sequent calculus to that for logic with countably long conjunctions II and disjunctions E. Derivations in PA are translated into derivations in this calculus, by first translating formulas ~ into propositional formulas ~+, using (Vx ~[x]) + = I I n < ~ + [ n / x ] . Then each derivation d from P A is translated into an infinite propositional derivation d + with finite cutrank and Id+l < w. 2. Each derivation whose cutrank is < m + 1 and ordinal length is _< a is effectively reduced to a derivation of the same endformula whose cutrank is < m and ordinal length is _ w~. So for each derivation d of T we eventually reduce d + to a derivation d ~ of length < ~0. Similarly, we can assign a "cutrank" to reducible infinite terms, and lower cutcomplexity at the same exponential cost of increasing ordinal bounds. See Tait [1968], Schwichtenberg [1977], or Chapters III IV in this volume for more details concerning cutelimination for sequent calculi for infinitary languages, and Tait [1965] or Feferman [1977] for details concerning normalization for infinitary term calculi. More information can be extracted from these procedures as follows. Schwichtenberg [1977,section 4.2.2] shows in detail how the infinitary derivations generated from those in PA, as described above, may be coded by indices for primitive recursive functions. For each derivation d' in the sequence of reductions from d + to d ~ the code of d' both determines the structure of d' as a tree and contains a bound on its cutrank (< w) and on its length (< e0, in a primitive recursive notation system for ~0). Exactly the same kind of thing can be done for each infinite term t' in the reduction sequence from t + to t ~ This allows one to define an effective valuation function on t ~ when the initial t is a closed term of type 1 (i.e. 
0 ~ 0), and that leads one to 4.4.2. T h e o r e m . The functions of type 1 generated by the primitive recursive functionals may be defined by schemata of effective transfinite recursion on ordinals < ~o. The schemata referred to define, for any given ordinal a < ~0, a function F(x, y) of numbers x, y by F(O, y) = G(y) and for x ~ O, F(x, y) in terms of F(Hi(x, y), y) where the Hi are one or more functions with Hi(x, y) 2, E~+IDCI + (BarRule) is a conservative extension of autonomously iterated II~CA .for II~ statements.
9.
The interpretation
of theories
of ordinals
9.1. A classical t h e o r y of c o u n t a b l e t r e e o r d i n a l s We now extend the methods of the previous section to a finite type theory OR"i of countable (tree) ordinals including the # operator, in order to obtain a conservation result over a classical theory IDI of one arithmetical inductive definition. 19 This work is compared with that of Howard [1972] on analogous constructive theories; it is an interesting open question how these results may be combined. The work described here is based on an unpublished paper, Feferman [1968a]. The system OR~, to which the NDinterpretation is to be applied, is a variant of that used in Feferman [1968a]. It extends PA w + (#) as follows. We expand the type structure by an additional ground type, denoted ~. Then types a, % . . . are generated from the ground types 0 and ~ by closing under (a + T) as before. We have infinitely many variables of each type; we use the letters ~,/~, 7 , . . . , ~, 77,r for variables of the new type ~. These are informally understood to range over countable tree ordinals, which are closed under formation of suprema in the sense that if f is of type 0 ~ ~ then Sup(f) E ~ and Sup(f) represents the tree whose immediate subtrees are given by the sequence of values f (x) for x a natural number. Formally, the constants of OR~ besides those of PA w + (#) are as follows: 1. On is a constant of type 2. Sup is a constant of type (0~ ~) + 3. Sup 1 is a constant of type ~ ~ (0 + ~) 4. For each a, Rn,~ is a constant of type (~, (0 ~ a) + a), a, ~ ~ a. 19Recent work of Burr and Hartung [n.d.] and Burr [1997] extends these results with an interpretation of KPw (essentially a settheoretic analogue of ID1 ) in a theory of primitive recursive set functionals of finite type, instead of the ordinal functionals of OR~ .
GSdel 's Functional Interpretation
387
We shall omit the subscript ' a ' from the ordinal recursor whenever there is no ambiguity. In the following we shall write ax for (Supl(a))(x) for x of type 0; for a nonzero, this represents the immediate subtree at position x. Just as we do not assume extensionality for functions, we do not assume extensionality for ordinals, i.e. it does not follow from Vx (ax =/3~) that a  ft. Nevertheless we shall assume there is a oneone correspondence between nonzero ordinals and functions of which they are the suprema, so that Sup 1 will indeed be the inverse of the Sup operation. This is not necessary, but simplifies some points. The logic taken for OR"/ is full classical quantificational logic in its finite type language. The axioms of OR"/consist of: 1. The axioms of PA ~ + (#) with the induction scheme extended to all formulas of the language 2. Sup(f) 7~ 0a A Sup 1 (Sup(f)) = f, for f of type (0 + ~t) 3. a # 0n + Sup(Sup  l ( a ) ) = a 4.
=
5.
(a) R a ( f , a , 0a) = a (b) a 7(= On + Ra(f, a, c~)  f ( a , Ax.Rn(f, a, c~)) for each type a, where the variable a is of type a, and the variable f is of type ft, (0 + a) + a 6. ~(0a) A Voz (a r 0n A Vx ~(ax) + ~(a)) ~ Va ~(a)
7. ( QFA C) The quantifierfree subsystem of OR"/may be axiomatized as follows. 1'. T + (#), with induction rule extended to all quantifier free formulas of OR~ 2'.5'. The same as 25 in OR"/ 6'. The rule of induction on ordinals, i.e. from ~(0n) and (or # 0n A Vx ~o(a~) + ~(c~)) infer ~(c~) for all quantifierfree ~o. (This last is to be expressed in quantifierfree form using the It operator.) We denote this system by T9 + (It). Just as for Theorem 8.3.1 we readily obtain: 9.1.1. T h e o r e m .
OR"/is NDinterpreted in T~ + (It).
9.2. T h e classical s y s t e m s IDI of one a r i t h m e t i c a l i n d u c t i v e definition We remind the reader briefly of these relatively familiar systems. Here 0(P +, x) denotes a formula in the language of arithmetic with one additional predicate or set symbol P that has only positive occurrences in 0. This determines a monotonic operator from sets P to sets { x : 0(P, x)}, which thus has a least fixed point. The theory IDI (0) associated with 0 is given by the following axioms expressing that P is such a least fixed point, to the extent possible within the given firstorder language: 1. The axioms of PA with induction expanded to include all formulas containing the symbol P.
2. O(P, x )  ~ P(x) 3. Vx (O(r x) + r
+ Vx (P(x) + r
for each formula r
J. A vigad and S. Feyerman
388
Here O(r x) denotes the result of substituting any occurrence of the form P(t) in 0 by r The logic of ID1 (0) is, of course, classical. When referring to any system described in this way, we simply write ID1. We will use ID~ to denote the corresponding theories based on intuitionistic logic. Here, however, we need to specify how the positivity requirement on O is to be interpreted. We will say that O is weakly positive if it is positive in the classical sense, and strongly (or strictly) positive if there are no occurrences of P in the antecedent of an implication, where the basic logical connectives are taken to be those of section 2.1. An even more restrictive requirement on O is that it be an accessibility inductive definition, as described in section 9.6; cf. section 9.8 for a discussion. 9.3. T r a n s l a t i o n of IDI into OR~ The obvious way to carry out the translation of IDI(O) into OR"i is to define the approximations to the least fixed point from below. For this purpose, we need a form of ordinal recursion which defines a function at an ordinal a in terms of its values at all smaller ordinals ~. The appropriate lessthan relation for tree ordinals is introduced as follows. We use the l e t t e r ' s ' to range over numbers of finite sequences of natural numbers. The number 0 is chosen to code the empty sequence, and the extension of a sequence s by one new term x is denoted s^(x). For an ordinal c~, the predecessor c~, of c~ "down along s" is defined recursively by: 1.
1. α₀ = α (recall that 0 codes the empty sequence),
2. α_{s⌢⟨x⟩} = (αₛ)_{⟨x⟩}.

Then we define

3. β < α ↔ α ≠ 0_Ω ∧ ∃s (s ≠ 0 ∧ β = αₛ).

9.3.1. Lemma.
1. The < relation between ordinals is transitive.
2. ∀α, β ∃γ (α < γ ∧ β < γ).
3. ("Σ-upper bounds", or Quantifier-free collection) For every quantifier-free formula φ, we have ∀x ∃β φ(x, β) → ∃α ∀x ∃β < α φ(x, β).

9.3.2. Proof. Claim 1 is immediate by definition. For claim 2, define f with f(0) = α, f(x') = β, and take γ = Sup(f). Finally, under the hypothesis of claim 3, we have by (QF-AC) the existence of an f such that ∀x φ(x, f(x)); then take α = Sup(f). □
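To make the preceding definitions concrete, here is a small illustrative sketch (the representation, tagged tuples with Python callables for countable sequences, is our own choice and not from the text): tree ordinals built from 0, successor, and Sup, with the predecessor αₛ "down along" a sequence s.

```python
# Illustrative sketch only: tree ordinals and predecessors "down along s".
ZERO = 0                       # the ordinal 0

def suc(a):                    # successor ordinal
    return ('suc', a)

def sup(f):                    # Sup(f) for a sequence f : N -> ordinals
    return ('sup', f)

def pred1(a, x):
    """One-step predecessor a_<x>: drop a successor, or pick the x-th
    member of the sequence under a Sup."""
    if a == ZERO:
        raise ValueError("0 has no predecessors")
    tag, body = a
    return body if tag == 'suc' else body(x)

def pred_along(a, s):
    """a_s: iterate one-step predecessors down along the number sequence s,
    so that a_0 = a (empty s) and a_{s^<x>} = (a_s)_<x>."""
    for x in s:
        a = pred1(a, x)
    return a

def fin(n):                    # the n-th finite ordinal
    return ZERO if n == 0 else suc(fin(n - 1))

omega = sup(fin)               # omega = Sup(n |-> n)
```

In this rendering β < α holds precisely when some nonempty s gives pred_along(α, s) = β; for instance every fin(n) lies below omega via s = ⟨n⟩.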
Given a function f of type Ω → σ, the restriction of f to α for α ≠ 0_Ω is definable as λs, x. f(α_{s⌢⟨x⟩}), which we also write both as λs ≠ 0. f(αₛ) and as λβ < α. f(β). Then one can derive, by use of our recursor R_Ω, the following more general form of recursion with values in any type σ, given any a of type σ and G of type Ω → (Ω → σ) → σ:
(Γ ⊢ B and Γ | B), Γ | ∃xA iff Γ ⊢ A[x/n̄] and Γ | A[x/n̄] for some numeral n̄, Γ | ∀xA iff Γ | A[x/n̄] for all numerals n̄. Γ | A for a formula A is defined as Γ | B for some universal closure B of A. (For predicate logic, clauses for ∨, ⊥ have to be added.) "|" is sometimes called a realizability, but we think it better to reserve the term realizability for notions where realizing objects appear explicitly. Since the "|" in "Γ | A" has nothing to do with division, the term "divides" for "|" is also not advisable. Therefore we call the notions derived from, or similar to, Kleene's Γ | A simply slash relations or slashes.
Realizability
421
In one respect Γ | A is not well behaved; it is not closed under deduction, since it may happen that Γ | A, but not Γ | (A ∨ A). Aczel [1968] gave a simple modification which overcomes this defect: the deducibility requirements in the clauses for ∨, ∃ are dropped, and for implication and universal quantification we require instead
Γ | (A → B) iff (Γ | A ⇒ Γ | B) and Γ ⊢ A → B,
Γ | ∀xAx iff Γ ⊢ ∀xAx and Γ | A[x/n̄] for all n.

Now Γ | A ⇒ Γ ⊢ A holds for all A, and the modified slash yields the same applications as the original one. In fact, one easily proves by formula induction that Γ ⊩ A in the sense of Kleene iff Γ | A in the sense of Aczel. The Aczel slash also has an appealing model-theoretic interpretation; see e.g. Troelstra and van Dalen [1988,13.7]. It is also worth noting that C | C is both necessary and sufficient for the validity of the rule "For all A, Γ ⊢ C → ∃xA ⇒ Γ ⊢ C → A[x/n̄] for some n̄" (Kleene [1962], Troelstra [1973a,3.1.8]). Slash operators in many variants have been widely used for obtaining metamathematical results for formalisms based on intuitionistic logic. The slash as defined above applies to sentences only, but the use of partial reflection principles in combination with formalized versions of the slash relation, restricted to formulas of bounded complexity, may be used to deal with free numerical variables; see Troelstra [1973a,3.1.16]. Suitable slash relations for systems beyond arithmetic may be defined by considering conservative extensions with extra "witnessing constants" for existential statements. The explicit definability property for numbers EDN can then be proved by proving soundness of the slash for the extended system (a typical example is Moschovakis [1981]). Friedman [1973] describes the extension of the Kleene slash to higher-order logic. In Scedrov and Scott [1982] it is shown that this extension of the slash is in fact equivalent to a categorical construction on the free topos due to P. Freyd (see Lambek and Scott [1986]). Friedman and Scedrov [1983] use slash relations and numerical realizability combined with truth (q-realizability, see below) to obtain the explicit set-existence property (explicit definability property for sets) for intuitionistic second-order arithmetic HAS (cf. 7.1) and intuitionistic set theory plus countable choice or relativized dependent choice.
Friedman and Scedrov [1986] use a slash relation to establish a very interesting result: there is a particular number-theoretic property A(n) such that if HA proves transfinite induction for a primitive recursive binary relation ≺ w.r.t. A, then ≺ is well-founded with ordinal less than ε₀. (The corresponding result is false for PA, cf. Kreisel [1953].) If transfinite induction is proved for ≺ w.r.t. A for the theory HA⁺ obtained by adding transfinite induction for all recursive well-orderings, then ≺ is well-founded. Of the many papers discussing or making use of slash relations we further mention: Beeson [1975,1976b,1977a], Beeson and Scedrov [1984], Dragalin [1980,1988], Friedman [1977a], Krol' [1977], Moschovakis [1967], Myhill [1973b,1975], Robinson [1965].
A.S. Troelstra
422
q-realizability. Since in soundness theorems for formalized realizability we prove deducibility instead of just truth, one can replace deducibility in the definition of ⊢-realizability by truth; let us use "q" for this realizability. The clauses for ∃, → then become:

x q (A → B) := ∀y(y q A ∧ A → x·y q B) ∧ x↓,
x q ∃yA := p₁x q A[y/p₀x] ∧ A[y/p₀x].

Such a q-variant was used in Kleene [1969] to obtain derived rules for intuitionistic analysis with function variables. q-realizability is also not closed under deducibility (think of an instance A of CT₀ unprovable in HA; then A is q-realizable, but A ∨ A is not). Grayson [1981a] observed that an Aczel-style modification could be used instead of q-realizability; this corresponds to our rnt-realizability. Friedman and Scedrov [1984] use rn-realizability and q-realizability to obtain consistency with Church's thesis, the disjunction property and the numerical existence property for set theories based on intuitionistic logic, with axioms asserting the existence of very large cardinals, thereby demonstrating that the metamathematical properties just mentioned, often regarded as a test for the constructive character of a system, are not affected by assumptions concerning large cardinals.
Shanin's algorithm. In a number of papers Shanin presented a systematic way of making the constructive meaning of arithmetical formulas explicit. His method is logically equivalent to rn-realizability, as shown by Kleene [1960]. On the one hand Shanin's algorithm is more complicated than realizability; on the other hand it has the advantage of being the identity on ∃-free formulas.

2. Abstract realizability and function realizability

2.1. After the leisurely introduction to numerical realizability in the preceding section, we now turn to variations and generalizations. In order to distinguish easily the various concepts of realizability, we shall use a certain mnemonic code:

r signifies "realizability",
n signifies numerical or "by numbers",
f signifies "by functions",
m signifies "modified",
t signifies "combined with truth",
l signifies "Lifschitz variant of",
e signifies extensional.
Thus "rft" refers to "realizability by functions combined with truth", etc. Strictly speaking, the r is redundant in many of these mnemonic codes. A simple generalization of numerical realizability is realizability with a different set of realizing objects and/or a different application operator; abstractly, the realizing objects with application have to form a combinatory algebra. We shall first sketch an abstract version of numerical realizability, namely realizability in a combinatory algebra with induction, then consider the interesting special case of function
realizability.

2.2. Definition. (The theory APP) The language is single-sorted, based on LPT. The only non-logical predicate is N (natural numbers). There is an application operation · and constants 0 (zero), S (successor), P (predecessor), p, p₀, p₁ (pairing with inverses), k, s (combinators), d (numerical definition by cases). (We have used the same symbols for pairing and inverses as in the case of HA*, even if there is a slight difference in syntax: p(t, t') in HA* corresponds to (pt)t' in APP.) For t₁·t₂ we simply write (t₁t₂), and we use association to the left, i.e. t₁t₂…tₙ is short for (…((t₁t₂)t₃)…tₙ).

Axioms for the constants:

N0, Nx → N(Sx), Nx → N(Px), P(St) ≃ t, P0 = 0, 0 ≠ St,
kx↓, kty ≃ t, sxy↓, stt't″ ≃ tt″(t't″),
pxy↓, p₀x↓, p₁x↓, p₀(ptx) = t, p₁(pxt) = t,
Nu ∧ Nv → (u ≠ v → dxyuv = x) ∧ dxyuu = y.

Observe that by the general LPT-axioms we have tt'↓ → t↓ ∧ t'↓. Finally we have induction:

A[x/0] ∧ ∀x ∈ N(A → A[x/Sx]) → ∀x ∈ N. A □

The combinators k, s permit us to have λ-abstraction defined by induction on the construction of terms:

λx.t := kt for t a constant or variable ≢ x,
λx.x := skk,
λx.tt' := s(λx.t)(λx.t').

For this definition FV(λx.t) = FV(t) \ {x}, t'↓ → (λx.t)t' ≃ t[x/t'] if t' is free for x in t, and λx.t↓ for all t. It is not generally true that²

(1) if x ∉ FV(t'), y ≢ x, then λx.(t[y/t']) ≃ (λx.t)[y/t']

(consider e.g. t ≡ y, t' ≡ kk), but we do have, for x ∉ FV(t'), y ∉ FV(t″), y ≢ x,

(2) t″↓ → ((λx.t)[y/t'])t″ ≃ t[x/t″][y/t'] ≡ t[y/t'][x/t″].

Property (1) can be guaranteed by an alternative definition of abstraction:

λ'x.x := skk,
λ'x.t := kt if x ∉ FV(t),
λ'x.tt' := s(λ'x.t)(λ'x.t') if x ∈ FV(tt'),

²This was overlooked in the proofs in Troelstra and van Dalen [1988, section 9.3], but is easily remedied by the use of (2).
Strahm [1993] recently showed that this problem might also be overcome by considering, instead of APP based on combinators, a theory based on lambda-abstraction as a primitive (without a ξ-rule of the form t = s ⇒ λx.t = λx.s!) and an explicit substitution operator as part of the language.
but then we lose the property that λ'x.t↓ for all t. A recursor and a minimum operator may be defined with the help of a fixed point operator (see e.g. Troelstra and van Dalen [1988,9.3]), which permits us to define in APP all partial recursive functions. It follows that HA can be embedded into APP in a natural and straightforward way.

Remark. Partial combinatory algebras are structures (X, ·, k, s), k ≠ s, satisfying the relevant axioms above; in such structures we can always define terms forming a copy of ℕ, and appropriate S, P, p, p₀, p₁, and we might simply have postulated induction for this particular copy of ℕ. However, in describing models it is more convenient not to be tied to a specific representation of ℕ relative to the combinators. Also, we want to leave open the possibility that the interpretation of N is nonstandard.

2.3. The model of the partial recursive operations PRO

The basic combinatory algebra is
(ℕ, ·, Λxy.x, Λxyz.x·z·(y·z)), where · is partial recursive function application on ℕ. 0, S, P get their usual interpretation (more precisely, we choose codes Λx.Sx, Λx.Px etc.); for d take Λuvxy[u · sg|x − y| + v · (1 ∸ |x − y|)]. In HA* we can prove PRO to be a model of APP, in the sense that APP ⊢ A ⇒ HA* ⊢ [A]_PRO. Here and in the sequel we use "interpretation brackets": given some model M, we use [t]_M, [A]_M to indicate the interpretation of term t, formula A in the model M. Thus "[A]_M holds" means the same as M ⊨ A.

2.4. Definition. (Abstract realizability) x r A in APP is defined by

x r P := P ∧ x↓ for P prime,
x r (A ∧ B) := (p₀x r A) ∧ (p₁x r B),
x r (A → B) := ∀y(y r A → x·y r B) ∧ x↓,
x r ∀y A := ∀y(x·y r A),
x r ∃y A := p₁x r A[y/p₀x]. □

Remark. x r ∀y∈N.A becomes literally ∀y z(z r (y ∈ N) → (x·y·z) r A). It is easy to see that realizability with a special clause for the relativized quantifier x r' ∀y∈N.A := ∀y∈N(x·y r' A) is in fact equivalent. The ∃-free formulas play the same role in APP as they do in HA*, i.e. ∃-free formulas have canonical realizing terms, their realizability coincides with their truth, and equivalence of realizability with truth for a formula A means that A is equivalent to a formula ∃xB, B ∃-free; the schema characterizing r-realizability is an Extended
Axiom of Choice:

EAC    ∀x(Ax → ∃y Sxy) → ∃z ∀x(Ax → z·x↓ ∧ S(x, z·x))    (A is ∃-free)
etc. We may specialize r-realizability to PRO-r-realizability by interpreting APP in PRO. It is then not difficult to show that the resulting realizability of HA (as embedded in the obvious way into APP) becomes equivalent to rn-realizability; cf. Renardel de Lavalette [1984].
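Before moving on, the λ-abstraction of 2.2 can be exercised mechanically. The following sketch is our own modelling, not part of the text: terms are nested Python tuples, application is total here, so the definedness conditions (t↓) of APP are not represented. It implements both abstraction operators and a normal-order k/s reducer.

```python
# Terms: 'k' and 's' are the combinators, other strings are variables,
# and a pair (t, u) is the application t u.

def fv(t):
    """Free variables of a term."""
    if isinstance(t, tuple):
        return fv(t[0]) | fv(t[1])
    return {t} if t not in ('k', 's') else set()

def lam(x, t):
    """lam x.t := k t (t a constant or variable != x); lam x.x := s k k;
    lam x.(t u) := s (lam x.t) (lam x.u)."""
    if t == x:
        return (('s', 'k'), 'k')
    if not isinstance(t, tuple):
        return ('k', t)
    return (('s', lam(x, t[0])), lam(x, t[1]))

def lam_alt(x, t):
    """The alternative operator: lam' x.t := k t whenever x is not free in t."""
    if t == x:
        return (('s', 'k'), 'k')
    if x not in fv(t):
        return ('k', t)
    return (('s', lam_alt(x, t[0])), lam_alt(x, t[1]))

def _app(h, args):
    for a in args:
        h = (h, a)
    return h

def whnf(t):
    """Reduce the head: k a b -> a, s a b c -> a c (b c)."""
    while True:
        spine, h = [], t
        while isinstance(h, tuple):
            spine.append(h[1])
            h = h[0]
        args = spine[::-1]
        if h == 'k' and len(args) >= 2:
            t = _app(args[0], args[2:])
        elif h == 's' and len(args) >= 3:
            t = _app(((args[0], args[2]), (args[1], args[2])), args[3:])
        else:
            return t

def nf(t):
    """Full normal form (normal order)."""
    t = whnf(t)
    return (nf(t[0]), nf(t[1])) if isinstance(t, tuple) else t
```

For example, nf((lam('x', ('x', 'x')), 'q')) yields ('q', 'q'), i.e. (λx.xx)q reduces to qq; and the text's counterexample to property (1) is visible syntactically: lam('x', ('k', 'k')) is s(kk)(kk), not k(kk), whereas lam_alt gives exactly k(kk).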
2.5. Partial continuous function application

Another important model of APP is the model PCO of functions with partial continuous application. Before we can discuss this, we need some preliminaries.

Notations. From primitive recursive p, p₀, p₁ we can construct primitive recursive encodings pⁿ of n-tuples of natural numbers, with primitive recursive inverses pⁿᵢ (0 ≤ i < n). We may also assume finite sequences of natural numbers to be coded onto ℕ; we write ⟨n₀, …, n_{x−1}⟩ for (the code of) the finite sequence n₀, …, n_{x−1}; ⟨⟩ is the (code of the) empty sequence (which may be assumed to be equal to 0; see below). If m is (a code of) a sequence, lth(m) is its length; ∗ is a primitive recursive concatenation function for codes of sequences. We abbreviate

n ⪯ m := ∃n'(n ∗ n' = m),
n ≺ m := (n ⪯ m ∧ n ≠ m).

The primitive recursive inverse function λxy.(x)_y of the sequence encoding satisfies

m = ⟨n₀, …, n_{x−1}⟩ ⇒ (m)_y = n_y for y < x, (m)_y = 0 for y ≥ x.

For reasons of technical convenience we assume monotonicity in the arguments for encodings of pairs, n-tuples and finite sequences:

n < n' → p(n, m) < p(n', m), m < m' → p(n, m) < p(n, m'),

and similarly for the n-tuple encodings; n ≺ m → n < m; and ᾱ(x+1) = ⟨α0, …, αx⟩. □
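For illustration, here is one concrete coding with the listed properties (a sketch; the Cantor pairing and this particular sequence coding are our own choices, not necessarily those intended in the text): ⟨⟩ is coded by 0, p is monotone in both arguments, and lth, (m)_y and concatenation are computable from codes.

```python
def p(n, m):
    """Cantor pairing: monotone in each argument."""
    return (n + m) * (n + m + 1) // 2 + m

def _unpair(k):
    """Inverse of p: returns (p0(k), p1(k))."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= k:
        w += 1
    m = k - w * (w + 1) // 2
    return w - m, m

def code(seq):
    """<> = 0; code(s ^ <x>) = p(code(s), x) + 1."""
    c = 0
    for x in seq:
        c = p(c, x) + 1
    return c

def decode(c):
    out = []
    while c != 0:
        c, x = _unpair(c - 1)
        out.append(x)
    return out[::-1]

def lth(c):
    return len(decode(c))

def at(c, y):
    """(c)_y, with (c)_y = 0 for y >= lth(c)."""
    d = decode(c)
    return d[y] if y < len(d) else 0

def concat(n, m):
    """n * m: concatenation on codes."""
    return code(decode(n) + decode(m))

def is_initial(n, m):
    """The prefix relation: exists n' with n * n' = m."""
    return decode(m)[:lth(n)] == decode(n)
```

With this choice the coding is strictly monotone in each entry, e.g. code([5, 2]) < code([6, 2]) < code([6, 3]).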
Definition. Elementary Analysis EL is a conservative extension of HA obtained by adding to HA variables (α, β, γ, δ, ε) and quantifiers for (total) functions from ℕ to ℕ (i.e. infinite sequences of natural numbers). There is λ-abstraction for explicit definition of functions, and a recursion operator Rec such that (t a numerical term, φ a function term; φ(t, t') := φ(p(t, t')))

Rec(t, φ)(0) = t, Rec(t, φ)(Sx) = φ(x, Rec(t, φ)(x)).

Induction is extended to all formulas in the new language.
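A minimal sketch of the recursion operator just described (our reading of the garbled equations, namely Rec(t, φ)(0) = t and Rec(t, φ)(Sx) = φ(x, Rec(t, φ)(x)), with the pairing φ(p(x, y)) rendered as a two-argument function):

```python
def Rec(t, phi):
    """Primitive recursion on numbers:
    Rec(t, phi)(0) = t, Rec(t, phi)(S x) = phi(x, Rec(t, phi)(x))."""
    def r(n):
        acc = t
        for x in range(n):
            acc = phi(x, acc)
        return acc
    return r

# Addition and multiplication obtained by recursion, as in HA/EL:
def add(m):
    return Rec(m, lambda x, acc: acc + 1)

def mul(m):
    return Rec(0, lambda x, acc: acc + m)
```

For instance add(3) is the function n ↦ 3 + n, and Rec(1, λx,acc. acc·(x+1)) computes the factorial.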
The functions of EL are assumed to be closed under "recursive in", which is expressed by including a weak choice axiom for quantifier-free A:

QF-AC    ∀n ∃m A(n, m) → ∃α ∀n A(n, αn). □
Definition. In EL we introduce abbreviations for partial continuous application:

α(β) = x := ∃y(α(β̄y) = x + 1 ∧ ∀y' < y α(β̄y') = 0)

… ∀y ≥ x h(y) = w̄) by induction on x, and hence T ⊢ ∃x h(x) = w̄ → Lim_w. Next, it is easily seen that, if for all successors w' of a node w, T ⊢ ∃x h(x) = w̄' → ⋁{Lim_{w″} | w' = w″ ∨ w' R w″}, then
T ⊢ ∃x h(x) = w̄ → ⋁{Lim_{w'} | w = w' ∨ w R w'}.

Therefore, this will hold for w = 0, which implies (a).
(b) Clearly h cannot have two different limits w and u.
(c) Assume w is the limit of h and w R' u. Let n be such that for all x ≥ n, h(x) = w. We need to show that T ⊬ ¬Lim_u. Deny this. Then, since every provable formula has arbitrarily long proofs, there is x ≥ n such that x codes a proof of ¬Lim_u; but then, according to definition 3.4, we must have h(x + 1) = u, which, as u ≠ w (by irreflexivity of R'), is a contradiction.
(d) Assume w ≠ 0, w is the limit of h and not w R' u. If u = w, then (since w ≠ 0) there exists an x such that h(x + 1) = w and h(x) ≠ w. Then x codes a proof of ¬Lim_w, and so ¬Lim_w is provable. Next suppose u ≠ w. Let us fix a number z with h(z) = w. Since h is primitive recursive, T proves that h(z̄) = w̄. Now argue in T + Lim_u: since u is the limit of h and h(z) = w ≠ u, there is a number x with x ≥ z such that h(x) ≠ u and h(x + 1) = u. This contradicts the fact that not (w =) h(z) R' u. Thus, T + Lim_u is inconsistent, i.e., T ⊢ ¬Lim_u.
(e) By (a), as T is sound, one of the Lim_w for w ∈ W' is true. Since for no w do we have w R' w, (d) means that each Lim_w, except Lim_0, implies in T its own T-disprovability and therefore is false. Consequently, Lim_0 is true.
(f) By (e), (c) and the soundness of T. ∎

To repeat the statement of Solovay's second arithmetic completeness theorem (theorem 1.2): ⊢_S A iff ℕ ⊨ A* for all arithmetic realizations *.

Proof of theorem 1.2. Since the □A → A are reflection principles and these are true for a sound theory, the soundness part is clear. So, assume ⊬_S A. Modal completeness of S then provides us with an A-sound (see definition 2.5) model in which A is not forced in the root. We can repeat the procedure of the proof of the first completeness theorem, but now applied directly to the model itself (which we assume to have a root 0) without adding a new root, and again prove lemmas 3.2 and 3.3.
But this time we have forcing also for 0, and we can improve lemma 3.3 so as to apply it also to w = 0, at least for subformulas of A. The proof of the (b)-part of that lemma can be copied. With respect to the (a)-part, restricting again to the □-case, assume that 0 ⊩ □C. Then, for each w ∈ W with w ≠ 0, w ⊩ C. But now, by the A-soundness of the model, C is also forced in the root 0. By the induction hypothesis, for all w ∈ W, T + Lim_w ⊢ C*. By lemma 3.2(a) then T ⊢ C*, so T ⊢ (□C)* and hence T ⊢ Lim_0 → (□C)*. Applying the strengthened version of lemma 3.3 to w = 0 and A, we obtain T ⊢ Lim_0 → ¬A*, which suffices, since Lim_0 is true (lemma 3.2). ∎
484
G. Japaridze and D. de Jongh
4. Fixed point theorems

For the provability logic L a fixed point theorem can be proved. One can view Gödel's diagonalization lemma as stating that in arithmetic theories the formula ¬□p has a fixed point: the Gödel sentence. Gödel's proof of his second incompleteness theorem effectively consisted of the fact that the sentence expressing consistency, the arithmetic realization of ¬□⊥, is provably equivalent to this fixed point. Actually this fact is provable from the principles codified in the provability logic L, which means that it can actually be presented as a fact about L. This leads to a rather general fixed point theorem, which splits into a uniqueness and an existence part. It concerns formulas A with a distinguished propositional variable p that occurs only boxed in A, i.e., each occurrence of p in A is part of a subformula □B of A.

4.1. Theorem. (Uniqueness of fixed points) If p occurs only boxed in A(p) and q does not occur at all in A(p), then ⊢_L □((p ↔ A(p)) ∧ (q ↔ A(q))) → (p ↔ q).

4.2. Corollary. If p occurs only boxed in A(p), and both ⊢_L B ↔ A(B) and ⊢_L C ↔ A(C), then ⊢_L B ↔ C.

4.3. Theorem. (Existence of fixed points) If p occurs only boxed in A(p), then there exists a formula B, not containing p and otherwise containing only variables of A(p), such that ⊢_L B ↔ A(B).

After the original proofs by de Jongh and Sambin (see Sambin [1976], Smoryński [1978,1985], and, for the first proof of uniqueness, Bernardi [1976]) many other, different, proofs have been given for the fixed point theorems, syntactical as well as semantical ones, the latter e.g., in Gleit and Goldfarb [1990]. It is also worthwhile to remark that theorem 4.3 follows from theorem 4.1 (which can be seen as a kind of implicit definability theorem) by way of Beth's definability theorem that holds for L. The latter can be proved from interpolation in the usual manner.
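For concreteness, the following table lists some well-known fixed points (standard examples added here for illustration; they are not spelled out in the text above, but each equivalence can be checked directly in L):

```latex
\begin{array}{lll}
A(p) = \neg\Box p    & B = \neg\Box\bot & \text{(the G\"odel sentence: consistency)}\\
A(p) = \Box p        & B = \top         & \text{(L\"ob's theorem)}\\
A(p) = \Box\neg p    & B = \Box\bot     & \text{(Jeroslow's sentence)}\\
A(p) = \Box p \to q  & B = \Box q \to q & \text{(the sentence of L\"ob's proof)}
\end{array}
```

In each case ⊢_L B ↔ A(B), and by theorem 4.1 the fixed point B is unique up to provable equivalence.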
Interpolation can be proved semantically in the standard manner via a kind of Robinson's consistency lemma (see Smoryński [1978]), and syntactically in the standard manner by cut-elimination in a sequent calculus formulation of L (Sambin and Valentini [1982,1983]). In an important sense the meaning of the fixed point theorem is negative, namely in the sense that, if in arithmetic one attempts to obtain formulas with essentially new properties by diagonalization, one will not get them by using instantiations of purely propositional modal formulas (except once, of course: the Gödel sentence, or the sentence Löb used to prove his theorem). That is one reason that interesting fixed points often use Rosser orderings (see section 9).

5. Propositional theories and Magari algebras
A propositional theory is a set of modal formulas (usually in a finite number of propositional variables) which is closed under modus ponens and necessitation, but
The Logic of Provability
485
not necessarily under substitution. We say that such a theory T is faithfully interpretable in PA if there is a realization * such that T = {A | PA ⊢ A*}. (This is an adaptation of definition 11.1 to the modal propositional language.) Each sentence α of PA generates a propositional theory which is faithfully interpretable in PA, namely Th_α = {A(p) | PA ⊢ A*}, where * is the realization with p* = α. Of course, this theory is closed under L-derivability: it is an L-propositional theory. A question much wider than the one discussed in the previous sections is which L-propositional theories are faithfully interpretable in PA and other theories. This question was essentially solved by Shavrukov [1993b]:

5.1. Theorem. An r.e. L-propositional theory T is faithfully interpretable in PA iff T is consistent and satisfies the strong disjunction property (i.e., □A ∈ T implies A ∈ T, and □A ∨ □B ∈ T implies □A ∈ T or □B ∈ T).

Note that faithfully interpretable theories in a finite number of propositional variables are necessarily r.e. The theorem was given a more compact proof and at the same time generalized to all r.e. theories extending IΔ₀ + EXP by Zambella [1994]. If one applies the theorem to the minimal L-propositional theory, an earlier proved strengthening of Solovay's theorem (Artëmov [1980], Avron [1984], Boolos [1982], Montagna [1979], Visser [1980]) rolls out.

5.2. Corollary. (Uniform arithmetic completeness theorem) There exists a sequence of arithmetic sentences α₀, α₁, … such that, for any n and modal formula A(p₀, …, pₙ), ⊢_L A iff, under the arithmetic realization * induced by setting p₀* = α₀, …, pₙ* = αₙ, A* is provable in PA.

Sets of modal formulas that are the true sentences under some realization are closed under modus ponens, but not necessarily under necessitation; such sets are generally not propositional theories in the above sense. Let us call a set T of modal formulas realistic if there exists a realization * such that A* is true, for every A ∈ T.
Moreover, we say that T is well-specified if, whenever A ∈ T and B is a subformula of A, we also have B ∈ T or ¬B ∈ T. Strannegård [1997] proves a result that generalizes both theorem 5.1 and Solovay's second arithmetic completeness theorem. We give a weak but easy to state version of it.

5.3. Theorem. Let T be a well-specified r.e. set of modal formulas. Then T is realistic iff T is consistent with S.
An even more general point of view than propositional theories is to look at the Boolean algebras of arithmetic theories with one additional operator representing formalized provability and, more specifically, at the ones generated by a sequence of sentences in the algebras of arithmetic. The algebras can be axiomatized equationally and are called Magari algebras (after the originator R. Magari) or diagonalizable algebras. Of course, theorem 5.1 can be restated in terms of Magari algebras. Shavrukov proved two beautiful and essential additional results concerning the
Magari algebras of formal theories that cannot naturally be formulated in terms of propositional theories.

5.4. Theorem. (Shavrukov [1993a]) The Magari algebras of PA and ZF are not isomorphic, and, in fact, not elementarily equivalent (Shavrukov [1997]).
The proof only uses the fact that ZF proves the uniform Σ₁-reflection principle for PA. A corollary of the theorem is that there is a formula of the second-order propositional calculus that is valid in the interpretation with respect to PA, but not in the one with respect to ZF. Beklemishev [1996b] gives a different kind of example of such a formula for the two theories PA and IΔ₀ + EXP.

5.5. Theorem. (Shavrukov [1997]) The first-order theory of the Magari algebra of PA is undecidable.

Japaridze [1993] contains some moderately positive results on the decidability of certain fragments of (a special version of) this theory.

6. The extent of Solovay's theorems
An important feature of Solovay's theorems is their remarkable stability: a wide class of arithmetic theories and their provability predicates enjoys one and the same provability logic L. Roughly, there are three conditions sufficient for the validity of Solovay's results: the theory has to be (a) sufficiently strong, (b) recursively enumerable (a provability predicate satisfying Löb's derivability conditions is naturally constructed from a recursive enumeration of the set of its axioms), and (c) sound. Let us see what happens if we try to do without these conditions. The situation is fully investigated only w.r.t. the soundness condition. Consider an arbitrary arithmetic r.e. theory T containing PA and a Σ₁ provability predicate Pr_T(x) for T. Iterated consistency assertions for T are defined as follows:

Con⁰(T) := ⊤;    Conⁿ⁺¹(T) := Con(T + Conⁿ(T)),

where, as usual, Con(T + φ) stands for ¬Pr_T(⌜¬φ⌝). In other words, Conⁿ(T) is (up to provable equivalence) the unique arithmetic realization of the modal formula ¬□ⁿ⊥. We say that T is of height n if Conⁿ(T) is true and Conⁿ⁺¹(T) is false in the standard model. If no such n exists, we say that T has infinite height. In a sense, theories of finite height are close to being inconsistent and therefore can be considered as a pathology. The inconsistent theory is the only one of height 0. All Σ₁-sound theories have infinite height, but there exist Σ₁-unsound theories of infinite height. The theory T + ¬Conⁿ(T) is of height n, if T has infinite height. Moreover, for each consistent but Σ₁-unsound theory T and each n > 0, one can construct a provability predicate for T such that T is precisely of height n with respect to this predicate (Beklemishev [1989a]).
Let us call the provability logic of T the set of all modal formulas A such that T ⊢ A*, for all arithmetic realizations * with respect to Pr_T. The truth provability logic of T is the set of all A such that A* is true in the standard model, for all realizations *.

6.1. Theorem. (Visser [1981]) For an r.e. arithmetic theory T containing PA, the provability logic of T coincides with
1. L, if T has infinite height,
2. {A | □ᵐ⁺¹⊥ ⊢_L A}, if T is of height m ≥ 1, etc.

… The logic can be axiomatized over CS by the monotonicity axiom □A → △A and the schema

△(□S → S),

where S is an arbitrary (possibly empty) disjunction of formulas of the form □B and △B.³ The second one corresponds to Π₁-essentially reflexive (see definition 12.3) extensions of theories of bounded arithmetic complexity, such as e.g., (IΔ₀ + EXP, PRA) and (IΣₖᴿ, IΣₖ₊₁ᴿ) for k ≥ 1, where IΣₖᴿ is defined like IΣₖ but with the induction for Σₖ-formulas formulated as a rule. The corresponding provability logic can be axiomatized over CSM by the Π₁-essential reflexivity schema

△A → △(□(A → S) → S),

where S is as before. We also know two natural provability logics of type A (Beklemishev [1994]). The first system corresponds to pairs of theories (T, U) such that U is an extension of T by finitely many Π₁-sentences and proves ω-times-iterated consistency of T, such as e.g., the pairs (PA, PA + Con(ZF)), (IΣ₁, IΣ₁ + Con(IΣ₂)), etc. This logic can be axiomatized over CSM by the principle (P)
△A → □(△⊥ ∨ A),

valid for all Π₁-axiomatizable extensions of theories, together with the schema

△¬□ⁿ⊥,    n ≥ 1.
³In the following, CS together with the monotonicity axiom will be denoted CSM.
The second system corresponds to reflexive Π₁-axiomatizable extensions of theories, such as e.g., (PA, PA + {Conⁿ(PA) | n ≥ 1}), (IΣ₁, IΣ₁ + {Con(IΣₙ) | n ≥ 1}). It can be axiomatized over CSM plus (P) by the reflexivity axiom

△A → △◇A.

Finally, we know by Beklemishev [1996a] a natural system of type L that corresponds to finite extensions of theories of the form (T, T + φ), where both T + φ and T + ¬φ are conservative over T with respect to Boolean combinations of Σ₁-sentences. Examples of such pairs are (PRA, IΣ₁), (IΣₙᴿ, IΣₙ), for n ≥ 1, and others. The logic is axiomatized over CSM by the B(Σ₁)-conservativity schema

△B → □B,

where B denotes an arbitrary Boolean combination of formulas of the form □C and △C. The six bimodal logics described above essentially exhaust all nontrivial cases for which natural provability logics have explicitly been characterized. It is worth mentioning that all these systems are decidable, and a suitable Kripke-style semantics is known for each of them. Smoryński [1985] contains an extensive treatment of PRL_{PA,ZF}, including proofs of three arithmetic completeness theorems due to Carlson. These theorems are extended by Strannegård [1997] to the setting of r.e. sets of bimodal formulas (as discussed in section 5). Visser [1995] presents a beautiful approach to Kripke semantics for bimodal provability logics. Beklemishev [1994,1996a] gives a detailed survey of the current state of the field. Apart from describing the joint behaviour of two 'usual' provability predicates, each of them being separately well enough understood, bimodal logic has been successfully used for the analysis of some nonstandard, not necessarily r.e., concepts of provability. The systems emerging from such an analysis often have not so much in common with CS, although different 'bimodal analyses' do share common technical ideas. As early as 1986, Japaridze [1986,1988b] characterized the bimodal logic of provability and ω-provability (dual to ω-consistency) in Peano arithmetic.
Later his study was simplified and further advanced by Ignatiev [1993a] and Boolos [1993b,1993a], who, among other things, showed that the same system corresponds to some other, so-called strong, concepts of provability (taken jointly with the usual one). Other examples of strong provability predicates are the Σₙ₊₁-complete provability from all true arithmetic Πₙ-sentences, for n ≥ 1, and the Π₁¹-complete provability under the ω-rule in analysis. Japaridze's bimodal logic can be axiomatized by the axioms and rules of L, formulated separately for □ and for △, the monotonicity principle □A → △A, and an additional Π₁-completeness principle

◇A → △◇A,
which reflects, in so far as that is possible, that △ is strong enough to prove all true Π₁-sentences (if □ is the usual r.e. provability predicate and △ a strong provability predicate). Japaridze's logic is decidable and has a reasonable Kripke semantics. An extensive treatment of Japaridze's logic is given in Boolos [1993b]. Bimodal analysis of other unusual provability concepts has been undertaken by Visser [1989,1995] and Shavrukov [1991,1994]. Using the work of Guaspari and Solovay [1979], Shavrukov [1991] found a complete axiomatization of the bimodal logic of the usual and Rosser's provability predicate for Peano arithmetic (see also section 9). It is worth noting that Rosser's provability predicate, although numerating (externally) the same theory as the usual one, has a very different modal behaviour; e.g., Rosser consistency of PA is a provable fact, but on the other hand, Rosser's provability predicate is not provably closed under modus ponens. Shavrukov [1994] characterizes the logic of the so-called Feferman provability predicate. This work was preceded by Visser [1989,1995], where the concept of provability in PA from 'nonstandardly finitely many' axioms and some other unusual provability concepts were bimodally characterized. These systems were motivated by their connections with interpretability logic, but another motivation originates with Jeroslow and Putnam, who studied the Rosser and Feferman style systems as 'experimental' systems: their self-correcting behaviour is supposed to be closer to the way humans reason. Studying ordinary provability and self-correcting provability can provide a good heuristic for appreciating the differences between both kinds of systems. A final example of such an analysis of an unusual proof predicate by the development of a bimodal logic is Lindström [1994]'s analysis of Parikh provability, i.e., the proof predicate that allows □A/A as a rule of inference.
Additional early results in bimodal logic, e.g., a bimodal analysis of the so-called Mostowski operator, can be found in Smoryński [1985]. Many results in bimodal provability logic can be generalized to polymodal logic. Such a generalization is particularly natural in the modal-logical study of progressions of theories, a topic in proof theory that goes as far back as the work of Turing [1939]. From the modal-logical point of view, however, such a generalization, in all known cases, does not lead to any essentially new phenomena. Roughly, the resulting systems happen to be direct sums of their bimodal fragments; therefore we shall not go into the details. Polymodal analogues are known for Japaridze's bimodal logic (modalities, indexed by natural numbers n, correspond to the operators "to be provable from all true Πₙ-sentences"), and for natural provability logics due to Carlson and Beklemishev. Here, the modal operators correspond to the theories of the original Turing-Feferman progressions of transfinitely iterated reflection principles, and thus are indexed by ordinals for some constructive system of ordinal notation, say, the natural one up to ε₀. Iterating full reflection leads to the polymodal analogue of PRL_{PA,ZF}, and transfinitely iterated consistency leads to a natural polymodal analogue of A-type provability logics (Beklemishev [1991,1994]).
9. Rosser orderings

To discuss Rosser sentences, and more generally the so-called Rosser provability predicate, in a modal context, Guaspari and Solovay [1979] enriched the modal language by adding, for each □A and □B, the formulas □A ≺ □B and □A ≼ □B, with as their arithmetic realizations the Σ₁-sentences "A* is provable by a proof that is smaller than any proof of B*" and "A* is provable by a proof that is smaller than or equal to any proof of B*" (so-called witness comparison formulas). They axiomatized modal logics R⁻ and R = R⁻ + the rule □A/A, and gave an arithmetic completeness result for R. In this arithmetic completeness result they did have to allow arbitrary standard provability predicates in the arithmetic realizations, however, i.e., arbitrary provability predicates satisfying the three Löb conditions. Shavrukov [1991] (see also the end of section 8) showed that this restriction can be dropped when one restricts the contexts for the new operator to □A ≺ □¬A (the Rosser provability predicate, for short: □ᴿA), and de Jongh and Montagna [1991] showed that allowing formulas with free variables as arithmetic substitutions leads to R⁻ as the arithmetically complete system. Guaspari and Solovay [1979] also showed that for some standard provability predicates all Rosser sentences (i.e., sentences α such that PA ⊢ α ↔ (Pr_PA(⌜¬α⌝) ≼ Pr_PA(⌜α⌝))) are equivalent, and that for some other standard provability predicates this is not the case. This leaves open the question whether a reasonable notion of usual proof predicate can be defined for which the question "Is the Rosser sentence unique?" does have a definite answer. Hence also, uniqueness of fixed points is not provable in R. Finally, they showed that the existence part of the fixed point theorem also fails for R. Simpler proofs for the completeness theorems were given in de Jongh [1987] and Voorbraak [1988]. There are connections between this work in provability logic and speed-up.
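As a toy illustration of witness comparison, the following Python sketch compares least "proof codes" of two sentences over a finite stand-in for a proof enumeration. The list `proofs` and all names here are hypothetical simplifications, not part of the Guaspari-Solovay apparatus, where the enumeration is r.e. rather than finite:

```python
def smaller_witness(proofs, a, b, strict=True):
    """Toy witness comparison A < B: 'a has a proof whose code is smaller
    than (or, with strict=False, at most) any proof of b'.  `proofs` is a
    finite list of (code, sentence) pairs standing in for an enumeration
    of proofs -- a drastic simplification for illustration only."""
    pa = [code for code, s in proofs if s == a]
    pb = [code for code, s in proofs if s == b]
    if not pa:                 # a is not provable at all
        return False
    if not pb:                 # b is never proved: a wins vacuously
        return True
    return min(pa) < min(pb) if strict else min(pa) <= min(pb)

proofs = [(1, "A"), (3, "B"), (7, "A")]
# "A" is proved first (code 1 < 3), so A < B holds but not B < A.
```

Note how the Σ₁ flavour of the comparison shows through: establishing it only requires finding one suitable proof of `a` and inspecting the finitely many smaller codes.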
First, de Jongh and Montagna [1988,1989] gave a new, simpler proof of Parikh's [1971] theorem that, for any provably recursive function g, there is a sentence α provable in PA such that PA proves Pr_PA(⌜α⌝) by a much shorter proof, in the sense of g.

… a succedent E in the adequate set. For occurrences of the same maximal consistent set in different parts we use distinct copies. The S_w are defined to be the universal relation inside each part consisting of the E-critical successors for some E, but to be such as to make no other connections between worlds. Then lemmas 13.12 and 13.13 give the theorem rather straightforwardly. With some care this program can be executed, but we take a slightly more complicated road that points the way to the completeness proof for ILM, where the straightforward manner does not work. Set W_Γ to be the smallest set of pairs ⟨Δ, τ⟩ with Δ a maximal consistent subset of Φ and τ a finite sequence of formulas from Φ that satisfy the following requirements:
(i) Γ ≺ Δ or Γ = Δ,
(ii) τ is a finite sequence of formulas from Φ, the length of which does not exceed the depth of Γ minus the depth of Δ. (So, e.g., Γ is only paired off with the empty sequence.)

It is clear that W_Γ is finite, since, for any Δ, if Δ′ is a successor of Δ, then Δ′ has fewer successors than Δ. We define R on W_Γ by w R w′ iff (w)₀ ≺ (w′)₀ and (w)₁ ⊆ (w′)₁. The required properties check out easily. Let u S_w v apply if (I) and (II) hold (writing * for concatenation):
(I) u, v ∈ w↑,
(II) either (w)₁ = (u)₁ ⊆ (v)₁, or (u)₁ = (w)₁ * ⟨C⟩ * τ and (v)₁ = (w)₁ * ⟨C⟩ * τ′ for some C, τ, τ′, and, if in the latter case (u)₀ is a C-critical successor of (w)₀, then so is (v)₀.

Let us check that under this definition the S_w will have the properties (i)-(iii) required by definition 13.3:
(i) That S_w is a relation on w↑ is instantaneous.
(ii) Reflexivity and transitivity of S_w are also easy to check.
(iii) If w′, w″ ∈ w↑ and w′ R w″, then (I) is immediate. For (II) it suffices to recall that successors of C-critical successors are C-critical (lemma 13.11).

Finally, we define w ⊩ p iff p ∈ (w)₀. We will now prove that, for each B ∈ Φ and w ∈ W_Γ, w ⊩ B iff B ∈ (w)₀, by induction on the length of B. Of course, the connectives are trivial, so it suffices to prove that
B ▷ C ∈ (w)₀ ⟺ ∀u (w R u ∧ B ∈ (u)₀ → ∃v (u S_w v ∧ C ∈ (v)₀)).

⟸: Suppose B ▷ C ∉ (w)₀. Then ¬(B ▷ C) ∈ (w)₀. We have to show that, for some u with w R u, B ∈ (u)₀ and ∀v (u S_w v → ¬C ∈ (v)₀). Let Δ with (w)₀ ≺_C Δ be as
given by lemma 13.12, and take u to be ⟨Δ, (w)₁ * ⟨C⟩⟩. It is clear that u fulfills the requirements.
⟹: Suppose B ▷ C ∈ (w)₀. Consider any u such that B ∈ (u)₀ and w R u, and first assume (u)₁ = (w)₁ * ⟨E⟩ * τ and (u)₀ is an E-critical successor of (w)₀. By lemma 13.13 we can find an E-critical successor Δ′ of (w)₀ with C ∈ Δ′. It is clear that v = ⟨Δ′, (w)₁ * ⟨E⟩⟩ is a member of W_Γ and fulfills all the requirements to make
u S_w v. If (u)₁ = (w)₁ * ⟨E⟩ * τ but (u)₀ is not an E-critical successor of (w)₀, then we find a successor Δ′ of (w)₀ with C ∈ Δ′ by using axiom (4) instead of lemma 13.13. Again it is clear that v = ⟨Δ′, (w)₁ * ⟨E⟩⟩ is a member of W_Γ and fulfills all the requirements to make u S_w v. The final case is that (u)₁ = (w)₁. In that case also we apply axiom (4) to obtain Δ′ with C ∈ Δ′ and take v = ⟨Δ′, (w)₁⟩. ⊣

13.15. Theorem.
(Completeness and decidability of ILM) If ⊬_ILM A, then there is a finite ILM-model K such that K ⊭ A.

The main problem in the proof of this theorem is the following. To apply the characteristic axiom (A ▷ B) → (A ∧ □C ▷ B ∧ □C) we seem to be forced to add the succedent of this formula to the adequate set whenever we have the antecedent. A straightforward definition of adequate set for the case of ILM would therefore lead adequate sets to be always infinite, which is of course unacceptable. After some searching we are led to the following definition.

13.16. Definition. An ILM-adequate set Φ is an adequate set that satisfies the additional condition:

if B ▷ C, □D ∈ Φ, then there is in Φ a formula B′ ▷ C′ such that B′ is ILM-equivalent to B ∧ □D and C′ to C ∧ □D.

Even though we require only equivalents to be present in Φ, it is of course no longer evident that each finite set of formulas is contained in a finite ILM-adequate set, since each newly constructed B ∧ □D gives rise to a new □-formula: B ∧ □D ▷ ⊥. But we will show that this is nevertheless true. To make it easier on ourselves we assume that in our formula A all antecedents and succedents of ▷-formulas have the form B ∧ □¬B, except for ⊥. In view of corollary 13.2(a) this is not an essential restriction. (The restriction is not really necessary; see Berarducci [1990].)
13.17. Lemma. Each formula A is contained in an ILM-adequate set Φ that contains only a finite number of equivalence classes.

Proof. Let Φ be the smallest IL-adequate set containing A. Let Ψ be the set of antecedents and succedents of ▷-formulas in Φ, including ⊥. We obtain Ψ* by closing Ψ off under the operation that forms D ∧ E from each formula D in the class and each formula E that either is a □-formula in Φ, or is of the form □¬F for some F in the class. The claim is that Ψ* contains only a finite number of equivalence classes. Given that claim we can construct a finite ILM-adequate set by joining to
the subformulas of a finite set of representatives of all equivalence classes in Ψ*, and finally adding all the interpretability formulas combining two members of this finite set of representatives. It remains to prove the claim. This will be done by induction on the cardinality of Ψ. If that cardinality is 1 (i.e., Ψ = {⊥}), the result is obvious. So, we can assume that the cardinality is larger than 1. We note that each element of Ψ* is of the form B ∧ □¬B ∧ □C₁ ∧ … ∧ □C_k, with B ∧ □¬B from Ψ. That □¬B is a member of this conjunction means that in the C_i's all occurrences of B ∧ □¬B can be replaced by ⊥. Also one will recognize that B ∧ □¬B will only be thrown in by the operation into the C_i in conjuncts of the form □¬(B ∧ □¬B ∧ …). Replacing those occurrences of B ∧ □¬B by ⊥ means that one can drop the whole conjunct and keep an equivalent formula. If one drops all those conjuncts containing B ∧ □¬B, then the resulting formula is of the form B ∧ □¬B ∧ □D₁ ∧ … ∧ □D_m with B ∧ □¬B not (relevantly) occurring in the D_i. This means that the D_i have been constructed from the □-formulas in Φ and the other elements of Ψ. Thus, by the induction hypothesis, there are only a finite number of such D_i (up to equivalence), and hence only a finite number of equivalence classes of elements of Ψ* that start with B ∧ □¬B. The same holds for each of the other elements of Ψ, so that the resulting set is finite. ⊣

Proof of theorem 13.15. Take some finite ILM-adequate set Φ containing A and some maximal consistent subset Γ of Φ containing ¬A. We define both W_Γ and R as in the previous proof. This time, however, we let u S_w v apply if (I) holds as well as (II′) and (III):

(II′) (u)₁ ⊆ (v)₁, and if (u)₁ = (w)₁ * ⟨C⟩ * τ and (v)₁ = (w)₁ * ⟨C⟩ * τ′ for some C, τ, τ′, and (u)₀ is a C-critical successor of (w)₀, then so is (v)₀.
(III) each □A ∈ (u)₀ is also a member of (v)₀.

That under this definition the S_w will have the properties (i)-(iii) is shown in almost the same manner as before; that the S_w have the property (iv) required by definition 13.5 is shown as follows. Suppose that ⟨Δ′, τ′⟩ S_w ⟨Δ″, τ″⟩ R ⟨Γ′, σ⟩. We must show ⟨Δ′, τ′⟩ R ⟨Γ′, σ⟩. That τ′ ⊆ σ is immediate. That Δ′ ≺ Γ′ follows from Δ″ ≺ Γ′ combined with the fact that, by (III), □-formulas are preserved from Δ′ to Δ″. Naturally, we again define w ⊩ p iff p ∈ (w)₀, and it will be sufficient to prove that, for each D ∈ Φ, w ⊩ D iff D ∈ (w)₀. The only interesting case is the one that D is B ▷ C, i.e., we have to show that
B ▷ C ∈ (w)₀ ⟺ ∀u (w R u ∧ B ∈ (u)₀ → ∃v (u S_w v ∧ C ∈ (v)₀)).
⟸: Basically as in the proof for IL.
⟹: Assume that B ▷ C ∈ (w)₀, and that u is such that w R u and B ∈ (u)₀. Let {□D₁, …, □D_n} be the set of □-formulas in (u)₀. By axiom M (see proposition 2.1(d)) and the adequacy of Φ, (w)₀ will contain a formula B′ ▷ C′ with B′ and C′ respectively ILM-equivalent to B ∧ □D₁ ∧ … ∧ □D_n and C ∧ □D₁ ∧ … ∧ □D_n.
Let us just treat the case that (u)₁ = (w)₁ * ⟨E⟩ * τ and (u)₀ is an E-critical successor of (w)₀. (The other cases are easy, given our experience with IL.) We can find, by lemma 13.13, with (w)₀, (u)₀ and B′ ▷ C′ as input, an E-critical successor Δ′ of (w)₀ with both C and □D ∈ Δ′ for each □D ∈ (u)₀. It suffices to take v = …

… (A ▷ B)* = Conserv(⌜A*⌝, ⌜B*⌝), where Conserv(⌜A*⌝, ⌜B*⌝) is an intensional formalization (see Chapter II of this Handbook) of "T + B* is Π₁-conservative over T + A*". If T = PA, then, in view of theorem 12.7, the interpretability and Π₁-conservativity relations over its finite extensions are the "same" in all reasonable senses, so we can take Conserv(⌜A*⌝, ⌜B*⌝) to be a formalization of "T + B* is interpretable in T + A*". Below we prove the completeness of ILM as the logic of Π₁-conservativity over T, and thus at the same time the completeness of ILM as the logic of interpretability over T = PA. The fact that ILM is the logic of interpretability over PA was proven more or less simultaneously and independently by Berarducci [1990] and Shavrukov [1988]. Later, Hájek and Montagna [1990,1992] proved that ILM is the logic of Π₁-conservativity over T = IΣ₁ and stronger theories.

14.2. Theorem.
⊢_ILM A iff for every realization *, T ⊢ A*.
Proof. The (⟹) part can be verified by induction on ILM proofs. Since the soundness of L is already known, we only need to verify that if D is an instance of one of the additional 6 axiom schemata of ILM, then, for any realization *, T ⊢ D*. All the arguments below are easily formalizable in T:
Axiom (1): □(A → B) → (A ▷ B). If T ⊢ A* → B*, then clearly T + B* is conservative over T + A*.
Axiom (2): (A ▷ B) ∧ (B ▷ C) → (A ▷ C). Evidently, the relation of conservativity is transitive.
Axiom (3): (A ▷ C) ∧ (B ▷ C) → (A ∨ B ▷ C). It is easy to see that if T + C* is (Π₁-)conservative over T + A* and over T + B*, then so is it over T + (A* ∨ B*).
Axiom (4): (A ▷ B) → (◇A → ◇B). Clearly, if T + B* is Π₁-conservative over T + A* and T + A* is consistent, then so is T + B*.
Axiom (5): ◇A ▷ A. Suppose λ is a Π₁-sentence provable in T + A*. We need to show, arguing in T + (◇A)*, that λ is true. Indeed, suppose T + A* is consistent. Then it cannot prove a false Π₁-sentence (by Σ₁-completeness), and hence λ must be true.
Axiom (M): (A ▷ B) → (A ∧ □C ▷ B ∧ □C). Suppose T + B* is Π₁-conservative over T + A* and λ is a Π₁-sentence provable in T + B* ∧ (□C)*. Then T + B* proves (□C)* → λ. But the latter is a Π₁-sentence and therefore it is also proved by T + A*. Hence, T + A* ∧ (□C)* ⊢ λ.

The following proof of the (⟸) part of the theorem is taken from Japaridze [1994b] and has considerable similarity to proofs given in Japaridze [1992,1993] and Zambella [1992]. Just as in Japaridze [1992,1993], the Solovay function is defined in terms of regular witnesses rather than provability in finite subtheories (as in Berarducci [1990], Shavrukov [1988], Zambella [1992]). Disregarding this difference, the function is almost the same as the one given in Zambella [1992], for both proofs, unlike the ones in Berarducci [1990] and Shavrukov [1988], employ finite ILM-models rather than infinite Visser-models. Suppose ⊬_ILM A. Then, by theorem 13.15, there is a finite ILM-model ⟨W, R, {S_w}_{w∈W}, ⊩⟩ in which A is not valid. We may assume that W = {1, …, l}, 1 is the root of the model in the sense that 1 R w for all w ∈ W with w ≠ 1, and 1 ⊮ A. We define a new frame ⟨W′, R′, {S′_w}_{w∈W′}⟩:
W′ = W ∪ {0},
R′ = R ∪ {⟨0, w⟩ | w ∈ W},
S′₀ = S₁ ∪ {⟨1, w⟩ | w ∈ W}, and for each w ∈ W, S′_w = S_w.

Observe that ⟨W′, R′, {S′_w}_{w∈W′}⟩ is a finite ILM-frame. Just as in section 3, we are going to embed this frame into T by means of a Solovay-style function g : ω → W′ and sentences Lim_w for w ∈ W′ which assert that w is the limit of g. This function will be defined in such a way that the following basic lemma holds:

14.3. Lemma.
(a) T proves that g has a limit in W′, i.e., T ⊢ ⋁{Lim_r | r ∈ W′},
(b) If w ≠ u, then T ⊢ ¬(Lim_w ∧ Lim_u),
(c) If w R′ u, then T + Lim_w proves that T ⊬ ¬Lim_u,
(d) If w ≠ 0 and not w R′ u, then T + Lim_w proves that T ⊢ ¬Lim_u,
(e) If u S′_w v, then T + Lim_w proves that T + Lim_v is Π₁-conservative over T + Lim_u,
(f) Suppose w R′ u and V is a subset of W′ such that for no v ∈ V, u S′_w v; then T + Lim_w proves that T + ⋁{Lim_v | v ∈ V} is not Π₁-conservative over T + Lim_u,
(g) Lim₀ is true,
(h) For each i ∈ W′, Lim_i is consistent with T.

To deduce the main thesis from this lemma, we define a realization * by setting, for each propositional letter p,

p* = ⋁{Lim_r | r ∈ W, r ⊩ p}.

14.4. Lemma.
For any w ∈ W and any ILM-formula B,
(a) if w ⊩ B, then T + Lim_w ⊢ B*;
(b) if w ⊮ B, then T + Lim_w ⊢ ¬B*.

Proof. By induction on the complexity of B. The cases when B is atomic or has the form □C are handled just as in the proof of lemma 3.3, so we consider only the case when B = C₁ ▷ C₂. Assume w ∈ W. Then we can always write w R x and x S_w y instead of w R′ x and x S′_w y. Let α_i = {r | w R r, r ⊩ C_i} (i = 1, 2). First we establish that, for both i = 1, 2,
(∗) T + Lim_w proves that T ⊢ C_i* ↔ ⋁{Lim_r | r ∈ α_i}.
Indeed, argue in T + Lim_w. Since each r ∈ α_i forces C_i, we have by the induction hypothesis for clause (a) that, for each such r, T ⊢ Lim_r → C_i*, whence T ⊢ ⋁{Lim_r | r ∈ α_i} → C_i*. Next, according to lemma 14.3(a), T ⊢ ⋁{Lim_r | r ∈ W′} and, according to lemma 14.3(d), T disproves every Lim_r with not w R r; consequently, T ⊢ ⋁{Lim_r | w R r}; at the same time, by the induction hypothesis for clause (b), C_i* implies in T the negation of each Lim_r with r ⊮ C_i. We conclude that T ⊢ C_i* → ⋁{Lim_r | w R r, r ⊩ C_i}, i.e., T ⊢ C_i* ↔ ⋁{Lim_r | r ∈ α_i}. Thus, (∗) is proved. Now continue:

(a) Suppose w ⊩ C₁ ▷ C₂. Argue in T + Lim_w. By (∗), to prove that T + C₂* is Π₁-conservative over T + C₁*, it is enough to show that T + ⋁{Lim_r | r ∈ α₂} is Π₁-conservative over T + ⋁{Lim_r | r ∈ α₁}. Consider an arbitrary u ∈ α₁ (the case with empty α₁ is trivial, for any theory is conservative over T + ⊥). Since w ⊩ C₁ ▷ C₂, there is v ∈ α₂ such that u S_w v. Then, by lemma 14.3(e), T + Lim_v is Π₁-conservative over T + Lim_u. Then so is T + ⋁{Lim_r | r ∈ α₂} (which is weaker
than T + Lim_v). Thus, for each u ∈ α₁, T + ⋁{Lim_r | r ∈ α₂} is Π₁-conservative over T + Lim_u. Clearly this implies that T + ⋁{Lim_r | r ∈ α₂} is Π₁-conservative over T + ⋁{Lim_r | r ∈ α₁}.

(b) Suppose w ⊮ C₁ ▷ C₂. Let us then fix an element u of α₁ such that u S_w v for no v ∈ α₂. Argue in T + Lim_w. By lemma 14.3(f), T + ⋁{Lim_r | r ∈ α₂} is not Π₁-conservative over T + Lim_u. Then, neither is it Π₁-conservative over T + ⋁{Lim_r | r ∈ α₁} (which is weaker than T + Lim_u). This means by (∗) that T + C₂* is not Π₁-conservative over T + C₁*. ⊣

Now we can pass to the desired conclusion: since 1 ⊮ A, lemma 14.4 gives T ⊢ Lim₁ → ¬A*, whence T ⊬ ¬Lim₁ implies T ⊬ A*. But we do have T ⊬ ¬Lim₁ according to lemma 14.3(h). This ends the proof of theorem 14.2. ⊣

Our remaining duty is to define the function g and to prove lemma 14.3. The recursion theorem enables us to define this function simultaneously with the sentences Lim_w (for each w ∈ W′), which, as we have mentioned already, assert that w is the limit of g, and the formulas A_wu(y) (for each pair ⟨w, u⟩ with w R′ u), which we define by
∃t > y (g(t) = u ∧ ∀z (y < z < t → g(z) = w)).
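Under this reading of A_wu(y), the condition is directly checkable on any finite initial segment of g; a small Python sketch (the trace and all names here are hypothetical illustrations, not part of the formal construction):

```python
def A(trace, w, u, y):
    """Check A_wu(y) over a finite initial segment of g, given as the list
    `trace` with trace[i] = g(i): there is some t > y with g(t) = u and
    g(z) = w for all z strictly between y and t."""
    for t in range(y + 1, len(trace)):
        if trace[t] == u and all(trace[z] == w for z in range(y + 1, t)):
            return True
    return False

# g starts at 0, sits at world 1, then moves to world 2:
trace = [0, 0, 1, 1, 2]
```

The point of the formula is Σ₁-definability: verifying it amounts to exhibiting one stage t and inspecting the finitely many stages below it, exactly what the loop above does.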
14.5. Definition. (function g) We define g(0) = 0. Assume that g(y) has already been defined for every y ≤ x, and let g(x) = w. Then g(x + 1) is defined as follows:
(1) Suppose w R′ u and …

(a): … there is a number z such that, for all z′ ≥ z, not g(z′) R′ g(z′ + 1). Indeed, suppose this is not the case. Then, by lemma 14.7, for all z, there is z′ with g(z) R′ g(z′). This means that there is an infinite (or "sufficiently long") chain w₁ R′ w₂ R′ …, which is impossible because W′ is finite and R′ is transitive and irreflexive. So, let us fix this number z. Then we never make an R′-move after the moment z. We claim that S′-moves can also take place at most a finite number of times (whence it follows that g has a limit and this limit is, of course, one of the elements of W′). Indeed, let x + 1 be an arbitrary moment after z at which we make an S′-move, and let m be the rank of this move. That is, for some Π₁-sentence λ with a ≤ x regular counterwitness, we have ⊢_m Lim_u → λ and w S_{g(m)} u, where w = g(x) and u = g(x + 1). Suppose we make the next S′-move, with rank m′, at some moment x′ + 1, x′ > x, from the world u to a world v, v ≠ u. Since S_{g(m)} is reflexive, conditions (a)-(c) of definition 14.5(2) hold for x′, u, u, λ, m in the roles of x, w, u, λ, m, respectively, and then, according to condition (d) of definition 14.5(2), the only reason for moving to v instead of u (instead of remaining at u, that is) could be that m > m′ (the case m = m′ is ruled out because Lim_u ≠ Lim_v). Similarly, the rank m″ of the following S′-move will be less than m′, etc. Thus, consecutive S′-moves without an R′-move between them have decreasing ranks. Therefore, S′-moves can take place at most m times after passing x.
(b): Clearly g cannot have two different limits w and u.
(c): Assume w is the limit of g and w R′ u. Let n be such that, for all x ≥ n, g(x) = w. We need to show that T ⊬ ¬Lim_u. Deny this.
Then T ⊢ Lim_u → ¬A_wu(n̄) and, since every provable formula has arbitrarily long proofs, there is x ≥ n such that ⊢_x Lim_u → ¬A_wu(x̄); but then, according to definition 14.5(1), we must have g(x + 1) = u, which, as u ≠ w (by irreflexivity of R′), is a contradiction.
(d): Assume w ≠ 0, w is the limit of g and not w R′ u. If u = w, then (since w ≠ 0) there is x such that g(x) = v ≠ u and g(x + 1) = u. This means that at the moment x + 1 we make either an R′-move or an S′-move. In the first case we have T ⊢ Lim_u → ¬A_vu(n̄) for some n for which, as it is easy to see, the Σ₁-sentence A_vu(n̄) is true, whence, by Σ₁-completeness, T ⊢ ¬Lim_u. And if an S′-move is taken, then again T ⊢ ¬Lim_u because T + Lim_u proves a false (with a ≤ x regular counterwitness) Π₁-sentence. Next, suppose u ≠ w. Let us fix a number z with g(z) = w. Since g is primitive recursive, T proves that g(z̄) = w. Now argue in T + Lim_u: since u is the limit of g and g(z) = w ≠ u, there is a number x with x ≥ z such that g(x) ≠ u and g(x + 1) = u. Since not (w =) g(z) R′ u, we have by lemma 14.7 that
(∗) for each y with z ≤ y …

… t′ such that g(t′ − 1) ≠ v and at the moment t′ we arrive at v to stay there for ever. Let then x₀ < … < x_k be all the moments in the interval [t, t′] at which R′- or S′-moves take place, and let u₀ = g(x₀), …, u_k = g(x_k). Thus t = x₀, t′ = x_k, u = u₀, v = u_k, and u₀, …, u_k is the route of g after departing from w (at the moment t). Now let j be the least number among 1, …, k such that for all j′ …

… where A ≫ B is interpreted as "there is a Σ₁-sentence φ such that PA ⊢ (A* → φ) ∧ (φ → B*)" (for comparison: the interpretation of A ≫ A is nothing but "there is a Σ₁-sentence φ such that PA ⊢ (A* → φ) ∧ (φ → A*)"). He constructed a logic ELH in this language, called "the logic of Σ₁-interpolability", and proved its arithmetic completeness. Although the author of the logic of Σ₁-interpolability did not suspect this, he actually had found the logic of weak interpretability over PA, because, as it is now easy to see in view of corollary 12.8, the formula ¬(A ≫ ¬B) expresses that PA + B* is weakly interpretable in PA + A*. We know that weak interpretability is a special (binary) case of linear tolerance, and the latter is a special (linear) case of tolerance of a tree of theories. Japaridze [1992] gave an axiomatization of the logic TOL of linear tolerance over PA, and Japaridze [1993] did the same for the logic TLR of the most general relation of tolerance for trees. All three logics ELH, TOL and TLR are decidable. Among them, TOL has the most elegant language, axiomatization and Kripke semantics, and although TOL is just a fragment of TLR, here we are going to have a look only at this intermediate logic. The language of TOL contains the single variable-arity modal operator ◇: for any n, if A₁, …, A_n are formulas, then so is ◇(A₁, …, A_n). This logic is defined as classical logic plus the rule ¬A/¬◇(A) plus the following axiom schemata:

1. ◇(C, A, D) → ◇(C, A ∧ B, D) ∨ ◇(C, A ∧ ¬B, D),
2. ◇(A) → ◇(A ∧ ¬◇(A)),
3. ◇(C, A, D) → ◇(C, D),
4. ◇(C, A, D) → ◇(C, A, A, D),
5. ◇(A, ◇(C)) → ◇(A ∧ ◇(C)),
6. …

(Here A stands for A₁, …, A_n for an arbitrary n ≥ 0, and ◇(⟨⟩) is identified with ⊤.)
15.4. Definition. A Visser-frame (see Berarducci [1990]) is a triple ⟨W, R, S⟩, where ⟨W, R⟩ is a Kripke-frame for L and S is a transitive, reflexive relation on W such that R ⊆ S and, for all w, u, v ∈ W, we have w S u R v ⟹ w R v. A TOL-model is a quadruple ⟨W, R, S, ⊩⟩ with ⟨W, R, S⟩ a Visser-frame combined with a forcing relation ⊩ with the clause: w ⊩ ◇(A₁, …, A_n) iff there are u₁, …, u_n with u₁ S … S u_n such that, for all i, w R u_i and u_i ⊩ A_i. Such a model is said to be finite if W is finite.

15.5. Theorem. (Japaridze [1992]) For any TOL-formula A, ⊢_TOL A iff A is valid in every TOL-model; the same is true if we consider only finite TOL-models.

15.6. Theorem. (Japaridze [1992]) Let T be a sound superarithmetic theory, and let, for * an arithmetic realization, (◇(A₁, …, A_n))* be interpreted as a natural formalization of "the sequence T + A₁*, …, T + A_n* is tolerant". Then, for any TOL-formula A, ⊢_TOL A iff for every realization *, T ⊢ A*.

With the arithmetic interpretation in mind, note that L is the fragment of TOL in which the arity of ◇ is restricted to 1. This is because consistency of A* with T, expressed in L by ◇A, means nothing but tolerance of the one-element sequence ⟨T + A*⟩ of theories, expressed in TOL by ◇(A). As for cotolerance, one can easily show, using theorems 12.7 and 12.13 ((i) ⟺ (iii)), that a sequence of superarithmetic theories is cotolerant iff the sequence where the order of these theories is reversed is tolerant. Moreover, it was shown in Japaridze [1993] that cotolerance, though not tolerance, for trees can also be expressed in terms of linear tolerance. In particular, a tree of superarithmetic theories is cotolerant iff one of its topological sortings is. Hence, given a tree Tr of modal formulas, cotolerance of the corresponding tree of theories can be expressed in TOL by ◇(A₁) ∨ … ∨ ◇(A_n), where A₁, …, A_n are all the reverse-order topological sortings of Tr.
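The forcing clause for ◇ in definition 15.4 is directly computable over a finite TOL-model. The following Python sketch evaluates it by brute force; the three-world model and the atoms are hypothetical examples of ours, not taken from the text:

```python
from itertools import product

def forces_diamond(w, formulas, R, S, forces_atom):
    """w ||- <>(A1,...,An): some chain u1 S u2 S ... S un of R-successors
    of w with ui ||- Ai for each i (the clause of definition 15.4).
    R and S are sets of ordered pairs; forces_atom decides atoms."""
    succs = [u for (x, u) in R if x == w]
    def chain_ok(chain):
        return (all(forces_atom(u, a) for u, a in zip(chain, formulas))
                and all((chain[i], chain[i + 1]) in S
                        for i in range(len(chain) - 1)))
    return any(chain_ok(c) for c in product(succs, repeat=len(formulas)))

# Worlds 1 and 2 above a root 0; S is reflexive with 1 S 2.
R = {(0, 1), (0, 2)}
S = {(1, 1), (2, 2), (1, 2)}
val = {1: {"p"}, 2: {"q"}}
atom = lambda u, a: a in val[u]
# 0 ||- <>(p, q) via the chain 1 S 2; dropping p (cf. axiom 3) keeps <>(q).
```

Since S must be an ordering-like relation, reordering the arguments matters: in this model ◇(q, p) fails at 0, because no chain runs from world 2 back to world 1.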
Thus TOL, being the logic of linear tolerance, can at the same time be viewed as the logic of (unrestricted) cotolerance over PA. Just like tolerance, the notion of Γ-consistency (see definition 12.4) can be generalized to finite trees, including sequences as special cases of trees: a tree Tr of theories is Γ-consistent iff there are consistent extensions of these theories, of which each one is Γ-conservative over its predecessors in the tree. Then the corollaries of theorems 12.7 and 12.13 generalize to the following:

15.7. Theorem. (Japaridze [1993], PA ⊢) For any finite tree Tr of superarithmetic theories,
(a) Tr is tolerant iff Tr is Π₁-consistent;
(b) Tr is cotolerant iff Tr is Σ₁-consistent.

Just as in the case of ILM, in the arithmetic completeness theorems for TOL and TLR, the requirement of superarithmeticity (essential reflexivity) of T can be
weakened to IΣ₁ ⊆ T if we view these logics as logics of Π₁-consistency rather than tolerance.

15.8. Truth interpretability logics

We want to finish our discussion of propositional interpretability logics by noting that the closure under modus ponens of the set of theorems of ILM, or any other of the logics mentioned in this section, supplemented with the axiom □A → A or its equivalent, yields the logic (in the case of ILM called ILMω) that describes all true principles expressible in the corresponding modal language, just as this was shown to be the case for L in section 3. The original sources usually contain proofs of both versions of the arithmetic completeness theorems for these logics. Strannegård [1997] considers infinite r.e. sets of modal formulas of interpretability logic. He generalizes his theorem 5.3 for the specific case of interpretability over PA to the following theorem.

15.9. Theorem. Let T be a well-specified r.e. set of formulas of interpretability logic. Then T is realistic iff it is consistent with ILMω.

As in the case of L (corollary 5.2), a stronger version of this theorem implies as a corollary a uniform version of the arithmetic completeness of ILM with regard to PA. For a further consequence, let us first note that the existence of Orey sentences in PA, i.e., arithmetic sentences A such that both PA + A and PA + ¬A are interpretable in PA (first obtained by Orey [1961]), follows immediately from the arithmetic completeness of ILM with regard to PA. In Strannegård's terminology this can be phrased as: Orey [1961] showed that the set {⊤ ▷ p, ⊤ ▷ ¬p} is realistic. Orey continued by asking what similar sets (such as {⊤ ▷ p, ⊤ ▷ q, ⊤ ▷ ¬(p ∧ q), ¬(⊤ ▷ ¬p), ¬(⊤ ▷ ¬q), ¬(⊤ ▷ p ∧ q)}) are realistic. Let an Orey set be a set of modal formulas of the form (¬)(B ▷ C), where B and C are Boolean formulas. Strannegård can then give the following answer to Orey's question.

15.10. Theorem. Let T be an r.e. Orey set. Then T is realistic iff it is consistent with ILMω.
16. Predicate provability logics

16.1. The predicate modal language and its arithmetic interpretation

The language of predicate provability logic is that of first-order logic (without identity or function symbols) together with the operator □. We assume that this language uses the same individual variables as the arithmetic language. Throughout this section, T denotes a sound theory in the language of arithmetic containing PA. We also assume that T satisfies the Löb derivability conditions.
As in the previous sections, we want to regard each modal formula A(P₁, …, P_n) as a schema of arithmetic formulas arising from A(P₁, …, P_n) by substitution of arithmetic predicates P₁*, …, P_n* for the predicate letters P₁, …, P_n and replacing □ by Pr_T( ). However, some caution is necessary when we try to make this approach precise. In particular, we must forbid P* to contain quantifiers that bind variables occurring in A.

16.2. Definition. A realization for a predicate modal formula A is a function * which assigns to each predicate symbol P of A an arithmetic formula P*(v₁, …, v_n), whose bound variables do not occur in A and whose free variables are just the first n variables of the alphabetical list of the variables of the arithmetic language, if n is the arity of P. For any realization * for A, we define A* by the following induction on the complexity of A:
• in the atomic cases, (P(x₁, …, x_n))* = P*(x₁, …, x_n),
• * commutes with quantifiers and Boolean connectives: (∀xB)* = ∀x(B*), (B → C)* = B* → C*, etc.,
• (□B)* = Pr_T[B*].

For an explanation of the notation "[ ]" see notation 12.2. Observe from this that A* always contains the same free variables as A. We say that an arithmetic formula φ is a realizational instance of a predicate modal formula A if φ = A* for some realization * for A. The main task is to investigate the set of predicate modal formulas which express valid principles of provability, i.e., all of whose realizational instances are provable, or true in the standard model.

16.3. The situation here is not as smooth as in the propositional case

Having been encouraged by the impressive theorems of Solovay on the decidability of propositional provability logic, one might expect that the valid principles captured by the predicate modal language are also axiomatizable (decidability is ruled out, of course). However, the situation here is not as smooth as in the propositional case.
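The inductive clauses of definition 16.2 translate into a short recursive procedure. The following Python sketch uses a hypothetical nested-tuple representation of formulas of our own devising (the node names `"P"`, `"box"`, `"PrT"` and so on are illustrative, not from the text):

```python
def realize(A, star):
    """Apply a realization *: rewrite atoms via `star`, commute with
    quantifiers and Boolean connectives, and send box-formulas to a
    PrT node, mirroring (Box B)* = PrT[B*]."""
    op = A[0]
    if op == "P":                                  # atomic P(x1,...,xn)
        return star(A)
    if op == "forall":                             # (forall x B)* = forall x (B*)
        return ("forall", A[1], realize(A[2], star))
    if op in ("and", "or", "imp"):                 # Boolean connectives
        return (op, realize(A[1], star), realize(A[2], star))
    if op == "not":
        return ("not", realize(A[1], star))
    if op == "box":                                # (Box B)* = PrT[B*]
        return ("PrT", realize(A[1], star))
    raise ValueError(f"unknown operator {op!r}")

# (Box forall x P(x))*, with * sending the atom P(x) to Pstar(x):
example = realize(("box", ("forall", "x", ("P", "x"))),
                  lambda a: ("Pstar",) + a[1:])
```

Because the clauses only ever rewrite atoms and wrap boxes, the sketch makes visible why A* has the same free variables as A, provided the substituted P* obeys the variable restriction of the definition.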
The first doubts about this were raised by Montagna [1984]. In fact, it turned out afterwards that we have very strong negative results, one of which is the following theorem on the nonarithmeticity of truth predicate logics of provability.

16.4. Theorem. (Artëmov [1985a]) Suppose T is recursively enumerable. Then (for any choice of the provability predicate Pr_T) the set Tr of predicate modal formulas, all of whose realizational instances are true, is not arithmetic.

It was later shown by Vardanyan [1986], and also by Boolos and McGee [1987], that Tr is in fact Π₁-complete in the truth set of arithmetic.
The Logic of Provability
Proof of theorem 16.4. We assume here that the arithmetic language contains one two-place predicate letter E and two three-place predicate letters A and M, and does not contain any other predicate, function or individual letters. Thus, this language is a fragment of our predicate modal language. In the standard model E(x, y), A(x, y, z) and M(x, y, z) are interpreted as the predicates x = y, x + y = z and x × y = z, respectively. One variant of a well-known theorem of Tennenbaum (see e.g., Chapter 29 of Boolos and Jeffrey [1989]) asserts the existence of an arithmetic sentence β such that:
(1) β is true ("true" here always means "true in the standard model"),
(2) any model of β, with domain ω, E interpreted as the identity relation, and A and M as recursive predicates, is isomorphic to the standard model.
We assume that β conjunctively contains the axioms of Robinson's arithmetic Q, including the identity axioms. Therefore, using standard factorization, we can pass from any model D of β with domain ω and such that E, A and M are interpreted as recursive predicates, to a model D′ which satisfies the conditions of (2) and which is elementarily equivalent to D. Thus, (2) can be changed to the following:
(2′) any model D of β, with domain ω and E, A and M interpreted as recursive predicates, is elementarily equivalent to the standard model (i.e., D ⊨ γ iff γ is true, for all sentences γ).
Let C be the formula
∀x, y (□E(x, y) ∨ □¬E(x, y)) ∧ ∀x, y, z (□A(x, y, z) ∨ □¬A(x, y, z)) ∧ ∀x, y, z (□M(x, y, z) ∨ □¬M(x, y, z)).

The following lemma yields the algorithmic reducibility of the set of all true arithmetic formulas (which, by Tarski's theorem, is nonarithmetic) to the set Tr, and this proves the theorem.

16.5. Lemma. For any arithmetic formula φ, φ is true if and only if every realizational instance of β ∧ C → φ is true.

Proof. (⇒): Suppose φ is true, * is a realization for β ∧ C → φ, and β* ∧ C* is true. We want to show that φ* is also true. It is not hard to see that, since T is consistent and recursively enumerable (this condition is essential!), the truth of C* means that the relations defined on ω in the standard model by the formulas E*, A* and M* are recursive. Let us define a model D with domain ω such that, for all k, m, n ∈ ω,
D ⊨ E(k, m) iff E*(k, m) is true,
D ⊨ A(k, m, n) iff A*(k, m, n) is true,
D ⊨ M(k, m, n) iff M*(k, m, n) is true.
G. Japaridze and D. de Jongh
Observe that for every arithmetic formula γ (for which the realization * is legal), we have D ⊨ γ iff γ* is true. In particular D ⊨ β, and thus D satisfies the conditions of (2′), i.e., D is elementarily equivalent to the standard model, whence (as φ is true) D ⊨ φ, whence φ* is true.
(⇐): Suppose φ is false. Let * be the trivial realization, i.e., the one with E*(x, y) = E(x, y), A*(x, y, z) = A(x, y, z), M*(x, y, z) = M(x, y, z). Then β* = β, φ* = φ, and therefore it suffices to show that β ∧ C* → φ is false, i.e., that β ∧ C* is true. But β is true by (1), and from the decidability in T of the relations x = y, x + y = z and x × y = z it follows that C* is also true. ⊣

Formalizing in arithmetic the idea employed in the above proof, Vardanyan [1986] also proved that if T is recursively enumerable, then the set of predicate modal formulas whose realizational instances are all provable in T (or in PA) is not recursively enumerable and is in fact Π2-complete.

There is one perhaps even more unpleasant result which should also be mentioned here. For recursively enumerable T, the answer to the question whether a predicate modal formula expresses a valid provability principle turns out to depend on the choice of the formula Pr_T, that is, on the concrete way of formalizing the predicate "x is the code of an axiom of T", even if the set of axioms is fixed (Artëmov [1986]). Note that the proofs of Solovay's theorems for propositional provability logic are insensitive in this respect; there the only requirement is that the three Löb conditions be satisfied.
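For the reader's convenience, the three Löb conditions referred to here are, in their standard formulation (the notation follows the chapter's Pr_T[·]):

```latex
% The three L\"ob derivability conditions on \mathrm{Pr}_T (standard formulation):
\begin{aligned}
\text{(i)}\;&  \text{if } T \vdash \varphi, \text{ then } T \vdash \mathrm{Pr}_T[\varphi],\\
\text{(ii)}\;& T \vdash \mathrm{Pr}_T[\varphi \to \psi] \to
               (\mathrm{Pr}_T[\varphi] \to \mathrm{Pr}_T[\psi]),\\
\text{(iii)}\;& T \vdash \mathrm{Pr}_T[\varphi] \to
               \mathrm{Pr}_T\bigl[\mathrm{Pr}_T[\varphi]\bigr].
\end{aligned}
```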
16.6. … but still not completely desperate
Against this gloomy background one can still succeed in obtaining positive results in two directions. Firstly, although the predicate logic of provability in full generality is not (recursively) axiomatizable, some natural fragments of it can be, and these may be stable with respect to the choice of the formula Pr_T. Secondly, all the above-mentioned negative facts exclusively concern recursively enumerable theories, and the proofs fail hopelessly as soon as this condition is removed. There are, however, many examples of interesting and natural theories which are not recursively enumerable (e.g., the theories induced by ω-provability or the other strong concepts of provability mentioned in section 8), and it might well be that the situation with their predicate provability logics is as nice as in the propositional case. The main positive result we are going to consider is the following: the "arithmetic part" of Solovay's theorems, according to which the existence of a Kripke countermodel (with a transitive and converse well-founded accessibility relation) implies arithmetic nonvalidity of the formula, can be extended to the predicate level. This gives us a method of establishing nonvalidity for a quite considerable class of predicate modal formulas.
16.7. Kripke models for the predicate modal language
A Kripke frame for the predicate modal language is a system M = ⟨W, R, {D_w}_{w∈W}⟩, where ⟨W, R⟩ is a Kripke frame in the sense of section 2 and the {D_w}_{w∈W} are nonempty sets ("domains of individuals") indexed by elements of W such that if w R u, then D_w ⊆ D_u. A Kripke model is a Kripke frame together with a forcing relation ⊩, which is now a relation between worlds w ∈ W and closed formulas with parameters in D_w; for the Boolean connectives and □, ⊩ behaves as described in section 2, and we have only the following additional condition for the universal quantifier:
• w ⊩ ∀xA(x) iff w ⊩ A(a) for all a ∈ D_w,
and a similar one for the existential quantifier. A formula A is said to be valid in a Kripke model ⟨W, R, {D_w}_{w∈W}, ⊩⟩ if A is forced at every world w ∈ W. Such a model is said to be finite if W as well as all the D_w are finite.

16.8. The predicate version of Solovay's theorems

For every predicate modal formula A, let REFL(A) denote the universal closure of ⋀{□B → B | □B ∈ Sb}, where Sb is the set of the subformulas of A.

16.9. Theorem. (Artëmov and Japaridze [1987,1990]) For any closed predicate modal formula A,
(a) if A is not valid in some finite Kripke model with a transitive and converse well-founded accessibility relation, then there exists a realization * for A such that T ⊬ A*,
(b) if REFL(A) → A is not valid in such a model, then there exists a realization * for A such that A* is false.

Proof. We prove here only clause (a), leaving (b) as an exercise for the reader. Some details in this proof are in fact redundant if we want to prove only (a), but they are of assistance in passing to a proof of (b). Assume that ⟨W, R, {D_w}_{w∈W}, ⊩⟩ is a model with the above-mentioned properties in which A is not valid. As before, without loss of generality we may suppose that W = {1, …, l}, 1 is the root and 1 ⊮ A. We suppose also that D_w ⊆ ω and 0 ∈ D_w for each w ∈ W. Let us define a model ⟨W′, R′, {D′_w}_{w∈W′}, ⊩′⟩ by setting
• W′ = W ∪ {0}, R′ = R ∪ {⟨0, u⟩ | u ∈ W},
• D′_0 = D_1 and, for all w ∈ W, D′_w = D_w,
• for any atomic formula P, 0 ⊩′ P iff 1 ⊩ P and, if w ∈ W, w ⊩′ P iff w ⊩ P.
We accept the definitions of the Solovay function h and the sentences Lim_w from section 3 without any changes; the only additional step is the following. For each a from D = ⋃{D_w | w ∈ W} we define an arithmetic formula γ_a(x) with only x free by setting
γ_a(x) = ⋁{ ∃t ≤ x (h(t) = h(x) = w ∧ ¬∃z < t (h(z) = w) ∧ x = t + a) | w ∈ W′ with a ∈ D′_w }.
Thus, using the jargon of section 14, γ_a(x) says that we have reached some world w such that a ∈ D_w, that at the moment x we are still at w, and that exactly a moments have passed since we moved to this world (we assume that the first "move", to the world 0, happened at the initial moment 0). We define the predicates γ′_a by
• for each 0 ≠ a ∈ D, γ′_a(x) = γ_a(x), and
• γ′_0(x) = γ_0(x) ∨ ¬⋁{γ_a(x) | a ∈ D∖{0}}.
(It is easy to check that the left disjunct of γ′_0(x) is redundant; it implies the right disjunct.) Since we employ the same Solovay function h as in section 3, lemma 3.2 continues to hold. In addition, we need the following lemma:

16.10. Lemma.
(i) T ⊢ ¬(γ′_a(x) ∧ γ′_b(x)) for all a ≠ b,
(ii) T ⊢ Lim_w → ⋀{∃x γ′_a(x) | a ∈ D_w} for all w ∈ W′,
(iii) T ⊢ Lim_w → ∀x (⋁{γ′_a(x) | a ∈ D_w}) for all w ∈ W′.
Proof. (i): The formulas γ_a(x) and γ_b(x) for a ≠ b are defined so that each disjunct of γ_a(x) is inconsistent with each disjunct of γ_b(x). And the right disjunct of γ′_0(x), by definition, is inconsistent with each γ_a(x), a ≠ 0.
(ii): Suppose a ∈ D_w and argue in T + Lim_w. Since w is the limit of h, there is a moment t at which we arrive at w and stay there for ever (more formally: we have ¬∃y < t (h(y) = w) and ∀y ≥ t (h(y) = w)). Then, by definition, γ_a(t + a) holds, whence γ′_a(t + a) holds, whence ∃x γ′_a(x). And so for each a ∈ D_w.
(iii): Argue in T + Lim_w. Consider an arbitrary number x. We must show that γ′_a(x) holds for some a ∈ D_w. The definition of h implies that either h(x) R′ w or h(x) = w; in both cases we then have D_{h(x)} ⊆ D_w. Let t be the least number such that h(t) = h(x), and let a = x − t. By definition, if a ∈ D_{h(x)} (and thus a ∈ D_w), then γ_a(x) holds, whence γ′_a(x) holds and we are done; and if a ∉ D_{h(x)}, then (the right disjunct of) γ′_0(x) holds and we are also done, because 0 ∈ D_w. ⊣

We now define a realization *. For each n-place predicate letter P, let P* be
⋁{ Lim_w ∧ γ′_{a_1}(v_1) ∧ … ∧ γ′_{a_n}(v_n) | w ∈ W′, a_1, …, a_n ∈ D_w, w ⊩′ P(a_1, …, a_n) }.
16.11. Lemma. Let B be a predicate modal formula with precisely x_1, …, x_n free. Then, for each w ∈ W and for all a_1, …, a_n ∈ D_w,
(a) if w ⊩′ B(a_1, …, a_n), then T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → B*;
(b) if w ⊮′ B(a_1, …, a_n), then T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬B*.

Proof. We proceed by induction on the complexity of B.

Suppose B(x_1, …, x_n) is atomic. If w ⊩′ B(a_1, …, a_n), then Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) is one of the disjuncts of B* and the desired result is obvious. If w ⊮′ B(a_1, …, a_n), then that formula is not a disjunct of B* and, according to lemmas 3.2(e) and 16.10(i), it implies in T the negations of all the disjuncts of B*.

Next suppose that B(x_1, …, x_n) is ∀y C(y, x_1, …, x_n). If w ⊩′ ∀y C(y, a_1, …, a_n), then w ⊩′ C(b, a_1, …, a_n) for all b ∈ D_w. Then, by the induction hypothesis, for all b ∈ D_w,
T ⊢ Lim_w ∧ γ′_b(y) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → (C(y, x_1, …, x_n))*.
Therefore,
T ⊢ Lim_w ∧ (⋁{γ′_b(y) | b ∈ D_w}) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → (C(y, x_1, …, x_n))*.
Note that there is no free occurrence of y in either Lim_w or γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n). Universal quantification over y gives
T ⊢ Lim_w ∧ ∀y(⋁{γ′_b(y) | b ∈ D_w}) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ∀y(C(y, x_1, …, x_n))*.
By lemma 16.10(iii), we can eliminate the conjunct ∀y(⋁{γ′_b(y) | b ∈ D_w}) in the antecedent of the above formula and conclude that
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ∀y(C(y, x_1, …, x_n))*.
If on the other hand w ⊮′ ∀y C(y, a_1, …, a_n), then there is b ∈ D_w such that w ⊮′ C(b, a_1, …, a_n). By the induction hypothesis,
T ⊢ Lim_w ∧ γ′_b(y) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬(C(y, x_1, …, x_n))*.
Again, neither Lim_w nor γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) contains y free, and existential quantification over y gives
T ⊢ Lim_w ∧ ∃y γ′_b(y) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ∃y ¬(C(y, x_1, …, x_n))*.
According to lemma 16.10(ii), T ⊢ Lim_w → ∃y γ′_b(y). Therefore,
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬(∀y C(y, x_1, …, x_n))*.

Finally, suppose that B is □C. If w ⊩′ □C(a_1, …, a_n), then for each u such that w R′ u, we have u ⊩′ C(a_1, …, a_n) and, by the induction hypothesis,
T ⊢ Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → (C(x_1, …, x_n))*.
Therefore,
T ⊢ (⋁{Lim_u | w R′ u}) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → (C(x_1, …, x_n))*,
and, by the first two Löb conditions,
T ⊢ Pr_T[(⋁{Lim_u | w R′ u}) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n)] → (□C(x_1, …, x_n))*.
Observe that the formulas γ_a(x) are primitive recursive and we have that T ⊢ γ_a(x) → Pr_T[γ_a(x)]; together with lemma 3.2(d) this means that
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → Pr_T[(⋁{Lim_u | w R′ u}) ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n)].
Thus, we get
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → (□C(x_1, …, x_n))*.
If w ⊮′ □C(a_1, …, a_n), then there is u such that w R′ u and u ⊮′ C(a_1, …, a_n). By the induction hypothesis,
T ⊢ Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬(C(x_1, …, x_n))*.
Therefore,
T ⊢ (C(x_1, …, x_n))* → ¬(Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n)),
T ⊢ Pr_T[(C(x_1, …, x_n))*] → Pr_T[¬(Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n))],
T ⊢ ¬Pr_T[¬(Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n))] → ¬(□C(x_1, …, x_n))*.
On the other hand, we have
T ⊢ ¬Pr_T[¬Lim_u] ∧ Pr_T[γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n)] → ¬Pr_T[¬(Lim_u ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n))]
(this is a realizational instance of the principle ◇p ∧ □q → ◇(p ∧ q), which is provable in L). According to lemma 3.2(c), and since T ⊢ γ′_a(x) → Pr_T[γ′_a(x)], we have
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬Pr_T[¬Lim_u] ∧ Pr_T[γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n)].
Therefore,
T ⊢ Lim_w ∧ γ′_{a_1}(x_1) ∧ … ∧ γ′_{a_n}(x_n) → ¬(□C(x_1, …, x_n))*.

To finish the proof of theorem 16.9: since A is closed and 1 ⊮′ A, we have by lemma 16.11 that T ⊢ Lim_1 → ¬A*. By lemma 3.2(f), Lim_1 is consistent with T, and consequently T ⊬ A*. ⊣
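Clause (a) of theorem 16.9 reduces establishing T ⊬ A* to a purely combinatorial search for a finite countermodel. As a toy illustration, here is a sketch (the encoding and names are our own, not from the chapter) of the forcing relation of 16.7 for finite models, used to check that the Barcan-style formula ∀x□P(x) → □∀xP(x) fails in a two-world model with expanding domains:

```python
# Forcing in a finite predicate Kripke model with expanding domains
# (our own minimal encoding of 16.7; formulas are nested tuples).

def forces(model, w, fml, env=None):
    """Decide whether world w forces fml in the finite model
    (W, R, D, atoms): worlds, accessibility pairs, per-world domains,
    and per-world sets of true atoms of the form (pred, args)."""
    W, R, D, atoms = model
    env = env or {}
    op = fml[0]
    if op == 'atom':                      # P(t1, ..., tn)
        _, pred, args = fml
        vals = tuple(env.get(a, a) for a in args)  # resolve variables
        return (pred, vals) in atoms[w]
    if op == 'imp':                       # B -> C
        return (not forces(model, w, fml[1], env)) or forces(model, w, fml[2], env)
    if op == 'box':                       # []B: forced at all R-successors
        return all(forces(model, u, fml[1], env) for u in W if (w, u) in R)
    if op == 'all':                       # forall x B: x ranges over D[w]
        _, var, body = fml
        return all(forces(model, w, body, {**env, var: a}) for a in D[w])
    raise ValueError(f'unknown connective {op}')

# Two worlds with 1 R 2 (transitive, converse well-founded since finite
# and irreflexive); domains expand from {0} to {0, 1}; P holds of 0
# everywhere but fails of the new individual 1 at world 2.
model = ([1, 2], {(1, 2)}, {1: {0}, 2: {0, 1}},
         {1: {('P', (0,))}, 2: {('P', (0,))}})

P = lambda t: ('atom', 'P', (t,))
barcan = ('imp', ('all', 'x', ('box', P('x'))), ('box', ('all', 'x', P('x'))))
converse = ('imp', ('box', ('all', 'x', P('x'))), ('all', 'x', ('box', P('x'))))

print(forces(model, 1, barcan))    # False: a finite countermodel as in 16.9(a)
print(forces(model, 1, converse))  # True in this model
```

The failure of `barcan` at the root is exactly the kind of finite countermodel that, by clause (a), yields a realization * with T ⊬ (∀x□P(x) → □∀xP(x))*.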
16.12. Further positive results

One of the applications of theorem 16.9 is the following. Consider the fragment of our predicate modal language which arises by restricting the set of variables to one single variable x. In this case, without loss of generality, we may assume that every predicate letter is one-place. Since the variable x is fixed, it is convenient to omit it in the expressions ∀x, P(x), Q(x), … and simply write ∀, p, q, … . In fact, we then have a bimodal propositional language with the modal operators □ and ∀. The modal logic Lq, introduced by Esakia [1988], is axiomatized by the following schemata:
1. all propositional tautologies in the bimodal language,
2. the axioms of L for □,
3. the axioms of S5 for ∀, i.e.,
• ∀(A → B) → (∀A → ∀B),
• ∀A → A,
• ∃A → ∀∃A (∃ abbreviates ¬∀¬),
4. □∀A → ∀□A,
together with the rules modus ponens, A/□A and A/∀A. For this logic (the language of which is understood as a fragment of the predicate modal language) we have the following modal completeness theorem:

16.13. Theorem. (Japaridze [1988a,1990a]) For any Lq-formula A, ⊢_Lq A iff A is valid in all finite predicate Kripke models with a transitive and converse well-founded accessibility relation.

In view of the evident arithmetic soundness of Lq, this modal completeness theorem together with the above predicate version of Solovay's first theorem implies the arithmetic completeness of Lq:
16.14. Corollary. For any Lq-formula A, ⊢_Lq A iff every realizational instance of A is provable in T.
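As an illustration (ours) of how the one-variable reading works: restoring the suppressed variable, axiom 4 of Lq is the converse Barcan-style principle, which is forced at every world of every predicate Kripke model with expanding domains.

```latex
% Axiom 4 of Lq with the suppressed variable x written back in:
\Box \forall A \to \forall \Box A
\quad\text{reads as}\quad
\Box \forall x\, A(x) \;\to\; \forall x\, \Box A(x).
% Validity sketch: if w \Vdash \Box\forall x\,A(x) and a \in D_w, then for
% every u with w R u we have a \in D_u (domains expand), so u \Vdash A(a);
% hence w \Vdash \Box A(a).
```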
Japaridze [1988a,1990a] also introduced the bimodal version Sq of S and proved that ⊢_Sq A iff every realizational instance of A is true. The axioms of Sq are all theorems of Lq plus □A → A, and the rules of inference are modus ponens and A/∀A.
Taking into account that we deal with a predicate language, the requirement of finiteness of the models in theorem 16.9 is, however, a very undesirable restriction. In Japaridze [1990a] a stronger variant of this theorem was given, with the condition of finiteness replaced by a weaker one. What we need instead of finiteness is roughly the following:
(1) The relations w ∈ W, w R u, a ∈ D_w must be binumerable in T (see definition 12.1), and this fact must be provable in T.
(2) The relation ⊩ must be numerable in T and T must prove that fact. To be more precise, ⊩ need not be defined for all worlds and all formulas, but only for those which are needed to falsify the formula A in the root of the model (i.e., in some cases we may have neither w ⊩ B nor w ⊩ ¬B); T should just prove that ⊩ behaves "properly", e.g., w ⊩ B ⟹ w ⊮ ¬B; w ⊩ B ∨ C ⟹ (w ⊩ B or w ⊩ C); w ⊩ ¬(B ∨ C) ⟹ (w ⊩ ¬B and w ⊩ ¬C); etc.
(3) T also must "prove" that the relation R is transitive and converse well-founded. Of course, well-foundedness is not expressible in the first-order language, and T should somehow simulate a proof of this property of R. This is the case if, e.g., T proves the scheme of R-induction for the elements of W, i.e.,
T ⊢ ∀w ∈ W (∀u ∈ W (w R u → A(u)) → A(w)) → ∀w ∈ W A(w).
We want to end this section by mentioning one last positive result. Let QL be the logic which arises by adding to L (written in the predicate modal language) the axioms and rules of the classical predicate calculus. Similarly, let QS be the closure of S with respect to classical predicate logic.

16.15. Theorem. (Japaridze [1990a,1991]) Suppose T is strong enough to prove all true Π1-sentences, and A is a closed predicate modal formula which satisfies one of the following conditions:
(i) no occurrence of a quantifier is in the scope of any occurrence of □ in A, or
(ii) no occurrence of □ is in the scope of any other occurrence of □ in A, or
(iii) A has the form □B.
Then we have:
(a) ⊢_QL A iff all realizational instances of A are provable in T,
(b) ⊢_QS A iff all realizational instances of A are true.
(Of course, clause (b) is trivial in case (iii).)

The proof for the (ii)- and (iii)-fragments in Japaridze [1990a] is based on the above-mentioned strong variant of the predicate version of Solovay's theorems. Both Vardanyan's and Artëmov's theorems on nonenumerability and nonarithmeticity hold for the (i)- and (ii)-fragments as well, but this does not contradict theorem 16.15. The point is that the use of Tennenbaum's theorem in the proofs of these negative results is possible only on the assumption of the recursive enumerability of T, whereas no consistent theory which proves all the true Π1-sentences can be recursively enumerable. Thus there are no immediate objections against the optimistic conjecture that QL and QS are complete for such strong theories without any restriction on the language.
17. Acknowledgements
In the first place we are very grateful to Lev Beklemishev for providing us with a draft for sections 6, 7 and 8 in a near perfect state. He also gave extensive comments on other sections. Sergei Artëmov supported us with section 10 and answered some questions for us. Albert Visser was very helpful with comments and discussions, answering questions, and pointing out mistakes. Giovanni Sambin provided valuable comments. Claes Strannegård shared his expertise with us. Joost Joosten, Rosalie Iemhoff and Eva Hoogland found quite a number of inaccuracies in the manuscript. Anne Troelstra stood by us with advice. Sam Buss was a very helpful editor and careful reader. The first author was supported by N.W.O., the Dutch Foundation for Scientific Research, while working on this chapter in 1992–1993, and by the National Science Foundation (grant CCR-9403447) while working on its final version in 1997.
References

S. N. ARTËMOV
[1980] Arithmetically complete modal theories, Semiotika i Informatika, VINITI, Moscow, 14, pp. 115–133. In Russian; English translation in: Amer. Math. Soc. Transl. (2), 135: 39–54, 1987.
[1985a] Nonarithmeticity of truth predicate logics of provability, Doklady Akademii Nauk SSSR, 284, pp. 270–271. In Russian; English translation in Soviet Math. Dokl. 32 (1985), pp. 403–405.
[1985b] On modal logics axiomatizing provability, Izvestiya Akad. Nauk SSSR, ser. mat., 49, pp. 1123–1154. In Russian; English translation in: Math. USSR Izvestiya 27(3).
[1986] Numerically correct logics of provability, Doklady Akademii Nauk SSSR, 290, pp. 1289–1292. In Russian.
[1994] Logic of proofs, Annals of Pure and Applied Logic, 67, pp. 29–59.
[1995] Operational Modal Logic, Tech. Rep. MSI 95-29, Cornell University.

S. N. ARTËMOV AND G. K. JAPARIDZE (DZHAPARIDZE)
[1987] On effective predicate logics of provability, Doklady Akademii Nauk SSSR, 297, pp. 521–523. In Russian; English translation in Soviet Math. Dokl. 36 (1987), pp. 478–480.
[1990] Finite Kripke models and predicate logics of provability, Journal of Symbolic Logic, 55, pp. 1090–1098.

A. AVRON
[1984] On modal systems having arithmetical interpretations, Journal of Symbolic Logic, 49, pp. 935–942.

L. D. BEKLEMISHEV
[1989a] On the classification of propositional provability logics, Izvestiya Akademii Nauk SSSR, ser. mat., 53, pp. 915–943. In Russian; English translation in Math. USSR Izvestiya 35 (1990), pp. 247–275.
[1989b] A provability logic without Craig's interpolation property, Matematicheskie Zametki, 45, pp. 12–22. In Russian; English translation in Math. Notes 45 (1989).
[1991] Provability logics for natural Turing progressions of arithmetical theories, Studia Logica, pp. 107–128.
[1992] Independent numerations of theories and recursive progressions, Sibirskii Matematicheskii Zhurnal, 33, pp. 22–46. In Russian; English translation in Siberian Math. Journal, 33 (1992).
[1993a] On the complexity of arithmetic applications of modal formulae, Archive for Mathematical Logic, 32, pp. 229–238.
[1993b] Review of de Jongh and Montagna [1988,1989], Carbone and Montagna [1989,1990], Journal of Symbolic Logic, 58, pp. 715–717.
[1994] On bimodal logics of provability, Annals of Pure and Applied Logic, 68, pp. 115–160.
[1996a] Bimodal logics for extensions of arithmetical theories, Journal of Symbolic Logic, 61, pp. 91–124.
[1996b] Remarks on Magari algebras of PA and IΔ0 + EXP, in: Logic and Algebra, A. Ursini and P. Aglianò, eds., Marcel Dekker, Inc., New York, pp. 317–326.

A. BERARDUCCI
[1990] The interpretability logic of Peano arithmetic, Journal of Symbolic Logic, 55, pp. 1059–1089.

A. BERARDUCCI AND R. VERBRUGGE
[1993] On the provability logic of bounded arithmetic, Annals of Pure and Applied Logic, 61, pp. 75–93.

C. BERNARDI
[1976] The uniqueness of the fixed-point in every diagonalizable algebra, Studia Logica, 35, pp. 335–343.

G. BOOLOS
[1979] The Unprovability of Consistency, Cambridge University Press.
[1981] Provability, truth and modal logic, Journal of Philosophical Logic, 9, pp. 1–7.
[1982] Extremely undecidable sentences, Journal of Symbolic Logic, 47, pp. 191–196.
[1993a] The analytical completeness of Dzhaparidze's polymodal logics, Annals of Pure and Applied Logic, 61, pp. 95–111.
[1993b] The Logic of Provability, Cambridge University Press.

G. BOOLOS AND R. C. JEFFREY
[1989] Computability and Logic, 3rd ed., Cambridge University Press.

G. BOOLOS AND V. MCGEE
[1987] The degree of the set of sentences of predicate provability logic that are true under every interpretation, Journal of Symbolic Logic, 52, pp. 165–171.

G. BOOLOS AND G. SAMBIN
[1991] Provability: the emergence of a mathematical modality, Studia Logica, 50, pp. 1–23.

A. CARBONE AND F. MONTAGNA
[1989] Rosser orderings in bimodal logics, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 35, pp. 343–358.
[1990] Much shorter proofs: a bimodal investigation, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 36, pp. 47–66.

T. CARLSON
[1986] Modal logics with several operators and provability interpretations, Israel Journal of Mathematics, 54, pp. 14–24.

B. F. CHELLAS
[1980] Modal Logic: An Introduction, Cambridge University Press.

P. CLOTE AND J. KRAJÍČEK
[1993] eds., Arithmetic, Proof Theory and Computational Complexity, Oxford University Press.

D. VAN DALEN
[1994] Logic and Structure, Springer Verlag, Berlin, Amsterdam.

L. L. ESAKIA
[1988] Provability logic with quantifier modalities, in: Intensional Logics and the Logical Structure of Theories: Material from the fourth Soviet–Finnish Symposium on Logic, Telavi, May 20–24, 1985, Metsniereba, Tbilisi, pp. 4–9. In Russian.
S. FEFERMAN
[1960] Arithmetization of metamathematics in a general setting, Fundamenta Mathematicae, 49, pp. 35–92.
[1962] Transfinite recursive progressions of axiomatic theories, Journal of Symbolic Logic, 27, pp. 259–316.

S. FEFERMAN, G. KREISEL, AND S. OREY
[1962] 1-consistency and faithful interpretations, Archiv für Mathematische Logik und Grundlagenforschung, 6, pp. 52–63.

Z. GLEIT AND W. GOLDFARB
[1990] Characters and fixed points in provability logic, Notre Dame Journal of Formal Logic, 31, pp. 26–36.

K. GÖDEL
[1933] Eine Interpretation des intuitionistischen Aussagenkalküls, Ergebnisse Math. Colloq., Bd. 4, pp. 39–40.

D. GUASPARI
[1979] Partially conservative extensions of arithmetic, Transactions of the American Mathematical Society, 254, pp. 47–68.
[1983] Sentences implying their own provability, Journal of Symbolic Logic, 48, pp. 777–789.

D. GUASPARI AND R. M. SOLOVAY
[1979] Rosser sentences, Annals of Mathematical Logic, 16, pp. 81–99.

P. HÁJEK
[1971] On interpretability in set theories I, Comm. Math. Univ. Carolinae, 12, pp. 73–79.
[1972] On interpretability in set theories II, Comm. Math. Univ. Carolinae, 13, pp. 445–455.

P. HÁJEK AND F. MONTAGNA
[1990] The logic of Π1-conservativity, Archiv für Mathematische Logik und Grundlagenforschung, 30, pp. 113–123.
[1992] The logic of Π1-conservativity continued, Archiv für Mathematische Logik und Grundlagenforschung, 32, pp. 57–63.

P. HÁJEK, F. MONTAGNA, AND P. PUDLÁK
[1993] Abbreviating proofs using metamathematical rules, in: Clote and Krajíček [1993], pp. 387–428.
D. HAREL
[1984] Dynamic logic, in: Handbook of Philosophical Logic, Volume II, Extensions of Classical Logic, D. Gabbay and F. Guenthner, eds., Kluwer Academic Publishers, Dordrecht, Boston, pp. 497–604.

D. HILBERT AND P. BERNAYS
[1939] Grundlagen der Mathematik II, Springer, Berlin.

G. E. HUGHES AND M. J. CRESSWELL
[1984] A Companion to Modal Logic, Methuen, London, New York.

K. N. IGNATIEV
[1990] The logic of Σ1-interpolability over Peano arithmetic. Manuscript. In Russian.
[1993a] On strong provability predicates and the associated modal logics, Journal of Symbolic Logic, 58, pp. 249–290.
[1993b] The provability logic of Σ1-interpolability, Annals of Pure and Applied Logic, 64, pp. 1–25.

G. K. JAPARIDZE (DZHAPARIDZE)
[1986] The Modal Logical Means of Investigation of Provability, PhD thesis, Moscow State University. In Russian.
[1988a] The arithmetical completeness of the logic of provability with quantifier modalities, Bull. Acad. Sci. Georgian SSR, 132, pp. 265–268. In Russian.
[1988b] The polymodal logic of provability, in: Intensional Logics and the Logical Structure of Theories: Material from the fourth Soviet–Finnish Symposium on Logic, Telavi, May 20–24, 1985, Metsniereba, Tbilisi, pp. 16–48. In Russian.
[1990a] Decidable and enumerable predicate logics of provability, Studia Logica, 49, pp. 7–21.
[1990b] Provability logic with modalities for arithmetical complexities, Bull. Acad. Sci. Georgian SSR, 138, pp. 481–484.
[1991] Predicate provability logic with non-modalized quantifiers, Studia Logica, 50, pp. 149–160.
[1992] The logic of linear tolerance, Studia Logica, 51, pp. 249–277.
[1993] A generalized notion of weak interpretability and the corresponding logic, Annals of Pure and Applied Logic, 61, pp. 113–160.
[1994a] The logic of arithmetical hierarchy, Annals of Pure and Applied Logic, 66, pp. 89–112.
[1994b] A simple proof of arithmetical completeness for Π1-conservativity logic, Notre Dame Journal of Formal Logic, 35, pp. 346–354.

D. H. J. DE JONGH
[1987] A simplification of a completeness proof of Guaspari and Solovay, Studia Logica, 46, pp. 187–192.

D. H. J. DE JONGH, M. JUMELET, AND F. MONTAGNA
[1991] On the proof of Solovay's theorem, Studia Logica, 50, pp. 51–70.

D. H. J. DE JONGH AND F. MONTAGNA
[1988] Provable fixed points, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 34, pp. 229–250.
[1989] Much shorter proofs, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 35, pp. 247–260.
[1991] Rosser-orderings and free variables, Studia Logica, 50, pp. 71–80.

D. H. J. DE JONGH AND D. PIANIGIANI
[1998] Solution of a problem of David Guaspari, Studia Logica, 57. To appear.

D. H. J. DE JONGH AND F. VELTMAN
[1990] Provability logics for relative interpretability, in: Petkov [1990], pp. 31–42.

D. H. J. DE JONGH AND A. VISSER
[1991] Explicit fixed points in interpretability logic, Studia Logica, 50, pp. 39–50.

G. KREISEL AND A. LÉVY
[1968] Reflection principles and their use for establishing the complexity of axiomatic systems, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 14, pp. 97–142.

P. LINDSTRÖM
[1984] On faithful interpretability, in: Computation and Proof Theory, M. M. Richter, E. Börger, W. Oberschelp, B. Schinzel, and W. Thomas, eds., Lecture Notes in Mathematics #1104, Springer Verlag, Berlin, pp. 279–288.
[1994] The Modal Logic of Parikh Provability, Tech. Rep. Filosofiska Meddelanden, Gröna serien, No. 5, University of Göteborg.

J. C. C. MCKINSEY AND A. TARSKI
[1948] Some theorems about the calculi of Lewis and Heyting, Journal of Symbolic Logic, 13, pp. 1–15.

F. MONTAGNA
[1979] On the diagonalizable algebra of Peano arithmetic, Bollettino della Unione Matematica Italiana, 5, 16-B, pp. 795–812.
[1984] The predicate modal logic of provability, Notre Dame Journal of Formal Logic, 25, pp. 179–189.
[1987] Provability in finite subtheories of PA, Journal of Symbolic Logic, 52, pp. 494–511.
[1992] Polynomially and superexponentially shorter proofs in fragments of arithmetic, Journal of Symbolic Logic, 57, pp. 844–863.

S. OREY
[1961] Relative interpretations, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 7, pp. 146–153.

R. PARIKH
[1971] Existence and feasibility, Journal of Symbolic Logic, 36, pp. 494–508.

P. P. PETKOV
[1990] ed., Mathematical Logic, Proceedings of the Heyting 1988 Summer School, New York, Plenum Press.

M. DE RIJKE
[1992] Unary interpretability logic, Notre Dame Journal of Formal Logic, 33, pp. 249–272.

G. SAMBIN
[1976] An effective fixed-point theorem in intuitionistic diagonalizable algebras, Studia Logica, 35, pp. 345–361.

G. SAMBIN AND S. VALENTINI
[1982] The modal logic of provability. The sequential approach, Journal of Philosophical Logic, 11, pp. 311–342.
[1983] The modal logic of provability: cut elimination, Journal of Philosophical Logic, 12, pp. 471–476.

D. S. SCOTT
[1962] Algebras of sets binumerable in complete extensions of arithmetic, in: Recursive Function Theory, American Mathematical Society, Providence, R.I., pp. 117–121.

V. Y. SHAVRUKOV
[1988] The Logic of Relative Interpretability over Peano Arithmetic, Tech. Rep. No. 5, Steklov Mathematical Institute, Moscow. In Russian.
[1991] On Rosser's provability predicate, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 37, pp. 317–330.
[1993a] A note on the diagonalizable algebras of PA and ZF, Annals of Pure and Applied Logic, 61, pp. 161–173.
[1993b] Subalgebras of diagonalizable algebras of theories containing arithmetic, Dissertationes Mathematicae (Rozprawy Matematyczne), 323. Instytut Matematyczny, Polska Akademia Nauk, Warsaw.
[1994] A smart child of Peano's, Notre Dame Journal of Formal Logic, 35, pp. 161–185.
[1997] Undecidability in diagonalizable algebras, Journal of Symbolic Logic, 62, pp. 79–116.

C. SMORYŃSKI
[1977] The incompleteness theorems, in: Handbook of Mathematical Logic, J. Barwise, ed., North-Holland, Amsterdam, pp. 821–865.
[1978] Beth's theorem and self-referential statements, in: Computation and Proof Theory, A. Macintyre, L. Pacholski, and J. B. Paris, eds., North-Holland, Amsterdam, pp. 17–36.
[1985] Self-Reference and Modal Logic, Springer-Verlag, Berlin.

R. M. SOLOVAY
[1976] Provability interpretations of modal logic, Israel Journal of Mathematics, 25, pp. 287–304.

C. STRANNEGÅRD
[1997] Arithmetical Realizations of Modal Formulas, PhD thesis, University of Göteborg, Acta Philosophica Gothoburgensia 5.

A. TARSKI, A. MOSTOWSKI, AND R. M. ROBINSON
[1953] Undecidable Theories, North-Holland, Amsterdam.
546
G. Japaridze and D. de Jongh
A. S. TROELSTRA AND H. SCHWICHTENBERG [1996] Basic Proof Theory, Cambridge University Press. A. TURING [1939] System of logics based on ordinals, Proceedings of the London Mathematical Society, Ser. 2, 45, pp. 161228. V. A. VARDANYAN [1986] Arithmetic complexity of predicate logics of provability and their fragments, Doklady Akademii Nauk SSSR, 288, pp. 1114. In Russian, English translation in Soviet Math. Dokl. 33 (1986), pp. 569572. R. VERBRUGGE [1993a] EJ~cient Metamathematics, PhD thesis, Universiteit van Amsterdam, ILLCdisseration series 19933. [1993b] Feasible interpretability, in: Clote and Kraj(Sek [1993], pp. 197221. A. VISSER [1980] Numerations, )~calculus and arithmetic, in: To H.B. Curry: Essays on Combinatory logic, lambda calculus and formalism, J. P. Seldin and J. R. Hindley, eds., Academic Press, Inc., London, pp. 259284. [1981] Aspects of Diagonalization and Provability, PhD thesis, University of Utrecht, Utrecht, The Netherlands. [1984] The provability logics of recursively enumerable theories extending Peano arithmetic at arbitrary theories extending Peano arithmetic, Journal of Philosophical Logic, 13, pp. 97113. [1985] Evaluation, Provably Deductive Equivalence in Heyting's arithmetic of Substitution Instances of Propositional Formulas, Tech. Rep. LGPS 4, Department of Philosophy, Utrecht University. [1989] Peano's smart children, a provability logical study of systems with builtin consistency, Notre Dame Journal of Formal Logic, 30, pp. 161196. [1990] Interpretability logic, in: Petkov [1990], pp. 175209. [1991] The formalization of interpretability, Studia Logica, 50, pp. 81106. [1994] Propositional Combinations of ~Sentences in Heyting's Arithmetic, Tech. Rep. LGPS 117, Department of Philosophy, Utrecht University. To appear in the Annals of Pure and Applied Logic. [1995] A course in bimodal provability logic, Annals of Pure and Applied Logic, 73, pp. 109142. [1997] An overview of interpretability logic, in: Advances in Modal Logic '96, M. 
Kracht, M. de Rijke, and H. Wansing, eds., CSLI Publications, Stanford. A. VISSER, J. VAN BENTHEM, D. H. J. DE JONGH, AND G. R. RENARDEL DE LAVALETTE [1995] NILL, a study in intuitionistic propositional logic, in: Modal Logic and Process Algebra, a Bisimulation Perspective, A. Ponse, M. de Rijke, and Y. Venema, eds., CSLI Lecture Notes #53, CSLI Publications, Stanford, pp. 289326. F. VOORBRAAK [1988] A simplification of the completeness proofs for Guaspari and Solovay's R, Notre Dame Journal of Formal Logic, 31, pp. 4463. D. ZAMBELLA [1992] On the proofs of arithmetical completeness of interpretability logic, Notre Dame Journal of Formal Logic, 35, pp. 542551. [1994] Shavrukov's theorem on the subalgebras of diagonalizable algebras for theories containing IA0 + EXP, Notre Dame Journal of Formal Logic, 35, pp. 147157.
CHAPTER VIII
The Lengths of Proofs

Pavel Pudlák
Mathematical Institute, Academy of Sciences of the Czech Republic, 115 67 Prague 1, The Czech Republic
Contents

1. Introduction ... 548
2. Types of proofs and measures of complexity ... 549
3. Some short formulas and short proofs ... 555
4. More on the structure of proofs ... 564
5. Bounds on cut-elimination and Herbrand's theorem ... 573
6. Finite consistency statements - concrete bounds ... 577
7. Speedup theorems in first order logic ... 585
8. Propositional proof systems ... 590
9. Lower bounds on propositional proofs ... 605
10. Bounded arithmetic and propositional logic ... 619
11. Bibliographical remarks for further reading ... 627
References ... 629
HANDBOOK OF PROOF THEORY, Edited by S. R. Buss. © 1998 Elsevier Science B.V. All rights reserved.
1. Introduction

In this chapter we shall consider the problem of determining the minimal complexity of a proof of a theorem in a given proof system. We shall deal with propositional logic and first order logic. There are several measures of the complexity of a proof and there are many different proof systems. Before we discuss particular instances of the problem, let us give some reasons for this research.

1.1. Our subject could be called the quantitative study of proofs. In contrast with classical proof theory, we want to know not only whether a theorem has a proof but also whether the proof is feasible, i.e., can actually be written down or checked by a computer. An ideal justification for such research would be a proof that a particular theorem for which we have only long proofs (such as the four color theorem), or a conjecture for which we do not have any proof (such as P ≠ NP), does not have a short proof in some reasonable theory (such as ZF). Presently this seems to be a very distant goal; we are only able to prove lower bounds on the lengths of proofs for artificial statements, or for natural statements but in very weak proof systems. The situation here is similar to the situation in the study of (weak) fragments of arithmetic and in complexity theory. In fragments of arithmetic we can prove unprovability of Π₁ sentences only for sentences obtained by diagonalization, and in complexity theory we can separate complexity classes also only when diagonalization is possible. These three areas are closely connected, and it is not possible to advance very much in one of them without making progress in the others.

Nevertheless there are already some practical consequences of this research. For instance, in first order logic we know quite precisely how much cut-elimination increases the size of proofs. In propositional logic we have simple tautologies which have only exponentially long resolution proofs.
This is very important information for designers of automated theorem provers. Another reason for studying the lengths of proofs is that information about the size of proofs is very important in the study of weak fragments of arithmetic, namely when the metamathematics of fragments is considered. For instance, in bounded arithmetic the exponentiation function is not provably total. Therefore the cut-elimination theorem is not provable in bounded arithmetic (in fact, first order cut-elimination requires a more than elementary increase in the size of proofs). Consequently we have (at least) two different concepts of consistency in bounded arithmetic: the usual one and cut-free consistency. Furthermore, there is a relation between provability in bounded arithmetic and the lengths of proofs in certain proof systems for propositional logic. This seems to be the most promising way of proving concrete independence results for bounded arithmetic.

Finally, this area is important because of its tight relation to complexity theory. Actually, research into the lengths of proofs should be considered as a part of complexity theory. There are two kinds of connections with computational complexity.
On the one hand there are explicit connections, such as the fact that a proof system for propositional logic is a nondeterministic algorithm for the (coNP-complete) set of tautologies. On the other hand there are intuitive connections which are not supported by theorems. For example, the relation between Frege systems and extension Frege systems (see below for definitions) for propositional logic is very much like the relation between the complexity measures of boolean functions based on formula size and circuit size, respectively. It is an open problem whether Frege systems are as powerful as extension Frege systems, and it is also an open problem whether formulas are as powerful as circuits; but we are not able to prove either of the two implications between these apparently related problems. Some people think that the difficult problems in complexity theory, such as P = NP?, are essentially logical (not combinatorial) problems. If so, then proof theory, and in particular the study of the lengths of proofs, should play an important role in their solution.

1.2. Now we shall briefly outline the contents of this chapter. Section 2 introduces some basic concepts. In section 3 we describe a technique for constructing short formulas for inductively defined concepts. This technique has various applications. Section 4 contains results about the dependence between different measures of complexity of proofs and a remark on the popular Kreisel Conjecture. In section 5 we shall consider the cut-elimination theorem from the point of view of the lengths of proofs; namely, we shall show a lower bound on the increase of the length. In section 6 we shall prove a version of the second incompleteness theorem for finite consistencies. This enables us to prove some concrete lower bounds and speedups. In section 7 we survey speedup theorems, namely results about the shortening of proofs when a stronger theory is used instead of a weaker one, and related results.
Section 8 is a survey of the most important propositional proof systems. In section 9 we give a nontrivial example of a lower bound on the lengths of propositional proofs in the resolution system. In section 10 we present important relations between the lengths of proofs in propositional logic and provability in fragments of arithmetic. The final section 11 surveys especially those results which have not been treated in the main text.

2. Types of proofs and measures of complexity
In this section we introduce notation and some basic concepts used in both propositional logic and first order logic.

2.1. One can consider many different formalizations, and it is difficult to find a classification schema which would cover them all. There is, however, one basic property which all formalizations of the concept of a proof must satisfy: it must be computable in polynomial time whether a given sequence is a proof of a given formula. Here we assume, as usual, that proofs and formulas are encoded as strings in a finite alphabet, and we identify feasible computations with polynomial time computations. This trivial observation gives us an important link to computational complexity. Proof systems in such a general setting are just nondeterministic decision procedures
for the set of tautologies or the set of theorems of a theory in question. More specifically, an upper bound on the size of proofs for a particular proof system gives a nondeterministic decision procedure with the bound on the running time and, conversely, a lower bound on the nondeterministic time complexity is a lower bound for any proof system. In particular, let TAUT be the set of propositional tautologies in some fixed complete basis of connectives. A propositional proof system is a binary relation P(x, y) which is computable in polynomial time and such that

φ ∈ TAUT  ⟺  ∃y P(φ, y).
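To make the definition concrete, here is a deliberately naive propositional proof system sketched in Python (our illustration, not from the chapter): the "truth-table system", in which a proof of φ is the full list of assignments to its variables. The relation P(φ, y) is checkable in time polynomial in |φ| + |y|, exactly as the definition requires, even though the proofs themselves are exponentially long.

```python
from itertools import product

# Formulas as nested tuples:
# ("var", name) | ("not", f) | ("and", f, g) | ("or", f, g) | ("imp", f, g)

def ev(f, a):
    """Evaluate formula f under the assignment a (a dict name -> bool)."""
    op = f[0]
    if op == "var": return a[f[1]]
    if op == "not": return not ev(f[1], a)
    if op == "and": return ev(f[1], a) and ev(f[2], a)
    if op == "or":  return ev(f[1], a) or ev(f[2], a)
    if op == "imp": return (not ev(f[1], a)) or ev(f[2], a)
    raise ValueError(op)

def vars_of(f):
    if f[0] == "var": return {f[1]}
    return set().union(*(vars_of(g) for g in f[1:]))

def P(phi, y):
    """Truth-table 'proof system': y must list every assignment to the
    variables of phi, and phi must be True under each of them.
    Checking is polynomial in |phi| + |y|."""
    vs = sorted(vars_of(phi))
    expected = [dict(zip(vs, bits)) for bits in product([False, True], repeat=len(vs))]
    return y == expected and all(ev(phi, a) for a in y)

def prove(phi):
    """Proof search: produces the (exponential-size) truth-table proof."""
    vs = sorted(vars_of(phi))
    return [dict(zip(vs, bits)) for bits in product([False, True], repeat=len(vs))]
```

A formula has a proof in this system exactly when it is a tautology, so the system is sound and complete, but of course it says nothing about the existence of short proofs.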
Since the set of propositional tautologies is coNP-complete, we get the following immediate corollary.

2.1.1. Theorem. (Cook and Reckhow [1979]) There exists a proof system for propositional logic in which all tautologies have proofs of polynomial length if and only if NP = coNP. □

This general concept of a proof system can be generalized even further. Firstly, we can allow randomized computations; secondly, we can assume that the proof is not given to us, but that we can access parts of the proof via an oracle. Usually such an interactive proof system is presented as a two player game, where we are the Verifier and the oracle is the Prover. It turns out that the Verifier can check with high probability that a proof for a given formula exists without learning almost anything about the proof. The most striking example is the so-called PCP theorem of Arora et al. [1992]. Roughly speaking, they showed that there are interactive proof systems where the Verifier needs to check only a constant number of randomly selected bits of the sequence in order to verify with high probability that the proof is correct. Note, however, that these results concern only the question of how proofs can be checked; they do not give new information about the lengths of proofs.

2.2. We turn now to more structured proofs, which are typical for logic, while the above concepts belong rather to complexity theory. Such proof systems are usually defined using a finite list of deduction rules. The basic element of a proof, called a proof step or a proof line, is a formula, a set of formulas, a sequence of formulas, or a sequent (a pair of sequences of formulas). A proof is either a sequence or a tree of proof steps such that each step is an axiom or follows from previous ones by a deduction rule. The complete information about the intended way of proving a given theorem should also contain, for each step, the information of which rule is applied and to which previous steps it is applied.
However, in most cases this does not influence the complexity of the proofs essentially. It is important to realize that once proof lines and deduction rules are determined, there are two possible forms of proofs: the tree form and the sequence form. In the tree form, a proof line may be a premise of an application of a rule only once, while in the sequence form it can be used again and again. The trivial transformation from the sequence form to the tree form results in an exponential increase of size.

The most important measure of the complexity of proofs is the size of a proof. We take a finite alphabet and a natural encoding of proofs as sequences (words) in the finite alphabet. Then the size of a proof is the length of its code. The next measure is the number of proof lines. Trivially, the number of proof lines is at most the size; however, a proof may contain very large formulas, so there is an essential difference between the two measures. Quite often it is important to bound the maximal complexity of formulas in the proof. Usually we consider the quantifier complexity or the number of logical symbols. Thus we get further measures.

Comparing the above measures with the complexity measures of computational complexity, we see that the size clearly corresponds to time. At first glance it may seem that the maximal size of a formula (or proof line) should correspond to space, but this is not correct. In order to present a proof in a lecture, or to check it on a computer, we cannot show just a single formula (proof line) at a time; we have to keep the formulas (lemmas) on the blackboard until they are used for the last time as premises. The minimal size of a blackboard on which the proof can be presented is the right concept corresponding to space. Note that a suitable choice of the concept of a proof line and rules leads to linear proofs, where each rule has at most one premise (Craig [1957a]). In such proofs the maximal size of a proof line is the measure corresponding to space.

In first order logic we also consider proofs in a theory T. This means that we can use axioms of T in proofs. Talking about theories is not quite precise here; different axiomatizations clearly give different concepts of proofs, and hence the smallest size proofs of a given formula may be different.
Therefore we shall preferably use the term axiomatization.

2.3. We shall use the following notation. The size of a formula φ, resp. a proof d, will be denoted by |φ|, resp. |d|. Let A be a proof system, or a proof system plus an axiomatization of a theory. Then we write d : A ⊢ φ, if d is a proof of φ in A; A ⊢ φ, if φ is provable in A; and A ⊢≤n φ, if φ has a proof of size ≤ n.

(13) have polynomial size proofs. Let us prove the implication. Assume the antecedent and dpt_{n+1}(x). We distinguish the cases: x is atomic, x is a negation, x is an implication, etc. If x is atomic, then, by the definition of Φ and (10), both φ_{n+2}(x, y) and φ_{n+1}(x, y) are equivalent to φ₀(x, y). If x is the negation of x', then we have (by the definition of Φ)

φ_{n+2}(x, y) ≡ ¬φ_{n+1}(x', y)

and

φ_{n+1}(x, y) ≡ ¬φ_n(x', y).

By our assumption we have

φ_{n+1}(x', y) ≡ φ_n(x', y),

since we have also dpt_n(x'). Thus φ_{n+2}(x, y) ≡ φ_{n+1}(x, y). The other cases are proved in the same way. In order to see that the resulting proof has polynomial size, observe that it does not use the structure of the formulas φ_n, φ_{n+1}, φ_{n+2}. Thus these proofs can be constructed from a finite proof schema by substituting the formulas φ_n, φ_{n+1}, φ_{n+2}. The resulting proof has size linear in the size of φ_n, φ_{n+1}, φ_{n+2}. This finishes the proof of (11).

Now we can easily show that (9) also have polynomial size proofs, provided α is of depth ≤ n (and of polynomial size, i.e., does not use variables with long codes). This is done by induction on the depth of α; we leave out the details. The following theorem summarizes what we have proved above.
3.3.1. Theorem. (Pudlák [1986]) Let T be a sequential theory. There exists a sequence of formulas φ_n(x, y) (of polynomial size) such that there are polynomial size proofs in T of Tarski's conditions for φ_n(x, y), where x is of depth ≤ n, and polynomial size proofs of

φ_n(⌜α⌝, x) ≡ α((x)₀)

for α of depth ≤ n. □
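The recursive structure behind such stratified truth definitions can be mirrored by a small executable sketch (ours; the function and constructor names are illustrative): a truth predicate for formulas of depth ≤ n whose clause at depth n + 1 appeals only to the depth-n predicate, just as φ_{n+1} is built from φ_n.

```python
# Formulas over a toy propositional language, as nested tuples.
# phi(n, x, y): truth of formula x (of depth <= n) under assignment y,
# defined by recursion on n exactly as the Tarski conditions dictate:
# the clause for depth n+1 refers only to the depth-n predicate.

def depth(x):
    return 0 if x[0] == "atom" else 1 + max(depth(s) for s in x[1:])

def phi(n, x, y):
    if x[0] == "atom":
        return y[x[1]]                      # base case, playing the role of phi_0
    assert depth(x) <= n
    if x[0] == "not":
        return not phi(n - 1, x[1], y)      # Tarski condition for negation
    if x[0] == "imp":
        return (not phi(n - 1, x[1], y)) or phi(n - 1, x[2], y)
    raise ValueError(x[0])
```

The point of the theorem, of course, is not evaluation but that each condition relating φ_{n+1} to φ_n has a short *proof* in T; the sketch only shows the shape of the recursion.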
3.4. Now we consider another application of Theorem 3.2.3. Let T be a fragment of arithmetic, and let S be a function symbol for the successor. Now T can be much weaker; we shall specify the conditions that we need later. Let φ(x) be a formula. We say that φ(x) is a cut in T if T proves

φ(0),   (14)
∀x(φ(x) → φ(S(x))),   (15)
∀x, y(x < y ∧ φ(y) → φ(x)).   (16)
If φ(x) satisfies only (14) and (15), then φ(x) is called inductive. Let φ(x) be inductive and assume that T proves x + 0 = x, x + S(y) = S(x + y) and the associative law for +. Define ψ(x) by

ψ(x) ≡_df ∀z(φ(z) → φ(z + x)).   (17)

Then one can easily show that ψ(x) is also inductive in T and

T ⊢ ψ(x) ∧ ψ(y) → ψ(x + y);   (18)
(this construction is due to Solovay, unpublished). If (18) is satisfied, we say that ψ is closed under addition. Assuming a little bit more about T, and that φ(x) is a cut, we get that ψ(x) is also a cut. Suppose that T contains exponentiation 2^x along with the axioms

2^0 = S(0),   2^{S(x)} = 2^x + 2^x.   (19)

Then we can continue by first taking

φ₁(x) ≡_df ψ(2^x).   (20)

We get that φ₁(x) is inductive (resp., is a cut) and

T ⊢ ∀x(φ₁(x) → φ(2^x)),   (21)

since T ⊢ ψ(x) → φ(x). Then we can repeat the construction, since φ₁(x) is inductive, and we obtain some φ₂(x) with

T ⊢ ∀x(φ₂(x) → φ(2^{2^x})),
etc. Observe that the above construction is schematic: we could have assumed that φ is just a second order variable and derived (21) from (14)-(16). More precisely, this means the following: let R(x) be a unary predicate, and let Ind(R(x)) (resp. Cut(R(x))) denote the conjunction of (14) and (15) (and (16), resp.). Then there exists a formula Ψ(R, x) and a finite fragment of arithmetic T₀ such that

T₀ ⊢ Ind(R(x)) → Ind(Ψ(R, x)) ∧ ∀y(Ψ(R, y) → R(2^y)).

Applying Theorem 3.2.3 to Ψ(R, x) in T₀, we obtain the following theorem.
2.
=
(22)
we assume that these properties are provable in T. Let r be defined by
_
O, there exists j such that u occurs in t i e in depth < d(tj).
4.2.3. C l a i m .
P. Pudldk
568
To prove the Claim, suppose it is false. Then u can occur only in the part of the tie which belongs to a. Thus we can obtain a smaller unifier by replacing all occurrences of u by a variable. [] terms
We shall show that B 1 , . . . , Br have disjoint occurrences. For each Bi take the term wi corresponding to the first vertex of Bi and take an occurrence of wi in the depth _< d(tj) in some tiE. Then the occurrences of Di's in these occurrences of wi's must be disjoint. Thus we have
S > IBII+...§ >
d+l+2(d+l)+...+
=
(d+l).~.L 1d+ DlJ.
Ld +Dl J (d§ ( [ d + 1] + 1)
I[DJ.D > ~ d+l >

D2
D
2(d+ 1)
2
= ~
D2
2(d+ 1)
9(1  o(1)).
4.2.4. Suppose that A = (qol,..., qon) is a proof of qo, i.e., ~n = ~. The skeleton of A is a sequence of the same length where each ~i is replaced by an axiom schemas or a rule used in A at this step; moreover, if a rule was used, then there is also information about the proof lines to which the rule was applied. E.g. a formula obtained by modus ponens from formulas qoj and q0k will be replaced by (MP, j, k). We shall show that for a given formula ~p and a skeleton E there exists in a sense a most general proof. This proof will be constructed from a most general unifier for a unification problem obtained from E. In defining the unification problem assigned to the proof A we shall follow Baaz and Pudl~k [1993], the idea goes back to Parikh [1973]. Replace all atomic formulas in A by a single constant c; let A' = (qo~,..., qo~) be the resulting sequence. The language for the terms in the unification problem will consists of the constant c, distinct variables va for every subformula ~ of A ~ and a function symbol for each connective and quantifier, i.e., f_~, f~, f3 etc. We shall write the pairs of the unification problem as equations: (1) For each propositional axiom schema used in the proof we add an equation which represents it; e.g., if qo~ is c~ + (~ ~ c~), then we add equation
v~: = f_,(~,, f_,(v~,,,)); (2) if qoi is derived from qoj and qok via modus ponens, where qok is qoj + qoi, we add
v~, = y_,(v~,~,
v~);
The Lengths of Proofs
569
(3) if ~i is an instance of a quantifier axiom, say ~pi is (I)(t) + 3x~(x), then we add the equation v~  f~(va, f3(v~)); here c~ is the formula obtained from (I)(t) by substituting c for atomic formulas, this is the same formula which we thus obtain from (I)(x); (4) in the same way we add equations for quantifier rules: e.g., suppose ~j is c~ ~/~, ~ is 3xc~ ~/~ and ~i is derived from ~j by the quantifier rule (6.3), then we add equations v~; = f~(v~, vB),
v~ = f~(fa(v~), v~); (5) finally we add V~[ = T,
where T is obtained from ~'~ by replacing connectives and quantifiers by the corresponding function symbols f_,, f~, f3, .... Now we are ready to prove the result. Let dp(~) denote the depth of ~ a formula, where we consider ~ as a term but we treat atomic formulas as atoms. Let dp(A), for a proof A, denote the maximal depth of a formula in A. 4.2.5. T h e o r e m . (Krajf~ek [1989a], PudlAk [1987]) Let A be a proof of ~ and suppose A has smallest possible size. Then
o
1/)
Proof. Consider a proof Δ of φ of minimum size. Let U be the unification problem assigned to Δ. Clearly Δ determines a unifier σ for U in the natural way. Let ε be a most general unifier of U. We shall construct a proof Γ = (ψ₁, ..., ψ_n) from ε. Let η be the substitution such that σ = εη. Choose a small formula ξ which does not contain any variable occurring in Δ, e.g., 0 = 0 if it is in the language. Consider the i-th formula in the proof Δ, i.e., φ_i, and the terms v_{φ'_i}σ and v_{φ'_i}ε. We have v_{φ'_i}εη = v_{φ'_i}σ; this means that v_{φ'_i}ε is v_{φ'_i}σ with some subformulas replaced by variables. Thus we define ψ_i to be φ_i with the subformulas corresponding to variables in v_{φ'_i}ε replaced by ξ.
3x(P(x) ~ (Q(x) + R(y))) ~ R(y), Then vvla is
c)), By case (4) of the definition of H, vv~E has form
s),
P. Pudldk
570
for some terms t and s. Because ε is most general, s is either c or a variable, and t is either as in v_{φ'_i}σ, or f_→(c, v_β), or a variable. Let us suppose that v_{φ'_i}ε is

f_→(f_∃(f_→(c, v_β)), c).

Then ψ_i is

∃x(P(x) → ξ) → R(y).

Furthermore ψ_j must be

(P(x) → ξ) → R(y).

We see that the structure of formulas needed in axiom schemas and rules is preserved. Note that the restrictions on variables in quantifier rules are also satisfied, since ξ does not contain any variable which should be bound. Finally we also have ψ_n = φ_n = φ.

4.2.7. Now we can apply Lemma 4.2.2. The terms in U have constant depth (where the constant is determined by our choice of the proof system) except for the last equation, where we have a term whose depth is equal to dp(φ); thus the maximal depth is O(√(dp(φ)·S)), where S = Σ_i |v_{φ'_i}ε|. Clearly also S = O(Σ_i |ψ_i|). Furthermore |Γ| = O(|Δ|), since we have only replaced some subformulas of Δ by a constant size formula ξ in Γ. This finishes the proof of Theorem 4.2.5. □

4.2.8. Remarks. (1) Clearly the theorem holds for a variety of other systems. In particular it holds for every Frege system (see section 8 for the definition). (2) In the proof we have actually constructed "a most general" proof Γ with the same skeleton as Δ. To make this more precise, we should allow propositional variables in our first order formulas and then keep the variables v_α in Γ and treat them as propositional variables.

4.3. Now we consider the relation of the number of steps to the size and depth of a proof. A relation to the depth is easy to obtain, since the depth does not include information about terms. For instance we can bound the depth of a most general unifier as follows (see Krajíček and Pudlák [1988]).

4.3.1. Lemma. Let ε be a most general unifier of a unification problem {(t₁, t₂), ..., (t_{2n−1}, t_{2n})}. Then
max_i d(t_iε) ≤ 2ⁿ.

G(ψ, k), then ∀xψ(Sⁿ(x)) is provable. In the theorem we use provability in pure logic; note that this implies that the theorem is true also for any finitely axiomatized theory T, as we can incorporate its finitely many axioms into ψ. We need only a very weak assumption about T in order to deduce Kreisel's Conjecture.
4.4.2. Corollary. Let T be a finite fragment of arithmetic such that

T ⊢ ∀x(x = 0 ∨ x = S(0) ∨ ... ∨ x = S^{n−1}(0) ∨ ∃y(x = Sⁿ(y)))

for every n. Then Kreisel's Conjecture holds for T.

Proof-hint. By the assumption on T we have

T ⊢ ψ(0) ∧ ψ(S(0)) ∧ ... ∧ ψ(S^{n−1}(0)) ∧ ∀xψ(Sⁿ(x)) → ∀xψ(x),

for every formula ψ(x). □
Proof-idea of Theorem 4.4.1. Let ψ(x) and k, n be given such that ψ(Sⁿ(0)) has a proof with k steps. We shall see how large n must be. First we transform the proof into a cut-free proof in the Gentzen system. By Corollary 5.2.2 below, the number of steps in a cut-free proof can be bounded by a constant K which depends only on k and |ψ(x)|. Then we apply the technique of unification. This time, however, we also consider the terms in the proof. This is done in two stages. First we consider all proof-skeletons of length K (there are finitely many). For each of them we find a most general proof (with respect to the propositional and quantifier structure) as in the proof of Theorem 4.2.5. Then for each of these proofs we find the most general terms which can be used in them. This can also be done using the theorem about a most general unifier. However, now we treat the term Sⁿ(0) in the sentence ψ(Sⁿ(0)) as unknown, which means that it is represented by a variable in the unification problem. If variables for terms remain in the terms of the most general solution, we replace them by first order variables. Thus we obtain a proof whose size is bounded by a primitive recursive function in K and ψ(x), hence also in k and ψ(x). Let us denote the bound by L. (This was essentially the idea of the proof of Theorem 4.3.3, except for the treatment of the term Sⁿ(0).)
Let us have a look at what happens with ψ(Sⁿ(0)) in the most general proof. This formula is replaced by ψ(t) for some term t which has two properties:

(1) |t| ≤ L;
(2) tσ = Sⁿ(0) for some substitution σ.

Thus t is either Sᵐ(y) for m

nᶜ, c > 0. (2) We need the following lemma.

6.4.2. Lemma. The equations

S(n̄) = (S(n))‾,    m̄ + n̄ = (m + n)‾,    m̄ · n̄ = (m · n)‾

have proofs of size polynomial in log n and log m.
The proof is easy if we assume ring operations; otherwise we have to work in a suitable cut. □

One consequence is that the same holds for arbitrary arithmetical terms. This can be used to show that, for a bounded formula β(x) in the language of Q, there exists a polynomial p₁ such that

‖β(n̄)‖_T ≤ p₁(n),   (41)

whenever β(n) is true. The second consequence is that there is a polynomial p₂ such that

‖n̄ᵏ = (nᵏ)‾‖_T ≤ p₂(k, log n).   (42)

(Hint: prove n·n = n², n·n² = n³, ..., n·n^{k−1} = nᵏ.) We continue with the proof of (2). Assume that β(n) is true for every n ∈ ℕ. Then we have (41) for all n. Clearly
‖Con_T((nᵏ)‾)‖_T ≤ O(‖β(n̄)‖_T + ‖∀x(β(x) → Con_T(xᵏ))‖_T + ‖n̄ᵏ = (nᵏ)‾‖_T),
thus for some polynomial p₃,

‖Con_T(n̄ᵏ)‖_T ≤ p₃(n, k, ‖∀x(β(x) → Con_T(xᵏ))‖_T).   (43)

By Theorem 6.2.3 there exists an ε > 0 such that for every r,

‖Con_T(r̄)‖_T > r^ε.   (44)
Let d be the degree of n in p₃(n, k, m). Take k so that kε > d. Now suppose (2) fails for k; thus

T ⊢ ∀x(β(x) → Con_T(xᵏ)).

Let m be the length of this proof. Take n so large that

p₃(n, k, m) < n^{kε}.

Then, by (43), ‖Con_T(n̄ᵏ)‖_T < n^{kε}, contradicting (44). □

Proof. To prove (1) we need, by Lemma 7.3.1, only a short proof of I(2_n) in GB. The bound ‖I(2_n)‖_GB = n^{O(1)} follows from Theorem 3.4.1. The second part is contained in Theorem 7.2.2. □
A more precise computation gives a bound ‖Con_ZF(2_n)‖_GB = O(n²), which implies a lower bound on the speedup of GB over ZF of the form 2^{Ω(√n)}. An upper bound 2^{O(√n)} complementing the above result was proved by Solovay [1990]. Let us observe that such bounds can be used to show that the estimate on the depth of formulas in a proof, Theorem 4.2.5, is asymptotically optimal.

8. Propositional proof systems
In this section we consider some concrete propositional proof systems. There are several reasons for studying these systems. Firstly, they are natural systems which are used to formalize the concept of a proof; in fact, they are good approximations of human reasoning. Some systems, especially resolution, are also used in automated theorem proving. Therefore it is important to know how efficient they are. Secondly, they are suitable benchmarks for testing our lower bound techniques. Presently we are able to prove superpolynomial lower bounds only for the weakest systems; we shall give an example of such a lower bound in section 9. Thirdly, there are important connections between provability in some important theories of bounded arithmetic and the lengths of proofs in these propositional proof systems. This will be the topic of section 10.

8.1. Frege systems and their extensions. Frege systems are the most natural calculi for propositional logic, and they are also used to axiomatize the propositional part of first order logic in Hilbert style formalizations. We have used a particular special case of a Frege system for presenting results on the lengths of proofs in first order logic.
8.1.1.
To define a general Frege system, we need the concept of a Frege rule. A Frege rule is a pair

({φ_1(p_1, …, p_n), …, φ_k(p_1, …, p_n)}, φ(p_1, …, p_n))

such that the implication φ_1 ∧ … ∧ φ_k → φ is a tautology. We use p_1, …, p_n to denote propositional variables. Usually we write the rule as

    φ_1, …, φ_k
    ───────────
         φ
When using the rule, we actually use its instances, which are obtained by substituting arbitrary formulas for the variables p_1, …, p_n. A Frege rule can have zero assumptions, in which case it is an axiom schema. A Frege proof is a sequence of formulas such that each formula follows from previous ones by an application of a Frege rule from a given set.

8.1.2. Definition. A Frege system F is determined by a finite complete set of connectives B and a finite set of Frege rules. We require that F be implicationally complete for the set of formulas in the basis B. Recall that implicationally complete means that whenever an implication ψ_1 ∧ … ∧ ψ_k → ψ is a tautology, then ψ is derivable from ψ_1, …, ψ_k. Rules such as modus ponens and cut ensure that the system is implicationally complete whenever it is complete. An example of a Frege system is the propositional part of the proof system considered in section 2; it has 14 axiom schemas and one rule with two assumptions (modus ponens).

8.1.3. Note that in an application of a Frege rule (in particular also in axioms) we substitute arbitrary formulas for the variables in the rule; however, we are not allowed to substitute in an arbitrarily derived formula. It is natural to add such a rule. The rule is called the substitution rule and allows one to derive from φ(p_1, …, p_k), with propositional variables p_1, …, p_k, any formula of the form φ(ψ_1, …, ψ_k).¹

8.1.4. Definition. A substitution Frege system SF is a Frege system augmented with the substitution rule.

8.1.5.
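As a toy illustration of rule instances obtained by substitution (my own minimal encoding, not the chapter's formalism: formulas are plain strings and the schema variables are the names p1, p2, …), one can generate an instance of modus ponens like this:

```python
# Hypothetical mini-formalization: a Frege rule is a pair
# (assumptions, conclusion) of formula schemas; an instance is obtained
# by substituting concrete formulas for the schema variables.

def substitute(schema, subst):
    """Replace schema variables by formulas, e.g. 'p1' -> '(q & r)'."""
    # Sort keys longest-first so 'p10' is not clobbered by 'p1'.
    for var in sorted(subst, key=len, reverse=True):
        schema = schema.replace(var, subst[var])
    return schema

# Modus ponens as a Frege rule: from p1 and (p1 -> p2) derive p2.
modus_ponens = (["p1", "(p1 -> p2)"], "p2")

subst = {"p1": "(q & r)", "p2": "(q | s)"}
assumptions, conclusion = modus_ponens
instance = ([substitute(a, subst) for a in assumptions],
            substitute(conclusion, subst))
print(instance)
# (['(q & r)', '((q & r) -> (q | s))'], '(q | s)')
```

Note that the substitution rule of 8.1.3 is exactly this operation applied to a derived formula rather than to a rule schema.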
The extension rule is the rule which allows one to introduce the formula

p ≡ φ,

where p is a propositional variable and φ is any formula, provided the following conditions hold:
1. when introducing p ≡ φ, the variable p must not occur in the preceding part of the proof or in φ;
2. such a p must not be present in the proved formula.

¹In fact Frege used this rule originally; the idea of axiom schemas was introduced by von Neumann later.
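The abbreviating power of extension variables can be seen in a small computation; the formula family and helper names below are hypothetical, chosen only for illustration:

```python
# Toy illustration: the formula phi_k defined by phi_0 = q,
# phi_{i+1} = (phi_i & phi_i) has exponential size when written out,
# but only k extension axioms p_i = (p_{i-1} & p_{i-1}) are needed to name it.

def explicit_size(k):
    size = 1                      # size of phi_0 = q
    for _ in range(k):
        size = 2 * size + 1       # (phi & phi) doubles the size
    return size

def extension_axioms(k):
    axioms, prev = [], "q"
    for i in range(1, k + 1):
        axioms.append(f"p{i} = ({prev} & {prev})")
        prev = f"p{i}"
    return axioms

print(explicit_size(20))          # 2097151: exponential blow-up
print(len(extension_axioms(20)))  # 20: a linear number of definitions
```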
592
P. Pudlák
If ≡ is not in the basis, we can use an equivalent formula instead, e.g. (p → φ) ∧ (φ → p). This rule is not used to derive a new tautology quickly, as is the case for the Frege and substitution rules; its purpose is to abbreviate long formulas.

8.1.6. Definition. An extension Frege system EF is a Frege system augmented with the extension rule.²

8.1.7. The first question that we shall address is: how do the lengths of proofs depend on the particular choice of the basis of connectives and the Frege rules? If the basis is the same for two Frege systems F₁ and F₂, it is fairly easy to prove that they are polynomially equivalent. We have used this argument already in the previous sections. Let, for instance, φ_1, …, φ_k / φ be a rule of F₁. Since F₂ is implicationally complete, there exists a proof of φ from φ_1, …, φ_k in F₂. To simulate an instance of this rule obtained by substituting some formulas into it, we simply substitute the same formulas into this proof. Thus we get only a linear increase in size. If the two bases are different, the proof is not so easy, but the basic idea is simple. One uses a well-known fact from boolean complexity theory that a formula in one complete basis can be transformed into an equivalent formula in another complete basis with at most polynomial increase in size, in fact, using a polynomial algorithm. This, of course, does not produce a proof from a proof, but one can show that it suffices to add pieces of proofs of at most polynomial size between the formulas to get one. The details are tedious, so we leave them out. The same holds for substitution Frege and extension Frege systems. Thus we have:

8.1.8. Theorem. (Cook and Reckhow [1979], Reckhow [1976]) Every two Frege systems are polynomially equivalent, every two substitution Frege systems are polynomially equivalent, and every two extension Frege systems are polynomially equivalent. □

8.1.9.
This still leaves three classes; moreover, each can also be considered in tree form, and we can count the number of steps instead of the size, which gives altogether twelve possibilities. We shall show that these cases reduce to only three, if we identify polynomially equivalent ones (namely, Frege, extension Frege, and the number of steps in substitution Frege). The question about a speed-up of sequence versus tree proofs has been solved in Theorem 4.1 for Frege systems which contain modus ponens. The same holds for extensions of such Frege systems by extension and substitution rules. We shall return to this question below. Now we shall consider the remaining ones. Let us first consider the relation of substitution Frege systems and extension Frege systems.

²Here I deviate slightly from the literature, where the name extended Frege system is used, which, I think, is rather ambiguous.
8.1.10. Theorem. (Dowd [1985], Krajíček and Pudlák [1989]) Every substitution Frege system is polynomially equivalent to every extension Frege system.

Proof. By the theorem above, we can assume that both systems have the same language.

1. First we show a polynomial simulation of an extension Frege system by a substitution Frege system. Let an extension Frege proof of a tautology ψ be given, and let p_1 ≡ φ_1, …, p_m ≡ φ_m be all formulas introduced by the extension rule, listed in the order in which they were introduced. By Theorem 8.1.8 we can assume w.l.o.g. that our systems contain suitable connectives and suitable Frege rules. Using an effective version of the deduction theorem (whose easy proof we leave to the reader) we get, by a polynomial transformation, a proof of

p_1 ≡ φ_1 ∧ … ∧ p_m ≡ φ_m → ψ
(48)
which does not use the extension rule (i.e., it is a Frege proof). Now apply the substitution rule to (48) with the substitution p_m ↦ φ_m. Thus we get

p_1 ≡ φ_1 ∧ … ∧ p_{m−1} ≡ φ_{m−1} ∧ φ_m ≡ φ_m → ψ.   (49)
From (49) we get, by a polynomial size Frege proof,

p_1 ≡ φ_1 ∧ … ∧ p_{m−1} ≡ φ_{m−1} → ψ.
We repeat the same until we get a proof of ψ.

2. The polynomial simulation of substitution Frege systems by extension Frege systems is not so simple. The idea of the proof is the following. Suppose we have simulated a substitution Frege proof up to φ_j, which is derived by a substitution from φ_i, say φ_j = φ_i(p̄/σ̄). Then we can derive φ_j by repeating the previous part of the proof with the variables p̄ replaced by σ̄. However, repeating this, the new Frege proof would grow exponentially. The trick is to prove the formulas of the substitution Frege proof not for a particular substitution, but for the substitution where the proof fails for the first time. In reality the proof does not fail (we start with a real proof), but it enables us to argue: "if it failed, then we could go on; hence we can go on in any case." Now the problem is for which propositional variables the proof should fail. Therefore we introduce an extra set of variables for each formula of the substitution Frege proof. The variables at some step will be defined using the variables at the following steps of the proof. As this nesting may result in an exponential growth, we introduce them using the extension rule.

Now we shall argue formally. W.l.o.g. we can assume that the substitution Frege system has only modus ponens and axiom schemas as Frege rules. Let (φ_1, …, φ_m) be a substitution Frege proof. Let p̄ = (p_1, …, p_n) be all propositional variables of the proof. Take sequences q̄_i of length n consisting of new distinct variables, for i = 1, …, m−1, and let q̄_m = p̄. Let ψ_i be φ_i(p̄/q̄_i), for i = 1, …, m; thus ψ_m = φ_m. Let β̄_j be a sequence of n formulas defined as follows:
1. if φ_j is an axiom or is derived by modus ponens, then β̄_j = q̄_j;
2. if φ_j is derived by substitution from φ_i, namely φ_j = φ_i(p̄/σ̄), then β̄_j = σ̄(p̄/q̄_j).

The extension Frege proof will start by introducing, componentwise by the extension rule,

q_{j,t} ≡ (¬Φ_j ∧ β_{j,t}) ∨ (Φ_j ∧ q_{j+1,t}),
where Φ_j is ψ_1 ∧ … ∧ ψ_j. We have to introduce these formulas in the order j = m−1, …, 1. Then we add polynomial size proofs of
Φ_{j−1} ∧ ¬ψ_j → (ψ_i ≡ φ_i(p̄/β̄_j)),
(50)
for i < j. To prove (50) we first derive

Φ_{j−1} ∧ ¬ψ_j → (q_{i,t} ≡ β_{j,t})
from the axioms introducing the q_{i,t}, and then successively construct proofs of the corresponding statements for the subformulas of ψ_i. Now we derive ψ_1, …, ψ_m. Suppose we have proved ψ_1, …, ψ_{j−1}. Consider three cases.
1. φ_j is an axiom. Then ψ_j is also an axiom.
2. φ_j was derived from φ_u, φ_v, u, v < j, where φ_v = φ_u → φ_j, by modus ponens. First derive Φ_{j−1}. Then, using (50) with i = u, v, we get
Φ_{j−1} ∧ ¬ψ_j → φ_u(p̄/β̄_j)   and   Φ_{j−1} ∧ ¬ψ_j → φ_v(p̄/β̄_j).

Since φ_v(p̄/β̄_j) = φ_u(p̄/β̄_j) → φ_j(p̄/β̄_j),
we get Φ_{j−1} ∧ ¬ψ_j → φ_j(p̄/β̄_j). As we are considering the case of modus ponens, β̄_j = q̄_j, hence we have derived Φ_{j−1} ∧ ¬ψ_j → ψ_j, whence we get ψ_j immediately.
3. φ_j was derived from φ_i, i < j, by substitution. Then ψ_j is just φ_i(p̄/β̄_j); thus (50) gives Φ_{j−1} ∧ ¬ψ_j → (ψ_i ≡ ψ_j), and we get ψ_j easily from ψ_1, …,
ψ_{j−1}.
Finally recall that ψ_m is the conclusion of the proof, φ_m; thus we have the simulation. □

The simulation of extension Frege systems by substitution Frege systems was shown already in Cook and Reckhow [1979]. The other simulation has a simple "higher order" proof based on a relation to bounded arithmetic. Namely, by Theorem 10.3.6 below, it suffices to prove the reflection principle for a substitution Frege system in S¹₂, which is easy. This was observed independently by Dowd [1985] and Krajíček and Pudlák [1988].
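Step 1 of the simulation above can be sketched computationally. The following toy encoding (strings for formulas, a list of extension definitions in introduction order; all names are mine, not the chapter's) substitutes φ_m for p_m, discards the resulting trivial equivalence, and repeats:

```python
# Rough sketch of eliminating extension axioms from the antecedent of (48)
# by repeated substitution, as in part 1 of the proof of Theorem 8.1.10.

def eliminate_extension_axioms(defs, psi):
    """defs: list of (variable, definition) pairs in introduction order."""
    antecedent = [f"{p} = {phi}" for p, phi in defs]
    while defs:
        p, phi = defs.pop()                  # substitute for the last variable
        antecedent = [h.replace(p, phi) for h in antecedent]
        antecedent.remove(f"{phi} = {phi}")  # provable, so it can be detached
        psi = psi.replace(p, phi)
    return antecedent, psi

defs = [("p1", "(q & q)"), ("p2", "(p1 | q)")]
print(eliminate_extension_axioms(defs, "p2"))
# ([], '((q & q) | q)')
```

When the antecedent is empty, what remains is a proof of the (substituted) conclusion, mirroring "we repeat the same until we get a proof of ψ."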
8.1.11. It is an open problem whether Frege systems can simulate extension and substitution Frege systems. We conjecture that the answer is no. It seems, though it is not supported by any mathematical result, that the relation of Frege systems to extension Frege systems is the same as the relation of boolean formulas to boolean circuits in complexity theory. It is generally accepted that it is unlikely that formulas can simulate circuits with only polynomial increase in size; the uniform version of this conjecture is NC¹ ≠ P; both conjectures are also open.

8.1.12. We shall now consider the number of steps in these systems. It turns out that the number of steps in Frege and extension Frege systems is, up to a polynomial, the size of extension Frege proofs.

8.1.13. Lemma. If φ can be proved by a proof with n steps in an extension Frege system, then it can be proved by a proof with n steps in a Frege system; namely, we can omit the extension rule from the given system.

Proof-sketch. Omit every instance of the extension rule p ≡ φ and at the same time replace all occurrences of the variable p by φ. □

8.1.14. Lemma. There exists a polynomial f(x) such that for every tautology φ and every extension Frege proof of φ with n steps, there exists an extension Frege
proof of φ of size f(n + |φ|).
(Of course, to get an expression of the form (54), we have to collect the constant terms on the right-hand side; also we collect constant and other terms after each application of a rule.) The axioms and derivation rules are:
1. axioms: all translations of the clauses in question and the expressions p_i ≥ 0, −p_i ≥ −1;
2. addition: add two lines;
3. multiplication: multiply a line by a positive integer;

⁴Another name proposed for this calculus is the Groebner proof system.
4. division: divide a line (54) by a positive integer c which evenly divides a_1, …, a_k, and round up the constant term on the right-hand side; i.e., we get

(a_1/c)p_1 + … + (a_k/c)p_k ≥ ⌈b/c⌉.
(Note that on the left-hand side we have integers, thus rounding up is sound.) A contradiction is obtained when we prove 0 ≥ 1. We suggest to the reader, as an easy exercise, to check that this system simulates resolution. Goerdt [1991] proved that Frege systems polynomially simulate the cutting plane proof system. Furthermore, Buss and Clote [1996] proved that the cutting plane system with the division rule restricted to division by 2 (or any other constant > 1) polynomially simulates the general system. Recent success in proving exponential lower bounds on the lengths of cutting plane proofs (see section 9.3) also gives us interesting separations. The cutting plane proof system cannot be simulated by bounded depth Frege systems, as it proves the pigeonhole principle (see Cook, Coullard and Turán [1987]) using polynomial size proofs. The cutting plane proof system does not polynomially simulate bounded depth Frege systems (Bonet, Pitassi and Raz [1997a], Krajíček [1997a], Pudlák [1997]).

9. Lower bounds on propositional proofs
In this section we give an example of a lower bound proof in propositional logic. Our lower bound will be an exponential lower bound on the size of resolution proofs of the pigeonhole principle. The first such bound for unrestricted resolution was proved by Haken [1985]. Unfortunately his proof cannot be generalized to stronger systems (at least nobody has succeeded in doing it). Therefore we shall apply a technique of Ajtai [1994a], which he used for bounded depth Frege systems. The case of resolution, which can be considered as a depth one Frege system, is simpler than that of larger depths and thus can serve as a good introduction to more advanced results.

9.1. A general method. Before we consider the concrete example, we shall present a general framework for lower bound proofs, which can be applied to some existing proofs and, maybe, can also be used for some new proofs. A general description of what is going on in lower bound proofs is always useful, since, when proving a lower bound, we are working with nonexisting things (the short proofs whose existence we are disproving) and therefore it is difficult to give any intuition about them.

The basic idea of our approach is as follows. Suppose that we want to show that (α_1, α_2, …, α_m) is not a proof of α. Let L be the set of subformulas of α_1, α_2, …, α_m and α. L is a partial algebra with operations given by the connectives. Suppose that we have a boolean algebra B and a homomorphism λ : L → B such that λ(α) ≠ 1_B. Then α cannot be among α_1, …, α_m, since λ(φ) = 1_B for every axiom and this is preserved by Frege rules. In this form the method cannot work: if α is a tautology (and we are interested only in tautologies), then λ(α) = 1_B. Therefore we have to
modify it. We take only some subsets L_i ⊆ L and λ_i : L_i → B_i for different boolean algebras B_i. Now we shall describe this method in detail. Let

φ_1(p_1, …, p_l), …, φ_k(p_1, …, p_l)
──────────────────────────────────────
            φ(p_1, …, p_l)

be a Frege rule R. We shall associate with it the set L_R of all subformulas of φ_1, …, φ_k and φ. If

φ_1(ψ_1, …, ψ_l), …, φ_k(ψ_1, …, ψ_l)
──────────────────────────────────────
            φ(ψ_1, …, ψ_l)

is an instance of R, we associate with it the set

L_{R(ψ̄)} = L_{R(ψ_1, …, ψ_l)} = {α(ψ_1, …, ψ_l) : α(p_1, …, p_l) ∈ L_R}.
Let B be a boolean algebra. A homomorphism λ : L_{R(ψ̄)} → B is a mapping which maps connectives onto the corresponding operations in B, i.e.,

λ(¬φ) = ¬_B λ(φ),   λ(φ ∨ ψ) = λ(φ) ∨_B λ(ψ),   etc.

The following lemma formalizes our method.

9.1.1. Lemma. Let (α_1, α_2, …, α_m) be a Frege proof using a set of assumptions S. Suppose the following conditions are satisfied:
1. For every formula α_i of the proof we have a boolean algebra B_i and an element b_i ∈ B_i. Furthermore, if α_i ∈ S, then b_i = 1_{B_i}.
2. For every instance of a rule R(ψ̄) of the proof we have a boolean algebra B_{R(ψ̄)} and a homomorphism λ_{R(ψ̄)} : L_{R(ψ̄)} → B_{R(ψ̄)}.
3. For every formula α_i of the proof and every instance of a rule R(ψ̄), where α_i ∈ L_{R(ψ̄)}, we have an embedding σ_{i,R(ψ̄)} : B_i → B_{R(ψ̄)} so that σ_{i,R(ψ̄)}(b_i) = λ_{R(ψ̄)}(α_i).
Then b_1 = 1_{B_1}, …, b_m = 1_{B_m}.

The proof of this lemma is based on the following observation:

9.1.2. Lemma.
A Frege rule is sound in any boolean algebra.
Proof. Suppose that for some assignment of values from B we get the value 1_B for the assumptions but a value b < 1_B for the conclusion. Take a homomorphism σ : B → {0, 1} such that σ(b) = 0. Then we get a contradiction with the soundness of the rule for the algebra {0, 1}. □
Proof of Lemma 9.1.1. We shall use induction. If α_1 ∈ S, then b_1 = 1_{B_1}; otherwise α_1 is an instance of a logical axiom, say R(ψ̄) (a rule without assumptions). Thus, by Lemma 9.1.2, λ_{R(ψ̄)}(α_1) = 1_{B_{R(ψ̄)}}. Hence

σ_{1,R(ψ̄)}(b_1) = λ_{R(ψ̄)}(α_1) = 1_{B_{R(ψ̄)}}.

Since σ_{1,R(ψ̄)} is an embedding, b_1 = 1_{B_1}. The induction step is similar. □
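Lemma 9.1.2 reduces soundness in an arbitrary boolean algebra to soundness in the two-element algebra {0, 1}, and the latter can be checked by brute force. A minimal sketch (the encoding of rules as Python predicates is my own, not from the text):

```python
# Brute-force soundness check of a rule over the two-element boolean algebra:
# for every 0/1 assignment, if all assumptions hold, the conclusion must hold.

from itertools import product

def sound(assumptions, conclusion, nvars):
    for vals in product([0, 1], repeat=nvars):
        if all(a(*vals) for a in assumptions) and not conclusion(*vals):
            return False
    return True

# The cut rule: from (G or p) and (D or not p) infer (G or D).
cut_ok = sound(
    assumptions=[lambda g, d, p: g or p, lambda g, d, p: d or not p],
    conclusion=lambda g, d, p: g or d,
    nvars=3,
)
print(cut_ok)  # True
```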
It may seem at first glance that it does not make sense to talk about the boolean algebras B_i, since we can simply take all of them to be 4-element algebras, and thus isomorphic. It turns out, however, that in applications nontrivial boolean algebras B_i appear quite naturally. They have to reflect the properties of the tautologies that we consider. Let us remark that this is only one possible interpretation of some lower bound proofs and there are other interpretations. In particular, other interpretations are based on the idea of forcing, due to Ajtai [1994a], and partial boolean algebras, due to Krajíček [1994b]; let us note that using partial boolean algebras one can characterize (up to a polynomial) the length of proofs in Frege systems.

Another tool which we shall use are random restrictions. They have been successfully applied for proving lower bounds on bounded depth boolean circuits and later also for lower bounds on proofs in bounded depth Frege systems by Ajtai [1994a,1990,1994b], Beame et al. [1992], Beame and Pitassi [1996], Bellantoni, Pitassi and Urquhart [1992], Krajíček [1994a], Krajíček, Pudlák and Woods [1995] and Pitassi, Beame and Impagliazzo [1993]. The idea is to assign, more or less randomly, 0's and 1's to some variables. Then many conjunctions and disjunctions become constant and thus the circuits, or the formulas, can be simplified. For the reduced formulas it is then much easier to construct boolean algebras with the required properties.

9.2. An exponential lower bound on the pigeonhole principle in resolution. Let D and R be disjoint sets with cardinalities |D| = n + 1 and |R| = n. We shall consider the proposition PHP_n stating that there is no 1–1 mapping from D onto R. (This is a weaker proposition than just "there is no 1–1 mapping of D into R"; thus the lower bound is a stronger statement.) We denote by p_{ij}, i ∈ D, j ∈ R, propositional variables (meaning that i maps onto j in the alleged mapping).
We shall consider clauses made of p_{ij} and ¬p_{ij}; furthermore we shall use the true clause ⊤ and the false, or empty, clause ⊥. PHP_n is the negation of the following set of clauses:

⋁_{j∈R} p_{ij}, for i ∈ D;   ⋁_{i∈D} p_{ij}, for j ∈ R;
¬p_{ij} ∨ ¬p_{ik}, for i ∈ D, j, k ∈ R, j ≠ k;
¬p_{ji} ∨ ¬p_{ki}, for j, k ∈ D, j ≠ k, i ∈ R.

Let M be the set of partial one-to-one mappings D → R. We shall consider boolean algebras determined by subsets T ⊆ D ∪ R, |T| < n, as follows. Let V_T be the subset of partial matchings g such that
1. T ⊆ dom(g) ∪ rng(g);
2. ∀(i, j) ∈ g (i ∈ T ∨ j ∈ T).

The boolean algebra associated with T is P(V_T), the boolean algebra of subsets of V_T. In order to be able to assign a value to a clause Δ in P(V_T), a certain relation of T to Δ must be satisfied. We define that Δ is covered by T if
1. p_{ij} ∈ Δ ⇒ i ∈ T ∨ j ∈ T;
2. ¬p_{ij} ∈ Δ ⇒ i ∈ T ∧ j ∈ T.

The clauses ⊤ and ⊥ are covered by any set T. Suppose Δ is covered by T; then the value of Δ in P(V_T) is the set

b_T = {g ∈ V_T : g(i) = j for some p_{ij} ∈ Δ, or g(i) ≠ j for some ¬p_{ij} ∈ Δ, or g⁻¹(j) ≠ i for some ¬p_{ij} ∈ Δ}.
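The clause set of PHP_n can be generated mechanically; the following sketch (with my own literal encoding ('p', i, j) / ('~p', i, j), not from the text) also confirms by brute force that the set is unsatisfiable for n = 2:

```python
# Generate the clauses of PHP_n as defined above and check (for tiny n)
# that no truth assignment satisfies all of them.

from itertools import product

def php_clauses(n):
    D, R = range(n + 1), range(n)
    cs = [[('p', i, j) for j in R] for i in D]           # every i is mapped
    cs += [[('p', i, j) for i in D] for j in R]          # the map is onto R
    cs += [[('~p', i, j), ('~p', i, k)]
           for i in D for j in R for k in R if j < k]    # i has one image
    cs += [[('~p', j, i), ('~p', k, i)]
           for j in D for k in D if j < k for i in R]    # the map is 1-1
    return cs

def satisfiable(n):
    vars_ = [(i, j) for i in range(n + 1) for j in range(n)]
    for bits in product([0, 1], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any(val[(i, j)] if s == 'p' else 1 - val[(i, j)]
                   for (s, i, j) in c) for c in php_clauses(n)):
            return True
    return False

print(satisfiable(2))  # False: three pigeons do not fit into two holes
```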
The following can be easily checked:

9.2.1. Lemma. If Δ is one of the clauses of PHP_n and T covers Δ, |T| < n, then b_T = 1_{P(V_T)}. □
~,~,(b) = {g' e y~, .3g e b(g c_ d ) } . Let T C_ T', IT'I < n. Then )~T,T' i8 an embedding of the boolean algebra P(VT) into P(VT,).
9.2.2. L e m m a .
P r o o f . All properties are trivial except for the following o n e : ,~T,T' is injective. This property follows from the fact that each g E VT can be extended to a g' E VT, which holds due to the fact that IT'I < n. [] Consider an instance of the cut rule
P v pij
A V 'Pij FVA
Suppose we have chosen P(VT~), i = 1, 2, 3 as the boolean algebra for rvp~j, zXv,p~j and r V A respectively. Then we choose P(VT) with T = T1 U T2 U Ta for this rule. We only have to ensure that ITI < n. Since T covers all subformulas involved, we can define their values in P(VT). The condition that this is a homomorphism of and V is easy to check. By Lemma 9.2.2 we have also the necessary embeddings. The simplest way to ensure ITI < n for the rules is to choose the covering sets of the formulas of size < n/3. This is not always possible (take e.g. p~ V p22 V...Pnn), therefore we apply random restrictions. Suppose we assign O's and l's to some variables pq and leave the other as they are. If we do it for all formulas in a proof, the resulting sequence will be a proof again. However some initial clauses may reduce to _1_, so we cannot argue that _k
The Lengths of Proofs
609
cannot be derived from them by a short proof. Therefore the restrictions must reflect the nature of the tautology in question. Let g E M be a partial onetoone mapping. We shall associate with g the partial assignment defined by
(i, j) E g;
PO + 1
if
pij + 0 P i j  + PO
if i e dom(g) or j E rng(g), otherwise.
but
(i, j) r g;
Given a clause A, we define Ag to be 1. T if s o m e Pij E A is mapped to 1 or some P0 such that ~Pij E A is mapped to 0, 2. otherwise it is the clause consisting of all literals which are not 0. Let us denote by D ' = D  dom(g), R ' = R  rng(g), n ' = IR'I . Clearly Ag is a clause with variables P0, i E D', j E R'. If A is a clause of P H P ~ then Ag is either T or becomes a clause of PHPn, (on D ~ and R~). Denote by M,,,,, the set of all partial onetoone mappings of size n  n ~. The following is the key combinatorial lemma for the proof of the lower bound. 9.2.3. L e m m a . Let n' = [nl/3J, let A be an arbitrary clause. Then for a g E Mn,n', chosen with uniform probability, the probability that Ag can be covered by a set of size < i1n i is at least 1  2enl/3~
where ~ > 0 is a constant. We shall use the following simple estimate. 9.2.4. L e m m a . Let a, b, 1 < n,A C_ { 1 , . . . , n } , l A I = a. { 1 , . . . , n}, [B[ = b, with uniform probability. Then
Take a random B C_
Prob(IAMB I >_l) < \ nl ] Proof. Prob(IA M BI
_> l) _
~
Prob(al E B , . . . , at E B)
{ a l ..... a t } C A
1
n
n1
nl+1
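Lemma 9.2.4 can be sanity-checked empirically; the following Monte-Carlo experiment (parameters chosen by me for illustration) samples random b-element subsets and compares the observed frequency with the bound (ab/(n − l))^l:

```python
# Empirical check of Prob(|A ∩ B| >= l) <= (a*b/(n-l))^l for a random
# b-element subset B of {0, ..., n-1}, with A a fixed a-element set.

import random

def empirical(n, a, b, l, runs=20000, seed=0):
    rng = random.Random(seed)
    A = set(range(a))
    hits = sum(len(A & set(rng.sample(range(n), b))) >= l
               for _ in range(runs))
    return hits / runs

n, a, b, l = 100, 5, 5, 3
bound = (a * b / (n - l)) ** l
print(empirical(n, a, b, l) <= bound)  # True: the estimate holds with room to spare
```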
Proof of Lemma 9.2.3. Let us denote l = ⌊¼n′⌋. Let Δ be given. We shall simplify the situation by replacing each ¬p_{ij} ∈ Δ by

⋁_{i′≠i} p_{i′j} ∨ ⋁_{j′≠j} p_{ij′}.

This operation commutes with the restriction, and the new clause can be covered by a T, |T| ≤ l, iff the old one can, since l ≤ n′ − 2. Thus we can assume that Δ contains only positive literals. Such a Δ is determined by the graph

E = {(i, j) : p_{ij} ∈ Δ}.

Let

a = n^{2/3}/40.

From now on we shall omit the integer part function and assume that all numbers are integers. This introduces only inessential errors. Furthermore denote A = {j ∈ R : deg_E(j) ≥ 2a}. We shall consider two cases.

Case 1: |A| ≥ 2a. We shall show that in this case Δ_g = ⊤ with high probability. First we estimate |A ∩ rng(g)|. Note that rng(g) is a random subset of R of size n − n′, thus also R′ = R \ rng(g) is a random subset of size n′. Hence we can apply Lemma 9.2.4:

Prob(|A ∩ rng(g)| < a) = Prob(|A ∩ R′| > |A| − a) ≤ (e|A|n^{1/3}/(n(|A| − a)))^{|A|−a} ≤ (2e)^a/n^{2a/3}.   (55)

The probability that Δ_g is not ⊤ is bounded by

Prob(∀j ∈ A ∩ rng(g) ((g⁻¹(j), j) ∉ E)) ≤ Prob(|A ∩ rng(g)| < a) + Prob(∀j ∈ A ∩ rng(g) ((g⁻¹(j), j) ∉ E) | |A ∩ rng(g)| ≥ a).   (56)

The second term can be estimated by

max_{C ⊆ A, |C| ≥ a} Prob(∀j ∈ A ∩ rng(g) ((g⁻¹(j), j) ∉ E) | A ∩ rng(g) = C),

thus it suffices to consider a fixed such C and bound the probability. Let C = {j_1, j_2, …, j_{|C|}}; think of the vertices g⁻¹(j_1), g⁻¹(j_2), …, g⁻¹(j_{|C|}) as chosen one by one independently, except that they must be different. Then

Prob((g⁻¹(j_{t+1}), j_{t+1}) ∉ E | g⁻¹(j_1) = i_1, …, g⁻¹(j_t) = i_t) = 1 − |E⁻¹(j_{t+1}) − {i_1, …, i_t}|/(n + 1 − t) ≤ 1 − (deg_E(j_{t+1}) − t)/(n + 1) ≤ 1 − (2a − t)/(n + 1).
Thus the probability that (g⁻¹(j_t), j_t) ∉ E for all t = 1, …,
For applications, however, this trivial mode-assignment is not sufficient. One is interested in having modes with as many output arguments as possible. The clauses of the append relation are also correct with respect to the following mode assignment μ: μ⁺(append) = {⟨in, in, out⟩, … since there are as yet no constant types to include. An example is t(A, B) = A × B. Next we define the typed propositional functions
p : t(T̄) → Prop.

In general we need to consider functions whose inputs are n-tuples of the type

(t₁(T̄) → Prop) × ⋯ × (tₙ(T̄) → Prop)

and whose output is a Prop. We do not pursue this topic further here, but when we examine the proof system for typed propositions we will see that it offers a simple way to provide abstract proofs for pure typed propositions that use only rules for the connectives and quantifiers, say a pure proof. There are various results suggesting that if there is any proof of these pure propositions, then there is a pure proof. These are completeness results for this typed version of the predicate calculus. We will not prove them here.

2.4. Formulas

Propositional calculus. Consider first the case of formulas to represent the pure propositions. The standard way to do this is to inductively define a class of propositional formulas, PropFormula. The base case includes

¹⁷Since we do not study any mapping of formulas to pure propositions, I have not worried about relating elements of P_n and P_m, n < m, by coherence conditions.
R. Constable
702
the propositional constants, Constants = {⊤, ⊥}, and propositional variables, Variables = {P, Q, R, P₁, Q₁, R₁, …}. These are propositional formulas. The inductive case is: If F, G are PropFormulas, then so are (F & G), (F ∨ G), and (F ⇒ G). Nothing else is a PropFormula. We can assign to every formula a mathematical meaning as a pure proposition. Given a formula F, let P₁, …, Pₙ be the propositional variables occurring in it (say ordered from left to right); let P̄ be the vector of them. Define a map from n-variable propositional formulas, PropFormulasₙ, into (Propⁿ → Prop) inductively by
[Pᵢ] = projᵢ
[(F & G)] = [F] & [G]
[(F ∨ G)] = [F] ∨ [G]
[(F ⇒ G)] = [F] ⇒ [G].
Each variable Pᵢ corresponds to the projection function projᵢ(P̄) = Pᵢ. Say that F is valid iff [F] is a valid pure proposition.

Boolean valued formulas. If we consider a single-valued relation from propositions to their truth values, taken as Booleans, then we get an especially simple semantics. Let 𝔹 = {tt, ff} and let B : Prop × 𝔹 → Prop be such that P ⇔ B(P, tt). In classical mathematics one usually assumes the existence of a function like b, say b : Prop → 𝔹 where P ⇔ (b(P) = tt in 𝔹). But since b is not a computable function, this way of describing the situation would not be used in constructive mathematics. Instead we could talk about "decidable propositions" or "boolean propositions."
BoolProp == {(P, v) : Prop × 𝔹 | P ⇔ (v = tt in 𝔹)}
Then there is a function b ∈ BoolProp → 𝔹 such that P ⇔ (b(P) = tt in 𝔹). If we interpret formulas as representing elements of pure boolean propositions, then each variable Pᵢ denotes an element of 𝔹. An assignment a is a mapping of variables into 𝔹, that is, an element of Variables → 𝔹. Given an assignment a we can compute a boolean value for any formula F. Namely:
Value(F, a) =
  if F is a variable, then a(F);
  if F is (F₁ op F₂), then Value(F₁, a) bop Value(F₂, a)
where bop is the boolean operation corresponding to the propositional operator op in the usual way, e.g.:

P   Q  | P &b Q | P ∨b Q | P ⇒b Q
tt  tt |   tt   |   tt   |   tt
ff  tt |   ff   |   tt   |   tt
tt  ff |   ff   |   tt   |   ff
ff  ff |   ff   |   ff   |   tt
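The recursive definition of Value(F, a) can be written out directly; in this sketch (my own encoding, with formulas as nested tuples and tt/ff as Python booleans) one line of the truth table above is reproduced:

```python
# Recursive evaluator for propositional formulas under an assignment a,
# following the definition of Value(F, a) above.

def value(F, a):
    if isinstance(F, str):                 # a propositional variable
        return a[F]
    op, F1, F2 = F
    v1, v2 = value(F1, a), value(F2, a)
    if op == '&':
        return v1 and v2
    if op == '|':
        return v1 or v2
    if op == '=>':
        return (not v1) or v2
    raise ValueError(op)

# Reproduce the truth-table line P = tt, Q = ff.
a = {'P': True, 'Q': False}
print(value(('&', 'P', 'Q'), a),    # False  (P &b Q = ff)
      value(('|', 'P', 'Q'), a),    # True   (P Vb Q = tt)
      value(('=>', 'P', 'Q'), a))   # False  (P =>b Q = ff)
```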
Types
703
Typed propositional formulas. To define typed propositional formulas, we need the notion of a type expression, a term, and a type context, because formulas are built in a type context. Then we define propositional variables and propositional-function variables, which are used along with terms to make atomic propositions in a context. From these we build compound formulas using the binary connectives &, ∨, ⇒ and the typed quantifiers ∀x:A, ∃x:A. We let op denote any binary connective and Qx:A denote either of the quantifiers.

Type expressions. Let A₁, A₂, … be type variables; the Aᵢ are type expressions. If T₁, T₂ are type expressions, then so is (T₁ × T₂). Nothing else is a type expression for now.

Terms. Let x₁, x₂, … be individual variables (or element variables); they are terms. If s, t are terms, then so is the ordered pair ⟨s, t⟩. Nothing else is a term for now.

Typing contexts. If T₁, …, Tₙ are type expressions and xᵢ, i = 1, …, n, are distinct individual variables, then xᵢ : Tᵢ is a type assumption and the list x₁:T₁, …, xₙ:Tₙ is a typing context. We let 𝒯, 𝒯′, 𝒯ⱼ denote typing contexts.

Typing judgments. Given a typing context 𝒯, we can assign types to terms built from the variables in the context. The judgment that term t has type T in context 𝒯 is expressed by writing

𝒯 ⊢ t ∈ T.

If we need to be explicit about the variables of 𝒯 and t, we use a second-order variable t[x₁, …, xₙ] and write
x₁:T₁, …, xₙ:Tₙ ⊢ t[x₁, …, xₙ] ∈ T.

When using a second-order variable we know that the only variables occurring in t are the xᵢ. We call these variables of t free variables. Later, we give rules for knowing these judgments. Just as we said in section 2.2, it should be noted that t ∈ T is not a proposition; it is not an expression that has a truth value. We are saying what an ordered pair is rather than giving a property of it. So the judgment t ∈ T is giving the meaning of t and telling us that the expression t is well-formed or meaningful. In other presentations of predicate logic these judgments are incorporated into the syntax of terms, and there is an algorithm to check that terms are meaningful before one considers their truth. We want a more flexible approach, so that typing judgments need not be decidable. We let P₁, P₂, … denote propositional variables, writing Pᵢ ∈ Prop, and propositional-function variables, writing Pᵢ ∈ (T → Prop) for T a type expression. If 𝒯 ⊢ t ∈ T and P ∈ (T → Prop), then P(t) is an atomic formula in the context 𝒯 with the variables occurring in t free; it is an instance of P. Note, we abbreviate P(⟨t₁, …, tₙ⟩) by P(t₁, …, tₙ). If t is a variable, say x, then P(x) is
an arbitrary instance or arbitrary value of P with free variable x. A propositional variable, Pᵢ ∈ Prop, is also an atomic formula. If F and G are formulas with free variables x̄, ȳ respectively in context 𝒯, and if op is a connective and Qx:A a quantifier, then

(F op G) is a formula with free variables {x̄} ∪ {ȳ} in context 𝒯 and with immediate subformulas F and G;

Qv:A. F is a formula in context 𝒯′, where 𝒯′ is 𝒯 with v:A removed; this formula has leading binding operator Qv:A with binding occurrence of v whose scope is F; its free variables are {x̄} with v removed, and all free occurrences of v in F become bound by Qv:A; its immediate subformula is F.

A formula is closed iff it has no free variables; such a formula is well-formed in an empty context, but its subformulas might only be well-formed in a context. A subformula G of a formula F is either an immediate subformula or a subformula of a subformula.

Examples.
Here are examples, for P₁ : A → Prop, P₂ : B → Prop, P₃ : A × B → Prop.

(∀x:A. ∃y:B. P₃(x, y) ⇒ (∃x:A. P₁(x) & ∃y:B. P₂(y)))

is a closed formula. ∀x:A. ∃y:B. P₃(x, y) is an immediate subformula which is also closed, but ∃y:B. P₃(x, y) is not closed since it has the free variable x:A; this latter formula is well-formed in the context x:A. The atomic subformulas are P₁(x), P₂(y), and P₃(⟨x, y⟩), which are formulas in the context x:A, y:B, and the typing judgment x:A, y:B ⊢ ⟨x, y⟩ ∈ A × B is used to understand the formation of P₃(x, y) (which is an abbreviation of P₃(⟨x, y⟩)).

2.5. Formal proofs

There are many ways to organize formal proofs of typed formulas, e.g. natural deduction, the sequent calculus, or its dual, tableaux, or Hilbert-style systems, to name a few. We choose a sequent calculus presented in a top-down fashion (as with tableaux). We call this a refinement logic (RL). The choice is motivated by the advantages sequents provide for automation and display.¹⁸ Here is what a simple proof looks like for A ∈ Type, P ∈ A → Prop; only the relevant hypotheses are mentioned and only the first time they are generated.

¹⁸This is the mechanism used in Nuprl and HOL; PVS uses multiple-conclusion sequents.
Types
⊢ ∀x:A. (∀y:A. P(y) ⇒ ∃x:A. P(x))            by ∀R
1.        x:A ⊢ ∀y:A. P(y) ⇒ ∃x:A. P(x)      by ⇒R
1.1.      f:(∀y:A. P(y)) ⊢ ∃x:A. P(x)        by ∀L on f with x
1.1.1.    l:P(x) ⊢ ∃x:A. P(x)                by ∃R with x
1.1.1.1.  ⊢ P(x)                             by hyp l
1.1.1.2.  ⊢ x ∈ A                            by hyp x
1.1.2.    ⊢ x ∈ A                            by hyp x

The schematic tree structure with path names is

                ⊢ G
                 |
              H1 ⊢ G1               1.
                 |
              H2 ⊢ G2               1.1
                /  \
       H3 ⊢ G2      H2 ⊢ G3         1.1.1    1.1.2
         /  \
H3 ⊢ G4      H3 ⊢ G3                1.1.1.1  1.1.1.2
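The path-named proof tree can be modeled directly. The following Python sketch (invented names, not Nuprl's internal representation) indexes each node of the example proof by its dotted path:

```python
# Each proof node carries a sequent, the rule used to refine it, and its
# children; paths are assigned as in the schematic tree above.

class Node:
    def __init__(self, sequent, rule, children=()):
        self.sequent, self.rule, self.children = sequent, rule, list(children)

def paths(node, prefix=''):
    """Yield (path, node) pairs; the root gets the empty path ''."""
    yield prefix, node
    for i, child in enumerate(node.children, start=1):
        yield from paths(child, f'{prefix}.{i}' if prefix else str(i))

# The example proof, top-down:
proof = Node('⊢ ∀x:A. (∀y:A. P(y) ⇒ ∃x:A. P(x))', '∀R', [
    Node('x:A ⊢ ∀y:A. P(y) ⇒ ∃x:A. P(x)', '⇒R', [
        Node('f:(∀y:A. P(y)) ⊢ ∃x:A. P(x)', '∀L on f with x', [
            Node('l:P(x) ⊢ ∃x:A. P(x)', '∃R with x', [
                Node('⊢ P(x)', 'hyp l'),
                Node('⊢ x ∈ A', 'hyp x'),
            ]),
            Node('⊢ x ∈ A', 'hyp x'),
        ]),
    ]),
])

index = dict(paths(proof))
```

Looking up `index['1.1.1.2']` recovers the well-formedness subgoal ⊢ x ∈ A, exactly as named in the display above.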
R. Constable

Sequents. The nodes of a proof tree are called sequents. A sequent is a list of hypotheses separated from the conclusion by the assertion sign ⊢ (called turnstile or proof sign). A hypothesis can be a typing assumption, such as x:A for A a type, or a labeled assertion, such as l:P(x). The label l is used to refer to the hypothesis in the rules. The occurrence of x in x:A is an individual variable, and we are assuming that it is an object of type A; so it is an assumption that A is inhabited. Here is a sequent:

x1:H1, ..., xn:Hn ⊢ G

where Hi is an assertion or a type, and xi is respectively a label or a variable. The xi are all distinct. G is always an unlabeled formula. We can also refer to the hypotheses by number, 1...n, and we refer to G as the 0th component of the sequent. We abbreviate a sequent by H̄ ⊢ G for H̄ = (x1:H1, ..., xn:Hn); sometimes we write ... ⊢ G.

Rules. Proof rules are organized in the usual format of the single-conclusion sequent calculus. They appear in a table shortly; we explain now some entries of this table. There are two rules for each logical operator (connective or quantifier). The right
rule for an operator tells how to decompose a conclusion formula built with that operator, and the left rule for an operator tells how to decompose such a formula when it is on the left, that is, when it is a hypothesis. There are also trivial rules for the constants ⊤ and ⊥ and a rule for hypotheses. So the rules fit this pattern and are named as shown.

Operator   Left rule   Right rule
   &          &L          &R
   ∨          ∨L          ∨Rl, ∨Rr
   ⇒          ⇒L          ⇒R
   ∀          ∀L          ∀R
   ∃          ∃L          ∃R
   ⊤         (none)       ⊤R
   ⊥          ⊥L         (none)
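The sequent format described above (hypotheses with distinct labels or variables, conclusion as 0th component, and the hyp rule) can be sketched in Python; this is illustrative code with invented names, not the refinement-logic implementation:

```python
# A sequent x1:H1, ..., xn:Hn ⊢ G.  Each hypothesis is a (name, body) pair:
# a typing assumption like ('x', 'A') or a labeled assertion like ('l', 'P(x)').

class Sequent:
    def __init__(self, hyps, concl):
        names = [name for name, _ in hyps]
        assert len(names) == len(set(names)), "names must be distinct"
        self.hyps = list(hyps)
        self.concl = concl                     # the 0th component, unlabeled

    def hyp(self, ref):
        """Look up a hypothesis by label/variable (str) or by number 1..n."""
        if isinstance(ref, int):
            return self.hyps[ref - 1][1]
        return dict(self.hyps)[ref]

    def by_hyp(self, ref):
        """The hyp rule closes a goal whose conclusion is hypothesis ref."""
        return self.hyp(ref) == self.concl

# Subgoal 1.1.1.1 of the example proof, ⊢ P(x), which closes by hyp l:
goal = Sequent([('x', 'A'), ('l', 'P(x)')], 'P(x)')
```

Here `goal.by_hyp('l')` holds, matching the step "by hyp l" in the example; hypotheses can equally be referenced by number, e.g. `goal.hyp(2)`.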
x1:H1, ..., xn:Hn ⊢ Hi   by hyp xi

|
BY DTerm [m + 2] 0 THENM DTerm [n - 1] 0 THEN Auto.
|\
| 6. ¬(n > 0)
|
BY DTerm [m - 3] 0 THENM DTerm [n + 2] 0 THEN Auto.
|
|  ⊢ 0 < m - 3
|
BY SupInf THEN Auto
6.3. Generalized stamps problem
Lemmata from the standard Nuprl library:

• mul_bounds_1a:  ⊢ ∀a,b:ℕ. 0 ≤ a * b
• mul_bounds_1b:  ⊢ ∀a,b:ℕ⁺. 0 < a * b
• mul_preserves_lt:  ⊢ ∀a,b:ℤ. ∀n:ℕ⁺. a < b ⇒ n * a < n * b
• mul_preserves_le:  ⊢ ∀a,b:ℤ. ∀n:ℕ. a ≤ b ⇒ n * a ≤ n * b
• multiply_functionality_wrt_le:  ⊢ ∀i1,i2,j1,j2:ℕ,
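The multiplication lemmas quoted above can be sanity-checked numerically. This Python sketch (hypothetical helper names, not the Nuprl library itself) tests the stated inequalities over small ranges:

```python
# Numeric spot-check of the quoted lemmas: 0 ≤ a*b on ℕ, 0 < a*b on ℕ⁺,
# and preservation of < under multiplication by a positive n on ℤ.

def mul_bounds_1a(a, b):            # a, b : ℕ
    return 0 <= a * b

def mul_bounds_1b(a, b):            # a, b : ℕ⁺
    return 0 < a * b

def mul_preserves_lt(a, b, n):      # a, b : ℤ, n : ℕ⁺
    return (not a < b) or n * a < n * b

nats, pos, ints = range(6), range(1, 6), range(-4, 5)
ok = (all(mul_bounds_1a(a, b) for a in nats for b in nats)
      and all(mul_bounds_1b(a, b) for a in pos for b in pos)
      and all(mul_preserves_lt(a, b, n) for a in ints for b in ints for n in pos))
```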
il , 514 E., E + , 528 >>, 529 O, 529 D,V,3,539 modal propositional logic, 477 modal systems K , L , K 4 , S , 477, 478 $4, 481,497 A,D, 487 CS,CSM, 492, 493 LP, 497 IL,ILM, 514 TOL,TLR,ELH, 529 Lq,S5, 539 Sq, 539 QL,QS, 540 modality, 73 modally expressible, 490 mode, 665 mode assignment, 665 model, 28, 501,647 modified realizability, see realizability Modula, 757 module, 757, 760 Modulus of Uniform Continuity (MUC), 433 modus ponens, 5, 706, 729 monotone operator, 269 monotonic, 282 monotonicity axiom, 493 Monotonicity Lemma, 225 most general proof, 568 move, 524, 525 multiplicative connective, 7173, 733 multiply recursive, 189 Ninterpretation, 342, see also negative translation natural deduction, 4748, 69, 600 natural numbers, 711 natural proofs, 134 NDinterpreted, 348 necessitation, 477, 498 negation as failure, 661 negative clause, 19 negative formula, 437, 439 negative occurrence, 15
Subject Index
negative translation, 66, 67, 338, 341, 342, 355, 370, 392, 766 neighbourhood function, 426 nocounterexample interpretation, 54, 340, 355, 362 node, 221,478 nonlogical symbols, 81 nonschematic theory, 117 norm, 242 c~norm, 216 norm function, 201 normal, 498, 499 normal form, 358 normal function, 212 normal modal logic, 477 normalizable, 358 normalization, 17 normalizing, 358 Nullstellensatz, 603 Number Theory, NT, 232, see also arithmetic secondorder, NT2,271 numeral, 81, 116, 119, 220, 409 numeralwise representability, 113 numerate, 504 Nuprl, 722 object, 757 object assignment, 28 objectoriented programming, 758 occurs check, 60 wconsistent, see consistent gt function, 447 gt functionset, 451 powerset, 451 gt predicate, 447 product, 447 w provability, 487, 494 relation, 447 gt set, 446 oneway function, 617 ontological axiom, 216, 217 Operations Hereditarily Effective, 431 Hereditarily Recursive, 430 operator, 300 operator controlled derivable, 301 operator controlled derivation, 253, 254, 300 optimal propositional proof system, 626 oracle, 106 order type, otyp, 212, 221,222, 288 ordered pair, 696 ordinal, 210, 280, see also tree ordinal ordinal analysis, 229, 230
  Π02, 229
  for set theories, 321-331
  of NT, 240
  profound, 263
  Σ, 219
  Π11, 247
ordinal arithmetic, 156, 193
ordinal notation, 495, see also tree-ordinal
ordinal of a formula
  |H|, 229
  |F|_H0, 260
ordinal of a theory
IIAxll ~, 216 IIAxll~, IIAxII~, IIAxlln~, 217, 219 IIAxlln;, 228 IIAxll, 228 [[Ax[[ ~cK, 229 ]Ell table of impredicative theories, 332 ordinal operator, 300 ordinal sum, 212 ordinal term, 308 ordinal terms, 308 Orey sentence, 531 Orey set, 531 output argument, 665 output variables, 665 pairing, 70, 423, 429, 445 pairing axioms, 177, 216, 279 parameter variable, 33 parameters, par(), 258, 300 paramodulation, 63 parentheses, omitting, 5, 26 Parikh provability, 495 Parikh's Theorem, 87, 112 partial combinatory algebra, 424 partial continuous application, 426 Partial Continuous Operations (PCO), 426 partial equivalence relation (per), 719, 745, 746, 748 partial recursive, 172 in an ordinal, 217 Partial Recursive Operations (PRO), 424 partial type, 759 Pascal, 754 path, 221 Peano arithmetic, 84, 175, 231,352, 721, see also arithmetic persistence, 170 EPersistency, 280
persistency downwards, 301 upwards, 301 Hicompleteness, 494 pinning down, 267 pointer, 760 polymodal logic, 491,495 polymorphic, 393, 715, 745 polymorphic Acalculus, F, 394 polymorphism, 393 polynomial calculus, 604 polynomial growth rate, 98, 100 Polynomial Local Search (PLS), 133, 134 PLS function, 133 polynomial size tree (pst) proof, 564 polynomial time, 103, 104, 106 polynomial time hierarchy, 105108 polynomially equivalent, 552 polynomially numerates, 578 polynomially simulates, 552 positive clause, 19 positive formula, 643 positive occurrence, 15, 282 positive resolution, 22 power type, 445 predecessor, 89, 423, 733 npredecessor, 154 immediate npredecessor, 154 predicate provability logic, 531 predicative, 268 Predicative Elimination Lemma, 237, 302 predicative polymorphism, 394, 398 predicativity, 267 prenexification, 51 Epreservativity, 488 prime powers, 90 primes, 90 primitive notion, PN, 721 primitive recursion, 82, 96, 733 primitive recursive, 175, 189, 219, 363, 364 primitive recursive arithmetic, see arithmetic primitive recursive function, 82, 96 defining equations, 82 primitive recursive predicate, 96 primitive recursive well ordering, PRWO, 264 principal formula, 12, 46, 110, 112, see also main part of an inference principal term, 308 probabilistically checkable proofs, 550 product topology, 9, 373 product type, 429, 739 profound, 263
program clause, 649 program rules, 656 stratified, 660 programs as deductive systems, 655 programs as theories, 655 progressive, 187 Prog, 225, 238, 286 projection, 70, 96, 103 PROLOG, 64, 668 proof, 550 length, see length, proof sequencelike (daglike), 13, 551 treelike, 13, 550 proof by contradiction, 707 proof equality, 723 proof expression, 708 proof predicate, 116, 263, 476, 498, 499 proof system associated to theory, 624 cutting plane, 604 extension Frege, 592 Frege, 510, 591 bounded depth, 599 Groebner, 604 HajSs calculus, 601 Hilbert style, 29, 553 Nullstellensatz, 603 polynomial calculus, 604 propositional, 550 optimal, 626 quantified, 600 resolution, 1826, 5964, 598599, see also resolution substitution Frege, 591 proof theoretic ordinal, 228, see also ordinal of a theory proofs as programs, 679, 754 proposition, 694 category P r o p , 694, 695 propositional function, 695 propositional logic, see Frege system, proof system, quantified propositional logic and resolution and bounded arithmetic, 619 propositional rule, 11,710 propositional theory, 484, 485 propositions as types, 724, 752 protoeffective, 453 canonically, 453 provability logic, 476, 487, 489, 491,492 provability predicate, 116 provably recursive, 87, 173, 199, 202, 248,
353,354, 364, 370,498, see also definable function in P A , 189, 253, 362 in T (PROvREc(T)), 173 provably total, 498, 587, see also provably recursive ProverAdversary game, 596 pullback, 719 pure proof, 701 pure proposition, 700, 701 pure propositional function, 700 pure type, 343 pure typed function, 701 Q, R (theories of arithmetic), 8283, 507, 513, 560, 579 quantified propositional logic, 600 quantifier exchange property, 100 quantifier rule, 32, 109, 710 Quantifier Theorem, 286, 287 quantifier theorem, hyperarithmetical, 229 quasitautology, 49, 52 quotient type, 719, 720 ramified analysis, 383, 385 ramified set theory, 294 Ramsey's theorem, 619 random restriction, 607 range type, 715 rank, 168, 178, 221,297, 361,525, 642, 656 realistic, 485 realizability, 66, 407462 abstract (r_), 424 extensional (re,rne,rnet), 439, 440 function (rf), 427, 428 function with truth (rft), 427, 428 Lifschitz (rln, rlf), 437 modified (mr), 429, 431,432, 434 function (turf), 434 numerical (turn), 434, 443, 457 with truth (tort), 431 naming conventions, 422 numerical (rn), 408, 410, 413, 418, 442, 444, 446, 455 numerical with truth (rnt), 413, 442, 457 q, 421,422 set theory, 458 realization, see arithmetic realization realizational instance, 532 record type, 756, 764 ERecursion Theorem, 281
recursion, see bar recursion, limited recursion, primitive recursion, transfinite recursion recursion operator, 425 recursive, 172 7recursive (REC(~/) ), 172, see also descendent recursive recursive comprehension (RCA), 371 recursive type, 760 recursively inaccessible, 289 recursively regular, 228, 304 recursor, 232, 344, 345, 348, 349, 360, 362, 364, 378, 387, 429, 734, 737, 761,763 redex, 358 reduced sequence, 222 reduces, 358 reduces in one step, 358 reducibility candidate, 397 reducible, 222, 358, 359 Reduction Lemma, 235, 256, 302 refinement logic, 704 EReflection, 280 reflection principle, 217, 218, 280, 281, 490, 624 iterated, 495 reflexive, 505 reflexivity, 86 reflexivity axiom, 494 regular axiom system, 248 regular counterwitness, 504 regular ordinal, 211 regular ordinals (Reg), 304 topological closure Reg, 304 regular term, 308 regular witness, 504 relation symbol, 26 relative translation, 501 relativization, 118, 216 Relativized ERecursion Theorem, 284 relativizing formula, 501 remainder, 89 EReplacement, 280 replacement, 84, 94, 109, 110, 112, 135, 412, 445, 447, see also collection a n d strong replacement resolution, 1826, 5964, 598599 ground, 62 hyper, 22 input, 24 linear, 24 negative, 23 positive, 22
positive unit, 25 Rresolution, 61 semantic, 23 set of support, 23 SLD, 26, 640, 661 SLDNF, 640, 661 unit, 24 resolution proof, 20 resolution refutation, 19, 598 resolution rule, 19, 598 resolvent, 19, 61,664 restricted arithmetic (Arith), 711 restricted quantifiers, 215 reverse mathematics, 371,766 rewrite system, 358 Richman's Principle (RP), 457 Robinson arithmetic, see Q, R root, 478 Rosser ordering, 496 Rosser provability, 120, 495, 496 Rosser sentence, 121,496 Rosser's Theorem, 120 run time typing, 755 satisfiable, 4, 19, 28 satisfied, 28 satisfy, 28, 61 schematic theory, 115, 117, 552, 554 Scheme, 755 scope, 704, 734 search tree, 221,222, 228 Second Incompleteness Theorem, 121, 137, 476, 583 formalized, 506 second order logic, 271 selfrealizing, 415 selfreference, 118 selfreferential, see Diagonal Lemma semantic resolution, 23 semantic tableau, 36 Semantical Main Lemma, 223 semantics, 27 semiformal calculus, 231,234, 298 semiformula, 31 semiterm, 31 sentence, 27 sentential rule, 317 separated, 453 canonically, 453 ASeparation, 280 separation, 216, 321 Separation axiom, 216 sequence, 713
sequence coding, 9194 sequencelike proof, see proof sequent, 10, 705 empty, 10 initial, 11 upper, lower, 11 sequent calculus, 10, 31,600 LJ, 64 LK, 32 PK, 11 sequential theory, 560, 562 set existence axioms, 216 set of support resolution, 23 set terms, 295 set theory, 718 set type, 718 Shanin's algorithm, 422 sharply bounded quantifier, 82 side formulas, 12 signature, 758 simple contradiction, 596 Simula, 755 simulate, 624 simultaneous inductive definition (SID), 676 size proof, 142, 551, see also length, proof term, 567 skeleton, 42, 114, 568 Skolem function, 50 Skolem functional, 377, 378, 386 Skolemization, 50, 346 slash ([), 420421 Aczel, 421 SLD, SLDNF, see resolution, completeness, a n d soundness slowgrowing hierarchy, 152, 157, 194 slowgrowing operator, G, 152, 156 smash function ( # ) , 81, 99, 100 social proof, 2 Solovay function, 482 sorting, 393 sound, 480 soundness firstorder, 30, 33 HA~, 432 HAI , 438 HA*, 414 strong, 414, 417, 420 weak, 414, 417 implicational, 6, 13 intuitionistic manysorted, 448, 449 modal logic, 478
propositional, 6, 13 resolution, 19 SLDNF, 669 space representable, 161 sparse set, 626 species, 392 Spector(Howard) interpretation, 367 SpectorGandy Theorem, 286 spectrum II~spectrum, 228 20spectrum, 246 H~ 247 speed up, 497 square root, 90 stage in constructible hierarchy, 215, 295 stg, 295 stage of an inductive definition, 269, 281 standard interpretation, 295 starting function, 242 static typing, 755 stratification, 728 stratified program rules, 660 strict, 411,447 strong fragment, 81 strong inference, 11, 32 strong interpretation, 502 strong replacement, 96, 109, 110 strongly critical, 214, 308 SC, 214 strongly critical components, SC, 305 strongly normalizable, 358 strongly normalizing, 358 strongly positive, 388 structural rule, 11,301,317, 708, 710 structure, 27 adequate, 650 equational, 648 fourvalued, 644 free term, 670 Herbrand, 645 lower threevalued, 644 twovalued, 644 upper threevalued, 644 structured treeordinal, see treeordinal subformula, 704 subformula property, 13, 111,573 subobject classifier, 719 substitution, 5, 27, 59, 116, 341, 567, 648, 728, 734 closed under, 33 empty, 648 variable renaming, 59
substitution Frege system, 591 substitution operator, 232 substitution rule, 591 subsume, 22 subsumption, 22 subtheory, 501 subtraction, 89, 349 subtree ordering, 154, 193 subtype, 693 succedent, 10 successor, 96, 103, 220, 232, 344, 360, 409, 423, 429, 516 successor ordinal, 211,304 superarithmetic theory, 504 superexponentiation, 37, 81, 138, 139 support, set of, 23 supremum, 211 surjection, 445 Suslin quantifier, 384 switching lemma, 618 symmetric sum, 213 Syntactical Main Lemma, 223 system F, 394 Tpredicate, Kleene's, 409 tableaux proof, 704 tactic, 709 tactic tree proof, 766 tactical, 709 tail, 125, 714 tail model, 480, 490 Tait calculus, 1618, 165, 220, 232 Takeuti's conjecture, 398 Tarski's conditions, 560 tautological implication, 4 tautology, 4, 505 Tautology Lemma, 233 tautology rule, 317 term, 26, 31,220, 642, 703 Acalculus, 68 term model, 357, 358 terminal, 739 tertium non datur, see excluded middle, law of theory, 29, 501 theory delimiters, 722 theory of implication, 600 thin, 708 thread, 221 threevalued closure ordinal, 653 TOL model, 530 tolerance, 503, 528530 topos, 421,441,451,452, 457, 461,719
topos theory, 719 transfer, 525 transfinite induction, see induction transfinite recursion, 211,281 transitive, 210 transitivity, 86 translation, 501 tree, 221 tree of knowledge, 722 tree relation, 222 treelike proof, see proof treeordinal, 154, 191,386 finite type theory (OR"{), 386, 387 structured, 154, 198 trichotomy, 86 truth, 28, 501 truth assignment, 3, 702 truth complexity, 219 tc, 224, 297 truth definition, 137, 139, 142, 220 truth provability logic, 487 truth value, 694 contradictory, 643 false, 643 true, 643 undefined, 643 type, 68, 342, 692, 703 of a term, 343, 429 type assumption, 703 type level, 343, 452 type structure, 343 type system, 748 type theory, 726, 767 typed Acalculus, 755 typed propositional formula, 703 typing context, 703 typing judgment, 698, 735 unbounded quantifier, 82 unbounded set, 211 uncountable cardinal, 304 unification, 55, 59, 567, 648 unification algorithm, 6061 Unification Theorem, 60 unifier, 59, 567, 648 most general, 60, 567, 648 uniform, 452 canonically, 452 Uniform Continuity Modulus of, 433 Uniformity Principle (UP), 442, 453 Uniformity Rule (UR), 443 union axiom, 216, 279
union type, 756 unique factorization, 90 unit clause, 24 unit resolution, 24 unit type, 700, 735 universal closure, 32 universe, 27, 394, 398400, 744 universe rules, 744 unpairing, 429 unrestricted quantifiers, 215 unsecured sequences, 230, 277, 287, 290 untyped Acalculus, 755, 759 unwinding, 338 upward persistency, 301 valid, 28, 32, 448, 478, 535, 647, 702 valid element, 701 valid formula, 4 valid inference, 115 variable, 3, 26, 702 free and bound, 31,703, 704, 734 variant, 648 term, 42 Veblen function, 214 Veblen hierarchy, 383 Veblen normal form, 214 Veltman frame, 515 very dependent, 765 very dependent function, 764 very dependent type, 764 very weak fragment, 81 Visser frame, 530 Weak Continuity (WC), 434 Weak Extended Church's Thesis (WECT), 440 weak fragment, 81 weak inference, 11 Weak KSnig's Lemma (WKL), 371,374 Weakening Lemma, 167 weakening rule, 11, 73 weakly wconsistent, 119 weakly compact cardinal, 331 weakly inaccessible, 304 weakly interpretable, 503, 528 weakly introduced, 43 weakly positive, 388 well founded, 221,222 Wf (