DOV GABBAY AND NICOLA OLIVETTI
GOAL-DIRECTED PROOF THEORY
FOREWORD
This book contains the beginnings of our research program into goal-directed deduction. The idea of presenting logics in a goal-directed way was conceived in 1984, when the first author was teaching logic at Imperial College, London. The aim was to formulate the whole of classical logic in a goal-directed, Prolog-like way; see the books [Gabbay, 1981] and [Gabbay, 1998]. This formulation was successful for both classical and intuitionistic logic, and gave rise to an early version of the Restart rule and to N-Prolog, the first extension of Horn clause programming with hypothetical implication (1984–1985). Since that time, several authors and research groups have adopted the goal-directed methodology, either as a favourite deduction system for some specific logic or as a favourite extension of Prolog.

In this book we present goal-directed deduction as a methodology to be applied to the major families of non-classical logics. From the point of view of automated deduction, what makes goal-directedness a desirable presentation is that it drastically reduces the non-determinism in proof search. Moreover, a goal-directed procedure is by definition capable of focussing on the data relevant to the proof of the goal, ignoring the rest. Even if this may seem a minor point in small theorem-proving examples, it is a most important feature if non-classical logics are to be used to specify deductive databases or logic programs. Whenever the set of data comprises hundreds of formulas, most of which are irrelevant to the proof of the current goal, one cannot just randomly select a formula of the data to process at the next step. Here goal-focusing or goal-directed methods become essential.

Furthermore, as we shall see in this book, the proof systems we present are of interest not just in the pursuit of automation, but also for theoretical reasons. Our presentation highlights a procedural interpretation of logical systems.
All deductive systems made of procedural rules share a common nature, and different proof systems can be seen as procedural variants, or perturbations, of the same deduction algorithm. We do not try to give a general definition or pattern of goal-directedness; it would be as artificial as defining what a sequent calculus or a tableau system is in general. Nevertheless, all sequent calculi or tableau methods have a family resemblance which justifies calling a method a 'tableau method' even without a formal definition of this concept. The same holds true for the goal-directed proof methods presented in this book.

This book is only the beginning of our research program. It demonstrates the possibility of developing goal-directed proof procedures for a variety of logical
systems, ranging over intuitionistic, intermediate, modal and substructural logics. Our future research topics are discussed in the last chapter. We hope we have presented enough material in this volume to enable readers to apply this methodology to their own favourite logical system.

The book is directed at researchers and at undergraduate or graduate students with at least an elementary knowledge of classical logic and of some non-classical logics, such as modal logics. It can be used in a course on non-classical logic and automated deduction, complementing standard presentations of non-classical logic and proof theory.

Dov M. Gabbay
Nicola Olivetti
London, February 2000
Acknowledgements
We are grateful to Matteo Baldoni, Marcello D'Agostino, Laura Giordano, Alberto Martelli, Pierangelo Miglioli, Daniele Mundici, Alessandra Russo, Luca Viganò and Sergei Vorobyov for valuable discussions and comments. We are indebted to Agata Ciabattoni and Ulrich Endriss for carefully reading and commenting on the manuscript. Special thanks are due to Bob Meyer and Alasdair Urquhart for valuable explanations and information on relevant logics. Thanks also to Roy Dyckhoff and James Harland for providing us with relevant material for the book. The second author gratefully acknowledges the support and encouragement of all his colleagues of the Logic Programming and Automated-Reasoning Group of the University of Torino. Moreover, he expresses his gratitude to Hans Tompits and the staff of the Knowledge Based Systems Group at Vienna University of Technology for inviting him to hold a one-term course on the subject of this book in May 1999; he thanks the students who took part for fruitful discussions and remarks. Finally, we would like to thank Mrs Jane Spurr, King's College Publications Manager, for her usual efficiency, dedication and excellence in producing the final manuscript.
The second author was partially supported in this research by a six-month fellowship from the Italian Consiglio Nazionale delle Ricerche, Comitato per le Scienze matematiche, in 1997.
CONTENTS

CHAPTER 1  INTRODUCTION
1 INTRODUCTION
2 A SURVEY OF GOAL-DIRECTED METHODS
3 GOAL-DIRECTED SYSTEMS ARE CUT-FREE
4 AN OUTLINE OF THE BOOK
5 NOTATION AND BASIC NOTIONS

CHAPTER 2  INTUITIONISTIC AND CLASSICAL LOGICS
1 ALTERNATIVE PRESENTATIONS OF INTUITIONISTIC LOGIC
2 RULES FOR INTUITIONISTIC IMPLICATION
2.1 Soundness and Completeness
2.2 Interpolation
3 BOUNDED RESOURCE DEDUCTION FOR IMPLICATION
3.1 Bounded Restart Rule for Intuitionistic Logic
3.2 Restart Rule for Classical Logic
3.3 Some Efficiency Considerations
4 CONJUNCTION AND NEGATION
5 DISJUNCTION
6 THE ∀,→-FRAGMENT OF INTUITIONISTIC LOGIC

CHAPTER 3  INTERMEDIATE LOGICS
1 LOGICS OF BOUNDED HEIGHT KRIPKE MODELS
2 DUMMETT–GÖDEL LOGIC LC
2.1 Unlabelled Procedure for the Implicational Fragment of LC
2.2 Soundness and Completeness
3 RELATION WITH AVRON'S HYPERSEQUENTS

CHAPTER 4  MODAL LOGICS OF STRICT IMPLICATION
1 INTRODUCTION
2 PROOF SYSTEMS
3 ADMISSIBILITY OF CUT
4 SOUNDNESS AND COMPLETENESS
5 SIMPLIFICATION FOR SPECIFIC SYSTEMS
5.1 Simplification for K, K4, S4, KT: Databases as Lists
5.2 Simplification for K5, K45, S5: Databases as Clusters
6 AN INTUITIONISTIC VERSION OF K5, K45, S5, KB, KBT
7 EXTENDING THE LANGUAGE
7.1 Conjunction
7.2 Modal Harrop Formulas
7.3 Extension to the Whole Propositional Language
8 A FURTHER CASE STUDY: MODAL LOGIC G
9 EXTENSION TO HORN MODAL LOGICS
10 COMPARISON WITH OTHER WORK

CHAPTER 5  SUBSTRUCTURAL LOGICS
1 INTRODUCTION
2 PROOF SYSTEMS
3 BASIC PROPERTIES
4 ADMISSIBILITY OF CUT
5 SOUNDNESS AND COMPLETENESS
5.1 Soundness with Respect to Fine Semantics
6 EXTENDING THE LANGUAGE TO HARROP FORMULAS
6.1 Routley–Meyer Semantics, Soundness and Completeness
7 A DECISION PROCEDURE FOR IMPLICATIONAL R
8 A FURTHER CASE STUDY: THE SYSTEM RM0
9 RELATION WITH OTHER APPROACHES

CHAPTER 6  CONCLUSIONS AND FURTHER WORK
1 A MORE GENERAL VIEW OF OUR WORK
2 FUTURE WORK

INDEX
CHAPTER 1
INTRODUCTION
1 INTRODUCTION

This book is about goal-directed proof-theoretical formulations of non-classical logics. It evolved as a response to the existence of two camps in the applied logic (computer science/artificial intelligence) community: those who believe that the new non-classical logics are the most important for applications, and that classical logic itself is no longer the main workhorse of applied logic; and those who maintain that classical logic is the only logic worth considering, and that within classical logic the Horn clause fragment is the most important.

The book presents a uniform, Prolog-like formulation of the landscape of classical and non-classical logics, done in such a way that the distinctions and movements from one logic to another seem simple and natural, and within it classical logic becomes just one among many. This should please the non-classical logic camp. It will also please the classical logic camp, since the goal-directed formulation makes it all look like an algorithmic extension of logic programming.

The spectacular rise of non-classical logic and its role in computer science and artificial intelligence was fuelled by the fact that more and more computational 'devices' were needed to help humans in their work and satisfy their needs. To make such a 'device' more effective in an application area, a logical model of the main features of human behaviour in that area was needed. Thus logical analysis of human behaviour became part of applied computer science. Such study and analysis of human activity is not new: philosophers and pure logicians have also been modelling such behaviours and have, in fact, already produced many of the non-classical logics used in computer science. Typical examples are modal and temporal logics, which have been applied extensively in philosophy and linguistics as well as in computer science and artificial intelligence.
The landscape of applications of non-classical logics in computer science and artificial intelligence is wide and varied. Modal and temporal logics have been profitably applied to the verification and specification of concurrent systems [Manna and Pnueli, 1981; Pnueli, 1981]. In the area of artificial intelligence, as well as in distributed systems, the problem of reasoning about knowledge, belief and action has received much attention; modal logics [Turner, 1985; Halpern and Moses, 1990] have been seen to provide the formal language to represent this type of reasoning. Relevance logics have been applied in natural language understanding and
database updating [Martins and Shapiro, 1988]. Lambek's logic [1958] and its extensions are currently used for natural language processing. Another source of interest in non-classical logics, and in particular in the so-called substructural logics (with the prominent case of linear logic [Girard, 1987]), has originated from the functional interpretation provided by the Curry–Howard isomorphism between formulas and types in functional languages (see [Hindley, 1997] for an introduction and references; see also [Wansing, 1990; Gabbay and de Queiroz, 1992]).

In parallel with the theoretical study of the logics mentioned above and their applications, there has been a considerable amount of work on their automation. The area of non-classical theorem proving is growing rapidly, although it is not yet as developed as classical theorem proving. Although there is a wide variety of logics, we can group the existing methodologies for automated deduction into a few categories. Most of the ideas and methods for non-classical deduction have been derived from their classical counterparts. We have analytic methods such as tableaux, systems based on sequent calculi, methods which extend and reformulate classical resolution, translation-based methods, and goal-directed methods.

We can roughly distinguish two paradigms of deduction: human-oriented proof methods versus machine-oriented proof methods. We call a deduction method human-oriented if, in principle, the formal deduction follows closely the way a human would carry it out: we can understand how a deduction proceeds and, more precisely, how each intermediate step is related to the original deductive query. With machine-oriented proof methods this requirement is not mandatory: the original deductive task might be translated and encoded even into another formalism, and the intermediate steps of a computation might have no directly visible relationship with the original problem.
According to this distinction, natural deduction, tableaux and sequent calculi are examples of the human-oriented paradigm, whereas resolution and translation-based methods are better seen as examples of the machine-oriented paradigm. In particular, resolution methods require us to transform the question 'does Q follow from ∆?' into the question 'is ∆∗ ∪ {Q∗} consistent?', where ∆∗ and Q∗ are preprocessed normal forms of ∆ and Q. In stepping from ∆ and Q to ∆∗ and Q∗, one may lose information contained in the original ∆ and Q. Moreover, the normal forms may be natural only from the machine-implementation point of view, and not supported by the human way of reasoning. Machine-oriented proof methods are more promising from the point of view of efficiency than human-oriented ones; after all, efficiency, uniformity and reduction of the search space were the main motivations behind the introduction of resolution.

The basic features of goal-directed methods are that they are human-oriented and that they generalize the logic programming style of deduction. Goal-directed methods can be seen as an attempt to fill the gap between the two paradigms: on the one hand, they maintain the perspicuity of human-oriented proof methods,
on the other hand, they are not too far from an efficient implementation.

There is another reason to be interested in goal-directed proof search. Although we generally speak of deduction, there are several different tasks or problems which can be qualified as deductive. These different tasks might be theoretically reducible to one another, but a method or an algorithm that solves one does not necessarily apply successfully to another. To make this point more concrete, assume we are dealing with a given logical system L (say classical logic, intuitionistic logic, or modal logic S4); we use the symbol ⊢ to denote both theoremhood and the consequence relation in that logic. Compare the following problems:

1. given a formula A, we want to know whether ⊢ A, that is, whether A is a theorem of L;

2. we are given a set Γ containing 10,000 formulas and a formula A, and we want to know whether Γ ⊢ A;

3. we are given a set of formulas Γ and we are asked to generate all atomic propositions which are entailed by Γ;

4. we are given a formula A and a set of formulas Γ such that Γ ⊬ A, and we are asked to find a set of atomic propositions S such that Γ ∪ S ⊢ A.

The list of problems and tasks might continue. We call the first problem 'theorem-proving'. The second problem is close to deductive-database query answering. The third task may occur when we want to revise a knowledge base or a state description as an effect of some new incoming information. The fourth problem is involved in abductive reasoning; in practice, one would impose various constraints on the possible solution sets S.

It is not difficult to see that the second, third and fourth problems are reducible to the first one: an algorithm that determines theoremhood can be used to solve the other problems as well. Suppose we have an efficient theorem prover P. Problem 2 can be reduced to checking the theorem ⋀Γ → A. Thus, we can feed P with this huge formula, run it and get an answer.
However, it might be that the formulas of Γ have a particularly simple format and, most importantly, that only a very small subset of them (say 10 formulas) is relevant to obtaining the proof of A. Even if our theorem prover P has optimal complexity in the size of the data (Γ + A), we would prefer a deduction method which is, in principle, capable of concentrating on the data in Γ which are relevant to the proof of A and ignoring the rest.

The theorem prover can also be used to solve Problem 3: just enumerate all atomic formulas pi, check whether ⋀Γ → pi, and give as output the atomic formulas for which the answer is yes. It is likely that there are better methods to accomplish this task! For instance, in the case that Γ is a set of Horn clauses, one can use a bottom-up evaluation; more generally, one would try to generate this set incrementally by a saturation procedure.

The theorem prover can be used to solve Problem 4 in a non-deterministic way: guess a set S and check by the theorem prover whether ⋀(Γ ∪ S) → A. Again,
no matter how efficient our theorem prover is, it is obvious that there are better methods of performing this task. For instance, one attempts a proof of A from Γ and determines, as the proof proceeds, what should be added to Γ (i.e. a solution S) to make the proof succeed. To perform this task one would prefer a method by which the extraction of such solution sets S from derivations is easy.

All these considerations show that the theorem-proving perspective is not the only possible way of looking at deduction. Another well-known example is proof-search: one might be interested not only in determining whether a formula is a theorem, but also in finding a proof of the formula with certain features (this interest is intrinsic to type-inhabitation problems).

The goal-directed paradigm we follow in this work is particularly well suited to deduction which may involve a great amount of data (what we have qualified as the deductive-database perspective). Moreover, goal-directed deductive procedures can be used to design abductive reasoning procedures [Eshghi, 1989], although this point will not be developed further in the present work.

The goal-directed paradigm we adopt is the same as the one underlying logic programming. The deduction process can be described as follows: we have a structured collection of formulas (called a database) ∆ and a goal formula A, and we want to know whether A follows from ∆ or not, in a specific logic. Let us denote by ∆ ⊢? A the query 'does A follow from ∆?' (in a given logic). The deduction is goal-directed in the sense that the next step in a proof is determined by the form of the current goal: the goal is stepwise decomposed, according to its logical structure, until we reach its atomic constituents. An atomic goal q is then matched with the 'head' of a formula G′ → q in the database (if there is none, we fail), and its 'body' G′ is asked in turn.
This is what happens with the logic programming approach to Horn-clause computation in intuitionistic or classical logic; namely, a Horn clause can be read as a procedure:

∆, a1 ∧ a2 → c ⊢? c

reduces to

∆, a1 ∧ a2 → c ⊢? a1 ∧ a2.

A call to c reduces to a call to a1 and to a2. This procedural way of looking at clauses is equivalent to the declarative way, namely to ⊢ provability (which for Horn clauses coincides for classical and intuitionistic logic). We ask ourselves whether we can extend this backward-reasoning, goal-directed paradigm to classical logic in full and to neighbouring logics. In other words, can we have a logic programming-like proof system presentation for classical, intuitionistic, relevant and other logics? In this book we try to give a positive answer to this question. As we will see, in order to provide a goal-directed presentation of a large family of non-classical logics, we will have to refine this simple model of deduction in two directions. In a few words:
• we may put constraints on the use and 'visibility' of database formulas, so that not necessarily all of them are available to match an atomic constituent;

• we may allow re-asking a goal which occurred previously in the deduction.

In the next section we will show a variety of examples of goal-directed computations. The concept of goal-directed computation we adopt can also be seen as a generalization of the notion of uniform proof introduced in [Miller et al., 1991]. As far as we know, a goal-directed presentation has been given for (fragments of) intuitionistic logic [Gabbay and Reyle, 1984; Miller, 1989; McCarty, 1988a; McCarty, 1988b], higher-order logic [Miller et al., 1991], substructural logics, namely linear logic [Hodas and Miller, 1994; Harland and Pym, 1991] and relevant logic [Bollen, 1991], and modal logic [Giordano et al., 1992; Giordano and Martelli, 1994]. In [Gabbay, 1992; Gabbay and Kriwaczek, 1991], goal-directed procedures for classical and some intermediate logics are presented. The extension of the uniform proof paradigm to classical logic is also discussed in [Harland, 1997] and [Nadathur, 1998].

In most of the literature, goal-directed procedures have been investigated for some specific logics, but always considering them as refinements of pre-existing deduction methods. This is, for instance, the case with the uniform proof paradigm promoted by Miller and others. In the uniform proof framework, goal-directed proofs are proofs in a given sequent system (for a specific logic) which satisfy some additional constraint. It is the analysis of provability within the sequent formulation which allows one to identify what fragment of a logic, if any, admits a goal-directed procedure. In this sense the goal-directed proof procedure is a refinement of the sequent formulation: it gives a discipline on how to find a proof in a given sequent calculus. A uniform proof system is called an 'abstract logic programming language' [Miller et al., 1991].
The essence of a uniform proof system or, as we call it, a goal-directed system, is that the proof search is driven by the goal and that the connectives can be interpreted directly as search instructions. However, we do not necessarily have to rely on a sequent calculus in order to develop a goal-directed system. It may happen for specific systems that a proper formalization of provability by means of a sequent calculus is not known or, if known, is not very natural; still, a goal-directed formulation might easily be obtainable. We think that sequent calculi and goal-directed proof methods are related, but distinct, concepts.

We will not attempt at the outset to provide a general definition of goal-directedness (see the next section for examples); it would be rather artificial, similar to attempting to give a general definition of 'tableau' procedures or of sequent calculi. Instead, we will try to develop a uniform family of calculi for several types of logics. For this purpose, we consider mainly a minimal fragment of the logics (namely the implicational fragment); this will make our presentation uniform, and will allow us to compare logics through their goal-directed procedures.
We develop goal-directed procedures for a variety of logics, stretching from modal to relevant and intermediate logics. By means of the goal-directed formulation, one can prove cut elimination and other properties. From a practical point of view, a goal-directed formulation of a logic may be used to design efficient, Prolog-like deductive procedures (and even abductive procedures) for that logic.

In this work we concentrate mainly on implicational logics. One reason is uniformity of treatment across all the logics; the other is that we regard implication as the basic connective of most logics. The prominence we give to implication is justified by the connection between the consequence relation of a logic (when it is defined) and its implication connective, usually expressed by the deduction theorem. If a logic L contains an implication connective, then it is always possible to define a form of consequence relation ⊢L by letting

A1, . . . , An ⊢L B  ⇔  A1 → (A2 → . . . → (An → B) . . .) is valid in L.

This form of consequence relation may or may not coincide with a known notion of consequence relation for L. For instance, we will define in this way a form of consequence relation for modal logics, where the connective → is read as strict implication. A further motivation to restrict our investigation (at least at a first stage) to implicational logics is the observation that most logics differ in their implication connective, whereas they coincide in the treatment of the other connectives.
2 A SURVEY OF GOAL-DIRECTED METHODS
To explain what we mean by the goal-directed deduction style, we begin by recalling standard propositional Horn deductions. This type of deduction is usually interpreted in terms of classical resolution, but that is not the only possible interpretation.1 The data are represented by a set of propositional Horn clauses, which we write as

a1 ∧ . . . ∧ an → b.

The ai are just propositional variables and n ≥ 0; in case n = 0, the formula reduces to b. This formula is equivalent to ¬a1 ∨ . . . ∨ ¬an ∨ b. Let ∆ be a set of such formulas; we can give a calculus to derive formulas, called 'goals', of the form b1 ∧ . . . ∧ bm. The rules are:

• ∆ ⊢? b succeeds if b ∈ ∆;

• ∆ ⊢? A ∧ B is reduced to ∆ ⊢? A and ∆ ⊢? B;

1. For a survey of the foundations of logic programming, we refer to [Lloyd, 1984] and to [Gallier, 1987].
• ∆ ⊢? q is reduced to ∆ ⊢? a1 ∧ . . . ∧ an, if there is a clause of the form a1 ∧ . . . ∧ an → q in ∆.

The main difference from the traditional logic programming convention is that in the latter conjunction is eliminated and a goal is kept as a sequence of atoms b1, . . . , bm. The computation does not split on conjunctions: all the subgoals bi are kept in parallel, and when some bi succeeds (that is, bi ∈ ∆) it is deleted from the sequence. To obtain a real algorithm we should also specify in which order we scan the database when we search for a clause whose head matches the goal. Let us see an example.

EXAMPLE 1.1. Let ∆ contain the following clauses:
(1) a ∧ b → g, (2) t → g, (3) p ∧ q → t, (4) h → q, (5) c → d, (6) c ∧ f → a, (7) d ∧ a → b, (8) a → p, (9) f ∧ t → h, (10) c, (11) f.

A derivation of g from ∆ can be displayed in the form of a tree, shown in Figure 1.1. The number in front of every non-leaf node indicates the clause of ∆ which is used to reduce the atomic goal in that node.

(1) ∆ ⊢? g
    (6) ∆ ⊢? a
        ∆ ⊢? c
        ∆ ⊢? f
    (7) ∆ ⊢? b
        (5) ∆ ⊢? d
            ∆ ⊢? c
        (6) ∆ ⊢? a
            ∆ ⊢? c
            ∆ ⊢? f

Figure 1.1. Derivation of Example 1.1.

We can make a few observations. First, we do not need to consider the whole database; it might even be infinite, and the derivation would be exactly the same:
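The success and reduction rules above can be sketched as a short recursive procedure. The following Python fragment is our own illustrative encoding (clauses as body/head pairs), not the book's formal system, run on the database of Example 1.1; it deliberately omits any termination mechanism.

```python
# Goal-directed prover for propositional Horn clauses (illustrative sketch).
# A database is a list of clauses (body, head); a fact has an empty body.
# An atomic goal q is matched against the head of some clause and the body
# is asked in turn; a conjunctive goal (a tuple) splits into its conjuncts.

def prove(database, goal):
    if isinstance(goal, tuple):                 # conjunction b1 and ... and bm
        return all(prove(database, g) for g in goal)
    for body, head in database:                 # atomic goal: try each clause
        if head == goal:
            if body == ():                      # a fact: immediate success
                return True
            if prove(database, body):           # reduction rule
                return True
    return False

# The database of Example 1.1 (clauses (1)-(11)).
delta = [
    (("a", "b"), "g"),   # (1)
    (("t",), "g"),       # (2)
    (("p", "q"), "t"),   # (3)
    (("h",), "q"),       # (4)
    (("c",), "d"),       # (5)
    (("c", "f"), "a"),   # (6)
    (("d", "a"), "b"),   # (7)
    (("a",), "p"),       # (8)
    (("f", "t"), "h"),   # (9)
    ((), "c"),           # (10)
    ((), "f"),           # (11)
]

print(prove(delta, "g"))   # True, via the derivation of Figure 1.1
```

Note that the query for g happens to succeed through clause (1) before the looping path through clause (2) is ever tried; the next paragraphs discuss why termination cannot be taken for granted in general.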
irrelevant clauses, that is, those whose 'head' does not match the current goal, are ignored. The derivation is driven by the goal, in the sense that each step in the proof simply replaces the current goal with the next one. Notice also that in this specific case there is no other way to prove the goal, and the sequence of steps is entirely determined.

Two weak points of the method can also be noticed. Suppose that when asking for g we use the second formula: then we continue asking for t, then for h, and then we are led to ask for t again. We are in a loop. An even simpler situation is the following: p → p ⊢? p. We can keep on asking for p without realizing that we are in a loop. To deal with this problem we should add a mechanism which ensures termination.

Another problem, which has a bearing on the efficiency of the procedure, is that a derivation may contain redundant subtrees. This occurs when the same goal is asked several times; in the previous example it happens with the subgoal a. In this case, the global derivation contains multiple subderivations of the same goal. It would be better to be able to remember whether a goal has already been asked (and succeeded), in order to avoid duplicating its derivation. Whereas the problem of termination is crucial in the evaluation of the method (if we are interested in eventually getting an answer), the problem of redundancy will not be considered in this work. Avoiding the type of redundancy we have described would, however, have a dramatic effect on the efficiency of the procedure, for redundant derivations may grow exponentially with the size of the data. Although the goal-directed procedure does not necessarily produce the shortest proofs, and does not always terminate, it still has the advantage that proofs, when they exist, are easily found.
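One simple termination mechanism for this propositional setting is to record the atomic goals already pending on the current branch and fail when one of them is re-asked: since the database is fixed, a provable goal always has a derivation without such a repetition. The following self-contained Python sketch (our own encoding, not the book's mechanism) handles the p → p example from the text.

```python
# Loop-checked goal-directed Horn prover (illustrative sketch).
# 'pending' records the atomic goals already under reduction on the current
# branch; re-asking one of them could only loop, so that attempt fails.

def prove(database, goal, pending=frozenset()):
    if isinstance(goal, tuple):                  # conjunctive goal
        return all(prove(database, g, pending) for g in goal)
    if goal in pending:                          # loop detected: prune
        return False
    for body, head in database:
        if head == goal:
            if body == () or prove(database, body, pending | {goal}):
                return True
    return False

print(prove([(("p",), "p")], "p"))   # False: p -> p does not prove p
```

With this check the query p → p ⊢? p terminates with failure instead of running forever, and circular dependencies such as the t/h loop of Example 1.1 are likewise cut off.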
Making this notion of 'easiness' precise in terms of the proof-search space is admittedly rather difficult, and we will not try to do it here.2

Let us see whether we can extend this goal-directed approach to a broader fragment. We still consider databases of the same form as before, but we now also allow clauses to be asked as goals. How do we evaluate the following goal?

∆ ⊢? a1 ∧ . . . ∧ an → b.

This can be read as a hypothetical query. We can think of using the deduction theorem as a deduction rule:

Γ, A ⊢ B
─────────
Γ ⊢ A → B

The above query is hence reduced to

∆ ∪ {a1, . . . , an} ⊢? b.

2. The property of uniformity (in the sense of Miller's uniform proof paradigm) might be the basis of a possible answer: goal-directed (uniform) proofs are, or correspond to, a very restricted kind of proof in a given sequent calculus. When we search for a uniform proof we therefore explore a restricted search space of possible derivations in the sequent calculus.
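The deduction theorem used as a rule can be grafted onto a goal-directed Horn prover with little effort. The sketch below is our own illustrative encoding (the `"and"`/`"imp"` goal tags and the flattening of hypotheses into facts are our choices, not the book's notation); clause bodies here remain atomic, so it covers hypothetical goals but not yet clauses embedded in clause bodies.

```python
# Goal-directed Horn prover extended with hypothetical goals (illustrative).
# Goals are: an atom (str), a conjunction ("and", G1, G2), or an implication
# ("imp", (a1, ..., an), b).  By the deduction theorem used as a rule, the
# query Delta |-? a1 & ... & an -> b reduces to Delta + {a1, ..., an} |-? b.

def prove(database, goal):
    if isinstance(goal, tuple) and goal[0] == "and":
        return prove(database, goal[1]) and prove(database, goal[2])
    if isinstance(goal, tuple) and goal[0] == "imp":
        body, head = goal[1], goal[2]
        # add the hypotheses as facts, then ask for the conclusion
        return prove(database + [((), a) for a in body], head)
    for body, head in database:                  # atomic goal
        if head == goal and all(prove(database, a) for a in body):
            return True
    return False

print(prove([], ("imp", ("a",), "a")))           # True: |- a -> a
print(prove([(("b",), "c")], ("imp", ("b",), "c")))   # True: b -> c |- b -> c
```

As the text notes, this straightforward extension captures intuitionistic, not classical, provability of the embedded implication (no loop checking is included here, for brevity).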
This step can also be justified in the traditional refutational interpretation of deduction: namely, to show the inconsistency of ∆ ∪ {¬(a1 ∧ . . . ∧ an → b)} we show that ∆ ∪ {a1, . . . , an, ¬b} is inconsistent.

If we are capable of treating clauses as goals, why not also allow clauses as bodies of other clauses? We are therefore led to consider a hypothetical extension of the Horn deductive mechanism. This means that we can handle arbitrary hypothetical goals through the rule: from ∆ ⊢? A → B, step to ∆ ∪ {A} ⊢? B. This straightforward extension of Horn logic exactly captures intuitionistic provability for this fragment, but not classical provability. In other words, the embedded implication behaves as intuitionistic implication, not as classical implication. This kind of implicational extension based on intuitionistic logic is part of many of the extensions of logic programming recalled above [Gabbay and Reyle, 1984; Miller, 1989; McCarty, 1988a; McCarty, 1988b].

But we can do more. Suppose we label the data to keep track of the way they are used in derivations; one may then want to exert some control on the way the data are used. Now the database is a labelled set of formulas xi : Ai, and it provides some additional information on the labels. To give an example, a label may represent a position (or a state), and the database itself specifies which positions are accessible from one another. This information is expressed as a relational theory about a predicate R(x, y), which can be interpreted as 'x sees y', or 'y is accessible from x'. This corresponds to a modal reading of the data, and the implication behaves as strict implication in modal logic. Goals are proved relative to a specific position, so we write a query as ∆ ⊢? x : G, where x is a position. To keep things simple, we only consider Horn clauses here, and we put the following constraints:

• (success rule) ∆ ⊢? x : q succeeds if x : q ∈ ∆;

• (reduction rule) from ∆ ⊢? x : q step to ∆ ⊢? x : G if there is y : G → q ∈ ∆ such that R(y, x).
A similar modal reading of clauses is the foundation of the modal-logic programming language elaborated by Giordano, Martelli and Rossi [1992; 1994]. Modal operators are used to govern visibility rules within logic programs. In this way, it is possible to introduce structuring concepts, such as modules and blocks in logic programs, based on logic.
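The labelled success and reduction rules above can be prototyped directly. The following sketch (representation and all names ours; a crude depth bound stands in for proper loop control) checks goals at positions against an explicitly given accessibility relation, closed under symmetry, using the database of Example 1.2 below:

```python
# A sketch of the labelled reduction rule for the strict-implication reading:
# clauses live at positions, and a clause at y may be used for a goal at x
# only when R(y, x) holds.

def symmetric_closure(r):
    return r | {(b, a) for (a, b) in r}

def solve(db, R, x, goal, depth=20):
    """db: set of (position, clause); a clause is an atom 'q' or a pair
    (body_atom, head_atom).  R: accessibility relation as a set of pairs.
    goal: an atom asked at position x."""
    if depth == 0:
        return False                  # crude bound: the naive search may loop
    if (x, goal) in db:               # success rule: x : q is in the database
        return True
    return any(solve(db, R, x, c[0], depth - 1)       # reduction rule
               for (y, c) in db
               if isinstance(c, tuple) and c[1] == goal and (y, x) in R)

# The database of Example 1.2: R(x, y), R(y, z), R symmetric.
db = {('x', ('b', 'c')), ('x', ('d', 'c')), ('y', 'a'),
      ('z', ('a', 'b')), ('z', 'd')}
R = symmetric_closure({('x', 'y'), ('y', 'z')})
print(solve(db, R, 'y', 'c'))    # True
print(solve(db, R, 'x', 'c'))    # False
print(solve(db, R, 'z', 'c'))    # False
```

Note that the body of a reduction step is asked at the same position as the original goal, as the reduction rule requires.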
GOAL-DIRECTED PROOF THEORY
EXAMPLE 1.2. Consider the following database ∆ with the data

x : b → c, x : d → c, y : a, z : a → b, z : d.

Moreover, we know that R(x, y), R(y, z) and that R is symmetric. Suppose we want to check whether ∆ proves c at position y, that is, ∆ ⊢? y : c. This configuration can be displayed as in Figure 1.2.

x : b → c, d → c        y : a ⊢? c        z : a → b, d

Figure 1.2. Derivation of Example 1.2.

The query succeeds:

∆ ⊢? y : c,
∆ ⊢? y : b, since x : b → c ∈ ∆ and R(x, y),
∆ ⊢? y : a, since z : a → b ∈ ∆ and R(z, y), by symmetry from R(y, z).

The last query immediately succeeds, as y : a ∈ ∆. Notice however that neither ∆ ⊢? x : c nor ∆ ⊢? z : c succeeds. This example may seem arbitrary, but the restrictions on success and reduction have a logical meaning. If we interpret → as modal strict implication ⇒, that is, (A ⇒ B) =def □(A →m B) (where →m is material implication), the above configuration is (essentially) what is built in a proof of the formula

[(b ⇒ c) ∧ (d ⇒ c) ∧ (((a ⇒ b) ∧ d) ⇒ e ⇒ c)] ⇒ (a ⇒ c).³

This formula happens to be a theorem of the modal logic KB. Another interpretation of the labels is that they represent resources, and the deduction process imposes some constraints on their usage. Examples of constraints on usage are:

³ The configuration above is simplified, because we do not want to handle implicational goals in this example. In the actual configuration generated in the proof of this formula the world x also contains (((a ⇒ b) ∧ d) ⇒ e) ⇒ c.
• we may require that we use all the data (or a specified subset of the data);
• we may require that we use the data no more than once;
• we may require that we use the data in a given order.

These constraints correspond to well-known logics, namely the so-called substructural logics [Schroeder-Heister and Došen, 1993; Gabbay, 1996; Anderson and Belnap, 1975]. In this case, a query will have the form Γ ⊢? α : G, where α is an ordered set of atomic labels representing resources. Confining ourselves to the Horn case, we can put, as an example, the following constraints:

• (success rule) ∆ ⊢? α : q succeeds if α = {x} and x : q ∈ ∆;
• (reduction rule 1) from ∆ ⊢? α : q step to ∆ ⊢? α − {y} : G if there is y : G → q ∈ ∆ and y ∈ α;
• (and rule 1) from ∆ ⊢? α : A ∧ B step to ∆ ⊢? α1 : A and ∆ ⊢? α2 : B, provided α1 ∩ α2 = ∅ and α1 ∪ α2 = α.

Another example of constraints is the following; we denote by max(α) the maximal label in α and we define:

• (reduction rule 2) from ∆ ⊢? α : q step to ∆ ⊢? α′ : G if there is y : G → q ∈ ∆ such that y ∈ α and y ≤ max(α′) and α′ ∪ {y} = α;
• (and rule 2) from ∆ ⊢? α : A ∧ B step to ∆ ⊢? α1 : A and ∆ ⊢? α2 : B, provided max(α1) ≤ max(α2) and α1 ∪ α2 = α.

EXAMPLE 1.3. Let us call P1 the proof system with restrictions 1 and P2 the proof system with restrictions 2. Let ∆1 be the following database:

x1 : d, x2 : a ∧ b → c, x3 : d → a, x4 : b,

with x1 < x2 < x3 < x4. Figure 1.3 shows a successful derivation of ∆1 ⊢? {x1, x2, x3, x4} : c according to procedure P1. We omit the database since it is fixed. This query fails under procedure P2: step (∗) violates the constraint in reduction rule 2, as α′ = {x1} and y = x3 > x1. On the other hand, let ∆2 be the following database:
⊢? {x1, x2, x3, x4} : c
⊢? {x1, x3, x4} : a ∧ b
⊢? {x1, x3} : a        ⊢? {x4} : b
(∗) ⊢? {x1} : d

Figure 1.3. Derivation of ∆1 ⊢? {x1, x2, x3, x4} : c (Example 1.3).

x1 : a ∧ b → c, x2 : d → a, x3 : d → b, x4 : d,

with x1 < x2 < x3 < x4. Figure 1.4 shows a successful derivation of ∆2 ⊢? {x1, x2, x3, x4} : c according to procedure P2. We omit the database since it is fixed. This query fails under procedure P1: the steps (∗) violate the constraint in and rule 1, as α1 = {x2, x4} and α2 = {x3, x4}, so that α1 ∩ α2 ≠ ∅.

⊢? {x1, x2, x3, x4} : c
⊢? {x2, x3, x4} : a ∧ b
(∗) ⊢? {x2, x4} : a        (∗) ⊢? {x3, x4} : b
⊢? {x4} : d        ⊢? {x4} : d

Figure 1.4. Derivation of ∆2 ⊢? {x1, x2, x3, x4} : c (Example 1.3).

As in the case of modal logic, the restrictions we have put on the rules may seem arbitrary, but they correspond to well-known logics. If we interpret → as the linear implication ⊸ and ∧ as the intensional conjunction ⊗ of linear logic, procedure P1 is complete for the (Horn) ⊸, ⊗-fragment of linear logic, and the success of the former query shows the validity in linear logic of the formula

[d ⊗ (a ⊗ b ⊸ c) ⊗ (d ⊸ a) ⊗ b] ⊸ c.
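Procedure P1 lends itself to a compact prototype. In the sketch below (representation and all names ours) the label set is carried as the available resources, reduction consumes the clause’s label, and the and-rule enumerates the disjoint splittings of the remaining resources. Run on the two databases of Example 1.3, it confirms that ∆1 succeeds and ∆2 fails under P1:

```python
# A sketch of procedure P1 (linear-logic style resource accounting).
from itertools import combinations

def solve_p1(db, alpha, goal):
    """db: dict label -> formula; a formula is an atom 'q', an implication
    ('->', G, 'q'), or a conjunction ('and', A, B).  alpha: frozenset of
    labels still available (Horn fragment: goals are atoms or conjunctions)."""
    if isinstance(goal, str):                            # atomic goal
        if len(alpha) == 1 and db[next(iter(alpha))] == goal:
            return True                                  # success rule
        return any(solve_p1(db, alpha - {y}, db[y][1])   # reduction rule 1
                   for y in alpha
                   if isinstance(db[y], tuple) and db[y][0] == '->'
                   and db[y][2] == goal)
    _, a, b = goal                                       # ('and', A, B)
    labels = sorted(alpha)
    return any(solve_p1(db, frozenset(s), a) and         # and rule 1:
               solve_p1(db, alpha - frozenset(s), b)     # disjoint split
               for k in range(len(labels) + 1)
               for s in combinations(labels, k))

delta1 = {'x1': 'd', 'x2': ('->', ('and', 'a', 'b'), 'c'),
          'x3': ('->', 'd', 'a'), 'x4': 'b'}
delta2 = {'x1': ('->', ('and', 'a', 'b'), 'c'), 'x2': ('->', 'd', 'a'),
          'x3': ('->', 'd', 'b'), 'x4': 'd'}
print(solve_p1(delta1, frozenset(delta1), 'c'))   # True
print(solve_p1(delta2, frozenset(delta2), 'c'))   # False: x4 needed twice
```

Termination is immediate here, since every reduction step strictly shrinks the resource set.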
In a similar way, if we interpret → as relevant implication and ∧ as the relevant conjunction ◦, procedure P2 is complete for a (Horn) →, ◦-fragment of the relevant logic T (Ticket Entailment), and the success of the latter query shows the validity in T of the formula

[(a ◦ b → c) ◦ (d → a) ◦ (d → b) ◦ d] → c.

The methodology of controlling the deduction process by labelling data and then putting constraints on the propagation and composition of labels is powerful; it is extensively studied in [Gabbay, 1996].

We started from Horn deduction, added intuitionistic implication, and then turned to the strict implication of modal logic and to substructural logics. We still do not know what the place of classical logic is in this framework. If we confine ourselves to Horn clauses, the basic deduction procedure we have described is complete for both intuitionistic and classical provability. If we allow nested implications, this is no longer true. For instance, let us consider Peirce’s axiom ((a → b) → a) → a, and let us try a derivation:

(a → b) → a ⊢? a reduces to
(a → b) → a ⊢? a → b, which reduces to
(a → b) → a, a ⊢? b.

In intuitionistic logic we fail at this point, because b does not unify with the head of any formula/clause. We know that in classical logic this formula must succeed, since it is a tautology. What can we do? The answer is simple: we continue the computation by re-asking the original goal:

(a → b) → a, a ⊢? a.

We immediately succeed. To obtain classical logic, we just need a rule which allows us to replace the current (atomic) goal by a previous one, that is, to restart the computation from that previous goal. For this purpose, the structure of queries must be enriched to record the history of past goals: a query will have the form Γ ⊢? G, H, where H is the sequence of past goals. In the case of classical logic, it is sufficient to record atomic goals, and we record them when we perform a reduction step.
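The restart mechanism just described can be sketched in code (representation and all names ours; a path check is added to stop the loops that naive restarting would otherwise introduce):

```python
# A sketch of the goal-directed procedure with Restart: atomic goals are
# recorded in a history at each reduction step, and a failed atomic goal may
# be re-asked ("restart") from any recorded goal.

def head_body(f):
    """Decompose A1 -> ... -> An -> q (right-nested ('->',A,B) tuples)
    into (q, [A1, ..., An])."""
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return f, body

def solve(db, goal, hist=frozenset(), restart=True, seen=frozenset()):
    if isinstance(goal, tuple):                   # implication rule
        return solve(db | {goal[1]}, goal[2], hist, restart, seen)
    state = (db, goal, hist)
    if state in seen:                             # loop check on this path
        return False
    seen = seen | {state}
    if goal in db:                                # success rule
        return True
    for c in db:                                  # reduction rule, record goal
        q, body = head_body(c)
        if q == goal and body:
            if all(solve(db, g, hist | {goal}, restart, seen) for g in body):
                return True
    # restart rule (classical logic only): re-ask a recorded atomic goal
    return restart and any(solve(db, h, hist, restart, seen) for h in hist)

peirce = ('->', ('->', ('->', 'a', 'b'), 'a'), 'a')
print(solve(frozenset(), peirce))                 # True: classical logic
print(solve(frozenset(), peirce, restart=False))  # False: intuitionistic logic
```

With restart disabled the same procedure is the intuitionistic one, so Peirce’s axiom fails, as in the derivation above.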
With this book-keeping the previous derivation becomes:

(1) (a → b) → a ⊢? a, ∅
(2) (a → b) → a ⊢? a → b, {a}
(3) (a → b) → a, a ⊢? b, {a}
(4) (a → b) → a, a ⊢? a, {a} restarting from a.

In the case of classical logic this simple restart rule can be understood in terms of standard sequent calculi. It corresponds to allowing both weakening and contraction rules on the right. A sequent derivation corresponding to the above one is as follows:

(4) a ⊢ a
(3) a ⊢ a, b
(2) ⊢ a → b, a        a ⊢ a
(1′) (a → b) → a ⊢ a, a
(1) (a → b) → a ⊢ a.

It can be seen that both weakening (to get (3) from (4)) and contraction on the right (to get (1) from (1′)) are required. In other words, the formula we restart from is connected by a disjunction to the current goal. The restart rule in the above form was first proposed by Gabbay in his lecture notes in 1984, and the theoretical results were published in [Gabbay, 1985].⁴ A similar idea to restart has been exploited by Loveland [1991; 1992] in order to extend conventional logic programming to non-Horn databases; in Loveland’s proof procedure the restart rule is a way of implementing reasoning by case analysis. In the case of classical logic it has been simple to give a translation of the restart rule as a combination of standard sequent rules. In other cases, the restart rules take into account not only the previous goals, but also the relative databases (or the positions) from which they were asked. In these cases, there may be no direct counterpart of the restart rules in terms of sequent rules. This, for instance, is the case of Gödel–Dummett’s logic LC presented in Chapter 3.

3 GOAL-DIRECTED SYSTEMS ARE CUT-FREE

A significant feature of goal-directed procedures is that they are analytic and cut-free by definition. One can often prove a cut-admissibility property of computations and establish further properties such as interpolation. The general form of cut is the following: if Γ[A] ⊢? B and ∆ ⊢? A both succeed in the goal-directed procedure, then Γ[A/∆] ⊢? B also succeeds. The notion of cut therefore depends on the notion of substitution.
For each logic, the specific property of cut is determined by the conditions on the substitution operation Γ[A/∆], which might be different for each logic, even if the structure of databases is the same.⁵ We will see examples of substitution and of the cut rule in each chapter.

⁴ The lecture notes have evolved into the book [Gabbay, 1998].
⁵ For a discussion of cut and structural consequence relations we refer to [Gabbay, 1993] and [Avron, 1991b].

The reason why the cut-elimination process works well with the goal-directed style of deduction is that the deduction rules strongly constrain the form of derivations: there is no other way to prove a goal than decomposing it until we reach its atomic constituents, which are then matched against database formulas. Moreover, unlike sequent systems, goal-directed procedures have no separate structural rules; these rules are incorporated in the other computation rules. The cut-admissibility property turns out to be the essential step needed to prove the completeness of the procedure. Namely, in most cases, the specific cut-admissibility property for each logic is equivalent to the completeness of the procedure with respect to the canonical model for that logic, as determined by the deduction procedure itself. This relation between cut-elimination and completeness was pointed out by Miller in [1992]. On the other hand, we can reverse the relation between semantics and proof procedure. Given a proof procedure P, we can always define the relation ⊢P by stipulating

∆ ⊢P A ⇔ ∆ ⊢? A succeeds in P.

It may be that the goal-directed procedure P satisfies some form of cut-admissibility and other properties (such as some form of identity and monotony). In this case the relation ⊢P will turn out to be a consequence relation according to Scott’s definition [Scott, 1974]. Instead of asking whether ⊢P is complete with respect to an already-known semantics, we may ask whether there is a closely-related ‘semantic counterpart’ for the consequence relation ⊢P defined as above. We will see an example in Chapter 4, where we use some particular deduction procedures for which cut is admissible to define some intuitionistic modal logics.

4 AN OUTLINE OF THE BOOK

• In Chapter 2 we define goal-directed procedures for intuitionistic and classical logics. We start with the implicational fragment. Then we study how to optimize the procedure by avoiding re-use of data; this is also needed to make it terminating. We then expand the language by allowing other connectives, and finally we treat the implication–universal quantifier fragment of intuitionistic logic.

• In Chapter 3 we treat some intermediate logics. We first consider the logics of Kripke models with bounded height. Then we treat one of the most significant intermediate logics: Gödel–Dummett’s logic LC. This logic is complete with respect to linear Kripke models of intuitionistic logic and has at the same time a very natural many-valued semantics.
• In Chapter 4 we define goal-directed procedures for strict implication as determined by several modal logics. By strict implication we mean the connective ⇒ defined by A ⇒ B = □(A → B), where the modality is understood according to each specific modal system. Our proof systems cover uniformly the strict implication of K plus any subset of the axioms {T, 4, 5, B}. We also investigate some intuitionistic variants and the Gödel logic of provability G. We will see that one can have a goal-directed procedure for a natural class of modal logics called Horn-modal logics. Moreover, we show how to extend our procedures to a larger fragment corresponding to the so-called ‘Harrop formulas’, and finally to whole propositional modal logics by a transformation into a strict-implication normal form.

• In Chapter 5 we define goal-directed procedures for the most important implicational substructural logics, namely R, linear logic, and other relevant logics such as ticket entailment and E. For logics without contraction, these proof systems provide a decision procedure, whereas for those logics which allow contraction they do not. We show how to obtain a decision procedure for implicational R by adding a loop-checking mechanism. As in the case of modal logic, we show how to extend our procedures to a fragment larger than the implicational one, corresponding to ‘Harrop formulas’.

• In Chapter 6 we highlight possible directions of further research, listing a number of open problems which we are going to deal with in future work.
5 NOTATION AND BASIC NOTIONS
In this section we introduce some of the basic notation and notions that will be used throughout the book.

Formulas

By a propositional language L we denote the set of propositional formulas built from a denumerable set Var of propositional variables by applying the propositional connectives ¬, ∧, ∨, →. Unless stated otherwise, we denote propositional variables (also called atoms) by lower case letters, and arbitrary formulas by upper case letters. We assign a complexity cp(A) to each formula A (as usual):

cp(q) = 0 if q is an atom,
cp(¬A) = 1 + cp(A),
cp(A ∗ B) = cp(A) + cp(B) + 1, where ∗ ∈ {∧, ∨, →}.

(Formula substitution) We define the notion of substitution of an atom q by a formula B within a formula A. This operation is denoted by A[q/B].
p[q/B] = p, if p ≠ q; p[q/B] = B, if p = q;
(¬A)[q/B] = ¬(A[q/B]);
(A ∗ C)[q/B] = A[q/B] ∗ C[q/B], where ∗ ∈ {∧, ∨, →}.

Implicational formulas

In much of the work we will be concerned with implicational formulas. These formulas are generated from a set of atoms by the single connective →. We adopt some specific notation for them. We sometimes distinguish the head and the body of an implicational formula.⁶ The head of a formula A is its rightmost nested atom, whereas the body is the list of the antecedents of its head. Given a formula A, we define Head(A) and Body(A) as follows:

Head(q) = q, if q is an atom; Head(A → B) = Head(B).
Body(q) = ( ), if q is an atom; Body(A → B) = (A) ∗ Body(B),

where (A) ∗ Body(B) denotes the list beginning with A followed by Body(B). Dealing with implicational formulas, we assume that implication associates to the right, i.e. we write A1 → A2 → . . . → An−1 → An instead of A1 → (A2 → . . . → (An−1 → An ) . . .). It turns out that every formula A can be written as A1 → A2 → . . . → An → q, where we obviously have Head(A) = q
and Body(A) = (A1 , . . . , An ).
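These notational operations are easily prototyped (representation and all names ours):

```python
# A sketch of the basic operations on formulas: complexity, substitution of
# an atom by a formula, and Head/Body of implicational formulas.
# Atoms are strings; compound formulas are ('~', A) or (op, A, B).

def cp(f):
    """Complexity: 0 for atoms, 1 + sum of the parts otherwise."""
    if isinstance(f, str):
        return 0
    if f[0] == '~':
        return 1 + cp(f[1])
    return 1 + cp(f[1]) + cp(f[2])

def subst(f, q, b):
    """f[q/b]: replace the atom q by the formula b inside f."""
    if isinstance(f, str):
        return b if f == q else f
    if f[0] == '~':
        return ('~', subst(f[1], q, b))
    return (f[0], subst(f[1], q, b), subst(f[2], q, b))

def head(f):
    """Rightmost nested atom of an implicational formula."""
    return f if isinstance(f, str) else head(f[2])

def body(f):
    """List of antecedents of the head."""
    return [] if isinstance(f, str) else [f[1]] + body(f[2])

a = ('->', 'a1', ('->', 'a2', 'q'))            # a1 -> a2 -> q (right-nested)
print(head(a), body(a))                        # q ['a1', 'a2']
print(cp(a))                                   # 2
print(subst(a, 'q', ('->', 'b', 'q')))         # a1 -> a2 -> b -> q
```

Right association of → corresponds directly to right-nesting of the tuples.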
Harrop and Horn formulas

Two special classes of formulas will be of interest in this work, namely Harrop and Horn formulas. To introduce the former, let us define the two types of formulas DH and GH by mutual induction as follows:

DH := q | GH → DH | DH ∧ DH
GH := q | GH ∧ GH | GH ∨ GH | DH → GH.

⁶ This terminology is reminiscent of logic programming [Lloyd, 1984].
A formula is Harrop if it is defined according to the DH-clauses above.

REMARK 1.4. Harrop formulas are essentially propositional formulas that do not contain positive occurrences of disjunction. When considered, DH-formulas will be allowed as constituents of the database, whereas GH-formulas will be allowed to be asked as goals.

To introduce Horn formulas, let us similarly define HO and GO formulas:

HO := q | GO → HO | HO ∧ HO
GO := q | GO ∧ GO | GO ∨ GO.

A formula is Horn if it is defined according to the HO-clauses above.

REMARK 1.5. Horn formulas are hence Harrop formulas without nested implications.

In any logical system where the following equivalences hold (for instance in classical and intuitionistic logic):

A → (B → C) ≡ (A ∧ B) → C        (1.1)
(A ∨ B) → C ≡ (A → C) ∧ (B → C)        (1.2)
A → (B ∧ C) ≡ (A → B) ∧ (A → C)        (1.3)
we have that any Horn formula can be rewritten as a conjunction of implicational formulas of the form p1 → p2 → . . . → pn → q, where all the pi and q are atoms. Similarly, in any logical system where (1.1) holds, any Harrop formula is equivalent to a conjunction of DH-formulas defined only by DH := q | GH → DH.
Multisets

A (finite) multiset is a function α from a (finite) set S to N, the set of natural numbers. We denote multisets by Greek letters α, β, . . .. We define the following operations:

(Union) α ⊔ β = γ iff ∀x ∈ S, γ(x) = α(x) + β(x).
(Difference) α − β = γ iff ∀x ∈ S, γ(x) = α(x) − β(x), where ‘−’ is (truncated) subtraction on natural numbers.

The support of a multiset α, denoted by ᾱ, is the set of x ∈ S such that α(x) > 0. In order to display the elements of a multiset α, we use the notation α = [x, x, y, z, z, z], or equivalently [x², y, z³], which means that α(x) = 2, α(y) = 1, α(z) = 3, and ᾱ = {x, y, z}.

(Weak containment) We define α ⊆ β to hold iff ∀x ∈ S, α(x) ≤ β(x).
(Strong containment) We define α ⊆| β to hold iff α ⊆ β and ᾱ = β̄.

We use the notation α ⊂ β and α ⊂| β for the corresponding strict versions of the relations defined above.
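The multiset operations above match Python’s Counter almost exactly; a sketch (names ours):

```python
# A sketch of the multiset operations, with Counter standing in for
# functions S -> N (missing keys count as 0).
from collections import Counter

def union(a, b):             # (a ⊔ b)(x) = a(x) + b(x)
    return a + b

def difference(a, b):        # truncated subtraction on natural numbers:
    return a - b             # Counter subtraction drops non-positive counts

def support(a):              # the support: elements with positive count
    return {x for x, n in a.items() if n > 0}

def weak_contained(a, b):    # a ⊆ b  iff  a(x) <= b(x) for all x
    return all(n <= b[x] for x, n in a.items())

def strong_contained(a, b):  # a ⊆| b  iff  a ⊆ b and same support
    return weak_contained(a, b) and support(a) == support(b)

alpha = Counter({'x': 2, 'y': 1, 'z': 3})      # the multiset [x, x, y, z, z, z]
beta = Counter({'x': 1, 'z': 1})               # the multiset [x, z]
print(support(alpha))                          # {'x', 'y', 'z'}
print(weak_contained(beta, alpha))             # True
print(strong_contained(beta, alpha))           # False: y is not in beta's support
```

Counter’s built-in `+` and `-` implement exactly the union and the truncated difference defined above.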
CHAPTER 2
INTUITIONISTIC AND CLASSICAL LOGICS
In this chapter we present a goal-directed proof system for intuitionistic logic. Intuitionistic logic has always been considered the major rival of classical logic. It was introduced by Heyting to formalize constructive mathematics and its intuitionistic foundation. Intuitionistic proof theory has been developed side-by-side with classical proof theory, starting from Gentzen’s work. Intuitionistic logic allows many interpretations which may not apply to classical logic in a natural way. One of the most important is the type-theoretic interpretation provided by the Curry–Howard isomorphism. The theorems of (propositional) intuitionistic logic can be interpreted as types of terms in a functional language (typed λ-calculus, combinatory logic); in the other direction, the terms codify natural-deduction proofs of the theorems. Apart from the type-theoretic interpretation, intuitionistic logic has found applications in a number of areas in computer science, from artificial intelligence to hardware verification. We cannot attempt to give a more detailed description of the background motivations of intuitionistic logic. We refer to Van Dalen’s chapter [van Dalen, 1986] for a quick but complete survey, and to some fundamental studies such as [Dummett, 1977; Heyting, 1956; Troelstra, 1969; Gabbay, 1981; Fitting, 1983].

From the perspective of goal-directed proof systems, the implicational fragment of intuitionistic logic is the natural starting point. We present it first and then refine and extend it stepwise. Each new step will follow naturally from the previous steps. We further modify it and obtain classical logic.

1 ALTERNATIVE PRESENTATIONS OF INTUITIONISTIC LOGIC
There are many ways of presenting intuitionistic logic. We begin by giving a Hilbert-style axiomatisation of the propositional calculus, using the following set of axioms, which we denote by I:

(1) Implication group:
1. A → A,
2. (A → B) → (C → A) → C → B,
3. (A → B) → (B → C) → A → C,
4. (A → B → C) → B → A → C,
5. (A → A → B) → A → B,
6. (A → B → C) → (A → B) → A → C,
7. A → B → A.

(2) Conjunction group:
1. A → B → (A ∧ B),
2. A ∧ B → A,
3. A ∧ B → B.

(3) Disjunction group:
1. (A → C) → (B → C) → (A ∨ B → C),
2. A → A ∨ B,
3. A → B ∨ A.

(4) Falsity:
1. ⊥ → A.

(5) Negation group:
1. (A → ¬B) → B → ¬A,
2. ¬A → A → B.

In addition, it contains the Modus Ponens rule: from ⊢ A and ⊢ A → B infer ⊢ B.

This axiom system is separated, that is to say, any theorem containing → and a set of connectives S ⊆ {∧, ∨, ¬, ⊥} can be proved by using the implicational axioms together with the axiom groups containing just the connectives in S. Furthermore, the implicational axioms are not independent. For instance, the axioms A → B → A and (A → B → C) → (A → B) → A → C are sufficient to prove the remaining implicational axioms. However, without either of the two, we can get various weaker systems (some known as substructural logics, see Chapter 5) by dropping some of the other axioms. Other redundancies concern the negation and falsity axioms, as these two logical constants are interdefinable. One can adopt the axiom for falsity and define ¬A as A → ⊥. Or, the other way round, one can adopt the axioms for negation and consider ⊥ as defined by, for instance, ¬(p0 → p0), or p0 ∧ ¬p0, where p0 is any atom. However, if we adopt both the axioms for negation and those for ⊥, we can prove their interdefinability; namely,

¬A → (A → ⊥) and (A → ⊥) → ¬A
are theorems of the above axiom system. The reader is invited to check them herself.¹ If we add to I any of the axioms below, we get classical logic C:

Alternative axioms for classical logic

(Peirce’s axiom) ((A → B) → A) → A,
(double negation) ¬¬A → A,
(excluded middle) ¬A ∨ A,
(∨, →-distribution) [A → (B ∨ C)] → [(A → B) ∨ C].

In particular, the addition of Peirce’s axiom to the implicational axioms of intuitionistic logic gives us an axiomatisation of classical implication. We introduce a standard model-theoretic semantics of intuitionistic logic, called Kripke semantics.

DEFINITION 2.1. Given a propositional language L, a Kripke model for L is a structure of the form M = (W, ≤, w0, V ), where W is a non-empty set, ≤ is a reflexive and transitive relation on W, w0 ∈ W, and V is a function of type W → Pow(VarL), that is, V maps each element of W to a set of propositional variables of L. We assume the following conditions:

(1) w0 ≤ w, for all w ∈ W;
(2) w ≤ w′ implies V (w) ⊆ V (w′);
(3) ⊥ ∉ V (w), for all w ∈ W.

Given M = (W, ≤, w0, V ) and w ∈ W, for any formula A of L we define ‘A is true at w in M’, denoted by M, w |= A, by the following clauses:

• M, w |= q iff q ∈ V (w);
• M, w |= A ∧ B iff M, w |= A and M, w |= B;
• M, w |= A ∨ B iff M, w |= A or M, w |= B;

¹ Here is a hint anyway: the half ⊢ ¬A → (A → ⊥) comes trivially; we sketch a possible proof for the other half, (A → ⊥) → ¬A. Using the axioms for ¬, we derive
(1) (A → A) → (A → ¬A) → (A → ¬(A → ⊥)), from A → ¬A → ¬(A → ⊥), and
(2) (A → ¬(A → ⊥)) → (A → ⊥) → ¬A.

By A → A, (1) and (2), we get

(3) (A → ¬A) → (A → ⊥) → ¬A.

On the other hand, from ⊥ → ¬A we derive

(4) (A → ⊥) → (A → ¬A).

From (3) and (4) the result follows easily.
• M, w |= A → B iff for all w′ ≥ w, if M, w′ |= A then M, w′ |= B;
• M, w |= ¬A iff for all w′ ≥ w, M, w′ ⊭ A.

We say that A is valid in M if M, w0 |= A, and we denote this by M |= A. We say that A is valid if it is valid in every Kripke model M. We also define a notion of entailment between sets of formulas and formulas. Let Γ = {A1, . . . , An} be a set of formulas and B be a formula; we say that Γ entails B, denoted by Γ |= B,² iff whenever M, w0 |= Ai for all Ai ∈ Γ, then M, w0 |= B. One can think of a Kripke model M = (W, ≤, w0, V ) as a tree with root w0. The definition of entailment is by no means restricted to finite Γs. Whenever Γ is finite, Γ |= A holds iff ⋀Γ → A is valid. It is easy to prove that condition (2) implies that in every model M = (W, ≤, w0, V ), for any w, w′ ∈ W, if w ≤ w′ and M, w |= A, then M, w′ |= A. By this property, we immediately obtain: M |= A iff for all w ∈ W, M, w |= A. From this fact we can equally define truth in a model as truth at every point of the model. A standard argument shows that the propositional calculus given above is sound and complete with respect to Kripke models. For this and the other background facts mentioned in this section, the reader is referred to any of the texts quoted at the beginning of the chapter.

THEOREM 2.2. For any formula A, A is a theorem of I iff A is valid.

This completeness theorem can be sharpened to finite Kripke models (finite trees), that is, models M = (W, ≤, w0, V ) where W is a finite set.

THEOREM 2.3. For any formula A, A is a theorem of I iff A is valid in every finite Kripke model.

Classical interpretations can be thought of as degenerate Kripke models M = (W, ≤, w0, V ) where W = {w0}. We give a third presentation of intuitionistic logic in terms of a consequence relation. Let Γ be a set of formulas and A be a formula. We write Γ ⊢ A to say that Γ proves A. We will often use ‘,’ to denote set-theoretic union, i.e.
we write Γ, A and Γ, ∆ to denote Γ ∪ {A} and Γ ∪ ∆, respectively.

DEFINITION 2.4 (Consequence relation for intuitionistic logic). Let Γ ⊢ A be defined as the smallest relation which satisfies:

² To be precise, we should write |=I (and ⊢I) to denote validity and entailment (respectively, provability and logical consequence) in intuitionistic logic I. To avoid burdening the notation, we usually omit the subscript if there is no risk of confusion.
• (identity) if A ∈ Γ then Γ ⊢ A;
• (monotony) if Γ ⊢ A and Γ ⊆ ∆ then ∆ ⊢ A;
• (cut) Γ ⊢ A and Γ, A ⊢ B imply Γ ⊢ B;

plus the following conditions for the language containing {∧, ∨, →, ⊥}:

(1) Deduction theorem: Γ, A ⊢ B iff Γ ⊢ A → B.
(2) Conjunction rules: (a) A ∧ B ⊢ A; (b) A ∧ B ⊢ B; (c) A, B ⊢ A ∧ B.
(3) Falsity rule: ⊥ ⊢ B.
(4) Disjunction rules: (a) A ⊢ A ∨ B; (b) B ⊢ A ∨ B; (c) Γ, A ⊢ C and Γ, B ⊢ C imply Γ, A ∨ B ⊢ C.

The above closure rules define the intuitionistic propositional consequence relation. In the fragment without ⊥, the rules (1), (2) and (4) suffice to define intuitionistic logic for that fragment. In the above characterisation, negation is not mentioned. We can either consider negation as defined by ¬A = A → ⊥, or add the rules: A, ¬A ⊢ B, and: Γ, A ⊢ ¬B implies Γ, B ⊢ ¬A.

THEOREM 2.5. Let ⊢ be the smallest consequence relation satisfying (1)–(4) above. Then Γ ⊢ A holds iff Γ |= A.

Classical logic can be obtained as the smallest consequence relation satisfying (1)–(4) together with an additional condition corresponding to an axiom for classical logic as given above, for instance: ∆, A ⊢ B ∨ C implies ∆ ⊢ (A → B) ∨ C; or: Γ, A ⊢ B and Γ, ¬A ⊢ B imply Γ ⊢ B.
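Definition 2.1 can be prototyped directly on finite models; the sketch below (representation and all names ours) evaluates the forcing clauses and confirms that Peirce’s axiom fails in a two-world model:

```python
# A sketch of forcing in a finite Kripke model: explicit worlds, an explicitly
# reflexive-transitive order given as a set of pairs, and a persistent valuation.

def forces(order, val, w, f):
    """order: set of pairs (w, w') with w <= w'; val: world -> set of atoms."""
    if isinstance(f, str):
        return f in val[w]
    op = f[0]
    if op == '&':
        return forces(order, val, w, f[1]) and forces(order, val, w, f[2])
    if op == '|':
        return forces(order, val, w, f[1]) or forces(order, val, w, f[2])
    if op == '~':                      # negation: false at every world above w
        return all(not forces(order, val, v, f[1])
                   for (u, v) in order if u == w)
    # implication: quantify over all worlds above w
    return all(forces(order, val, v, f[2])
               for (u, v) in order if u == w
               if forces(order, val, v, f[1]))

# Two-world model: w0 <= w1, with a true only at the top world.
order = {(0, 0), (0, 1), (1, 1)}
val = {0: set(), 1: {'a'}}
peirce = ('->', ('->', ('->', 'a', 'b'), 'a'), 'a')
print(forces(order, val, 0, peirce))   # False: Peirce fails at the root
print(forces(order, val, 1, peirce))   # True at the top (degenerate) world
```

At the top world the model is degenerate (a single point sees only itself), so evaluation there is classical and Peirce’s axiom holds, while it fails at the root.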
2 RULES FOR INTUITIONISTIC IMPLICATION
We want to give computation rules for checking ∆ ⊢ A, where all formulas of ∆ and A are implicational formulas. Our rules manipulate queries Q of the form ∆ ⊢? A. We call ∆ the database and A the goal of the query Q. We use the symbol ⊢? to indicate that we do not know whether the query succeeds or not. On the other hand, the success of Q means that ∆ ⊢ A according to intuitionistic logic. Of course this must be proved, and in due course it will be. Meanwhile, here are the rules.

DEFINITION 2.6.
• (success) ∆ ⊢? q succeeds if q ∈ ∆. We say that q is used in this query.
• (implication) from ∆ ⊢? A → B step to ∆, A ⊢? B.
• (reduction) from ∆ ⊢? q, if C ∈ ∆ with C = D1 → D2 → . . . → Dn → q (that is, Head(C) = q and Body(C) = (D1, . . . , Dn)), then step to ∆ ⊢? Di, for i = 1, . . . , n. We say that C is used in this step.

A derivation D of a query Q is a tree whose nodes are queries. The root of D is Q, and the successors of every non-leaf query are determined by exactly one applicable rule (implication or reduction) as described above. We say that D is successful if the success rule may be applied to every leaf of D. We finally say that a query Q succeeds if there is a successful derivation of Q. By definition, a derivation D might be an infinite tree. However, if D is successful then it must be finite. This is easily seen from the fact that, in case of success, the height of D is finite and every non-terminal node of D has a finite number of successors, because of the form of the rules. Moreover, the databases involved in a deduction need not be finite: in a successful derivation only a finite number of formulas from the database will be used in the above sense. A last observation: the success of a query is defined in a non-deterministic way; a query succeeds if there is a successful derivation. To transform the proof rules into a deterministic algorithm, one should give a method to search for a successful derivation tree.
In this respect, we agree that when we come to an atomic goal we first try to apply the success rule, and if it fails we try the reduction rule. The only remaining choice is which formula to use for the reduction step, among those formulas of the database whose head matches the current atomic goal,
if there are more than one. Thinking of the database as a list of formulas, we can choose the first one and remember the point up to which we have scanned the database as a backtracking point. This is exactly as in conventional logic programming [Lloyd, 1984].

EXAMPLE 2.7. We check

b → d, a → p, p → b, (a → b) → c → a, (p → d) → c ⊢ b.

Let Γ = {b → d, a → p, p → b, (a → b) → c → a, (p → d) → c}; a successful derivation of Γ ⊢? b is shown in Figure 2.1. A quick explanation: (2) is obtained by reduction w.r.t. p → b, (3) by reduction w.r.t. a → p, (4) and (8) by reduction w.r.t. (a → b) → c → a, (6) by reduction w.r.t. p → b, (7) by reduction w.r.t. a → p, (9) by reduction w.r.t. (p → d) → c, (11) by reduction w.r.t. b → d, (12) by reduction w.r.t. p → b.

(1) Γ ⊢? b
(2) Γ ⊢? p
(3) Γ ⊢? a
(4) Γ ⊢? a → b        (8) Γ ⊢? c
(5) Γ, a ⊢? b          (9) Γ ⊢? p → d
(6) Γ, a ⊢? p          (10) Γ, p ⊢? d
(7) Γ, a ⊢? a          (11) Γ, p ⊢? b
                       (12) Γ, p ⊢? p

Figure 2.1. Derivation for Example 2.7.

We state some simple properties of the deduction procedure defined above.

PROPOSITION 2.8.
(a) ∆ ⊢? G succeeds if G ∈ ∆;
(b) ∆ ⊢? G succeeds implies ∆, Γ ⊢? G succeeds;
(c) ∆ ⊢? A → B succeeds iff ∆, A ⊢? B succeeds.

Proof. (a) is proved by induction on the complexity of G. If G is an atom q, it follows immediately by the success rule. If G = A1 → . . . → An → q, then from ∆ ⊢? G we step, by repeated application of the implication rule, to ∆, A1, . . . , An ⊢? q. Since G ∈ ∆, we can apply reduction and step to ∆, A1, . . . , An ⊢? Ai, for i = 1, . . . , n. By the induction hypothesis, each of the above queries succeeds.

(b) is proved by induction on the height of a successful computation. If G succeeds by the success rule, then G is atomic and G ∈ ∆; hence G ∈ ∆ ∪ Γ, thus ∆, Γ ⊢? G succeeds. If G is an implication A → B, then from ∆ ⊢? A → B we step to ∆, A ⊢? B. In the same way, from ∆, Γ ⊢? A → B we step to ∆, Γ, A ⊢? B, which succeeds by the induction hypothesis. Let G be an atom q, and suppose we proceed by reduction with respect to a formula, say, C = A1 → . . . → An → q in ∆, so that we step to ∆ ⊢? Ai, for i = 1, . . . , n; then from ∆, Γ ⊢? q we can perform the same reduction step with respect to C and step to ∆, Γ ⊢? Ai, which succeed by the induction hypothesis.

(c) is obvious, as there is no rule but the implication rule which can be applied to an implicational goal.

We have called property (b) monotony of ⊢?. It is clear from the proof that the height of derivations is preserved: if ∆ ⊢? G succeeds by a derivation of height h, then ∆, Γ ⊢? G also succeeds by a derivation of height h.
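The rules of Definition 2.6 can be prototyped as a recursive search (representation and all names ours; a crude depth bound stands in for the backtracking control discussed above, since the naive search may loop on some databases):

```python
# A sketch of the success, implication and reduction rules of Definition 2.6,
# run on the database of Example 2.7.
# Atoms are strings; A -> B is the tuple ('->', A, B), right-nested.

def head_body(c):
    """Decompose D1 -> ... -> Dn -> q into (q, [D1, ..., Dn])."""
    body = []
    while isinstance(c, tuple):
        body.append(c[1]); c = c[2]
    return c, body

def succeeds(db, goal, depth=30):
    if depth == 0:
        return False                      # crude bound against looping search
    if isinstance(goal, tuple):           # implication rule
        return succeeds(db | {goal[1]}, goal[2], depth - 1)
    if goal in db:                        # success rule
        return True
    return any(all(succeeds(db, d, depth - 1) for d in body)   # reduction
               for c in db
               for (q, body) in [head_body(c)]
               if q == goal and body)

imp = lambda a, b: ('->', a, b)
gamma = frozenset({imp('b', 'd'), imp('a', 'p'), imp('p', 'b'),
                   imp(imp('a', 'b'), imp('c', 'a')),
                   imp(imp('p', 'd'), 'c')})
print(succeeds(gamma, 'b'))               # True
```

On the database Γ of Example 2.7 the query b succeeds, and the recursion retraces the derivation of Figure 2.1.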
2.1 Soundness and Completeness

Since we are dealing with the implicational fragment, by Theorem 2.3 it is enough to show that the consequence relation defined by

∆ ⊢p A ⇔ ∆ ⊢? A succeeds

coincides with the intuitionistic consequence relation. Let ⊢ denote intuitionistic provability.
2. INTUITIONISTIC AND CLASSICAL LOGICS
THEOREM 2.9. If ∆ `p A then ∆ ` A.

Proof. By induction on the height h of a successful derivation of ∆ `? A. Let h = 0. If ∆ `? A succeeds by a derivation of height 0, then A is an atom and A ∈ ∆; thus ∆ ` A follows by reflexivity. Let h > 0. If A = B → C, then ∆ `? A succeeds by the implication rule and ∆, B `? C succeeds by a derivation of height < h. Hence, by the induction hypothesis, ∆, B ` C holds, and by the deduction theorem ∆ ` B → C also holds. If A is an atom q, then the derivation proceeds by reduction with respect to a formula C = A1 → . . . → An → q in ∆; then, for i = 1, . . . , n, each query ∆ `? Ai succeeds by a derivation of height < h. By the induction hypothesis, for i = 1, . . . , n,

(ai) ∆ ` Ai

holds. On the other hand, since C ∈ ∆, by reflexivity ∆ ` A1 → . . . → An → q holds, so that, by the deduction theorem,

(b) ∆, A1, . . . , An ` q

also holds. By repeatedly applying cut to the (ai) and (b), we finally obtain that ∆ ` q holds.
In order to prove completeness it is sufficient to show that `p satisfies the conditions of an intuitionistic consequence relation, namely identity, monotony, the deduction theorem and cut. That the first three properties are satisfied is proved in Proposition 2.8. So, the only property we have to check is cut. We prove closure under cut in the next theorem.

THEOREM 2.10 (Admissibility of Cut). ∆, A `p B and Γ `p A imply ∆, Γ `p B.

Proof. Assume (1) ∆, A `p B and (2) Γ `p A. The theorem is proved by induction on lexicographically ordered pairs (c, h), where c = cp(A) and h is the height of a successful derivation of (1), that is, of ∆, A `? B.

Suppose first c = 0; then A is an atom p, and we proceed by induction on h. If h = 0, then B is an atom q and either q ∈ ∆ or q = p = A. In the first case the claim trivially follows by Proposition 2.8; in the second case it follows by hypothesis (2) and Proposition 2.8. Let now h > 0; then (1) succeeds either by the implication rule or by the reduction rule. In the first case, we have B = C → D and from ∆, A `? C → D we step to ∆, A, C `? D, which succeeds by a derivation of height h′ < h. Since (0, h′) < (0, h), by the induction hypothesis we get that ∆, Γ, C `? D succeeds, whence ∆, Γ `? C → D succeeds too. Let (1) succeed by reduction with respect to a formula C ∈ ∆. Since A is an atom, C ≠ A. Then B = q is an atom. We let C = D1 → . . . → Dk → q. We have, for i = 1, . . . , k,
∆, A `? Di succeeds by a derivation of height hi < h. Since (0, hi) < (0, h), we may apply the induction hypothesis and obtain

(ai) ∆, Γ `? Di succeeds, for i = 1, . . . , k.

Since C ∈ ∆ ∪ Γ, from ∆, Γ `? q we can step to the (ai) and succeed. This concludes the case of (0, h).
If c is arbitrary and h = 0 the claim is trivial. Let c > 0 and h > 0. The only difference with the previous cases arises when (1) succeeds by reduction with respect to A itself. Let us see that case. Let A = D1 → . . . → Dk → q and B = q. Then we have, for i = 1, . . . , k,

∆, A `? Di succeeds by a derivation of height hi < h.

Since (c, hi) < (c, h), we may apply the induction hypothesis and obtain

(bi) ∆, Γ `? Di succeeds, for i = 1, . . . , k.

By hypothesis (2) we can conclude that

(3) Γ, D1, . . . , Dk `? q succeeds by a derivation of some height h′.

Notice that each Di has a smaller complexity than A, that is, cp(Di) = ci < c. Thus (c1, h′) < (c, h), and we can cut (3) and (b1), so that we obtain

(4) ∆, Γ, D2, . . . , Dk `? q succeeds with some height h″.

Again (c2, h″) < (c, h), so that we can cut (b2) and (4). By repeating the same argument up to k we finally obtain that ∆, Γ `? q succeeds.
This concludes the proof.

THEOREM 2.11 (Completeness for I). If ∆ ` A, then ∆ `p A.
Proof. We know that ` is the smallest consequence relation satisfying (identity), (monotony), (deduction theorem) and (cut). By Proposition 2.8 and Theorem 2.10, `p satisfies these properties as well. Then, the claim follows by the minimality of `.
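Admissibility of cut can also be observed on concrete instances with a small prover. The sketch below is our own encoding (atoms as strings, A → B as ('->', A, B); all names are assumptions): it checks one cut instance, and that a classically-but-not-intuitionistically valid formula (Peirce's law) fails, in accordance with soundness (Theorem 2.9).

```python
# Checking an instance of Theorem 2.10 (cut) with a small goal-directed
# prover (illustrative sketch; encoding and names are ours).

def unpack(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return body, f

def prove(db, goal, depth=12):
    if isinstance(goal, tuple):                       # implication rule
        return prove(db | {goal[1]}, goal[2], depth)
    if goal in db:                                    # success rule
        return True
    if depth == 0:
        return False
    return any(h == goal and all(prove(db, c, depth - 1) for c in body)
               for body, h in (unpack(f) for f in db))

# A cut instance: from  Delta, a |- b  and  Gamma |- a  infer  Delta, Gamma |- b.
delta = frozenset({('->', 'a', 'b')})
gamma = frozenset({('->', 'c', 'a'), 'c'})
assert prove(delta | {'a'}, 'b')
assert prove(gamma, 'a')
assert prove(delta | gamma, 'b')                      # the cut conclusion

# Soundness check: Peirce's law ((a -> b) -> a) -> a is classically but
# not intuitionistically valid, and indeed the procedure fails on it.
peirce = ('->', ('->', ('->', 'a', 'b'), 'a'), 'a')
assert not prove(frozenset(), peirce)
```

Such spot checks are of course no substitute for the inductive proofs above; they merely illustrate the behaviour the theorems guarantee.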
2.2 Interpolation

The meaning of the interpolation property is that whenever we have Γ ` A, we can find an intermediate formula B which does not contain any ‘concept’ that is not involved either in Γ or in A, such that both

Γ ` B and B ` A.
B is called an interpolant of A and Γ. The interpolation property is a strong analytic property of deduction: it says that we can split a proof of Γ ` A into intermediate steps which involve only non-logical constants (here propositional variables) that are present both in A and in Γ.

In the implicational fragment, the interpolant cannot be assumed to be a single formula. Consider for example a, b ` (a → b → q) → q. There is no implicational formula A containing only the atoms a and b such that a, b ` A and A ` (a → b → q) → q. Here the interpolant should be a ∧ b. We must allow the interpolant to be a database itself. Hence the right notion is one of interpolant database, and the property of interpolation must be restated as follows: if Γ ` A, then there is a database Π, which contains only propositional symbols common to A and Γ, such that Γ `∗ Π and Π ` A. The relation `∗ denotes provability among databases; in this case it can simply be defined as

Γ `∗ ∆ ⇔ ∀C ∈ ∆  Γ ` C.

Obviously Γ `∗ {A} is the same as Γ ` A.

Before we prove this property, we warn the reader that the stated property is not true in general. It may happen that Γ ` A but A and Γ do not share any propositional symbol; this is the case, for instance, of p ` q → q, where p ≠ q. In this case, A (here q → q) is a theorem by itself. It is clear that here the interpolant database Π should be the empty set of formulas, or the empty conjunction. We add a symbol ⊤ to the language to denote truth. We assume that ⊤ is in the language of any formula. Actually, ⊤ may be defined as p0 → p0, where p0 is a fixed atom; we have Γ ` ⊤, and hence Γ, ⊤ ` A iff Γ ` A. Thus the constant ⊤ represents the empty database in the object language. We can regard ⊤ as an atom which immediately succeeds from any database. We can now define the language of a formula and of a database as follows: we let

L(A) = {p : p is an atom and p occurs in A} ∪ {⊤},
L(∆) = ∪A∈∆ L(A).
We also write L(A, B) for L({A, B}), and L(∆, A) for L(∆ ∪ {A}). The interpolation property is a simple consequence of the following lemma.

LEMMA 2.12. Let Γ and ∆ be databases. If Γ, ∆ ` q, there is a database Π such that the following hold:
1. Γ `∗ Π,
2. Π, ∆ ` q,
3. L(Π) ⊆ L(Γ) ∩ L(∆, q).
Proof. We proceed by induction on the height h of a successful derivation of Γ, ∆ `? q.

Let h = 0; then either q = ⊤ or q ∈ Γ ∪ ∆. If q = ⊤ or q ∈ ∆ − Γ, then we take Π = {⊤}; otherwise we take Π = {q}.

Let h > 0; then any successful derivation proceeds by reduction of q either with respect to a formula of ∆ (Case a), or with respect to a formula of Γ (Case b).

(Case a) Let C = D1 → . . . → Du → q ∈ ∆. The computation steps, for i = 1, . . . , u, to Γ, ∆ `? Di. Let us assume that Di = B^i_1 → . . . → B^i_{k_i} → ri (it might be k_i = 0, that is, Di = ri); then, by the implication rule, the computation steps to Γ, ∆, {B^i_1, . . . , B^i_{k_i}} `? ri. All the above queries succeed by a shorter derivation; thus, by the induction hypothesis, for each i there is a database Πi such that
1. Γ `∗ Πi,
2. Πi, ∆, {B^i_1, . . . , B^i_{k_i}} ` ri,
3. L(Πi) ⊆ L(Γ) ∩ L(∆, B^i_1, . . . , B^i_{k_i}, ri).

Since Di is part of C ∈ ∆, we have that L(Πi) ⊆ L(Γ) ∩ L(∆). Let Π = ∪i Πi; then we have that Γ `∗ Π, and for i = 1, . . . , u

(∗) Π, ∆, {B^i_1, . . . , B^i_{k_i}} `? ri succeeds,

and also that L(Π) ⊆ L(Γ) ∩ L(∆). (∗) implies that for each i, Π, ∆ `? Di succeeds; since C ∈ ∆, we can apply the reduction rule to q, so that Π, ∆ `? q succeeds. This concludes (Case a).

(Case b) Let C = D1 → . . . → Du → q ∈ Γ. The computation steps, for i = 1, . . . , u, to Γ, ∆ `? Di, and then to Γ, ∆, {B^i_1, . . . , B^i_{k_i}} `? ri, where we assume Di = B^i_1 → . . . → B^i_{k_i} → ri (it might be k_i = 0, that is, Di might be an atom). All the above queries succeed by a shorter derivation; thus, by the induction hypothesis, for each i there is a database Πi such that
1. ∆ `∗ Πi,
2. Πi, Γ, {B^i_1, . . . , B^i_{k_i}} ` ri,
3. L(Πi) ⊆ L(∆) ∩ L(Γ, B^i_1, . . . , B^i_{k_i}, ri).

Since Di is part of C ∈ Γ, the last fact implies that L(Πi) ⊆ L(∆) ∩ L(Γ). As in (Case a), we let

Π = ∪i Πi = {E1, . . . , En},

and we get
(4) ∆ `∗ Π,
and for all i = 1, . . . , u
(5) Π, Γ, {B^i_1, . . . , B^i_{k_i}} ` ri,
and also
(6) L(Π) ⊆ L(∆) ∩ L(Γ).

We now let G = E1 → . . . → En → q. From (4) we obtain that G, ∆ ` q, and from (5), by the deduction theorem, we get
(7) Π, Γ ` Di.
Since C ∈ Γ, from (7) we can conclude that Π, Γ ` q, and hence Γ ` G. Finally, from (6), since q occurs in Γ, we have that L(G) ⊆ L(Γ) ∩ L(∆, q). Thus {G} is the required interpolant database, and this concludes the proof.

THEOREM 2.13 (Interpolation). If Γ ` A, then there is a database Π such that Γ `∗ Π, Π ` A, and L(Π) ⊆ L(Γ) ∩ L(A).

Proof. We proceed by cases according to the form of A. If A is an atom, then we take Π = {A}. If A is a complex formula, let A = F1 → . . . → Fu → p. By hypothesis, we have that Γ, {F1, . . . , Fu} `? p succeeds. By the previous lemma there is a database Π such that
(1) Γ `∗ Π,
(2) Π, {F1, . . . , Fu} ` p, and
(3) L(Π) ⊆ L(Γ) ∩ L(F1, . . . , Fu, p).
From (2) we immediately conclude that Π ` A, and from (3) that L(Π) ⊆ L(Γ) ∩ L(A).
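The construction in the proof of Lemma 2.12 is effective: following the derivation, one can extract an interpolant database. The sketch below is our own illustrative rendering (the encoding, the names `prove` and `interpolate`, and the depth bounds are all assumptions of the sketch, which presumes the initial query is provable within the bound). It reproduces the interpolant database {a, b}, playing the role of a ∧ b, for the example a, b ` (a → b → q) → q discussed above.

```python
# Interpolant extraction following the cases of Lemma 2.12 (a sketch;
# encoding and names are ours).  Atoms are strings, A -> B is
# ('->', A, B), and TOP plays the role of the truth constant.
TOP = '#top'

def unpack(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return body, f

def prove(db, goal, depth=12):
    if isinstance(goal, tuple):
        return prove(db | {goal[1]}, goal[2], depth)
    if goal == TOP or goal in db:
        return True
    if depth == 0:
        return False
    return any(h == goal and all(prove(db, c, depth - 1) for c in body)
               for body, h in (unpack(f) for f in db))

def interpolate(gamma, delta, q, depth=12):
    """Return Pi with gamma |-* Pi and Pi, delta |- q (assumes provability)."""
    if q == TOP or q in delta:            # base cases, as in the lemma
        return {TOP}
    if q in gamma:
        return {q}
    if depth == 0:
        return None
    for c in gamma | delta:               # find a reduction that succeeds
        body, head = unpack(c)
        if head != q or not all(prove(gamma | delta, d) for d in body):
            continue
        pi = set()
        for d in body:
            bs, r = unpack(d)
            if c in delta:                # Case (a)
                sub = interpolate(gamma, delta | set(bs), r, depth - 1)
            else:                         # Case (b): the roles are swapped
                sub = interpolate(delta, gamma | set(bs), r, depth - 1)
            if sub is None:
                return None
            pi |= sub
        if c in delta:
            return pi
        g = q                             # Case (b): G = E1 -> ... -> En -> q
        for e in pi:
            g = ('->', e, g)
        return {g}
    return None

A = ('->', 'a', ('->', 'b', 'q'))         # the formula a -> b -> q
assert interpolate({'a', 'b'}, {A}, 'q') == {'a', 'b'}
assert prove({'a', 'b'}, ('->', A, 'q'))  # Pi |- (a -> b -> q) -> q
```

The recursion mirrors the induction of the lemma, so on genuinely provable queries of modest size it terminates within the bound.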
3 BOUNDED RESOURCE DEDUCTION FOR IMPLICATION
In the previous section, we have introduced a goal-directed proof method for intuitionistic implicational logic. We now want to refine it further towards an automated implementation. We will be guided by key examples. The examples will
show us the difficulties to overcome in order to achieve an implementable system. We will respond to the difficulties by proposing the most obvious optimisations at hand. Slowly the computation steps will evolve, and we will naturally be led to consider new logics which correspond to the various optimisations. We will note that the new logics thus obtained are really logics we already know, and we thus realise that we have stumbled on proof systems for other well-known logics.

Consider the data and query below:

q → q `? q.

Clearly the algorithm has to know it is looping. The simplest way of loop-checking is to record the history of the computation. This can be done as follows. Let ∆ `? G, H represent the query of the goal G from data ∆ and history H. H is the list of past queries, namely pairs of the form (Γ, G′). The computation rules become:

Historical rule for →
∆ `? A → B, H succeeds if ∆ ∪ {A} `? B, H ∗ (∆, A → B) succeeds and (∆, A → B) is not in H. Thus (∆, A → B) is appended to H.

Historical Reduction
∆ `? q, H succeeds if for some B = C1 → C2 → . . . → Cn → q in ∆ we have that, for all i, ∆ `? Ci, H ∗ (∆, q) succeeds, and (∆, q) is not in H.

Thus the computation for our example above becomes:

q → q `? q, ∅
q → q `? q, (q → q, q)
fail.

Note that the historical loop-checking conjunct in the rule for → is redundant, as the loop will be captured in the rule for reduction. Also note that in this case we made the decision that looping means failure. The most general case of loop-checking may first detect the loop and then decide that, under certain conditions, looping implies success.

One way to perform loop-checking in a more efficient way has been described in [Heuerding et al., 1996]. The idea is to avoid repeated insertions of the same formula into the database, taking advantage of the fact that the database is a set of formulas.
One simply records the history of the atomic goals asked from the same database; the history is cleared when a new formula is inserted in the database. This idea is implemented in a terminating proof system for full intuitionistic propositional logic in the form of a sequent calculus. We reformulate here this variant of loop checking adapted to our context.
An Improved Historical Loop-checking
Here H is the list of past atomic goals. The computation rules become:

Rule 1 for →
∆ `? A → B, H succeeds if A ∉ ∆ and ∆, A `? B, ∅ succeeds.

Rule 2 for →
∆ `? A → B, H succeeds if A ∈ ∆ and ∆ `? B, H succeeds.

Reduction Rule
∆ `? q, H succeeds if q ∉ H and for some C1 → C2 → . . . → Cn → q in ∆ we have that, for all i, ∆ `? Ci, H ∗ (q) succeeds.

EXAMPLE 2.14.

(p → q) → q             `?  ((p → q) → q) → (q → p) → p, ∅
(p → q) → q             `?  (q → p) → p, ∅
(p → q) → q, q → p      `?  p, ∅
(p → q) → q, q → p      `?  q, (p)
(p → q) → q, q → p      `?  p → q, (p, q)
(p → q) → q, q → p, p   `?  q, ∅
(p → q) → q, q → p, p   `?  p → q, (q)
(p → q) → q, q → p, p   `?  q, (q)   fail
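The improved loop-check is easy to prototype. The sketch below is our own encoding (atoms as strings, A → B as ('->', A, B); the names are assumptions): the history of atomic goals is kept as a tuple and cleared whenever Rule 1 inserts a new formula into the database.

```python
# Goal-directed procedure with the improved historical loop-check
# (illustrative sketch; the encoding and names are ours, not the book's).

def unpack(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return body, f

def prove(db, goal, hist=()):
    if isinstance(goal, tuple):
        a, b = goal[1], goal[2]
        if a in db:                       # Rule 2: keep the history
            return prove(db, b, hist)
        return prove(db | {a}, b, ())     # Rule 1: clear the history
    if goal in db:                        # success
        return True
    if goal in hist:                      # loop detected: fail
        return False
    return any(h == goal and all(prove(db, c, hist + (goal,)) for c in body)
               for body, h in (unpack(f) for f in db))

qq = ('->', 'q', 'q')
assert not prove(frozenset({qq}), 'q')    # q -> q |-? q now fails finitely

# Example 2.14: |-? ((p -> q) -> q) -> (q -> p) -> p fails.
ex14 = ('->', ('->', ('->', 'p', 'q'), 'q'), ('->', ('->', 'q', 'p'), 'p'))
assert not prove(frozenset(), ex14)

# An intuitionistic theorem is still found (cf. Example 2.16(1) below):
ca, cac = ('->', 'c', 'a'), ('->', ('->', 'c', 'a'), 'c')
assert prove(frozenset({ca, cac}), 'a')
```

No depth bound is needed here: the database only grows within the finite set of subformulas, and the history grows strictly between insertions.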
In this way one is able to detect a loop (the same atomic goal repeats from the same database) without having to record each pair (database, goal). We will use a similar idea in Chapter 5 to give a terminating procedure for the implicational fragment of relevance logic R. However, loop-checking is not the only way to ensure termination. Trying to think constructively, in order to make sure the algorithm terminates, let us ask ourselves what is involved in the computation. There are two parameters: the data and the goal. The goal is reduced to an atom via the rule for →, and the looping occurs because a data item is being used by the rule for reduction again and again. Our aim in the historical loop-checking is to stop that. Well, why don’t we try a modified rule for reduction which can use each item of data only once? This way, we will certainly run out of data and the computation will terminate. Let us adopt the point of view that each database item can be used at most once. Thus our rule for reduction becomes
from ∆ `? q, if there is B ∈ ∆ with B = C1 → C2 → . . . → Cn → q, step to ∆ − {B} `? Ci for i = 1, . . . , n.

The item B is thus thrown out as soon as it is used. Let us call such a computation a locally linear computation, as each formula can be used at most once in each path of the computation. That is why we are using the word ‘locally’. One can also have the notion of (globally) linear computation, in which each formula can be used exactly once in the entire computation tree. Before we proceed, we need to give a formal definition of these concepts. We give Definition 2.15 below; it should be compared with Definition 2.6 of the previous section. Since we take care of the usage of formulas, it is natural to regard multiple copies of the same formula as distinct. This means that databases must now be considered as multisets of formulas. In order to keep the notation simple, we use the same notation as in the previous section. From now on, Γ, ∆, etc. will range over multisets of formulas, and we will write Γ, ∆ to denote the multiset union of Γ and ∆, that is, Γ ⊔ ∆. To denote a multiset [A1, . . . , An], if there is no risk of confusion we will simply write A1, . . . , An (see the last section of Chapter 1 for the formal definitions of multisets and related notions). We present three notions of computation by defining their computation trees. These are:
1. the goal-directed computation for intuitionistic logic;
2. locally linear goal-directed computation (LL-computation);
3. linear goal-directed computation.

DEFINITION 2.15 (Goal-directed computation, locally linear goal-directed computation and linear goal-directed computation). We give the computation rules for a query Γ `? G, where Γ is a multiset of formulas and G is a formula.

• (success) ∆ `? q immediately succeeds if the following holds:
1. for intuitionistic and LL-computation, q ∈ ∆;
2. for linear computation, ∆ = [q].

• (implication) From ∆ `? A → B, we step to ∆, A `? B.
• (reduction) If there is a formula B ∈ ∆ with B = C1 → . . . → Cn → q then from ∆ `? q, we step, for i = 1, . . . , n to ∆i `? Ci ,
where the following holds:
1. in the case of intuitionistic computation, ∆i = ∆;
2. in the case of locally linear computation, ∆i = ∆ − [B];
3. in the case of linear computation, ⊔i ∆i = ∆ − [B].

The question now is: do we retain completeness? Are there examples for intuitionistic logic where items of data essentially need to be used locally more than once? It is reasonable to assume that if things go wrong, they do so with formulas with at least two nested implications, i.e. formulas with the structure X → Y → Z or (X → Y ) → Z. Namely, by adding new atoms and renaming implicational subformulas by the new atoms, we can reduce any database to an equivalent database with only two levels of nested implication. Sooner or later we will discover the following examples.

EXAMPLE 2.16.
1. c → a, (c → a) → c `? a. The formula c → a has to be used twice in order for a to succeed. This example can be generalised.
2. Let A0 = c and An+1 = (An → a) → c. Consider the following query:

An, c → a `? a.

The formula c → a has to be used n + 1 times (locally).

EXAMPLE 2.17. Another example in which a formula must be used locally more than once is the following: the database contains

(b1 → a) → c
. . .
(bn → a) → c
b1 → b2 → . . . → bn → c
c → a.

The goal a succeeds. It is easy to see that c → a has to be used n + 1 times locally to ensure success.

EXAMPLE 2.18. Let us do the full LL-computation for

a → b → c, a → b, a `? c.
a → b → c, a → b, a `? c
    a → b, a `? a   (success)
    a → b, a `? b
        a `? a   (success)
Figure 2.2. Derivation for Example 2.18.

Here a has to be used twice globally, but not locally on each branch of the computation. Thus the locally linear computation succeeds. On the other hand, in the case of linear computation this query fails, as a must be used globally twice. The example above shows that the locally linear proof system of Definition 2.15 is not the same as the linear proof system. First, we do not require that all assumptions must be used (the condition on the success rule). A more serious difference is that we do our ‘counting’ of how many times a formula is used separately on each path of the computation and not globally for the entire computation. The counting in the linear case is global, as can be seen from the condition in the reduction rule. As another example, the query A, A → A → B `? B will succeed in our locally linear computation because A is used once on each of two parallel paths. It will not be accepted in the linear computation because A is used globally twice; this is ensured by the condition in the reduction rule.

In contrast with the intuitionistic consequence relation, we can notice that linear computation does not satisfy all the structural properties of Definition 2.4. Monotony does not hold. Reflexivity holds in the restricted form A ` A. The cut rule holds in the form

from ∆ ` A and Γ, A ` B infer ∆, Γ ` B.

On the other hand, differently from the intuitionistic case, the cut rule does not hold in the form

from ∆ ` A and ∆, A ` B infer ∆ ` B.

The two forms of cut are easily seen to be equivalent in intuitionistic logic, whereas they are not for the globally linear computation. A counterexample to the latter form is the following: a, a `? (a → b) → (a → b → c) → c succeeds, and a `? a succeeds. Therefore, by cut we should get
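The locally linear restriction is straightforward to implement, since discarding a used formula makes the procedure terminate without keeping any history. The sketch below is our own encoding (databases as Python lists standing for multisets; the names are assumptions); it confirms the behaviour of Examples 2.16(1) and 2.18.

```python
# Locally linear (LL) computation of Definition 2.15: a formula is
# discarded from the branch once it is used in a reduction step.
# A sketch; atoms are strings and A -> B is ('->', A, B).

def unpack(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return body, f

def ll_prove(db, goal):
    """db is a list of formulas, standing for a multiset."""
    if isinstance(goal, tuple):                 # implication rule
        return ll_prove(db + [goal[1]], goal[2])
    if goal in db:                              # success rule
        return True
    for i, f in enumerate(db):                  # reduction: f is thrown out
        body, head = unpack(f)
        if head == goal and all(ll_prove(db[:i] + db[i+1:], c) for c in body):
            return True
    return False

ca, cac = ('->', 'c', 'a'), ('->', ('->', 'c', 'a'), 'c')
assert not ll_prove([ca, cac], 'a')             # Example 2.16(1): LL fails

abc, ab = ('->', 'a', ('->', 'b', 'c')), ('->', 'a', 'b')
assert ll_prove([abc, ab, 'a'], 'c')            # Example 2.18: LL succeeds
```

Since every reduction step strictly shrinks the branch's database, `ll_prove` always terminates on finite databases, as observed in the text; the price is incompleteness for intuitionistic logic.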
a `? (a → b) → (a → b → c) → c succeeds, which, of course, does not hold.

Linear computation as defined in Definition 2.15 corresponds to linear logic implication [Girard, 1987], in the sense that the procedure of linear computation is sound and complete for the implicational fragment of linear logic. This will be shown in Chapter 5, within the broader context of substructural logics. Linear logic has the implication connective ⊸, and A1, . . . , An ` B in linear logic means that B can be proved from A1, . . . , An using each Ai exactly once. A deduction theorem holds, namely that A1, . . . , An ` B is equivalent to ` A1 ⊸ (A2 ⊸ . . . (An ⊸ B) . . .).

In the case of LL-computation (that is, locally linear computation), the properties of reflexivity and monotony are satisfied, but cut is not. Here is a counterexample: letting A = (a → b) → (a → b) → b, we have that A, a → b `? b succeeds and (a → b) → a `? A succeeds. On the other hand, as we have seen, (a → b) → a, a → b `? b fails.

The above examples show that we do not have completeness with respect to intuitionistic provability for locally linear computations. Still, the locally linear computation is attractive because, if the database is finite, it always terminates and it is efficient. It is natural to wonder whether we can compensate for the use of the locally linear rule (i.e. for throwing out the data) by some other means. Moreover, even if the locally linear computation is not complete for the full intuitionistic implicational fragment, one may still wonder whether it works in some particular and significant case. A significant (and well-known) case is shown in the next proposition. We recall that a formula A is Horn if it is an atom, or has the form p1 → . . . → pn → q, where all pi and q are atoms.

PROPOSITION 2.19. The locally linear procedure is complete for Horn formulas.

Proof.
One may simply observe that in a derivation involving only Horn formulas the database never changes (except for an initial augmentation if the starting goal is non-atomic). By this fact, we do not need to ask the same goal more than once along the same branch, as any subsequent call of the same goal will have no more chance to succeed than its first call. But this means that we do not need to use a database formula in a reduction step more than once along the same derivation branch.
3.1 Bounded Restart Rule for Intuitionistic Logic

Let us now go back to the notion of locally linear computation. We have seen that the locally linear restriction does not retain completeness with respect to intuitionistic provability. There are examples where formulas need to be used locally several times. Can we compensate? Can we at the same time throw data out once it has been used and retain completeness for intuitionistic logic by adding some other computation rule? The answer is yes. The rule is called the bounded restart rule and is used in the context of the notion of locally linear computation with history.

Let us examine more closely why in Example 2.16 we needed the formula c → a several times. The reason was that from other formulas we got the query `? a, and we wanted to use c → a to continue to the query `? c. Why was c → a not available? Because c → a had already been used. In other words, `? a, as a query, had already been asked and c → a was used. This means that the next query after `? a in the history was `? c. If H is the history of the atomic queries asked, then somewhere in H there is `? a and immediately afterwards `? c. We can therefore compensate for the reuse of c → a by allowing ourselves to go back in the history to where `? a was, and allow ourselves to ask all queries that come afterwards. We call this type of move bounded restart. Let us see what happens to our Example 2.16:

(c → a) → c   [1]
c → a         [1]       `? a

We use the second formula to get

(c → a) → c   [1]
c → a         [0]       `? c, (a)

we continue

(c → a) → c   [0]
c → a         [0]       `? c → a, (a, c)

we continue

(c → a) → c   [0]
c → a         [0]
c             [1]       `? a, (a, c)

The ‘1’ (‘0’) annotates a formula to indicate that it is active (inactive) for use. The history can be seen as the right-hand column of past queries. We can now ask any query that comes after an ‘a’ in the history, hence

(c → a) → c   [0]
c → a         [0]
c             [1]       `? c, (a, c, a)

success. The previous example suggests the following new computation with a bounded restart rule. For technical reasons, from now on we regard again databases as sets
of formulas rather than multisets. However, the results which follow hold regardless of whether databases are regarded as sets or as multisets of formulas. In this respect, we observe that, whereas for the locally linear computation without bounded restart it makes a great difference whether the database is regarded as a set or as a multiset, once bounded restart is added the difference is no longer significant.

DEFINITION 2.20 (Locally linear computation with bounded restart). In the computation with bounded restart, the queries have the form ∆ `? G, H, where ∆ is a set of formulas and the history H is a sequence of atomic goals. The rules are as follows:

• (success) ∆ `? q, H succeeds if q ∈ ∆;

• (implication) from ∆ `? A → B, H step to ∆, A `? B, H;

• (reduction) from ∆ `? q, H, if C = D1 → D2 → . . . → Dn → q ∈ ∆, then we step to ∆ − {C} `? Di, H ∗ (q) for i = 1, . . . , n;

• (bounded restart) from ∆ `? q, H step to ∆ `? q1, H ∗ (q), provided for some H1, H2, H3 it holds that H = H1 ∗ (q) ∗ H2 ∗ (q1) ∗ H3, where each Hi may be empty.

Both soundness and completeness of this procedure with respect to intuitionistic provability may not be evident. Consider the following two databases: ∆1 = {(c → a) → b} and ∆2 = {c, a → b}. If we ask the queries ∆1 `? b and ∆2 `? b using the bounded restart computation, we get that both queries reduce to c `? a, (b). Thus we see that one cannot reconstruct the original database from the history. Therefore the soundness of the system is not immediately clear. However, the procedure with bounded restart is sound and complete with respect to intuitionistic provability, as we show below.

LEMMA 2.21. The following holds in intuitionistic logic:
1. if ∆, A, B → C ` B, then ∆, (A → B) → C ` A → B;
2. (A → C) → A, (B → C) → B ` (A → B → C) → (A ∧ B).

Proof. Left to the reader.
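The procedure of Definition 2.20 can be prototyped directly. In the sketch below (our own encoding and names; the depth bound is our own safeguard, since the restart rule re-asks goals and a naive search might not otherwise terminate), Example 2.16(1), on which the plain locally linear procedure fails, now succeeds.

```python
# Locally linear computation with bounded restart (after Definition 2.20).
# A sketch: atoms are strings, A -> B is ('->', A, B), db is a frozenset.

def unpack(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1]); f = f[2]
    return body, f

def br_prove(db, goal, hist=(), depth=20):
    if depth == 0:
        return False
    if isinstance(goal, tuple):                       # implication rule
        return br_prove(db | {goal[1]}, goal[2], hist, depth)
    if goal in db:                                    # success rule
        return True
    for f in db:                                      # reduction: discard f
        body, head = unpack(f)
        if head == goal and all(br_prove(db - {f}, d, hist + (goal,), depth - 1)
                                for d in body):
            return True
    if goal in hist:                                  # bounded restart:
        i = hist.index(goal)                          # re-ask any atom that
        return any(br_prove(db, q1, hist + (goal,), depth - 1)
                   for q1 in hist[i + 1:])            # occurs after goal in H
    return False

ca, cac = ('->', 'c', 'a'), ('->', ('->', 'c', 'a'), 'c')
assert br_prove(frozenset({ca, cac}), 'a')    # Example 2.16(1) now succeeds

peirce = ('->', ('->', ('->', 'a', 'b'), 'a'), 'a')
assert not br_prove(frozenset(), peirce)      # still sound: Peirce fails
```

On Example 2.16(1) the run matches the annotated computation above: after c → a and (c → a) → c are both spent, the goal a is re-asked, the history (a, c) licenses a restart to c, and the stored atom c closes the branch.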
THEOREM 2.22 (Soundness of locally linear computation with bounded restart). For the computation of Definition 2.20 we have: if

(1) ∆ `? G, (p1, . . . , pn) succeeds,

then

(2) ∆, G → pn, pn → pn−1, . . . , p2 → p1 ` G holds.

(This theorem is a correct version of [Gabbay, 1992, Theorem 5.0.8, p. 351], which contained a mistake.) In particular, ∆ `? G, ( ) succeeds implies ∆ ` G.

Proof. By induction on the height h of a successful proof of (1). If h = 0 then G = q ∈ ∆ and the result follows immediately. Let h > 0.

Let G = A → B. The computation steps to ∆, A `? B, (p1, . . . , pn); by the induction hypothesis we have

∆, A, B → pn, pn → pn−1, . . . , p2 → p1 ` B.

By Lemma 2.21(1), we have

∆, (A → B) → pn, pn → pn−1, . . . , p2 → p1 ` A → B.

Let G be an atom q. Suppose that the reduction rule is applied to q; then ∆ = ∆′, C, with C = D1 → . . . → Dk → q, and the computation steps, for i = 1, . . . , k, to

∆′ `? Di, (p1, . . . , pn, q);

by the induction hypothesis, we have

∆′, Di → q, q → pn, pn → pn−1, . . . , p2 → p1 ` Di;

by Lemma 2.21(2), for i = 1, . . . , k, we get

∆′, D1 → . . . → Dk → q, q → pn, pn → pn−1, . . . , p2 → p1 ` Di,

and hence

∆′, C, q → pn, pn → pn−1, . . . , p2 → p1 ` q.

Finally, suppose that the bounded restart rule is applied to q. Then from ∆ `? q, (p1, . . . , q, . . . , pj, . . . , pn) the computation steps to

∆ `? pj, (p1, . . . , q, . . . , pj, . . . , pn, q);

that is, it must be that q = pi, with i < j ≤ n. By the induction hypothesis, we have

∆, pj → q, q → pn, pn → pn−1, . . . , p2 → p1 ` pj,

but since q = pi precedes pj, we have
q → pn , pn → pn−1 , . . . , p2 → p1 ` pj → q, and hence we obtain ∆, q → pn , pn → pn−1 , . . . , p2 → p1 ` q.
We now turn to completeness. We first prove that if ∆ ` G then ∆ `? G, ∅ succeeds. To this end we introduce an intermediate procedure, which we call Pi. In Pi, data are regarded as sets and each formula can be used locally more than once, but after its first use it is removed from the database, say ∆, and recorded in a separate part Γ, ‘the storage’; the data in the storage are listed according to usage. The storage Γ only contains non-atomic formulas, and the sequence of their heads forms the history of the procedure with bounded restart. For technical reasons, we allow in Pi the rule of bounded restart in a restricted form. It should be clear that this rule is redundant. Queries for Pi have the form ∆ | Γ `? G, where ∆ is a set of formulas and Γ is a list of non-atomic formulas. The rules of Pi are as follows:

• (success) ∆ | Γ `? q succeeds if q ∈ ∆;

• (implication) from ∆ | Γ `? A → B step to ∆ ∪ {A} | Γ `? B;

• (reduction) if there is a formula C = B1 → . . . → Bn → q ∈ ∆ ∪ Γ, then from ∆ | Γ `? q step to ∆′ | Γ ∗ (C) `? Bi, for i = 1, . . . , n, where ∆′ = ∆ − {C} if C ∈ ∆ and ∆′ = ∆ if C ∈ Γ;

• (bounded restart) if Γ = Γ1 ∗ (C1) ∗ (C2) ∗ Γ2, with Head(C1) = q and Head(C2) = r, then from ∆ | Γ `? q step to ∆ | Γ ∗ (C1) `? r.

As one can see, formulas of ∆ are moved to Γ the first time they are used. In case C occurs in both ∆ and Γ, one can choose to apply reduction with respect to either one or the other occurrence of C, and the database ∆′ is determined accordingly. Notice that in Pi the storage Γ plays a dual role, as history for bounded restart and as database for reduction steps. We will show how to simulate reduction steps with respect to the same formula along one branch (that is, reduction steps with respect to formulas in the storage) by bounded restart steps. One can easily prove that Pi is equivalent to the procedure for intuitionistic provability.

LEMMA 2.23. A query ∆ `? G succeeds in the basic intuitionistic procedure iff the query ∆ | ∅ `? G succeeds in Pi.
We need the following lemma. Let Atom(Σ) and NAtom(Σ) respectively denote the set of atomic and non-atomic formulas in Σ.

LEMMA 2.24. If ∆ ∪ Σ | Γ `? G succeeds in Pi, with Atom(Σ) ⊆ ∆ and NAtom(Σ) ⊆ ∆ ∪ Γ, then the query Q = ∆ | Γ `? G succeeds in Pi by a derivation of no greater size (number of nodes).

Proof. We proceed by induction on the height of a successful derivation of the query in the hypothesis. The cases of (success), (implication) and (bounded restart) are straightforward. We only consider the relevant case of reduction. Suppose that from ∆ ∪ Σ | Γ `? q we step to Qi = (∆ ∪ Σ)′ | Γ ∗ (C) `? Bi, for i = 1, . . . , n, for some C = B1 → . . . → Bn → q ∈ ∆ ∪ Σ ∪ Γ, where the precise form of (∆ ∪ Σ)′ depends on which database contains C. We have several cases. If C ∈ ∆, we step to

(∆ − {C}) ∪ (Σ − {C}) | Γ ∗ (C) `? Bi.

It is obvious that Σ − {C} satisfies the hypothesis of the lemma; hence we can apply the induction hypothesis, that is, for i = 1, . . . , n,

Q′i = ∆ − {C} | Γ ∗ (C) `? Bi

succeeds, so that we can apply reduction with respect to C ∈ ∆ and step, from ∆ | Γ `? q, to the Q′i and succeed. If C ∈ Γ, we proceed as before, applying the induction hypothesis and performing a reduction step with respect to C ∈ Γ. Finally, if C ∈ Σ − ∆, then C ∈ Γ, and we are back to the previous case. Since the transformation maps steps one-to-one, it is clear that the size of the derivation of Q is not greater than that of the original one.
We are now in a position to prove the completeness of the bounded restart computation. We show that every reduction step with respect to a formula of Γ can be simulated by a bounded restart step. Let D be a derivation of a query Q0 = ∆0 | ∅ `? G0. Given a query Q = ∆ | Γ `? q, we say that a reduction step is ‘bad’ if it is performed with respect to a formula of Γ. A derivation without ‘bad’ reduction steps is a derivation according to the procedure of Definition 2.20, with the small difference that in the latter we just record the heads of the formulas of the storage part Γ.

LEMMA 2.25. If Q0 = ∆0 | ∅ `? G0 succeeds by a derivation in Pi, then it succeeds by a derivation which does not contain ‘bad’ reduction steps, that is, by a locally linear derivation with bounded restart.

Proof. Let D0 be a Pi-derivation of Q0. Working from the root downwards, we replace every ‘bad’ reduction step by a bounded restart step. Let Q = ∆ | Γ `? q be a query in D0, and suppose that the successors of Q in D0 are obtained by a bad reduction step, that is, a reduction step with respect to a formula C1 ∈ Γ; let Γ = Γ1 ∗ (C1) ∗ Γ2 and C1 = B1 → . . . → Bn → q. Then the successors of Q are Qi = ∆ | Γ ∗ (C1) `? Bi, for i = 1, . . . , n. Since C1 ∈ Γ, C1 must have already been used at a previous step on the same branch: there is a query Q′ = ∆′ | Γ1 `? q, and for some i, say i = 1, there is a successor Q′1 of Q′
2. INTUITIONISTIC AND CLASSICAL LOGICS
45
Q′1 = ∆′ | Γ1 ∗ (C1) ⊢? B1, on the same branch. In the most general case, let B1 = D1 → . . . → Dk → r, with k > 0, and Σ1 = {D1, . . . , Dk}. Then, below Q′1 on the same branch there is a query Q∗1 = ∆′ ∪ Σ1 | Γ1 ∗ (C1) ⊢? r. It is clear that in case B1 is atomic, Σ1 = ∅ and Q∗1 = Q′1. We can assume that Q is a proper descendant of Q∗1: if it were Q∗1 = Q, then q = r and the reduction step performed on Q would just be an immediate repetition of the reduction step performed on Q′, and we can assume that D0 does not contain such immediate repetitions. The query Q1 will have a descendant corresponding to Q∗1, that is, Q∗∗1 = ∆ ∪ Σ1 | Γ ∗ (C1) ⊢? r.
To sum up, the sequence of the queries along the branch (from the root downwards) is as follows: Q′, Q′1, Q∗1, Q, Q1, Q∗∗1, where it might be that Q′1 = Q∗1 and Q∗∗1 = Q1. Since Q is a proper descendant of Q∗1, it must be that Γ = Γ1 ∗ (C1) ∗ Γ2 with Γ2 ≠ ∅, that is, Γ2 = (C2) ∗ Γ′2 and Head(C2) = r, no matter whether the successors of Q∗1 are determined by reduction or bounded restart. Moreover, since Q follows Q∗1, Atom(Σ1) ⊆ ∆ and NAtom(Σ1) ⊆ ∆ ∪ Γ. Thus, we can apply the previous lemma, so that the query Q″1 = ∆ | Γ ∗ (C1) ⊢? r succeeds by a derivation, say D∗1, of size no greater than the size of the subderivation D1 of Q∗∗1. Now, we can replace the subderivation with root Q (and successors Qi) by the following:

Q
↓
Q″1
D∗1

and we can justify the step from Q to Q″1 by bounded restart. We have described one transformation step. Proceeding downwards from the root, each transformation step reduces by one the number of bad reductions at maximal level (closest to the root), without increasing the derivation size (the number of nodes); thus the entire process must terminate, and the final derivation does not contain bad reductions.

THEOREM 2.26 (Completeness of locally linear computation with bounded restart). For the computation of Definition 2.20 we have: if ∆, G → pn, pn → pn−1, . . . , p2 → p1 ⊢ G holds, then ∆ ⊢? G, (p1, . . . , pn) succeeds.
Proof. It is sufficient, in view of the deduction theorem, to assume that G is an atom q. Let ∆′ = ∆ ∪ {q → pn, pn → pn−1, . . . , p2 → p1}. We show that if ∆′ ⊢ q then ∆ ⊢? q, (p1, . . . , pn) succeeds. By the hypothesis and Lemmas 2.23 and 2.25, we have that ∆, q → pn, pn → pn−1, . . . , p2 → p1 ⊢? q, ∅ succeeds. Let D be a derivation of the above query. Let X be the sequence (p1, . . . , pn). For each query Qt = ∆t ⊢? Gt, Ht in D, let H′t = X ∗ Ht and Q′t = ∆t ⊢? Gt, H′t. Let D′ be a derivation of ∆ ∪ {q → pn, . . . , p2 → p1} ⊢? q, X, obtained by replacing each Qt by Q′t. All uses of the bounded restart rule in D′ are still valid, because we have appended X at the beginning of Ht. Moreover, we can use the bounded restart rule to perform any step of the form

⊢? pi−1
↓
⊢? pi

rather than using the formula pi → pi−1, since X is appended at the beginning of H′t. This also holds for q → pn, as q follows pn in any H′t. Thus, D′ is also a successful derivation of ∆ ∪ {q → pn, pn → pn−1, . . . , p2 → p1} ⊢? q, X in which the formulas q → pn, pn → pn−1, . . . , p2 → p1 are never used. Thus they can be taken out of the database, and D′ is a successful derivation of ∆ ⊢? q, X.

A Historical Remark

The locally linear computation with bounded restart, first presented by Gabbay [1991], seems strongly connected to the contraction-free sequent calculi for intuitionistic logic which have been proposed, in a slightly different context, by several people: Dyckhoff [1992], Hudelmaier [1990], and Lincoln et al. [1991]. To see the intuitive connection, let us consider the query:

(1) ∆, (A → p) → q ⊢? q, ∅.

We can step by reduction to ∆ ⊢? A → p, (q) and then to ∆, A ⊢? p, (q), which, by the soundness theorem, corresponds to

(2) ∆, A, p → q ⊢? q.

In all the mentioned calculi, (1) can be reduced to (2) by a sharpened left-implication rule (here used backwards).
This modified rule is the essential ingredient for obtaining a contraction-free sequent calculus for I, at least for its implicational fragment. A formal connection with these contraction-free calculi has not been studied yet, although we think it is worth studying. It might turn out that LL-computations correspond to uniform proofs (in the sense of Miller et al. [1991]) within these calculi.
3.2 Restart Rule for Classical Logic

We now introduce a new rule, the restart rule. It is a variation of the bounded restart rule obtained by cancelling any restrictions, simply allowing us to re-ask any earlier atomic goal. We need not keep the history as a sequence, but only as a set of atomic goals. The rule becomes:

DEFINITION 2.27 (Restart rule in the LL-computation). If a ∈ H, from ∆ ⊢? q, H step to ∆ ⊢? a, H ∪ {q}.

The formal definition of locally linear computation with restart is Definition 2.20 with the restart rule above in place of the bounded restart rule.

EXAMPLE 2.28.
(a → b) → a ⊢? a, ∅
use the formula and throw it out, to get:
⊢? a → b, {a}
and then
a ⊢? b, {a}
restart:
a ⊢? a, {a, b}
success.

The above query fails in the intuitionistic computation. Thus, this example shows that we are getting a logic stronger than intuitionistic logic; namely, we are getting classical logic. This claim has to be properly proved, of course.

If we adopt the basic computation procedure for intuitionistic implication of Definition 2.6 rather than the LL-computation, we can restrict the restart rule to always choose the initial goal as the goal with which we restart. Thus, we do not need to keep the history, but only the initial goal, and the rule becomes more deterministic. On the other hand, the price we pay is that we cannot throw out the formulas of the database when they are used.

DEFINITION 2.29 (Simple computation with restart). The queries have the form ∆ ⊢? G, (G0), where G0 is a goal. The computation rules are the same as in the basic computation procedure for intuitionistic implication of Definition 2.6, plus the following rule:

(Restart) from ∆ ⊢? q, (G0) step to ∆ ⊢? G0, (G0).
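The simple computation with restart can be sketched directly in code. The following Python sketch is our own illustration, not part of the text's definitions: formulas are encoded as tuples, the function names are ours, and the loop check on repeated (database, goal) states is an implementation convenience added to guarantee termination.

```python
# Formulas: an atom is a string; an implication A -> B is the tuple ('->', A, B).

def head_body(c):
    """Decompose c = B1 -> ... -> Bn -> q into (q, [B1, ..., Bn])."""
    body = []
    while isinstance(c, tuple):
        body.append(c[1])
        c = c[2]
    return c, body

def prove(db, goal, initial, restart=True, seen=frozenset()):
    """Basic procedure of Definition 2.6; with restart=True, adds the
    restart rule of Definition 2.29 (re-ask the initial goal)."""
    db = frozenset(db)
    # (implication): from db |-? A -> B step to db + {A} |-? B
    if isinstance(goal, tuple):
        return prove(db | {goal[1]}, goal[2], initial, restart, seen)
    # loop check on repeated states along the current search path
    # (our convenience, not part of the definitions)
    if (db, goal) in seen:
        return False
    seen = seen | {(db, goal)}
    # (success)
    if goal in db:
        return True
    # (reduction): the used formula stays in the database (Definition 2.6)
    for c in db:
        if isinstance(c, tuple):
            q, body = head_body(c)
            if q == goal and all(prove(db, b, initial, restart, seen) for b in body):
                return True
    # (restart): re-ask the initial goal
    if restart and goal != initial:
        return prove(db, initial, initial, restart, seen)
    return False
```

On Example 2.28's query, {(a → b) → a} ⊢? a, the sketch succeeds with restart enabled and fails with restart disabled, matching the classical/intuitionistic split described above.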
It is clear that the initial query of any derivation will have the form Γ ⊢? A, (A). It is natural to wonder what we get if we add the restart rule of Definition 2.27 to the basic procedure for intuitionistic logic of Definition 2.6. In the next proposition we show that, given the underlying computation procedure of Definition 2.6, restarting from an arbitrary atomic goal in the history is equivalent to restarting from the initial goal. To this purpose, let ⊢?RI and ⊢?RA be respectively the deduction procedure of Definition 2.29 and the deduction procedure of Definition 2.6 extended by the restart rule of Definition 2.27.

PROPOSITION 2.30. For any database ∆ and formula G, we have: (1) ∆ ⊢?RA G, ∅ succeeds iff (2) ∆ ⊢?RI G, (G) succeeds.

Proof. Let G = A1 → . . . → Ak → q, with k ≥ 0, and let ∆′ = ∆ ∪ {A1, . . . , Ak}. Regarding (1), we have that (1) ∆ ⊢?RA G, ∅ succeeds iff (1′) ∆′ ⊢?RA q, ∅ succeeds. On the other hand, regarding (2), since ∆′ is a set and it never decreases, we have that (2) ∆ ⊢?RI G, (G) succeeds iff (2′) ∆′ ⊢?RI q, (G) succeeds iff ∆′ ⊢?RI q, (q) succeeds.

(⇐) Unless q ∈ ∆′, in which case both (1) and (2) immediately succeed, (2′) will step by reduction to some query ∆′ ⊢?RI Ai, (q); we can perform the same reduction step from (1′), and the resulting queries will contain q as the first element of the history, and we are done.

(⇒) Let (1′) succeed, and consider a successful derivation of (1′). Suppose that the restart rule is applied to a query Q′ by re-asking an atomic goal p which is the goal of an ancestor query Q of Q′. It suffices to show that we still obtain a successful derivation if we restart by re-asking an atomic goal which comes from any ancestor query of Q: this fact is more general than what we have to prove, and implies it. To show the fact above, suppose the following situation occurs in one branch of a given derivation D:

(S1) ∆1 ⊢?RA q1, H1
. . .
(S2) ∆2 ⊢?RA q2, H2
. . .
(S3) ∆3 ⊢?RA q3, H3.
The branch goes on by restart using q2 ∈ H3, that is, we step to (S4) ∆3 ⊢?RA q2, H3 ∪ {q3}. Since q1 ∈ H2 ⊆ H3, we can build an alternative derivation D′, which coincides with D up to (S3) and goes on by restart using q1; that is, after (S3) we step to (S4′) ∆3 ⊢?RA q1, H3 ∪ {q3}. Since ∆1 ⊆ ∆2 ⊆ ∆3, by monotonicity, from (S4′) we can reach (S4) by doing the same steps we did to reach (S2) from (S1).

As is clear from the proof of the above proposition, we can further restrict restart from the initial goal G0 to restart from the atom q0 = Head(G0).

In the following we prove soundness and completeness with respect to classical logic of two proof procedures: one is the basic procedure for intuitionistic logic with the additional rule of restart from the initial goal, and the other is the locally linear computation procedure with the additional rule of restart from any previous atomic goal.

Soundness and Completeness of Restart from the Initial Goal

We show that the proof procedure obtained by adding the rule of restart from the initial goal to the basic procedure for intuitionistic logic of Definition 2.6 is sound and complete with respect to classical provability. We need the notion of complement of a formula, which works as its negation.

DEFINITION 2.31. Let A be any formula. The complement of A, denoted by Cop(A), is the following set of formulas:

Cop(A) = {A → p | p any atom of the language}.

The set Cop(A) represents the negation of A in our implicational language, which contains neither ¬ nor ⊥. We show that we can replace any application of the restart rule by a reduction step using a formula in Cop(A).

LEMMA 2.32. (1) Γ ⊢? A succeeds by using the restart rule (from the initial goal) if and only if (2) Cop(A) ∪ Γ ⊢? A succeeds without using restart, that is, by the intuitionistic procedure of Definition 2.6.
Proof. We can easily establish a mapping between derivations of (1) and derivations of (2): each query Σ ⊢? G, (A) in a derivation of (1) corresponds to a query Cop(A) ∪ Σ ⊢? G in a derivation of (2) and vice versa. Let Σ ⊢? r, (A) be any step in a derivation of (1) where restart is used, so that the next step will be Σ ⊢? A, (A). Since A → r ∈ Cop(A), from the corresponding query Cop(A) ∪ Σ ⊢? r we can step to Cop(A) ∪ Σ ⊢? A by reduction with respect to A → r. On the other hand, whenever we step from Cop(A) ∪ Σ ⊢? q to Cop(A) ∪ Σ ⊢? A by reduction with respect to A → q ∈ Cop(A), we can achieve the same result by restarting from A.

We will prove first that ∆ ⊢ A in classical logic iff ∆ ∪ Cop(A) ⊢? A succeeds by the procedure defined in Definition 2.6.

LEMMA 2.33. For any database ∆ and formula G such that ∆ ⊇ Cop(G), and for any goal A, conditions (a) and (b) below imply condition (c):

(a) ∆ ∪ {A} ⊢? G succeeds;
(b) ∆ ∪ Cop(A) ⊢? G succeeds;
(c) ∆ ⊢? G succeeds.

Proof. Since ∆ ∪ Cop(A) ⊢? G succeeds and computations are finite, only a finite number of the elements of Cop(A) are used in the computation. Assume then that

(b1) ∆, A → p1, . . . , A → pn ⊢? G succeeds.
We use the cut rule. Since ∆ ⊇ Cop(G), we have G → pi ∈ ∆, and hence by (a) and the computation rules ∆ ∪ {A} ⊢? pi succeeds. Thus ∆ ⊢? A → pi succeeds for all i = 1, . . . , n. Now by cut on (b1) we get that ∆ ⊢? G succeeds.

THEOREM 2.34. For any ∆ and A, (a) is equivalent to (b) below:

(a) ∆ ⊢ A in classical logic;
(b) ∆ ∪ Cop(A) ⊢? A succeeds by the intuitionistic procedure defined in Definition 2.6.
Proof. 1. Show (b) implies (a). Assume ∆ ∪ Cop(A) ⊢? A succeeds. Then by the soundness of the computation procedure we get that ∆ ∪ Cop(A) ⊢ A in intuitionistic logic, and hence in classical logic. Since the proof is finite, there is a finite set of the form {A → p1, . . . , A → pn} such that
(a1) ∆, A → p1, . . . , A → pn ⊢ A (in intuitionistic logic).
We must also have that ∆ ⊢ A in classical logic, because if there were an assignment h making ∆ true and A false, it would also make all the A → pi true, contradicting (a1). This concludes the proof that (b) implies (a).

2. Show that (a) implies (b). We prove that if ∆ ∪ Cop(A) ⊢? A does not succeed then ∆ ⊬ A in classical logic. Let ∆0 = ∆ ∪ Cop(A). We define a sequence of databases ∆n, n = 1, 2, . . . as follows. Let B1, B2, B3, . . . be an enumeration of all formulas of the language. Assume ∆n−1 has been defined and assume that ∆n−1 ⊢? A does not succeed. We define ∆n: if ∆n−1 ∪ {Bn} ⊢? A does not succeed, let ∆n = ∆n−1 ∪ {Bn}. Otherwise, from Lemma 2.33 we must have that ∆n−1 ∪ Cop(Bn) ⊢? A does not succeed, and so let ∆n = ∆n−1 ∪ Cop(Bn). Let ∆′ = ⋃n ∆n. Clearly ∆′ ⊢? A does not succeed. Define an assignment of truth values h on the atoms of the language by h(p) = true iff ∆′ ⊢? p succeeds. We now prove that for any B, h(B) = true iff ∆′ ⊢? B succeeds, by induction on B.

(a) For atoms this is the definition.

(b) Let B = C → D. We prove the two directions by simultaneous induction.

(b1) Suppose ∆′ ⊢? C → D succeeds. If h(C) = false, then h(C → D) = true and we are done. Thus, assume h(C) = true. By the induction hypothesis, it follows that ∆′ ⊢? C succeeds. Since by hypothesis we have that ∆′, C ⊢? D succeeds, by cut we obtain that ∆′ ⊢? D succeeds, and hence by the induction hypothesis h(D) = true.

(b2) Suppose ∆′ ⊢? C → D does not succeed; we will show that h(C → D) = false. Let Head(D) = q; we get

(1) ∆′ ⊢? D does not succeed,
(2) ∆′, C ⊢? q does not succeed.

Hence by the induction hypothesis on (1) we have h(D) = false. We show that ∆′ ⊢? C must succeed. Suppose on the contrary that ∆′ ⊢? C does not succeed. Hence C ∉ ∆′. Let Bn = C in the given
enumeration. Since Bn ∉ ∆n, by construction it must be that Cop(C) ⊆ ∆′. In particular C → q ∈ ∆′, and hence ∆′, C ⊢? q succeeds, against (2). We have shown that ∆′ ⊢? C succeeds, whence h(C) = true by the induction hypothesis. Since h(D) = false, we obtain h(C → D) = false.

We can now complete the proof. Since ∆′ ⊢? A does not succeed, we get h(A) = false. On the other hand, for any B ∈ ∆ ∪ Cop(A), h(B) = true (since ∆ ∪ Cop(A) ⊆ ∆′). This means that ∆ ∪ Cop(A) ⊬ A in classical logic. This completes the proof.

From the above theorem and Lemma 2.32 we immediately obtain the completeness of the proof procedure with restart from the initial goal.

THEOREM 2.35. ∆ ⊢ A in classical logic iff ∆ ⊢? A, (A) succeeds using the restart rule from the initial goal, added to the procedure of Definition 2.6.

Soundness and Completeness of the LL-procedure with Restart from any Previous Atomic Goal

We now examine soundness and completeness for the locally linear computation of Definition 2.20 with the rule of restart from any previous goal. As the next example shows, we cannot restrict the application of restart to the first atomic goal occurring in the computation.

EXAMPLE 2.36. The following query succeeds: (p → q) → p, p → r ⊢? r, ∅, by the following computation:

(p → q) → p, p → r ⊢? r, ∅
(p → q) → p ⊢? p, {r}
⊢? p → q, {r, p}
p ⊢? q, {r, p}
p ⊢? p, {r, p, q}    restart from p.
It is clear that restarting from r, the first atomic goal, would not help.

THEOREM 2.37 (Soundness and completeness of locally linear computation with restart). ∆ ⊢? G, H succeeds iff ∆ ⊢ G ∨ ⋁H in classical logic.

Proof. Soundness. We prove soundness by induction on the length of the computation.

1. Length 0. In this case G = q is atomic and q ∈ ∆. Thus ∆ ⊢ q ∨ ⋁H.
2. Length k + 1.

(a) G = A → B. ∆ ⊢? G, H succeeds if ∆ ∪ {A} ⊢? B, H succeeds; by the induction hypothesis ∆ ∪ {A} ⊢ B ∨ ⋁H, and hence ∆ ⊢ G ∨ ⋁H.

(b) G = q and, for some B = B1 → . . . → Bn → q ∈ ∆, (∆ − {B}) ⊢? Bi, H ∪ {q} succeeds for i = 1, . . . , n. By the induction hypothesis ∆ − {B} ⊢ Bi ∨ q ∨ ⋁H for i = 1, . . . , n. However, in classical logic (B1 ∨ q) ∧ . . . ∧ (Bn ∨ q) ≡ (B1 → . . . → Bn → q) → q. Hence ∆ − {B} ⊢ (B → q) ∨ ⋁H, and by the deduction theorem ∆ ⊢ q ∨ ⋁H.

(c) G = q and the restart rule was used, i.e. for some a ∈ H, ∆ ⊢? a, H ∪ {q} succeeds. Hence ∆ ⊢ a ∨ q ∨ ⋁H, and since a ∈ H we get ∆ ⊢ q ∨ ⋁H.
Completeness. We prove completeness by induction on the complexity of the query, defined as follows. Let Q = ∆ ⊢? G, H. We define cp(Q) as the multiset of the complexity values of the non-atomic formulas in ∆ ∪ {G} if there are any, and as the empty multiset otherwise, i.e.

cp(Q) = [cp(A1), . . . , cp(An)]  if NAtom(∆ ∪ {G}) = {A1, . . . , An},
cp(Q) = ∅                        if NAtom(∆ ∪ {G}) = ∅.

We consider the following relation on integer multisets: given α = [n1, . . . , np] and β = [m1, . . . , mq], we write α ≺ β if either α ⊂ β, or α is obtained from β by replacing some occurrence of mi ∈ β by a multiset of numbers strictly smaller than mi. It is well known that the relation ≺ is a well-founded ordering on integer multisets [Dershowitz and Manna, 1979].

We are ready for the completeness proof. Let Q = ∆ ⊢? G, H; we show that if ∆ ⊢ G ∨ ⋁H in classical logic then Q succeeds, by induction on cp(Q).

1. The base of the induction, cp(Q) = ∅, occurs when G and all formulas of the database ∆ are atoms; this case is clear, because it must be that ∆ ∩ ({G} ∪ H) ≠ ∅.

2. Let cp(Q) ≠ ∅. If G is of the form A → B, then we have

(*) ∆ ⊢ (A → B) ∨ ⋁H iff ∆, A ⊢ B ∨ ⋁H.
Let Q′ be the query ∆, A ⊢? B, H; since cp(A), cp(B) < cp(A → B), we easily obtain that cp(Q′) ≺ cp(Q); thus, by (*) and the induction hypothesis, Q′ succeeds, whence Q succeeds by the implication rule.

3. Let cp(Q) ≠ ∅ and let G be an atom q. If ∆ ⊢ q ∨ ⋁H, there must be a formula C ∈ ∆ such that Head(C) ∈ {q} ∪ H; otherwise we could define a countermodel (by making false all atoms in H ∪ {q} and true all heads of formulas of ∆). Assume C is such a formula and Head(C) = p1. If Body(C) is empty, then the computation succeeds either immediately or by the restart rule (according to whether p1 = q or p1 ∈ H). Otherwise, let C = A1 → . . . → An → p1, with n > 0. Then in classical logic we have

(**) ∆ ⊢ q ∨ ⋁H iff for all i = 1, . . . , n, ∆ − {C} ⊢ Ai ∨ q ∨ ⋁H.

Let Qi = ∆ − {C} ⊢? Ai, H ∪ {q} for i = 1, . . . , n. Since cp(C) = cp(A1) + · · · + cp(An) + n, we have cp(Ai) < cp(C), so that each cp(Qi) ≺ cp(Q). By the hypothesis, (**) and the induction hypothesis, each Qi succeeds. We can now show that Q succeeds as follows. First, if p1 ≠ q, we use restart with p1 ∈ H and ask ∆ ⊢? p1, H ∪ {q}. Then, in any case, we proceed by reduction with respect to C and ask the Qi, for i = 1, . . . , n, which succeed.
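The multiset ordering used in the induction above can be made executable. The Python sketch below (our own naming) checks one standard characterisation of the transitive closure of ≺: after cancelling common elements, β's remainder must be non-empty and every leftover element of α must lie strictly below some leftover element of β. Both the proper sub-multiset case and the "replace an element by strictly smaller ones" case of the definition are instances of this test.

```python
from collections import Counter

def multiset_less(alpha, beta):
    """Dershowitz-Manna ordering on finite multisets of integers
    (a standard characterisation of the transitive relation)."""
    a, b = Counter(alpha), Counter(beta)
    a_rest = a - b   # elements of alpha not cancelled against beta
    b_rest = b - a   # elements of beta not cancelled against alpha
    if not b_rest:
        return False
    # every leftover element of alpha must be dominated by some leftover of beta
    return all(any(x < y for y in b_rest) for x in a_rest)
```

For instance, replacing the single occurrence of 3 in [3] by [2, 2, 1] gives a ≺-smaller multiset, and [1] ≺ [1, 1] by proper inclusion, while ≺ is irreflexive.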
3.3 Some Efficiency Considerations

The proof systems based on locally linear computation are a good starting point for designing efficient automated deduction procedures: on the one hand, proof search is guided by the goal; on the other hand, derivations have a smaller size, since a formula that has to be reused does not create further branching.

We now want to remark upon termination of the procedures. The basic LL-procedure obviously terminates: since formulas are thrown out as soon as they are used in a reduction step, every branch of a given derivation eventually ends with a query which either immediately succeeds, or from which no further reduction step is possible. This was the motivation of the LL-procedure as an alternative to a loop-checking mechanism. Does the (bounded) restart rule preserve this property? As we have stated, it does not, in the sense that a silly kind of loop may be created by restart. Let us consider the following example; here we give the computation for intuitionistic logic, but the example works for the classical case as well:

a → b, b → a ⊢? a, ∅
a → b ⊢? b, (a)
⊢? a, (a, b)
⊢? b, (a, b, a)
⊢? a, (a, b, a, b)
. . .
This is a loop created by restart. It is clear that continuing the derivation by restart does not help, as none of the new atomic goals matches the head of any formula in the database. In the case of classical logic, we can modify the restart rule as follows: from ∆ ⊢? q, H, step to ∆ ⊢? q1, H ∪ {q}, provided there exists a formula C ∈ ∆ with q1 = Head(C) and q1 ∈ H. It is obvious that this restriction preserves completeness.

In the case of intuitionistic logic, the situation is slightly more complex. The requirement that the atom from which we restart must match the head of some formula is too strong, as the next example shows.

EXAMPLE 2.38. Let ∆ contain the following formulas:

A1 = s → r,
A2 = q → s,
A3 = (b → r) → q,
A4 = b → a → r,
A5 = (s → q) → a.
Then we have the following deduction:

∆ ⊢? r, ∅
A2, A3, A4, A5 ⊢? s, (r)
A3, A4, A5 ⊢? q, (r, s)
A4, A5 ⊢? b → r, (r, s, q)
A4, A5, b ⊢? r, (r, s, q).

Then we get A5, b ⊢? b, (r, s, q, r) and A5, b ⊢? a, (r, s, q, r). The first query succeeds; the latter is reduced to

b ⊢? s → q, (r, s, q, r, a),
b, s ⊢? q, (r, s, q, r, a);
we then apply bounded restart. The only options are r and a; the latter would not help, so we choose r, even if it does not match the head of any formula in the database:

b, s ⊢? r, (r, s, q, r, a, q).

We now apply bounded restart on s:

b, s ⊢? s, (r, s, q, r, a, q, r)  success.

It should be clear that the atom with which we finally restart must match the head of some formula of the database in order to make any progress. But this atom might only be reachable through a sequence of restart steps which goes further and further back in the history. To handle this situation, we require that the atom chosen for restart matches some head, but we ‘collapse’ several restart steps into a single one. In other words, we allow restart from a previous goal q which is accessible from the current one through a sequence of bounded restart steps. Given a history H = (q1, q2, . . . , qn), we define the relation ‘qj is accessible from qi in H’, for qj in the sequence, denoted by Acc(H, qi, qj), as follows:

Acc(H, qi, qj) ≡ ∃qk ∈ H (k < j ∧ qk = qi) ∨ ∃qk ∈ H (Acc(H, qi, qk) ∧ Acc(H, qk, qj)).
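The accessibility relation just defined can be computed by closing the one-step relation ("qj occurs in H strictly after an occurrence of qi") under transitivity. A minimal Python sketch (the function names and list encoding of histories are ours):

```python
def accessible(history, qi):
    """Set of atoms qj with Acc(H, qi, qj): directly, qj occurs in H
    strictly after an occurrence of qi; then close under transitivity."""
    def direct(src):
        # first occurrence of src; anything occurring later is one step away
        first = min((k for k, a in enumerate(history) if a == src), default=None)
        if first is None:
            return set()
        return set(history[first + 1:])

    reach = direct(qi)
    while True:   # transitive closure by fixpoint iteration
        wider = reach | {b for a in reach for b in direct(a)}
        if wider == reach:
            return reach
        reach = wider
```

On the history (r, s, q, r, a) of Example 2.38, the atoms directly reachable from q are r and a, and s becomes reachable via the chain q-to-r, r-to-s, matching the collapsed restart steps described above.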
The modified bounded restart rule for intuitionistic logic becomes: from ∆ ⊢? q, H, step to ∆ ⊢? q′, H ∗ (q), provided

1. there exists a formula C ∈ ∆ with q′ = Head(C), and
2. Acc(H, q, q′) holds.

It is easy to see that the above rule ensures the termination of the procedure without spoiling its completeness.

Another issue which is important from the point of view of proof search is whether backtracking is necessary when searching for a derivation. The next lemma shows that in the case of classical logic backtracking is not necessary, that is, it does not matter which formula we choose to match an atomic goal in a reduction step.

LEMMA 2.39. Let A = A1 → A2 → . . . → An → q and B = B1 → B2 → . . . → Bm → q. Then (a) is equivalent to (b):
(a) ∆, A ⊢? Bi, H ∪ {q} succeeds for i = 1, . . . , m;
(b) ∆, B ⊢? Ai, H ∪ {q} succeeds for i = 1, . . . , n.

Proof. By Theorem 2.37, conditions (a) and (b) are equivalent, respectively, to (a′) and (b′) below:

(a′) ∆, A ⊢ Bi ∨ q ∨ ⋁H, for i = 1, . . . , m,
(b′) ∆, B ⊢ Ai ∨ q ∨ ⋁H, for i = 1, . . . , n.

By classical logic, (a′) and (b′) are equivalent to (a″) and (b″) respectively:

(a″) ∆, A ⊢ (B1 ∧ . . . ∧ Bm) ∨ q ∨ ⋁H,
(b″) ∆, B ⊢ (A1 ∧ . . . ∧ An) ∨ q ∨ ⋁H,

which are equivalent to (a‴) and (b‴) respectively:

(a‴) ∆, A ⊢ ((B1 ∧ . . . ∧ Bm → q) → q) ∨ ⋁H,
(b‴) ∆, B ⊢ ((A1 ∧ . . . ∧ An → q) → q) ∨ ⋁H.

Both (a‴) and (b‴) are equivalent to (c) below by the deduction theorem for classical logic:

(c) ∆, A, B ⊢ q ∨ ⋁H.

This concludes the proof of Lemma 2.39.

By the previous lemma we immediately have:

PROPOSITION 2.40. In any computation of ∆ ⊢? q, H with restart, no backtracking is necessary. The atom q can match the head of any A1 → . . . → An → q ∈ ∆, and success or failure does not depend on the choice of such a formula.

The property parallel to Lemma 2.39 and Proposition 2.40 does not hold in the intuitionistic case. Let A = A1 → . . . → An → q and B = B1 → . . . → Bm → q. Then (a) is not necessarily equivalent to (b):

(a) ∆, A ⊢? Bi, H ∗ (q) succeeds for i = 1, . . . , m;
(b) ∆, B ⊢? Ai, H ∗ (q) succeeds for i = 1, . . . , n.

In this regard, let ∆ = {a}, A = a → q, B = b → q. Then ∆, A ⊢? b, (q) is a, a → q ⊢? b, (q)
while ∆, B ⊢? a, (q) is a, b → q ⊢? a, (q). The first computation fails (we cannot restart here) and the second computation succeeds. Trivially, {a, a → q, b → q} does not prove b, whereas it does prove a. We thus see that in the intuitionistic computation ∆ ⊢? q, H with bounded restart, backtracking is certainly necessary: the atom q can match any formula A1 → . . . → An → q, and success or failure may depend on the choice of the formula. This may give an intuitive account of the difference in complexity between the intuitionistic and the classical case.4
4 CONJUNCTION AND NEGATION

In this and the next section we extend the language to the full propositional language. The addition of conjunction to the propositional language does not change the system much. As we have seen at the beginning of the chapter, ∧ can be fully characterised by the following two conditions on the consequence relation:

1. ∧-elimination rule: A ∧ B ⊢ A and A ∧ B ⊢ B;
2. ∧-introduction rule: A, B ⊢ A ∧ B.

Let ⊢ be the smallest consequence relation for the language of {→, ∧} closed under the deduction theorem and the ∧-elimination and ∧-introduction rules. We want to characterise ⊢ computationally. This we do using the following lemmas.

LEMMA 2.41.

1. A → B → C ⊢ A ∧ B → C
2. A ∧ B → C ⊢ A → B → C
3. A → B ∧ C ⊢ (A → B) ∧ (A → C)
4. (A → B) ∧ (A → C) ⊢ A → B ∧ C.

Proof. Exercise.
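Lemma 2.41 asserts intuitionistic derivability. A quick mechanical check of the weaker classical direction can be done with truth tables; the Python sketch below (the tuple encoding and function names are our own) confirms all four entailments classically, which is a necessary, though not sufficient, sanity check.

```python
from itertools import product

# Formulas: atoms are strings; ('->', A, B) and ('&', A, B) are compound.
def atoms(f):
    return {f} if isinstance(f, str) else atoms(f[1]) | atoms(f[2])

def value(f, v):
    """Classical truth value of formula f under assignment v (dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == '&':
        return value(f[1], v) and value(f[2], v)
    return (not value(f[1], v)) or value(f[2], v)   # '->'

def entails(premise, conclusion):
    """Classical entailment, checked over all assignments."""
    names = sorted(atoms(premise) | atoms(conclusion))
    return all(value(premise, dict(zip(names, bits)))
               <= value(conclusion, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))
```

For instance, items 1 and 2 become the two classical entailments between a → (b → c) and (a ∧ b) → c; the intuitionistic claims themselves of course need a proof-theoretic argument, as the exercise asks.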
4 We recall that intuitionistic provability is PSPACE-complete [Statman, 1979], whereas classical provability is coNP-complete.
5 As a notational convention, we assume that ∧ binds more tightly than →.
LEMMA 2.42. Every formula A with conjunctions is equivalent in intuitionistic logic to a conjunction of formulas which contain no conjunctions. Proof. Use the equivalences of Lemma 2.41 to pull the conjunctions out.
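The normalisation behind Lemma 2.42 can be sketched as follows (Python; the tuple encoding of formulas is our own assumption). The recursion uses the two directions A → (B ∧ C) ≡ (A → B) ∧ (A → C) and (A ∧ B) → C ≡ A → (B → C) to pull conjunctions out, returning the list of conjunction-free conjuncts.

```python
# Formulas: atoms are strings; ('->', A, B) and ('&', A, B) are compound.

def conj_free(a):
    """Present formula a as a list of conjunction-free formulas whose
    conjunction is intuitionistically equivalent to a (Lemma 2.42)."""
    if isinstance(a, str):
        return [a]
    if a[0] == '&':
        return conj_free(a[1]) + conj_free(a[2])
    # a = X -> Y: flatten X into a list of conjunction-free antecedents,
    # then distribute them over each conjunct of Y
    ants = conj_free(a[1])
    result = []
    for d in conj_free(a[2]):
        for x in reversed(ants):   # build ants[0] -> ... -> ants[-1] -> d
            d = ('->', x, d)
        result.append(d)
    return result
```

For example, a → (b ∧ c) normalises to the two formulas a → b and a → c, while (a ∧ b) → c normalises to the single formula a → (b → c).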
DEFINITION 2.43.

1. With any formula A with conjunctions we associate a unique (up to equivalence) set C(A) of formulas as follows: given A, use Lemma 2.42 to present it as ⋀ Ai, where each Ai contains no conjunctions. Let C(A) = {Ai}.

2. Let ∆ be a set of formulas. Define C(∆) = ⋃A∈∆ C(A).

3. We can now define ∆ ⊢? A for ∆ and A containing conjunctions: we simply compute C(∆) ⊢? C(A), that is, C(∆) ⊢? B for all B ∈ C(A).
4. The computation rule for conjunction can be stated directly: ∆ ⊢? A ∧ B succeeds iff ∆ ⊢? A succeeds and ∆ ⊢? B succeeds.

We now turn to negation. As we have seen, negation can be introduced in classical and intuitionistic logic by adding a constant symbol ⊥ for falsity and defining the negation ¬A as A → ⊥. We will adopt this definition. However, we have to modify the computation rules, because we have to allow for the special nature of ⊥, namely that ⊥ ⊢ A holds for any A.

DEFINITION 2.44 (Computations for data and goal containing ⊥, for intuitionistic and classical logic). The basic procedure is the one defined in Definition 2.6 (plus the restart rule for classical logic), with the following modifications:

1. Modify the (success) rule to read: ∆ ⊢? q immediately succeeds if q ∈ ∆ or ⊥ ∈ ∆.

2. Modify the (reduction) rule to read: from ∆ ⊢? q step, for i = 1, . . . , n, to ∆ ⊢? Bi, if there is C ∈ ∆ such that Head(C) = q or Head(C) = ⊥, and Body(C) = (B1, . . . , Bn).

In Definition 2.44 we have actually defined two procedures: one is the computation without the restart rule, for intuitionistic logic with ⊥, and the other is the computation with the restart rule, for classical logic. We have to show that the two procedures indeed correctly capture the intended fragments of the respective systems. This is easy to see. The rule ⊥ ⊢ A is built into the computation via the
modifications in 1. and 2. of Definition 2.44, and hence we know we are getting intuitionistic logic. To show that the restart rule yields classical logic, it is sufficient to show that the computation (A → ⊥) → ⊥ ⊢? A always succeeds with the restart rule. This can also be easily checked.

To complete the picture, we show in the next proposition that the computation of ∆ ⊢? A with restart is the same as the computation of ∆ ⊢? (A → ⊥) → A without restart. This means that the restart rule (with original goal A) can be effectively implemented by adding A → ⊥ to the database and using the formula A → ⊥ and the ⊥-rules to replace uses of the restart rule. These considerations correspond to the known translation from classical logic to intuitionistic logic, namely: ∆ ⊢ A in classical logic iff ∆ ⊢ ¬A → A in intuitionistic logic. The proof is similar to that of Lemma 2.32; indeed, Cop(G) is a way of representing G → ⊥ without using ⊥.6

PROPOSITION 2.45. For any database ∆ and goal G: ∆ ⊢? G succeeds with restart iff ∆ ∪ {G → ⊥} ⊢? G succeeds without restart.

The above proposition can be used to prove that the restart rule can also be generalised to allow restart at any time, not only when we reach an atomic goal; i.e. from ∆ ⊢? A, (G0) step to ∆ ⊢? G0, (G0) for any formula A. The soundness of this generalisation may be easily derived from Proposition 2.45. Details are left to the reader.

EXAMPLE 2.46. We check: (a → b) → b ⊢? (a → ⊥) → b. By the implication rule:

(a → b) → b, a → ⊥ ⊢? b.

We apply reduction w.r.t. (a → b) → b:

(a → b) → b, a → ⊥ ⊢? a → b
(a → b) → b, a → ⊥, a ⊢? b.

6 Recently this result has appeared in [Nadathur, 1998], where a uniform proof system for (first-order) classical logic is given.
We apply reduction w.r.t. a → ⊥:

(a → b) → b, a → ⊥, a ⊢? a  success.

EXAMPLE 2.47. We check:

(q → ⊥) → q ⊢? q, (q)
(q → ⊥) → q ⊢? q → ⊥, (q)
(q → ⊥) → q, q ⊢? ⊥, (q).

We cannot use the reduction rule here, so we fail in intuitionistic logic. In classical logic we can use restart to obtain:

(q → ⊥) → q, q ⊢? q, (q)

and terminate successfully.

The locally linear computation with bounded restart (respectively restart) is complete for the (→, ⊥, ∧)-fragment of intuitionistic (classical) logic. Here we show an example for intuitionistic logic.

EXAMPLE 2.48. (((a → ⊥) → ⊥) → ⊥) → c ⊢? (a → ⊥) → (b → ⊥) → c. We show a derivation below. Steps (i) and (ii) are obtained by the reduction rule in the special case of ⊥. Step (iii) is obtained by bounded restart. The explanation of the other steps is left to the reader.

(((a → ⊥) → ⊥) → ⊥) → c ⊢? (a → ⊥) → (b → ⊥) → c, ∅
(((a → ⊥) → ⊥) → ⊥) → c, a → ⊥ ⊢? (b → ⊥) → c, ∅
(((a → ⊥) → ⊥) → ⊥) → c, a → ⊥, b → ⊥ ⊢? c, ∅
a → ⊥, b → ⊥ ⊢? ((a → ⊥) → ⊥) → ⊥, (c)
a → ⊥, b → ⊥, (a → ⊥) → ⊥ ⊢? ⊥, (c)
a → ⊥, (a → ⊥) → ⊥ ⊢? b, (c, ⊥)
(i)   (a → ⊥) → ⊥ ⊢? a, (c, ⊥, b)
(ii)  ⊢? a → ⊥, (c, ⊥, b, a)
a ⊢? ⊥, (c, ⊥, b, a)
(iii) a ⊢? a, (c, ⊥, b, a, ⊥)  success.
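The ⊥-modified success and reduction rules of Definition 2.44 can be sketched in code as follows. This is our own illustration: the tuple encoding, the constant name BOT, and the loop check are conveniences, not part of the definition. With initial=None the procedure is the intuitionistic one; supplying the initial goal adds restart from the initial goal, giving classical logic.

```python
BOT = '_|_'   # the falsity constant, standing for the symbol "bottom"

def head_body(c):
    """Decompose c = B1 -> ... -> Bn -> q into (q, [B1, ..., Bn])."""
    body = []
    while isinstance(c, tuple):   # c = ('->', A, B)
        body.append(c[1])
        c = c[2]
    return c, body

def prove(db, goal, initial=None, seen=frozenset()):
    """Definition 2.6 modified per Definition 2.44; initial=G0 adds restart."""
    db = frozenset(db)
    if isinstance(goal, tuple):                      # implication rule
        return prove(db | {goal[1]}, goal[2], initial, seen)
    if (db, goal) in seen:                           # loop check (our addition)
        return False
    seen = seen | {(db, goal)}
    if goal in db or BOT in db:                      # modified success rule
        return True
    for c in db:                                     # modified reduction rule:
        if isinstance(c, tuple):                     # heads q and BOT both match
            q, body = head_body(c)
            if q in (goal, BOT) and all(prove(db, b, initial, seen) for b in body):
                return True
    if initial is not None and goal != initial:      # restart (classical only)
        return prove(db, initial, initial, seen)
    return False
```

On Example 2.46, {(a → b) → b} ⊢? (a → ⊥) → b succeeds already intuitionistically, while on Example 2.47, {(q → ⊥) → q} ⊢? q fails intuitionistically and succeeds once restart is enabled.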
5 DISJUNCTION
The handling of disjunction is much more difficult than the handling of conjunction and negation. Consider the formula a → (b ∨ (c → d)). We cannot rewrite this formula in intuitionistic logic to anything of the form B → q, where q is atomic (or ⊥). We therefore have to change our proof procedures to accommodate the general form of an intuitionistic formula with disjunction. In classical logic, disjunctions can be pulled to the outside of formulas using the following rules:

1. (A ∨ B → C) ≡ (A → C) ∧ (B → C)
2. (C → A ∨ B) ≡ (C → A) ∨ (C → B),

where ≡ denotes logical equivalence in classical logic. Rule 1. is valid in intuitionistic logic, but 2. is not. In fact, if we add 2. as an axiom schema to intuitionistic logic, we obtain a stronger logical system known as LC (see Chapter 4). In intuitionistic logic we have the disjunction property, namely ⊢I A ∨ B iff ⊢I A or ⊢I B.7

7 This is not true in classical logic C. Thus, for example, ⊢C A ∨ (A → B) but ⊬C A and ⊬C A → B.

We have seen, at the beginning of the chapter, in Definition 2.4, the consequence relation rules defining disjunction. In view of those rules we may want to adopt the computation rule below for disjunction in the goal:

R1: from ∆ ⊢? A ∨ B step to ∆ ⊢? A or to ∆ ⊢? B.

This corresponds to the consequence relation rules (4a) and (4b) of Definition 2.4. In case we have disjunction in the data, the rule is clear:

R2: from ∆, A ∨ B ⊢? C step to ∆, A ⊢? C and to ∆, B ⊢? C.

This corresponds to the consequence relation rule (4c) of Definition 2.4. Let us adopt the above two rules for computation.

EXAMPLE 2.49. A ∨ B ⊢? A ∨ B. Using R1 we get A ∨ B ⊢? A or A ∨ B ⊢? B, which fail. However, using R2 we get A ⊢? A ∨ B and B ⊢? A ∨ B
2. INTUITIONISTIC AND CLASSICAL LOGICS
which succeed using R1.

We can try to incorporate the two rules for disjunction within a goal-directed proof procedure for full intuitionistic logic. We tentatively propose the following definition.

DEFINITION 2.50. (Computation rules for full intuitionistic logic with disjunction.)

1. The propositional language contains the connectives ∧, ∨, →, ⊥. Formulas are defined inductively as usual.

2. We define the operation ∆ + A, for any formula A = A1 ∧ . . . ∧ An, as ∆ + A = ∆ ∪ {A1, . . . , An}, provided the Ai are not conjunctions.

3. The computation rules are as follows:

(suc) ∆ `? q succeeds if q ∈ ∆ or ⊥ ∈ ∆;

(conj) from ∆ `? A ∧ B step to ∆ `? A and to ∆ `? B;

(g-dis) from ∆ `? A ∨ B step to ∆ `? A or to ∆ `? B;

(imp) from ∆ `? A → B step to ∆ + A `? B;

(red) from ∆ `? G, if G is an atom q or G = A ∨ B, and C ∈ ∆ with C = A1 → . . . → An → D (where D is not an implication), step to (a) ∆ `? Ai for i = 1, . . . , n, and to (b) ∆ + D `? G.
(c-dis) from ∆, A ∨ B `? C step to ∆ + A `? C and to ∆ + B `? C.

Notice that we must be allowed to perform a reduction step not only when the goal is atomic, but also when it is a disjunction, as in the case A, A → B ∨ C `? B ∨ C. Similarly, even if the goal is an atom q, in the reduction rule (red) we cannot require that D in the formula A1 → . . . → An → D be atomic with D = q. If there are disjunctions in the database, at every step we can choose either to work on the goal or to split the database by the (c-dis) rule. Moreover, if the database contains n disjunctions, a systematic application of the (c-dis) rule yields 2^n branches. All of this means that if we handle disjunction in the most obvious way, we lose the goal-directedness of the deduction and the computation becomes very inefficient. Can we do better?

In the following we give an intuitionistic goal-directed procedure for data containing disjunction. It is not easy to define such a procedure in the intuitionistic case, because we must be prepared to switch the current goal with another goal coming from another disjunct, while at the same time keeping track of the dependency between a goal and the database from which it is asked. In the proof procedure defined below we use labels to account for the dependency of a goal on the database it was
asked from, and we use restart to handle disjunction. The labels in the database are partially ordered, that is, they form a tree. This way of using labels is clearly reminiscent of the Kripke semantics of intuitionistic logic. The deduction procedure defined below retains the goal-directedness of the non-disjunctive case; in particular it does not suffer from the two drawbacks of the naive procedure seen above:

• we apply the reduction rule only when we reach an atomic goal;

• we split a disjunction A ∨ B in the database only if the ‘head’ of A or B (to be defined properly) matches the current atomic goal.

At the end of this section, we will see how to obtain, as a particular case, a proof procedure for classical logic with disjunctive formulas. In principle this is not even necessary, as we can eliminate disjunction by translating A ∨ B as (A → B) → B, or as ((A → ⊥) ∧ (B → ⊥)) → ⊥, although this translation may not be very efficient.

To simplify the proof procedure, we rewrite the formulas in an inessential way by introducing the notion of a D-formula; then we define a proof procedure for D-formulas.

DEFINITION 2.51. A D-formula is defined as follows:

D := ⊥ | q | ⋀D → ⋁D

where ⋀D and ⋁D are respectively finite conjunctions and disjunctions of D-formulas.

It is easy to see that every formula is equivalent to a set (or conjunction) of D-formulas. When we consider a D-formula, we distinguish the following cases: either D is an atom, or it is ⊥, or it has the form D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En, where the Di and Ej are D-formulas, and we assume n > 0 and either m > 0 or n > 1. Thus, we can distinguish two subcases of non-atomic D-formulas:

(i) D = D1 ∧ . . . ∧ Dm → D′, with m > 0 and D′ a D-formula, and
(ii) D = E1 ∨ . . . ∨ En, with n > 1.

We need to define the Head of a D-formula:

Head(q) = {q},
Head(D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En) = Head(E1) ∪ . . . ∪ Head(En).

DEFINITION 2.52. A (labelled) query has the form

Γ `? x : E, H

where E is a D-formula, Γ is a set of labelled D-formulas and constraints of the form x ≤ y (x, y being labels), and H, the history, is a set of pairs of the form

{(x1, D1), . . . , (xn, Dn)},
where the xi are labels and the Di are D-formulas. We denote by Lab(E) the set of labels occurring in E. The proof rules for constraints are the following:

• (≤1) Γ ` x ≤ x;
• (≤2) Γ ` x ≤ y, if x ≤ y ∈ Γ;
• (≤3) Γ ` x ≤ y, if for some z, Γ ` x ≤ z and Γ ` z ≤ y.

The computation rules are the following:

• (success-atom) Γ `? x : q, H succeeds if for some y, Γ ` y ≤ x and either y : q ∈ Γ or y : ⊥ ∈ Γ;

• (implication) from Γ `? x : D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En, H step to Γ, y : D1, . . . , y : Dm, x ≤ y `? y : E1 ∨ . . . ∨ En, H, where y ∉ Lab(Γ) ∪ Lab(H) ∪ {x};

• (disjunction) from Γ `? x : E1 ∨ . . . ∨ En, H step to Γ `? x : El, H ∪ {(x, ⋁j≠l Ej)}, for some l ∈ {1, . . . , n};

• (reduction) from Γ `? x : q, H, if there is a formula y : D ∈ Γ such that q ∈ Head(D) or ⊥ ∈ Head(D), with D = D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En and Γ ` y ≤ x, then for some z such that Γ ` y ≤ z and Γ ` z ≤ x we step to Γ `? z : Di, H ∪ {(x, q)} for i = 1, . . . , m, and to Γ, z : Ej `? x : q, H for j = 1, . . . , n;

• (restart) from Γ `? x : q, H, with q atomic, if (y, D′) ∈ H, step to Γ `? y : D′, H ∪ {(x, q)};

• (reduction-false) from Γ `? x : q, H step to Γ `? y : ⊥, H for any y ∈ Lab(Γ).

Notice that the success rule is a degenerate case of the reduction rule (we have kept it distinct by imposing the constraints on the number of antecedents and consequents). Notice also the presence of the reduction-false rule. This rule is necessary in cases such as the following (if we allow them):
x : a, y : ⊥, x ≤ y `? x : c,

where the success rule does not help. However, we do not need the reduction-false rule if the starting database does not prove x : ⊥ for any x. This is stated in Lemma 2.56. The proof system can be extended with a rule for conjunction:

(conjunction) from Γ `? x : D1 ∧ . . . ∧ Dm, H step to Γ `? x : Di, H for i = 1, . . . , m.

When we apply the reduction rule to Γ `? x : q, H with respect to a formula y : D ∈ Γ, D = D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En, we have to choose a label z such that Γ ` y ≤ z and Γ ` z ≤ x, and we must step to

(1) Γ `? z : Di, H ∪ {(x, q)} for i = 1, . . . , m, and to
(2) Γ, z : Ej `? x : q, H for j = 1, . . . , n.

In general we cannot fix this z a priori. It must be a ≤-minimal z such that (1) succeeds. In specific cases we can determine z in advance: if n = 1 and E1 is atomic, we can always take z = x; if m = 0, we can always take z = y.8 The correctness of these choices follows from the property of monotony stated in Lemma 2.56.

Slight variants of the procedure are possible. For instance, we can make the reduction rule more goal-oriented. Rather than asking Γ, z : Ej `? x : q, H for j = 1, . . . , n, we can go on decomposing those Ej such that q or ⊥ is in Head(Ej). To this purpose we define a relation NEXT, whose format is NEXT(Q; y : D; S), where Q is a query, y : D belongs to the database of Q, and S is a set of queries. The intention is that S is a set of queries obtained by (repeatedly) applying reduction to Q with respect to y : D. We say a set of queries rather than the set of queries because S is not uniquely determined: as explained above, there may be several choices for the label z in front of the goals for which Γ ` y ≤ z and Γ ` z ≤ x hold. The relation NEXT just applies the standard decomposition of sequent calculi, keeping the goal focused. The relation NEXT(∆ `? x : q, H; y : D; S) is recursively defined as follows:

• If {q, ⊥} ∩ Head(D) = ∅ we let

NEXT(∆ `? x : q, H; y : D; S) ≡ S = {∆ `? x : q, H}.

8 In the general case we can postpone the choice of z to the end of the computation. The technique is rather standard. To give a (simplified) hint: we consider z as a variable Z; when an atomic goal Z : r is reached and the database contains, say, u : r, we tentatively instantiate Z with u and then check the constraints y ≤ u and u ≤ x. If they are satisfied we are done; otherwise, we try another instantiation, if any. Thus, each derivation generates a system of constraints/inequalities to be solved.
• If {q, ⊥} ∩ Head(D) ≠ ∅ and D = D1 ∧ . . . ∧ Dm → E1 ∨ . . . ∨ En, we let

NEXT(∆ `? x : q, H; y : D; S) ≡ there exist S1, . . . , Sn, z such that

1. ∆ ` y ≤ z and ∆ ` z ≤ x,
2. S = {∆ `? z : Di, H ∪ {(x, q)} | i = 1, . . . , m} ∪ S1 ∪ . . . ∪ Sn,
3. NEXT(∆ ∪ {z : Ej} `? x : q, H; z : Ej; Sj), for j = 1, . . . , n.

Then the reduction rule becomes: from Γ `? x : q, H, if there is a formula y : D ∈ Γ and a set of queries S such that

- {q, ⊥} ∩ Head(D) ≠ ∅,
- Γ ` y ≤ x, and
- NEXT(Γ `? x : q, H; y : D; S),

step to every Q ∈ S. It can be proved that the reduction rule using NEXT is equivalent to the more general reduction rule.

EXAMPLE 2.53. Let ∆ = {x : A1, . . . , x : A5}, where:

A1 = a → (b → c) ∨ (d → e),
A2 = c → p,
A3 = (b → p) → t,
A4 = q → d,
A5 = (q → e) → s.
In Figure 2.3 a derivation of ∆ `? x : a → t ∨ s, ∅ is shown; it corresponds to checking A1 ∧ . . . ∧ A5 ` a → (t ∨ s) in intuitionistic logic. We adopt the following abbreviations: we omit ∆, and in each node we only show the additional data first introduced to the database at that node. (Thus, the complete data available at each node is the initial ∆ plus the formulas introduced at the previous nodes on the branch from the root to that node.) Moreover, we omit the history, as it is clear from the structure of the tree; we make an exception for (y, s), which is used in the (unique) restart step. Here is an explanation of the steps: (4) is obtained by reduction w.r.t. A3, and (6) by reduction w.r.t. A2. (7) and (8) are obtained by reduction w.r.t. A1; (7) succeeds, as y : a was put in the data at step (2). (9) and (10) are obtained by reduction w.r.t. (b → c) ∨ (d → e). (11) is obtained by reduction w.r.t. b → c, and it succeeds, as z : b was put in the data at step (5). (12) is obtained by restart. (13) is obtained by reduction w.r.t. A5. (15) is obtained by reduction w.r.t. d → e. (16) is obtained by reduction w.r.t. A4, and it succeeds, as u : q was put in the data at step (14). If we use the formulation with the relation NEXT, from (6) we immediately step to (7), (11), and (10) without any intermediate step.
(1) `? x : a → t ∨ s, ∅
(2) x ≤ y, y : a `? y : t ∨ s
(3) `? y : t, (y, s)
(4) `? y : b → p
(5) y ≤ z, z : b `? z : p
(6) `? z : c
(7) `? y : a    [first branch of (6); succeeds]
(8) y : (b → c) ∨ (d → e) `? z : c    [second branch of (6)]
(9) y : b → c `? z : c    [first branch of (8)]
(11) `? z : b    [from (9); succeeds]
(10) y : d → e `? z : c    [second branch of (8)]
(12) `? y : s    (restart)
(13) `? y : q → e
(14) y ≤ u, u : q `? u : e
(15) `? u : d
(16) `? u : q    [succeeds]
Figure 2.3. Derivation for Example 2.53.
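The constraint rules (≤1)–(≤3) of Definition 2.52, used at every reduction step above, say that Γ ` x ≤ y holds exactly when y is reachable from x along the recorded ≤-constraints. As an illustration only (this sketch is not part of the book's procedure, and the encoding of constraints as string pairs is our own assumption), the check amounts to a graph reachability test:

```python
def leq(constraints, x, y):
    """Decide Γ ` x ≤ y by reflexive-transitive closure of the ≤-constraints.

    `constraints` is a set of pairs (u, v), each standing for u ≤ v ∈ Γ.
    """
    seen, stack = set(), [x]
    while stack:
        z = stack.pop()
        if z == y:
            return True          # covers (≤1) when z == x == y, else (≤2)/(≤3)
        if z in seen:
            continue
        seen.add(z)
        # follow every constraint z ≤ v recorded in Γ
        stack.extend(v for (u, v) in constraints if u == z)
    return False

# The label constraints of Example 2.53: x ≤ y, y ≤ z and y ≤ u.
cs = {("x", "y"), ("y", "z"), ("y", "u")}
```

With these constraints, `leq(cs, "x", "z")` holds by transitivity, while `leq(cs, "z", "x")` does not, matching the tree-shaped partial order of the labels.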
In order to prove the soundness of the procedure, we need to introduce the notion of realization of a database.

DEFINITION 2.54. Given Γ as above, let M = (W, ≤M, w0, V) be a Kripke model. A realization of Γ in M is a mapping ρ : A → W such that

• x ≤ y ∈ Γ implies ρ(x) ≤M ρ(y);
• x : D ∈ Γ implies M, ρ(x) |= D.

Given a query Q = Γ `? x : E, H, we say that Q is valid iff for all M and all realizations ρ of Γ in M we have: either M, ρ(x) |= E, or for some (y, F) ∈ H, M, ρ(y) |= F.

THEOREM 2.55 (Soundness). If Γ `? x : E, H is derivable then it is valid.

Proof. By induction on the length of computations. Details are left to the reader as an exercise.

In order to prove completeness we need to show that the cut rule, suitably formulated, is admissible. The proof makes use of some properties of the deduction procedure stated in the following lemma, whose proof is left to the reader.

LEMMA 2.56.
(i) (Monotony) If Γ `? x : D, H succeeds and Γ ⊆ ∆, ∆ ` x ≤ y, H ⊆ H′, then also ∆ `? y : D, H′ succeeds.
(ii) Γ `? x : D, H ∪ {(y, E)} succeeds iff Γ `? y : E, H ∪ {(x, D)} succeeds.
(iii) If Γ `? x : D, H ∪ {(x, D)} succeeds, then also Γ `? x : D, H succeeds.
(iv) If Γ `? x : D, H succeeds and for no label y, Γ `? y : ⊥, H succeeds, then Γ `? x : D, H succeeds without using the reduction-false rule.
(v) If Γ `? x : ⊥, H succeeds, then for any label y and formula D, Γ `? y : D, H succeeds.

By Γ[x : D] we denote that x : D ∈ Γ.

THEOREM 2.57 (Cut for disjunctive databases). If Γ[x : D] `? y : D1, H1 succeeds and ∆ `? z : D, H2 succeeds, then also

Γ[x : D/∆, z] `? y[x/z] : D1, H1[x/z] ∪ H2

succeeds, where Γ[x : D/∆, z] = (Γ − {x : D})[x/z] ∪ ∆.
Proof. The proof is similar to that of Theorem 2.10 and proceeds by induction on pairs (c, h), where c is the complexity of D and h is the height of a successful derivation of Γ[x : D] `? y : D1, H1. The precise definition of complexity does not matter: we only need that atoms have minimal complexity and that, given D of the form D1 ∧ . . . ∧ Dk → E1 ∨ . . . ∨ En, the complexity of each Di and each Ej is smaller than that of D.

We only consider the most difficult case, in which Γ[x : D] `? y : D1, H1 succeeds by reduction with respect to D. In this case D1 is an atom, say q, and either q ∈ Head(D) or ⊥ ∈ Head(D), Γ ` x ≤ y, and for some u such that Γ ` x ≤ u and Γ ` u ≤ y we step to

Γ `? u : Di, H1 ∪ {(y, q)} for i = 1, . . . , k, and
Γ, u : Ej `? y : q, H1 for j = 1, . . . , n,

which succeed with a smaller height. By the induction hypothesis, we get

(ai) Γ[x : D/∆, z] `? u[x/z] : Di, (H1 ∪ {(y, q)})[x/z] ∪ H2 for i = 1, . . . , k, and
(bj) (Γ ∪ {u : Ej})[x : D/∆, z] `? y[x/z] : q, H1[x/z] ∪ H2 for j = 1, . . . , n.

By the second premiss, we get that for some v ∉ Lab(∆),

(c) ∆, z ≤ v, v : D1, . . . , v : Dk `? v : E1 ∨ . . . ∨ En, H2 succeeds.

Since each Di has smaller complexity than D, by the induction hypothesis we can repeatedly cut (ai) and (c), so that

Γ[x : D/∆, z] `? u[x/z] : E1 ∨ . . . ∨ En, (H1 ∪ {(y, q)})[x/z] ∪ H2 succeeds.

But this implies that also

(d1) Γ[x : D/∆, z] `? u[x/z] : E1, H1[x/z] ∪ {(y[x/z], q), (u[x/z], ⋁j≥2 Ej)} ∪ H2 succeeds.

Since E1 has smaller complexity than D, again by the induction hypothesis we can cut (d1) and (b1), so that

Γ[x : D/∆, z] `? y[x/z] : q, H1[x/z] ∪ {(y[x/z], q), (u[x/z], ⋁j≥2 Ej)} ∪ H2 succeeds.

By the previous lemma, we get that also Γ[x : D/∆, z] `? u[x/z] : ⋁j≥2 Ej, H1[x/z] ∪ {(y[x/z], q)} ∪ H2 succeeds, whence

(d2) Γ[x : D/∆, z] `? u[x/z] : E2, H1[x/z] ∪ {(y[x/z], q), (u[x/z], ⋁j≥3 Ej)} ∪ H2 succeeds.

By the induction hypothesis, we can cut (d2) and (b2). Repeating this argument up to n, we finally get
Γ[x : D/∆, z] `? y[x/z] : q, H1 [x/z]∪{(y[x/z], q)}∪H2 succeeds, so that by the previous lemma the claim follows.
THEOREM 2.58 (Completeness). If Γ `? x : D, H is valid then it succeeds.

Proof. We prove the contrapositive, i.e. if Γ `? x : D, H does not succeed then it is not valid. Assume Γ `? x : D, H fails. We construct a Kripke countermodel of Γ `? x : D, H by extending the database, through the evaluation of all possible formulas at every label (each representing one world) of the database. Since such evaluation may lead, in the case of implication, to the creation of new worlds, we must extend the evaluation process to these new worlds. For this reason we consider in the construction an enumeration of pairs (xi, Di), where xi is a label and Di is a D-formula.

Let A be a denumerable alphabet of labels and L be the underlying propositional language. Let (xi, Di), for i ∈ ω, be an enumeration of the pairs of A × L, starting with the pair (x, D) and containing infinitely many repetitions, that is:

(x0, D0) = (x, D),
∀y ∈ A ∀E ∈ L ∀n ∃m > n (y, E) = (xm, Dm).

Given such an enumeration we define (i) a sequence of databases Γn, (ii) a sequence of histories Hn, and (iii) a new enumeration of pairs (yn, En), as explained below. The construction depends at each stage on the form of the formula En under examination. We refer to the notation for D-formulas introduced at the beginning of the section, i.e. either D is an atom, or it is ⊥, or D = F1 ∧ . . . ∧ Fm → D′ with m > 0, or D = G1 ∨ . . . ∨ Gn with n > 1, where the Fi, Gj, D′ are D-formulas.

• (step 0) Let Γ0 = Γ, H0 = H, (y0, E0) = (x, D).

• (step n+1) Given (yn, En), we let
– Γn+1 = Γn,
– (yn+1, En+1) = (xk+1, Dk+1), where k = max{t ≤ n | ∃s ≤ n (ys, Es) = (xt, Dt)},
– Hn+1 = Hn,
unless yn ∈ Lab(Γn) and we are in one of the cases listed below.

– En = q (q an atom) or En = G1 ∨ . . . ∨ Gm, and Γn `? yn : En, Hn fails: in this case we let Hn+1 = Hn ∪ {(yn, En)}.

– En = F1 ∧ . . . ∧ Fm → D′ and Γn `? yn : En, Hn fails: we let Γn+1 = Γn ∪ {yn ≤ xs, xs : F1, . . . , xs : Fm} and (yn+1, En+1) = (xs, D′), where xs = min{xt ∈ A | xt ∉ Lab(Γn)}.

– En = G1 ∨ . . . ∨ Gm, and Γn `? yn : En, Hn succeeds, but for every l = 1, . . . , m, Γn `? yn : ⋁j≠l Gj, Hn fails: we let Hn+1 = Hn ∪ {(yn, ⋁j≠l Gj)} for one 1 ≤ l ≤ m.

Each stage of the construction defines, so to say, a new query. Intuitively, we follow the enumeration (xn, Dn) in order to determine which formula and world to consider at the next stage, unless the formula currently evaluated is implicational (and it fails); in that case, we evaluate its consequent at a newly created world. When we come to an atomic formula, or to a disjunction, we go back to the enumeration (xn, Dn) to pick the next pair. The proof of the theorem is composed of the next lemmas.

LEMMA 2.59. ∀k ∃n ≥ k (xk, Dk) = (yn, En).

Proof. By induction on k. If k = 0, the claim holds by definition. Let (xk, Dk) = (yn, En).

(i) If either yn ∉ Lab(Γn), or Γn `? yn : En, Hn succeeds, or En is atomic, or it is a disjunction, then (xk+1, Dk+1) = (yn+1, En+1).

(ii) Otherwise, let En = F1 ∧ . . . ∧ Fm → G1 → . . . → Gt → K, where K is not an implication (t ≥ 0); then (xk+1, Dk+1) = (yn+t+1, En+t+1).
LEMMA 2.60. Under the hypothesis that Γ0 `? x0 : D0, H0 fails, the following holds: for all n ≥ 0, if (y, E) ∈ Hn, or E = ⊥, then Γn `? y : E, Hn fails.

Proof. By Lemma 2.56 we need only consider the case (y, E) ∈ Hn. We proceed by induction on n. Suppose the claim does not hold, and let n be the minimum stage for which it fails. By the hypothesis and Lemma 2.56, we can assume n > 0. Let n = m + 1 and suppose the property holds up to m. We only need to consider the cases in which Γm ≠ Γm+1 or Hm ≠ Hm+1.

• Let Em be an atom or a disjunction, and Hm+1 = Hm ∪ {(ym, Em)}. Then Γm `? ym : Em, Hm fails. Suppose for some (y, E) ∈ Hm+1, Γm `? y : E, Hm+1 succeeds. By Lemma 2.56(ii) it must be that (y, E) ≠ (ym, Em); thus (y, E) ∈ Hm, and then by the induction hypothesis and Lemma 2.56(iii) we get a contradiction.

• Let Em = F1 ∧ . . . ∧ Fk → D′ and Γm `? ym : Em, Hm fail; then Γm+1 = Γm ∪ {ym ≤ xt, xt : F1, . . . , xt : Fk} and (ym+1, Em+1) = (xt, D′),
where xt = min{xs ∈ A | xs ∉ Lab(Γm)}. Suppose for some (y, E) ∈ Hm+1 = Hm,

(*) Γm+1 `? y : E, Hm+1 succeeds.

Consider the following computation of Γm `? ym : Em, Hm. Start with Γm `? ym : F1 ∧ . . . ∧ Fk → D′, Hm, step by the implication rule to Γm+1 `? ym+1 : D′, Hm+1, and go on with the computation until an atomic goal is reached, say Σ `? z : q, H′. Since Hm+1 ⊆ H′, we can step by restart to Σ `? y : E, H′ ∪ {(z, q)}. Since Γm+1 ⊆ Σ, by (*) and monotony the latter query succeeds, whence Γm `? ym : Em, Hm succeeds, contradicting the hypothesis.

• Let Em = G1 ∨ . . . ∨ Gt, and Γm `? ym : Em, Hm succeed, but for l = 1, . . . , t, Γm `? ym : ⋁j≠l Gj, Hm fail; then Hm+1 = Hm ∪ {(ym, ⋁j≠l0 Gj)} for one 1 ≤ l0 ≤ t. Suppose for some (y, E) ∈ Hm+1, Γm+1 `? y : E, Hm+1 succeeds. By Lemma 2.56(ii), we get that Γm+1 `? ym : ⋁j≠l0 Gj, Hm+1 succeeds, so that by Lemma 2.56(iii), since Γm+1 = Γm, we get that Γm `? ym : ⋁j≠l0 Gj, Hm succeeds, against the hypothesis.
LEMMA 2.61. For all n ≥ 0, if Γn `? yn : En, Hn fails, then for all m ≥ n, Γm `? yn : En, Hm fails.

Proof. By induction on the complexity of En. If En = ⊥ or En = q, or En is a disjunction, the claim immediately follows by construction and the previous lemma, without actually using the induction hypothesis. Let En = F1 ∧ . . . ∧ Fk → D′ and Γn `? yn : En, Hn fail; then Γn+1 = Γn ∪ {yn ≤ xt, xt : F1, . . . , xt : Fk}, where xt = min{xs ∈ A | xs ∉ Lab(Γn)}. By the computation rules, Γn+1 `? xt : D′, Hn+1 fails; since (yn+1, En+1) = (xt, D′), by the induction hypothesis we have:

(*) for all m ≥ n + 1, Γm `? yn+1 : D′, Hm fails.

Suppose now that for some m > n, Γm `? yn : F1 ∧ . . . ∧ Fk → D′, Hm succeeds; then for some label u ∉ Lab(Γm),

(1) Γm ∪ {yn ≤ u, u : F1, . . . , u : Fk} `? u : D′, Hm succeeds.

On the other hand, by monotony, since Γn+1 ⊆ Γm, we easily get

(2) Γm `? yn+1 : Fj, Hm succeeds for j = 1, . . . , k.

Since yn ≤ yn+1 ∈ Γm, by cutting (1) and (2) we get that Γm `? yn+1 : D′, Hm succeeds, against (*).

LEMMA 2.62. ∀m, Γm `? x : D, Hm fails.

Proof. Immediate by the previous lemma.

LEMMA 2.63. If En = F1 ∧ . . . ∧ Ft → D′ and Γn `? yn : F1 ∧ . . . ∧ Ft → D′, Hn fails, then there is a y ∈ A such that for k ≤ n, y ∉ Lab(Γk), and for all m > n:

(i) yn ≤ y ∈ Γm,
(ii) Γm `? y : Fj, Hm succeeds for j = 1, . . . , t,
(iii) Γm `? y : D′, Hm fails.

Proof. By construction, we can take y = yn+1, the new point created at step n+1, so that (i), (ii), (iii) hold for m = n + 1. In particular,

(*) Γn+1 `? yn+1 : D′, Hn+1 fails.

Since the Γm are non-decreasing, by monotony we immediately have that (i) and (ii) also hold for every m > n + 1. By construction, we know that En+1 = D′, whence by (*) and Lemma 2.61, (iii) also holds for every m > n + 1.

LEMMA 2.64. If for some n, Γn `? y : G1 ∨ . . . ∨ Gt, Hn succeeds, then there is an m > n such that for some Gl, with 1 ≤ l ≤ t, Γm `? y : Gl, Hm succeeds.

Proof. Let Γn `? y : G1 ∨ . . . ∨ Gt, Hn succeed, and let k ≥ n be such that (y, G1 ∨ . . . ∨ Gt) is considered at step k, that is (y, G1 ∨ . . . ∨ Gt) = (yk, Ek); then Γk `? yk : Ek, Hk succeeds. If (a) for some Gl, Γk `? yk : Gl, Hk succeeds, we are done. On the other hand, if (b) for every l, Γk `? yk : ⋁j≠l Gj, Hk fails, we argue as follows: by hypothesis Γk `? yk : G1 ∨ . . . ∨ Gt, Hk succeeds; by the disjunction rule, for any l,
Γk `? yk : Gl, Hk ∪ {(yk, ⋁j≠l Gj)} succeeds; thus by definition of step (k+1) we have that Γk+1 `? yk+1 : Gl, Hk+1 succeeds for some Gl, and we are done again. If neither (a) nor (b) holds, then Γk `? yk : ⋁j≠l Gj, Hk succeeds for some l. Then, as before, there is h ≥ k such that (yk, ⋁j≠l Gj) = (yh, Eh) is considered at step h, and we can repeat the argument (at most t − 2 times) until we fall into case (a) or case (b).

Construction of the canonical model. We define an intuitionistic Kripke model M = (W, u0, ≤, V) as follows:

• W = ⋃n Lab(Γn) ∪ {u0}, where u0 ∉ ⋃n Lab(Γn);
• x ≤ y ≡ x = u0 ∨ ∃n Γn ` x ≤ y;
• V(u0) = ∅;
• V(x) = {q | ∃n x ∈ Lab(Γn) ∧ Γn `? x : q, Hn succeeds}, for x ≠ u0.

It is easy to see that ≤ is reflexive and transitive and that V is monotonic with respect to ≤.

LEMMA 2.65. For all x ∈ W with x ≠ u0 and all D-formulas E,

M, x |= E ⇔ ∃n x ∈ Lab(Γn) ∧ Γn `? x : E, Hn succeeds.

Proof. We prove both directions by mutual induction on the complexity of E. If E is an atom, the claim holds by definition; if E is ⊥, it follows by Lemma 2.60. Assume E = F1 ∧ . . . ∧ Ft → D′.

(⇐) Suppose for some m, Γm `? x : F1 ∧ . . . ∧ Ft → D′, Hm succeeds. Let x ≤ y and M, y |= F1 ∧ . . . ∧ Ft, for some y. Then M, y |= Fj for j = 1, . . . , t. By definition of ≤ we have that for some n1, Γn1 ` x ≤ y holds. Moreover, by the induction hypothesis, for some mj, j = 1, . . . , t, Γmj `? y : Fj, Hmj succeeds. Let k = max{m1, . . . , mt, n1, m}; then we have:

(1) Γk `? x : F1 ∧ . . . ∧ Ft → D′, Hk succeeds,
(2) Γk `? y : Fj, Hk succeeds for all j = 1, . . . , t,
(3) Γk ` x ≤ y.

From (1) we also have:

(1′) Γk ∪ {x ≤ z, z : F1, . . . , z : Ft} `? z : D′, Hk succeeds (with z ∉ Lab(Γk)).

We can cut (1′) and (2), and obtain that Γk `? y : D′, Hk succeeds,
and by the induction hypothesis, M, y |= D′.

(⇒) Suppose by way of contradiction that M, x |= F1 ∧ . . . ∧ Ft → D′, but for all n, if x ∈ Lab(Γn), then Γn `? x : F1 ∧ . . . ∧ Ft → D′, Hn fails. Let x ∈ Lab(Γn); then there is m > n such that (x, F1 ∧ . . . ∧ Ft → D′) = (ym, Em) is considered at step m + 1, so that we have:

Γm `? ym : F1 ∧ . . . ∧ Ft → D′, Hm fails.

By Lemma 2.63, there is a y ∈ A such that (a) for k ≤ m, y ∉ Lab(Γk), and (b) for all m′ > m:

(i) Γm′ ` ym ≤ y,
(ii) Γm′ `? y : Fj, Hm′ succeeds for all j = 1, . . . , t,
(iii) Γm′ `? y : D′, Hm′ fails.

By (b)(i) we have that x ≤ y holds; by (b)(ii) and the induction hypothesis, we have M, y |= F1 ∧ . . . ∧ Ft. By (a) and (b)(iii), we get that for all n, if y ∈ Lab(Γn), then Γn `? y : D′, Hn fails. Hence, by the induction hypothesis, we have M, y ⊭ D′, and we get a contradiction.

Let E = G1 ∨ . . . ∨ Gt.

(⇐) Let Γn `? x : G1 ∨ . . . ∨ Gt, Hn succeed; then by Lemma 2.64 there is some m ≥ n such that Γm `? x : Gj, Hm succeeds for some j = 1, . . . , t. By the induction hypothesis, we have M, x |= Gj.

(⇒) If M, x |= E, then for some j, M, x |= Gj; we then simply apply the induction hypothesis and conclude.

Proof of the Completeness Theorem 2.58. We are now able to conclude the proof of the completeness theorem. Let ρ(z) = z for every z ∈ Lab(Γ), where Γ is the original database. It is easy to see that ρ is a realization of Γ in M: if u : C ∈ Γ, then by identity and the previous lemma we have M, ρ(u) |= C. On the other hand, by Lemmas 2.62, 2.60 and 2.65 we have M, ρ(x) ⊭ D and, for all (y, E) ∈ H, M, ρ(y) ⊭ E. This concludes the proof.

We end this section with a remark on the treatment of disjunction in classical logic. To obtain classical disjunction we can adopt the procedure for intuitionistic logic and ignore the constraints on the labels (and hence the labels themselves). In the case of classical logic, we can think of there being a single world, so that for any pair of labels x, y the constraint x ≤ y is trivially satisfied.

EXAMPLE 2.66.
Let us check a ∨ (b → c) ` (b → a) ∨ c in classical logic. The derivation is shown in Figure 2.4. Let ∆ = {x : a ∨ (b → c)} be the initial database. As usual, at every step we only show the new data, if any. Query (∗) fails
`? x : (b → a) ∨ c, ∅
`? x : b → a, {(x, c)}
x ≤ y, y : b `? y : a, {(x, c)}
x ≤ y, y : b, x : a `? y : a, {(x, c)}
x ≤ y, y : b, x : b → c `? y : a, {(x, c)}
x ≤ y, y : b, x : b → c `? x : c, {(x, c), (y, a)} restart
(*) x ≤ y, y : b, x : b → c `? x : b, {(x, c), (y, a)}
Figure 2.4. Derivation for Example 2.66.

in intuitionistic logic, since y ≤ x is not derivable, whereas it succeeds in classical logic, since b is in the database regardless of the label.

The procedure for classical logic can be further simplified in many ways, according to the syntax of the formulas we want to treat (or the amount of rewriting we are willing to perform). One radical reduction is the following: any formula can be classically transformed into a set of clauses C of the form

C = p1 ∧ . . . ∧ pn → q1 ∨ . . . ∨ qm,

where m > 0, and if qi = ⊥ then m = 1. We assume that goals are just atoms. This restricted pattern of clauses and goals is nonetheless sufficient to encode any tautology problem in classical logic. We could even eliminate ⊥ by introducing new atoms, although we will not do so. For databases and goals of this format, the restart rule can be restricted to the initial goal (the proof is similar to that of Proposition 2.30). Thus, we may write a query as ∆ `? q, (q0). For databases and goals of the above form, the rules of Definition 2.52 simplify to the following:

• (success) ∆ `? q, (q0) immediately succeeds if q ∈ ∆;
• (reduction) from ∆ `? q, (q0), if there is a clause p1 ∧ . . . ∧ pn → q1 ∨ . . . ∨ qm ∈ ∆ such that q = qi, or m = 1 and q1 = ⊥, step to
(1) ∆ `? pj, (q0) for j = 1, . . . , n, and to
(2) ∆, qj `? q, (q0) for j = 1, . . . , m with j ≠ i.
If m = 1 and q1 = ⊥, step only to (1).

• (restart) from ∆ `? q, (q0) step to ∆ `? q0, (q0).

These rules are almost identical to the propositional version of the rules given recently by Nadathur [1998] for first-order classical logic.
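The simplified classical procedure is small enough to sketch directly. The following Python sketch is our own illustration, not the book's formulation: clauses are encoded as pairs (body, head) of atom lists, with head = ["bot"] standing for p1 ∧ . . . ∧ pn → ⊥, and a crude depth bound stands in for a proper loop check.

```python
def prove(atoms, clauses, q, q0, depth=10):
    """Decide the query ∆ `? q, (q0) with the success, reduction
    and restart rules of the simplified classical procedure."""
    if depth == 0:
        return False                                 # crude loop check
    atoms = frozenset(atoms)
    if q in atoms:                                   # (success)
        return True
    for body, head in clauses:                       # (reduction)
        for i, qi in enumerate(head):
            if qi == q or head == ["bot"]:
                # (1): prove every atom of the body;
                # (2): for each other disjunct qj, add it and re-ask q
                # (vacuous when head == ["bot"], as required).
                if (all(prove(atoms, clauses, p, q0, depth - 1) for p in body)
                        and all(prove(atoms | {qj}, clauses, q, q0, depth - 1)
                                for j, qj in enumerate(head) if j != i)):
                    return True
    if q != q0:                                      # (restart)
        return prove(atoms, clauses, q0, q0, depth - 1)
    return False

# A classically valid entailment: a ∨ b, a → c, b → c ` c.
clauses = [([], ["a", "b"]), (["a"], ["c"]), (["b"], ["c"])]
```

With these clauses, `prove(set(), clauses, "c", "c")` succeeds, while the unsound query a ∨ b `? a fails, as it should.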
6. THE ∀, →-FRAGMENT OF INTUITIONISTIC LOGIC
In this last section we present a goal-directed procedure for the ∀, →-fragment of intuitionistic logic. This procedure is in the style of a logic programming proof procedure which uses unification and computes answer substitutions. This means that the outcome of a successful computation of ∆ `? G[X], where G[X] stands for ∃XG[X], is not only 'yes', but also a (most general) substitution X/t such that ∆ ` G[X/t] holds in intuitionistic logic. In the literature [Miller et al., 1991; Miller, 1992] similar extensions have been proposed, but without a detailed description of the answer-substitution computation. We consider this computation inherent to the extension of the logic programming paradigm, rather than part of the implementation details, and here we give a detailed description of it.

Although the extension of the goal-directed approach to first-order languages is beyond the scope of this book, what we present here may give some hints on how to extend the goal-directed methods to the first-order case for other (non-classical) logics. On the other hand, the purpose of this section is to show that for the ∀, →-fragment of intuitionistic logic,9 one can give a proof procedure which is a relatively simple extension of the standard proof procedure for first-order Horn logic adopted in conventional logic programming.

In classical logic one can always put formulas in prenex form, replace existential quantifiers by Skolem functions, and use the unification mechanism to deal with the so-obtained universal sentences. In intuitionistic logic, as well as in modal logics, one cannot introduce Skolem functions at the beginning of the computation. The process of eliminating existential quantifiers by Skolem functions must be carried on in parallel with goal reduction. This process is called 'run-time

9 For reasons of clarity we limit our consideration to this fragment, but one can easily extend it to the so-called (Hereditary) Harrop formulas as defined in [Miller et al., 1991].
Skolemisation' in [Gabbay, 1992; Gabbay and Reyle, 1993], and it is adopted in N-Prolog [Gabbay and Reyle, 1993], a hypothetical extension of Prolog based on intuitionistic logic. A similar idea for classical logic is embodied in free-variable tableaux; see [Fitting, 1990], and [Hähnle and Schmitt, 1994] for an improved rule. The use of Skolem functions and normal forms for intuitionistic proof search has been extensively studied by many authors; we just mention [Shankar, 1992] (with regard to the intuitionistic sequent calculus), [Pym and Wallen, 1990], and [Sahlin et al., 1992]. Wallen [Wallen, 1990] has developed a first-order proof method based on a matrix characterisation of intuitionistic logic.10

We assume known the standard notions and notations relative to first-order languages (we refer the reader to any standard textbook, such as [Gallier, 1987]). We will, however, reformulate the Kripke semantics of intuitionistic logic for the first-order case. We begin by recalling the intuitionistic rules for the universal quantifier and the troubles they give in goal-directed proof search. The consequence-relation rules for universal quantification are the following [van Dalen, 1986]:

(∀-I) from Γ ` A[c] infer Γ ` ∀XA[X], provided c is a constant which occurs neither in Γ nor in A;

(∀-E) ∀XA[X] ` A[X/t], where t is any term.

When used backwards, the (∀-I) rule says that, in order to prove ∀XA[X], one has to prove A[c] for a new constant c. Incorporating this rule soundly within the goal-directed proof procedure for intuitionistic logic requires some care.

EXAMPLE 2.67. The formula ∀X((p(X) → ∀Y p(Y)) → q) → q is not a theorem of intuitionistic logic,11 but if we apply the rule (∀-I) naively we succeed, as shown in Figure 2.5. In the derivation shown, c is a new constant, and we succeed since we can unify p(X) and p(c). We should block the unification of X with c: the rule should prevent the unification of a free variable with a constant which is introduced later. This might be done by supplying information on when a constant has been introduced (a sort of 'time-stamping' of constants). We will follow the alternative approach of run-time Skolemisation. The idea is to eliminate the universal quantifiers by introducing a new Skolem function c(X) which depends on all the free variables occurring in the database and in the goal.12

10 An analysis and comparison with these works is far beyond the scope of the present section.
11 It is equivalent to ∃X(p(X) → ∀Y p(Y)), which is not intuitionistically valid.
12 By analogy with tableaux (see the references above), one may study more refined Skolemisation techniques to minimise the free variables on which the Skolem function depends [Hähnle and Schmitt, 1994; Shankar, 1992].
80
GOAL-DIRECTED PROOF THEORY
`? ∀X((p(X) → ∀Y p(Y )) → q) → q ∀X((p(X) → ∀Y p(Y )) → q) `? q ∀X((p(X) → ∀Y p(Y )) → q) `? p(X) → ∀Y p(Y ) ∀X((p(X) → ∀Y p(Y )) → q), p(X) `? ∀Y p(Y ) ∀X((p(X) → ∀Y p(Y )) → q), p(X) `? p(c)
Figure 2.5. Derivation for Example 2.67. the example above, we block unification, since c(X) and X cannot unify by the occur-check. Before we define the proof procedure, we fix some notation. We consider formulas of a first-order language containing the logical constants ∀, →, function and predicate symbols of each arity. We assume the notion of term is known. DEFINITION 2.68. We simultaneously define the formulas, the set of bounded variables (BV ), and free variables (F V ) occurring in a formula. • If R is a n-ary predicate symbol and t1 , . . . tn is a tuple of terms, then R(t1 , . . . , tn ) is a formula; we call R(t1 , . . . , tn ) an atom. The set of variables occurring in a term t is denoted by Var(t). The notation Var(t) denotes the set of variables occurring in a term t. We let BV (R(t1 , . . . , tn )) = ∅ and F V (R(t1 , . . . , tn )) = Var(t1 ) ∪ . . . ∪ Var(tn ). • If A and B are formulas, and BV (A) ∩ BV (B)
= =
F V (A) ∩ BV (B) BV (A) ∩ F V (B) = ∅,
then A → B is a formula, and we let BV (A → B) = BV (A) ∪ BV (B) and F V (A → B) = F V (A) ∪ F V (B). • if A is a formula and X ∈ F V (A) then ∀XA is a formula; we let BV (∀XA) = BV (A) ∪ {X} and F V (∀XA) = F V (A) − {X}. It is easy to see that for all formulas A, F V (A) ∩ BV (A) = ∅, and each universal quantifier acts on a different variable, that is we do not allow formulas such as ∀X(p(X) → ∀Xp(X)).
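The blocking effect of run-time Skolemisation can be seen already at the level of unification: a free variable X may unify with an unrelated constant, but not with a Skolem term c(X) built from X itself, because of the occur-check. A minimal sketch (the term representation is ours, not the book's: strings are variables, tuples are function applications, with constants as 0-ary functions):

```python
def occurs(v, t):
    """Does variable v occur in term t?"""
    if isinstance(t, str):
        return t == v
    return any(occurs(v, u) for u in t[1:])

def unify(t1, t2, s=None):
    """Most general unifier with occur-check; substitutions as dicts, None on failure."""
    s = dict(s or {})
    while isinstance(t1, str) and t1 in s: t1 = s[t1]
    while isinstance(t2, str) and t2 in s: t2 = s[t2]
    if isinstance(t1, str):
        if t1 == t2: return s
        if occurs(t1, t2): return None      # occur-check: X against c(X) fails
        s[t1] = t2
        return s
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for u1, u2 in zip(t1[1:], t2[1:]):
        s = unify(u1, u2, s)
        if s is None: return None
    return s

# naive (forall-I) with a plain new constant c: unification wrongly succeeds
print(unify('X', ('c',)))        # {'X': ('c',)}
# run-time Skolemisation: c depends on X, so the occur-check blocks it
print(unify('X', ('c', 'X')))    # None
```

The dependency of the Skolem function on X is exactly what turns the spurious success of Example 2.67 into the intended failure.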
2. INTUITIONISTIC AND CLASSICAL LOGICS
Every formula of the language can be displayed as

∀X̄1(A1 → ∀X̄2(A2 → . . . ∀X̄k(Ak → ∀Ȳ q) . . .),

where the Ai are arbitrary formulas, the X̄i and Ȳ are (possibly empty) sequences of variables, and q is an atomic formula. According to the definition, the variables Ȳ cannot occur in the Ai, and the variables X̄i cannot occur in Aj for j < i. The restrictions involved in our definition of formulas do not cause any loss of generality, as we can always rename bound variables. A formula A′ which is obtained from a formula A by renaming some or all bound variables of A is called a variant of A.

Given a formula A, we extend the definitions of Body(A) and Head(A) to formulas of the form ∀XA: Body(∀XA) = Body(A) and Head(∀XA) = Head(A). Thus, in a formula A = ∀X̄1(A1 → ∀X̄2(A2 → . . . ∀X̄k(Ak → ∀Ȳ q) . . .), we have Body(A) = (A1, . . . , Ak) and Head(A) = q.

We assume that the usual notions regarding substitutions (composition, empty substitution, mgu, etc.) are known (the reader is referred to [Gallier, 1987; Lloyd, 1984]). A substitution may only act on the free variables of a formula; hence if θ = {X/a, Y/b}, then (∀Y p(X, Y))θ = ∀Y p(a, Y). Given two substitutions σ, γ, we define σ ≤ γ (σ is an instance of γ) iff there is a substitution δ such that σ = γδ.

As usual, a database is a finite set of formulas. The proof procedure we present below manipulates queries N, which are finite sets of basic queries of the form ∆ ⊢? A, where ∆ is a database and A is a formula. As in conventional logic programming languages, the proof procedure computes answer substitutions in the case of success.

DEFINITION 2.69. A derivation of a query N is a sequence of queries N0, N1, . . . , Nk, together with a sequence of substitutions σ1, . . . , σk, such that N0 = N and, for i = 0, . . . , k − 1, Ni+1 and σi+1 are determined according to one of the following rules:

• (Success) if Ni = N′ ∪ {∆ ⊢? q}, where q is an atom, and there is a formula C ∈ ∆ and a variant C′ of C such that BV(C′) ∩ FV(∆ ∪ {q}) = ∅, Body(C′) = ∅, and there exists σ = mgu(Head(C′), q), then we can put Ni+1 = N′σ, σi+1 = σ.
• (Implication) if Ni = N′ ∪ {∆ ⊢? A → B}, then we can put Ni+1 = N′ ∪ {∆ ∪ {A} ⊢? B}, σi+1 = ε.
• (For all) if Ni = N′ ∪ {∆ ⊢? ∀XA}, Ū = {U1, . . . , Un} = FV(∆ ∪ {A}), and f is a function symbol not occurring in ∆ ∪ {A}, then we can put Ni+1 = N′ ∪ {∆ ⊢? A[X/f(Ū)]}, σi+1 = ε.
• (Reduction) if Ni = N′ ∪ {∆ ⊢? q}, where q is an atom, and there is a formula C ∈ ∆ and a variant C′ of C such that BV(C′) ∩ FV(∆ ∪ {q}) = ∅, Body(C′) = {A1, . . . , An}, and there exists σ = mgu(Head(C′), q), then we can put Ni+1 = N′σ ∪ {∆σ ⊢? A1σ, . . . , ∆σ ⊢? Anσ} and σi+1 = σ.

A successful derivation of N is a derivation D = N0, . . . , Nk, σ1, . . . , σk, such that Nk = ∅. The answer substitution θ computed by D is defined as the composition of the σi restricted to the free variables of N, that is, θ = (σ1σ2 . . . σk)|FV(N). We conventionally assume that dom(θ) = FV(N).¹³ We say that N succeeds with answer θ if there is a successful derivation of N computing answer θ.

¹³ This can always be achieved by extending θ with dummy bindings {X/X} for variables X ∈ FV(N) which have no proper bindings in θ.

EXAMPLE 2.70. Let ∆ be the following set of formulas:

∀X(p(X) → r(X))
∀Y(r(Y) → q(Y))
∀V(s(V) → [∀Zp(Z) → ∀Uq(U)] → a(V))
∀W([s(W) → a(W)] → t).

In Figure 2.6, we show a derivation of ∆ ⊢? t.

The following propositions state some basic properties of the computation.

PROPOSITION 2.71. Let N be a query. Then we have:

1. if N has a successful derivation of length h with computed answer σ, then Nσ has a successful derivation of length ≤ h with computed answer ε;

2. if N has a successful derivation of length h with computed answer σ and θ ≤ σ, then Nθ has a successful derivation of length ≤ h with computed answer ε;
∆ ⊢? t
∆ ⊢? s(W′) → a(W′)
∆, s(W′) ⊢? a(W′)
∆, s(W′) ⊢? s(W′),    ∆, s(W′) ⊢? ∀Zp(Z) → ∀Uq(U)
∆, s(W′), ∀Zp(Z) ⊢? ∀Uq(U)
∆, s(W′), ∀Zp(Z) ⊢? q(f(W′))
∆, s(W′), ∀Zp(Z) ⊢? r(f(W′))
∆, s(W′), ∀Zp(Z) ⊢? p(f(W′))
success, as mgu(p(Z), p(f(W′))) = {Z/f(W′)}

Figure 2.6. Derivation for Example 2.70.

3. if Nσ has a successful derivation of length h with computed answer θ, then there is a substitution γ such that σθ ≤ γ and N has a successful derivation of length ≤ h with computed answer γ;

4. if N = N′ ∪ {∆ ⊢? ∀Y A} and N has a successful derivation of length h with computed answer σ, then for any term t, the query N′ ∪ {∆ ⊢? A[Y/t]} has a successful derivation of length ≤ h with computed answer σ.

Proof. All claims can be proved by induction on the length of the derivations. We omit the details.

PROPOSITION 2.72 (Monotony). If (Γ ⊢? A) succeeds with answer θ and height h, then (Γ ∪ ∆ ⊢? A) succeeds with answer θ and height ≤ h.
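The rules of Definition 2.69 can be animated directly. The sketch below is our own encoding, not the book's: it uses a fixed clause-selection order and does not backtrack over alternative body solutions (which happens to suffice for these examples, though not in general), implementing Success, Implication, For-all with run-time Skolemisation, and Reduction. It mechanically reproduces the derivation of Example 2.70, and correctly fails on the non-theorem of Example 2.67:

```python
import itertools

FRESH = itertools.count()

# Terms: ('var', name) or ('fn', name, args); a 0-ary 'fn' is a constant.
# Formulas: ('atom', p, args), ('imp', A, B), ('all', X, A).
def var(n): return ('var', n)
def fn(f, *a): return ('fn', f, list(a))
def atom(p, *a): return ('atom', p, list(a))
def imp(a, b): return ('imp', a, b)
def all_(x, a): return ('all', x, a)

def walk(t, s):
    while t[0] == 'var' and t[1] in s:
        t = s[t[1]]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    return t[1] == v if t[0] == 'var' else any(occurs(v, u, s) for u in t[2])

def unify(a, b, s):
    """mgu with occur-check; substitutions are dicts, None signals failure."""
    a, b = walk(a, s), walk(b, s)
    if a[0] == 'var':
        if a == b: return s
        return None if occurs(a[1], b, s) else {**s, a[1]: b}
    if b[0] == 'var': return unify(b, a, s)
    if a[1] != b[1] or len(a[2]) != len(b[2]): return None
    for u1, u2 in zip(a[2], b[2]):
        s = unify(u1, u2, s)
        if s is None: return None
    return s

def subst_t(t, m):
    return m.get(t[1], t) if t[0] == 'var' else (t[0], t[1], [subst_t(u, m) for u in t[2]])

def subst_f(F, m):
    if F[0] == 'atom': return ('atom', F[1], [subst_t(t, m) for t in F[2]])
    if F[0] == 'imp': return ('imp', subst_f(F[1], m), subst_f(F[2], m))
    return ('all', F[1], subst_f(F[2], m))   # bound names are all distinct

def fv_term(t, s, bound, acc):
    if t[0] == 'var' and t[1] in bound: return
    t = walk(t, s)
    if t[0] == 'var': acc.add(t[1])
    else:
        for u in t[2]: fv_term(u, s, bound, acc)

def fv(F, s, bound, acc):
    if F[0] == 'atom':
        for t in F[2]: fv_term(t, s, bound, acc)
    elif F[0] == 'imp':
        fv(F[1], s, bound, acc); fv(F[2], s, bound, acc)
    else:
        fv(F[2], s, bound | {F[1]}, acc)

def clause_parts(C):
    """Variant of C: strip the prefix into Body/Head, fresh-renaming bound variables."""
    body, m = [], {}
    while True:
        if C[0] == 'all':
            m[C[1]] = var((C[1], next(FRESH))); C = C[2]
        elif C[0] == 'imp':
            body.append(C[1]); C = C[2]
        else:
            return [subst_f(B, m) for B in body], subst_f(C, m)

def prove(db, goal, s):
    if goal[0] == 'imp':                       # (Implication)
        return prove(db + [goal[1]], goal[2], s)
    if goal[0] == 'all':                       # (For all): run-time Skolemisation
        acc = set()
        for F in db + [goal]: fv(F, s, set(), acc)
        sk = fn(('sk', next(FRESH)), *[var(n) for n in sorted(acc, key=str)])
        return prove(db, subst_f(goal[2], {goal[1]: sk}), s)
    for C in db:                               # (Success) and (Reduction)
        body, head = clause_parts(C)
        s1 = unify(head, goal, s)
        for B in body:
            if s1 is None: break
            s1 = prove(db, B, s1)
        if s1 is not None: return s1
    return None

# Example 2.70
X, Y, V, W, Z, U = map(var, 'XYVWZU')
delta = [
    all_('X', imp(atom('p', X), atom('r', X))),
    all_('Y', imp(atom('r', Y), atom('q', Y))),
    all_('V', imp(atom('s', V),
                  imp(imp(all_('Z', atom('p', Z)), all_('U', atom('q', U))),
                      atom('a', V)))),
    all_('W', imp(imp(atom('s', W), atom('a', W)), atom('t'))),
]
print(prove(delta, atom('t'), {}) is not None)   # True: Example 2.70 succeeds
# Example 2.67: the naive success is blocked by the occur-check
bad = imp(all_('X', imp(imp(atom('p', X), all_('Y', atom('p', Y))),
                        atom('q'))), atom('q'))
print(prove([], bad, {}) is None)                # True: not a theorem
```

Note that no loop-check is included, so the sketch need not terminate on arbitrary databases; it is meant only to make the four rules concrete.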
PROPOSITION 2.73. Given two queries N1, N2, we have:

(1) if N1 and N2 both succeed with answer θ, then the query N1 ∪ N2 succeeds with answer θ;

(2) if N1 ∪ N2 succeeds with θ, then there exist θ1, θ2 ≥ θ such that N1 succeeds with θ1 and N2 succeeds with θ2.

Proof. 1. Let D be a successful derivation of N1. Starting from N1 ∪ N2, we perform the same steps as in D; this will leave us at the end with N2θ. By hypothesis and Proposition 2.71(1), N2θ succeeds with answer ε; thus from the initial query N1 ∪ N2 we obtain the answer θε = θ.

2. Let N1 ∪ N2 succeed with θ; then (N1 ∪ N2)θ succeeds with ε (Proposition 2.71(1)); this implies that both N1θ and N2θ succeed with ε; now we conclude by Proposition 2.71(3).

PROPOSITION 2.74 (Identity). ∆, A ⊢? A succeeds with ε.

In order to prove the completeness of the procedure (proved at the end of the section), we need to show that computations are closed with respect to the cut-rule, i.e. that the cut-rule is admissible. This property might be of interest in itself. First, we must define what we mean by cut in this context. In order to formulate the cut property, we must remember that the proof procedure checks the success of, and computes simultaneous answers for, conjunctions (sets) of basic queries. The computed answer must be taken into account. For instance, (1) p(X) ⊢? p(a) succeeds with answer X/a, and (2) q(b), ∀Z(q(Z) → p(Z)) ⊢? p(X) succeeds with X/b; by cutting (1) and (2) on p(X) we would obtain q(b), ∀Z(q(Z) → p(Z)) ⊢? p(a), which obviously fails. In order to perform a cut, the computed answers of the two premisses must be compatible. We say that two substitutions σ and θ are compatible if they have a common instance δ. If Γ ⊢? A succeeds with answer σ, ∆, A ⊢? B succeeds with answer θ, and σ and θ are compatible, that is, there is a common instance δ of σ and θ, then we are able to cut on A. We expect that the resulting query ∆, Γ ⊢? B succeeds with a substitution γ which is at least as general as the common instance δ.
We need to define cut on queries, i.e. on sets of basic queries. Given a query N = {∆1 ⊢? B1, . . . , ∆n ⊢? Bn}, a formula A, and a database Γ, we denote by
N[A/Γ] = {∆′1 ⊢? B1, . . . , ∆′n ⊢? Bn} the query obtained from N by replacing A in each ∆i by Γ, that is,

∆′i = (∆i − {A}) ∪ Γ   if A ∈ ∆i,
∆′i = ∆i               otherwise.

THEOREM 2.75. If the following conditions hold:

1. Γ ⊢? A succeeds with answer σ,
2. N succeeds with answer θ,
3. there exist two substitutions φ1, φ2 such that σφ1 = θφ2,

then there exists a substitution γ such that σφ1 = θφ2 ≤ γ and N[A/Γ] succeeds with answer γ.

Proof. By induction on pairs (c, h), where c is the complexity of A and h is the length of a successful derivation of (2). The complexity cp(A) of a formula A is defined as in Chapter 1, with the additional stipulation that cp(∀XA) = cp(A) + 1.

Let c = 0; we consider the case when h ≥ 0 and N ≠ ∅. Suppose first that N = N′ ∪ {∆ ∪ {A} ⊢? q} and the success rule is applied to N on ∆ ∪ {A} ⊢? q. This means that from N we step to N′π, which succeeds with height < h, where π = mgu(Head(C′), q) for some variant C′ of a formula C ∈ ∆ ∪ {A} whose body is empty, and θ = πθ′ for some θ′. If h = 0 the proof below simplifies, since N′ = ∅ and we do not need to apply the inductive hypothesis. Let A = q1. Suppose C ≠ A. Then, since σφ1 = πθ′φ2 ≤ σ, we have that Γπθ′φ2 ⊢? Aπθ′φ2 succeeds with ε. By Proposition 2.71, Γπ ⊢? q1π succeeds with some η such that θ′φ2 ≤ η and height < h. Since for some δ, θ′φ2 = ηδ, by the induction hypothesis we get that N′π[Aπ/Γπ] = N′[A/Γ]π succeeds with some γ ≥ θ′φ2. Since C ≠ A, we obtain by monotony that N′[A/Γ] ∪ {∆ ∪ Γ ⊢? q} succeeds with πγ, and θφ2 = (πθ′)φ2 ≤ πγ.

If C = A = q1, let π = mgu(q1, q); then N′π succeeds with height < h and θ = πθ′. Let α = πθ′φ2. Using Proposition 2.71 we have

(∗) Γα ⊢? q1α and N′α both succeed with ε.

Moreover, by Proposition 2.71, N′α succeeds with height < h. Then we can apply the inductive hypothesis and conclude that
N′α[q1/Γα] = N′[q1/Γ]α succeeds with ε.

Since q1α = qα, we can combine the above conclusion with (∗) (Proposition 2.73), and by monotony we have that (N′[q1/Γ] ∪ {∆ ∪ Γ ⊢? q})α succeeds with ε. By Proposition 2.71, we have that for some δ ≥ α, N′[q1/Γ] ∪ {∆ ∪ Γ ⊢? q} succeeds with answer δ. But since θφ2 = πθ′φ2 = α ≤ δ, we have obtained the desired conclusion.

Next we consider the cases when c = 0, h > 0, N has the form N = N′ ∪ {∆ ∪ {A} ⊢? B}, and the next query in the derivation is obtained by applying either the implication rule or the rule for universal quantification to ∆ ∪ {A} ⊢? B. Since these two rules do not modify bindings, we can just apply the inductive hypothesis to the next queries and conclude. We omit the details.

We now consider the case when c = 0, h > 0, N has the form N = N′ ∪ {∆ ∪ {A} ⊢? q}, and the next query in the derivation is obtained by applying reduction to the basic query ∆ ∪ {A} ⊢? q. Since cp(A) = c = 0, q is reduced with respect to some C ∈ ∆ different from A. Then, for some variant C′ of C such that BV(C′) ∩ FV(∆ ∪ {A, q}) = ∅ and Body(C′) = {D1, . . . , Dn}, there exists π = mgu(Head(C′), q), and we step to

(i) N′π ∪ {∆π ∪ {Aπ} ⊢? D1π, . . . , ∆π ∪ {Aπ} ⊢? Dnπ},

which succeeds with θ′ such that θ = πθ′, and with height < h. Since σφ1 = πθ′φ2 ≤ σ, we have that Γπθ′φ2 ⊢? Aπθ′φ2 succeeds with ε. By Proposition 2.71,

(ii) Γπ ⊢? Aπ succeeds with some η such that θ′φ2 ≤ η and height < h.

We have that, for some δ, θ′φ2 = ηδ; we can apply the induction hypothesis to (i) and (ii), and obtain that N′π[Aπ/Γπ] ∪ {∆π ∪ Γπ ⊢? D1π, . . . , ∆π ∪ Γπ ⊢? Dnπ}, that is,

(iii) N′[A/Γ]π ∪ {(∆ ∪ Γ)π ⊢? D1π, . . . , (∆ ∪ Γ)π ⊢? Dnπ},

succeeds with some γ ≥ θ′φ2.
Thus, we can reduce the query N′[A/Γ] ∪ {∆ ∪ Γ ⊢? q} to (iii) and succeed with πγ. Since θφ2 = (πθ′)φ2 ≤ πγ, we have obtained the desired result.

Suppose now that cp(A) = c > 0. It is easily seen that there is only one additional case: the one in which N = N′ ∪ {∆ ∪ {A} ⊢? q} and the next query in the derivation is obtained by reduction of q with respect to A. Let

A = ∀X̄1(D1 → ∀X̄2(D2 → . . . ∀X̄k(Dk → ∀Ȳ q1) . . .)

and let A′ be a variant of A such that BV(A′) ∩ FV(∆, q) = ∅. Then we have Body(A′) = (D′1, . . . , D′k) and Head(A′) = q′1. Then there exists π = mgu(q′1, q) and we step to

(i) N′π ∪ {∆π ∪ {Aπ} ⊢? D′1π, . . . , ∆π ∪ {Aπ} ⊢? D′kπ},

which succeeds with θ′ such that θ = πθ′, and with height < h. We can proceed as before: since σφ1 = πθ′φ2 ≤ σ, we have that Γπθ′φ2 ⊢? Aπθ′φ2 succeeds with ε. Thus, by Proposition 2.71,

(ii) Γπ ⊢? Aπ succeeds with some η1 such that θ′φ2 ≤ η1 and height < h.

We have that, for some δ, θ′φ2 = η1δ; we can apply the induction hypothesis to (i) and (ii), and obtain that

(iii) N′[A/Γ]π ∪ {(∆ ∪ Γ)π ⊢? D′1π, . . . , (∆ ∪ Γ)π ⊢? D′kπ}

succeeds with some γ ≥ θ′φ2. From (ii), by Proposition 2.74, we get that

Γπ ⊢? D′1π → (∀X̄2(D2 → . . . ∀X̄k(Dk → ∀Ȳ q1) . . .))π succeeds with η1,

so that by Proposition 2.71 and the implication rule we obtain that

Γπ ∪ {D′1π, D′2π, . . . , D′kπ} ⊢? q′1π succeeds with η1.

But since q′1π = qπ, we also get

(iv) Γπ ∪ {D′1π, D′2π, . . . , D′kπ} ⊢? qπ succeeds with η1.

On the other hand, from (iii), by Propositions 2.71 and 2.73, we get

(a) N′[A/Γ]πθ′φ2 succeeds with ε, and, for i = 1, . . . , k,

(bi) ∆π ∪ Γπ ⊢? D′iπ succeeds with answer γi such that θ′φ2 ≤ γi.

We have that for some χ1 and ψ1, η1χ1 = θ′φ2 = γ1ψ1. Furthermore, cp(D′1) < cp(A); we can hence apply the induction hypothesis to (iv) and (b1) and obtain:
(v) ∆π ∪ Γπ ∪ {D′2π, . . . , D′kπ} ⊢? qπ succeeds with some η2 such that θ′φ2 ≤ η2.

We can repeat the same argument, now using (v) and (b2). At the end we obtain that ∆π ∪ Γπ ⊢? qπ succeeds with some ηk+1 such that θ′φ2 ≤ ηk+1. By Proposition 2.71, we get

(c) ∆πθ′φ2 ∪ Γπθ′φ2 ⊢? qπθ′φ2 succeeds with ε.

By Proposition 2.73, we can combine (a) and (c) and conclude that (N′[A/Γ] ∪ {∆ ∪ Γ ⊢? q})πθ′φ2 succeeds with ε. Finally, by Proposition 2.71, we obtain that for some β such that πθ′φ2 ≤ β, N′[A/Γ] ∪ {∆ ∪ Γ ⊢? q} succeeds with answer β. Since θφ2 = (πθ′)φ2 ≤ β, we have obtained the desired conclusion.

COROLLARY 2.76. (a) If Γ ⊢? A succeeds with σ and ∆, A ⊢? B succeeds with θ, and for some substitutions φ1, φ2 we have σφ1 = θφ2, then there is a substitution γ ≥ θφ2 such that ∆, Γ ⊢? B succeeds with γ.

(b) In particular, if Γ ⊢? A and ∆, A ⊢? B both succeed with ε, then ∆, Γ ⊢? B succeeds with ε.

We now prove the soundness and completeness of the proof procedure. To this aim we introduce the Kripke semantics for the first-order fragment L(∀, →, ∧) of intuitionistic logic, which is sufficient to interpret our queries. For the fragment of the language we treat, it is sufficient to consider constant-domain Kripke models (see [Gabbay, 1981]). We also assume that the denotations of terms are rigid, i.e. do not depend on worlds.

DEFINITION 2.77. A structure M for a language L(∀, →, ∧) is a tuple M = (W, D, ≤, w0, V), where W and D are non-empty sets, w0 ∈ W, ≤ is a reflexive-transitive relation with w0 as the least element, and V (called the interpretation function) maps

• every n-ary function symbol f to an n-ary function V(f) : Dⁿ → D;

• every n-ary predicate symbol p to a function V(p) : W → 2^(Dⁿ), that is, V(p)(w) for w ∈ W is an n-ary relation on D;

• every variable X to an element of D.

V is assumed to be increasing on the interpretation of predicates: w ≤ w′ ⇒ V(p)(w) ⊆ V(p)(w′).
The interpretation of terms is defined as in first-order classical structures. To define truth in a structure, we assume that the language is expanded with names (constants) for all the elements of D; we will not distinguish between an element of D and its name. We define M, w ⊨ A, which is read as 'A is true in M at world w', as follows:

M, w ⊨ p(t̄) iff t̄ ∈ V(p)(w);
M, w ⊨ A ∧ B iff M, w ⊨ A and M, w ⊨ B;
M, w ⊨ A → B iff for all w′ such that w ≤ w′, M, w′ ⊨ A implies M, w′ ⊨ B;
M, w ⊨ ∀XA iff for all d ∈ D, M, w ⊨ A[X/d].¹⁴

Validity in a structure M (i.e. M ⊨ A) is defined as truth in the least world of M or, equivalently, in every world of M, and logical validity (i.e. ⊨I A) is defined as validity in every structure. The next theorem states the soundness of the procedure with respect to intuitionistic logic.

THEOREM 2.78 (Soundness). Let N = {∆1 ⊢? A1, . . . , ∆n ⊢? An} be a query. If N succeeds with answer θ, then the universal closure of (⋀∆1 → A1)θ ∧ . . . ∧ (⋀∆n → An)θ is valid in intuitionistic logic.
Proof. By induction on the height of a successful derivation of N. We omit the details.

We prove the completeness of the procedure by a canonical model construction which makes essential use of the cut-admissibility Theorem 2.75.

Canonical model construction. We consider a language L which contains infinitely many function symbols of each arity, and we define a structure M = (W, D, ⊆, ∅, V), where W is the set of databases (finite sets of formulas) on L, D is the set of terms on L, the ordering is set inclusion with the empty database ∅ as the least world, V is the identity on terms, and, for every ∆ ∈ W and predicate p,

t̄ ∈ V(p)(∆) ⇔ ∆ ⊢? p(t̄) succeeds with ε.

By monotony of deduction it is easily seen that V is increasing on the interpretation of predicates.

PROPOSITION 2.79. For any ∆, A we have: M, ∆ ⊨ A ⇔ ∆ ⊢? A succeeds with ε.

¹⁴ Notice that, in the case of ∀, we do not need to consider truth in upper worlds; this is because we are dealing with constant-domain Kripke models.
Proof. We prove the two directions simultaneously by induction on the complexity of A. If A is an atomic formula, the claim holds by definition of V.

Let A = B → C. (⇒) Suppose that M, ∆ ⊨ B → C. By identity we have that ∆ ∪ {B} ⊢? B succeeds with ε. Let Γ = ∆ ∪ {B}; by the induction hypothesis we have M, Γ ⊨ B; since ∆ ⊆ Γ, by hypothesis we can conclude that M, Γ ⊨ C, so that by the induction hypothesis ∆ ∪ {B} ⊢? C succeeds with ε; this implies that ∆ ⊢? B → C succeeds with ε.

(⇐) Suppose that ∆ ⊢? B → C succeeds with ε. By the implication rule, it must be that (i) ∆ ∪ {B} ⊢? C succeeds with ε. Now let Γ be a database such that ∆ ⊆ Γ and M, Γ ⊨ B. By the induction hypothesis we get (ii) Γ ⊢? B succeeds with ε. From (i) and (ii), by cut (Corollary 2.76), we obtain that Γ ∪ ∆ ⊢? C succeeds with ε, which is the same as Γ ⊢? C succeeds with ε, since ∆ ⊆ Γ. By the induction hypothesis we finally get M, Γ ⊨ C.

Let A = ∀XB. (⇒) Suppose that M, ∆ ⊨ ∀XB; then for any term t we have that M, ∆ ⊨ B[X/t]. Let {U1, . . . , Uk} = Ū = FV(∆ ∪ {B}) and let f be a k-ary function symbol not occurring in ∆ ∪ {B}; then we have in particular that M, ∆ ⊨ B[X/f(Ū)], and by the induction hypothesis we get that ∆ ⊢? B[X/f(Ū)] succeeds with ε. By the for-all rule this implies that ∆ ⊢? ∀XB also succeeds with ε.

(⇐) Suppose that ∆ ⊢? ∀XB succeeds with ε; by Proposition 2.71, we have that for every term t, ∆ ⊢? B[X/t] succeeds with ε. By the induction hypothesis we get that, for all t, M, ∆ ⊨ B[X/t], and hence M, ∆ ⊨ ∀XB.

From Proposition 2.79 we easily obtain the completeness of the proof procedure. As in conventional Horn logic programming, we cannot expect that for every θ, if ⊨I (∆ → A)θ then ∆ ⊢? A succeeds with θ, but only that there is some substitution γ at least as general as θ such that ∆ ⊢? A succeeds with γ. For instance, for any θ, (p(X) → p(X))θ is valid, but ∅ ⊢? p(X) → p(X) succeeds with ε only.

THEOREM 2.80 (Completeness). Let N = {∆1 ⊢? A1, . . . , ∆n ⊢? An} be a query. If the universal closure of (⋀∆1 → A1)θ ∧ . . . ∧ (⋀∆n → An)θ is valid in intuitionistic logic, then Nθ succeeds with ε, and N succeeds with some γ such that θ ≤ γ.

Proof. Suppose that the universal closure of ⋀i(⋀∆i → Ai)θ is valid; then it is valid in the canonical model M defined above. Thus, in each world Γ of M, we have M, Γ ⊨ ⋀i(⋀∆i → Ai)θ. Since for every i, ∆iθ ∈ W, in particular we have M, ∆iθ ⊨ ⋀∆iθ → Aiθ. Since M, ∆iθ ⊨ ⋀∆iθ, we get that M, ∆iθ ⊨ Aiθ. By the previous proposition, we obtain
∆iθ ⊢? Aiθ succeeds with ε, for i = 1, . . . , n. This implies that Nθ succeeds with ε and, by Proposition 2.71, that N succeeds with some γ such that θ ≤ γ.
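The constant-domain semantics of Definition 2.77 is finitely checkable. The sketch below (our own encoding, for the propositional-style connectives →, ∧ and the quantifier ∀ over a finite domain) evaluates formulas in a finite model and exhibits a three-world countermodel to the formula of Example 2.67:

```python
def holds(M, w, F, env={}):
    """Truth at world w of a constant-domain Kripke model M = (W, le, D, V).
    W: list of worlds; le(w, u) is the order w <= u; D: finite domain;
    V[p][w]: set of argument tuples. Variables are strings bound in env."""
    W, le, D, V = M
    if F[0] == 'atom':
        return tuple(env.get(a, a) for a in F[2]) in V[F[1]][w]
    if F[0] == 'and':
        return holds(M, w, F[1], env) and holds(M, w, F[2], env)
    if F[0] == 'imp':   # quantify over all worlds above w
        return all(not holds(M, u, F[1], env) or holds(M, u, F[2], env)
                   for u in W if le(w, u))
    if F[0] == 'all':   # constant domains: quantify at w itself (cf. footnote 14)
        return all(holds(M, w, F[2], {**env, F[1]: d}) for d in D)

# Countermodel to ∀X((p(X) → ∀Y p(Y)) → q) → q (Example 2.67): a root 0
# below two maximal worlds 1, 2; D = {a, b}; p(a) holds only at 1, p(b)
# only at 2; q holds at 1 and 2 but not at the root.
W = [0, 1, 2]
le = lambda w, u: w == u or w == 0
D = ['a', 'b']
V = {'p': {0: set(), 1: {('a',)}, 2: {('b',)}},
     'q': {0: set(), 1: {()}, 2: {()}}}
M = (W, le, D, V)
F = ('imp', ('all', 'X', ('imp', ('imp', ('atom', 'p', ['X']),
                                         ('all', 'Y', ('atom', 'p', ['Y']))),
                                 ('atom', 'q', []))),
            ('atom', 'q', []))
print(holds(M, 0, F))   # False: the formula is not intuitionistically valid
```

The antecedent ∀X((p(X) → ∀Y p(Y)) → q) is true at the root of this model while q is not, which is exactly what the occur-check of the proof procedure detects syntactically.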
CHAPTER 3
INTERMEDIATE LOGICS
Intermediate logics are logics stronger than intuitionistic logic I but weaker than classical logic C. Most of them are motivated by their semantical characterization. In this chapter we see how the goal-directed approach can be extended to this area by analysing two case-studies.

We have seen in the previous chapter that intuitionistic logic is complete with respect to the class of finite Kripke models. One can refine the completeness theorem and show that intuitionistic logic is complete with respect to Kripke models whose possible-worlds structure forms a finite tree. Given a Kripke model M = (W, ≤, w0, V), we can concentrate on the structure (W, ≤, w0), which is a finite tree, and forget about the evaluation function V for atoms. Changing the terminology a bit, we will speak about models based on a given finite tree (W, ≤, w0), since by varying V we obtain several models based on it. The completeness result can then be re-phrased to assert that intuitionistic logic is complete with respect to the class of finite trees, that is to say, with respect to Kripke models based on finite trees.

This change of terminology matters, as we are naturally led to consider subclasses of finite trees and ask what axioms we can add to obtain a characterization of the valid formulas in these subclasses. For instance, here are two natural subclasses:

(1) for any n, the class of finite trees of height ≤ n; a finite tree T = (W, ≤, w0) is in this class if there are no n + 1 different elements w0, w1, . . . , wn such that w0 ≤ w1 ≤ . . . ≤ wn holds. This means that all chains are of length ≤ n. The valid formulas in these subclasses are axiomatized by the axioms BHn of the next section.

(2) for any n, the class of finite trees of width ≤ n; a finite tree T = (W, ≤, w0) is in this class if there are no n different elements which are pairwise incomparable. The valid formulas in these subclasses are axiomatized by taking the axiom schema

⋁i=1..n (Ai → ⋁j≠i Aj)

for finite trees of width ≤ n. Notice that for n = 2, we have the axiom

(A1 → A2) ∨ (A2 → A1)
which gives the well-known logic LC, introduced independently by Dummett [1959] and Gödel [1932], that we have already mentioned in the previous chapter. This axiom corresponds to the property of linearity: the models are finite ordered sequences of worlds (i.e. there are no two incomparable points). For other classes of intermediate logics we refer to [van Dalen, 1986; Gabbay, 1981]. In this chapter, we study two examples of intermediate logics: we give a goal-directed formulation for the logics of bounded-height Kripke models and for LC.

1 LOGICS OF BOUNDED HEIGHT KRIPKE MODELS
In order to formalize the logics of bounded-height Kripke models, we consider the following axioms:

BH1 = ((A1 → A0) → A1) → A1,
. . .
BHn = ((An → An−1) → An) → An,

where An−1 is an instance of BHn−1. Axiom schema BH1 is nothing other than Peirce's axiom, and adding it to intuitionistic logic we get classical logic. The axiom BHn+1 is strictly weaker than BHn. Let BHn denote the logic obtained by adding to intuitionistic logic I the axiom schema BHn. Then we have the following facts.

PROPOSITION 3.1 (Gabbay, 1981). (a) BHn ⊂ BHn+1; (b) I = ⋂n≥1 BHn; (c) A is a theorem of BHn iff A is valid in all tree models of height ≤ n.

The proof systems for BHn are a simple extension of the labelled procedure we have seen in Chapter 2 for intuitionistic logic with disjunction. For brevity, we give it here for the implicational fragment, but it is possible to extend it to the full propositional language exactly as shown in Definition 2.52. We recall the rules of the basic implicational system.

DEFINITION 3.2 (Proof system for BHn). A query has the form Γ ⊢?n x : A, H, where Γ is a labelled set of formulas and of constraints of the form x ≤ y, where x, y are labels. H, called the history, is a set of pairs of the form {(x1, q1), . . . , (xn, qn)}
where the xi are labels and the qi are atoms. We write Lab(E) for the set of labels occurring in E. The rules for proving constraints are as follows:

• (≤ 1) Γ ⊢ x ≤ x;
• (≤ 2) Γ ⊢ x ≤ y, if x ≤ y ∈ Γ;
• (≤ 3) Γ ⊢ x ≤ y, if for some z, Γ ⊢ x ≤ z and Γ ⊢ z ≤ y.

For each n ≥ 1, we define the proof system ⊢?n as follows. The logical rules are the following:

• (success) Γ ⊢?n x : q, H immediately succeeds if y : q ∈ Γ and Γ ⊢ y ≤ x;

• (implication) from Γ ⊢?n x : A → B, H we step to Γ, y : A, x ≤ y ⊢?n y : B, H, where y ∉ Lab(Γ) ∪ Lab(H) ∪ {x};

• (reduction) from Γ ⊢?n x : q, H, if there is a formula y : C ∈ Γ, with C = A1 → . . . → Ak → q, such that Γ ⊢ y ≤ x, we step to Γ ⊢?n x : Ai, H ∪ {(x, q)}, for i = 1, . . . , k;

• (n-shifting restart) from Γ ⊢?n x : r, H, if there are (x1, q1), . . . , (xn, qn) ∈ H such that Γ ⊢ x1 ≤ x2, . . . , Γ ⊢ xn ≤ x, we step to

Γ ⊢?n x2 : q1, H ∪ {(x, r)},
. . .
Γ ⊢?n xn : qn−1, H ∪ {(x, r)},
Γ ⊢?n x : qn, H ∪ {(x, r)}.
We have called the last rule 'shifting' restart as each goal qi is re-asked from a point xi+1 which is, so to say, above the point xi from which it was asked the first time (the condition is xi ≤ xi+1); the goal qi is therefore shifted from xi to xi+1. Notice that in the case n = 1, the 1-shifting-restart rule just prescribes 'shifting' a previous goal to the current point, that is, re-asking any previous atomic goal from the current database. The 1-shifting restart is therefore restart for classical logic. If we want a proof system for the whole propositional language, we must add the rules of Definition 2.52. In this extension the histories will contain arbitrary formulas rather than atoms.

EXAMPLE 3.3. In Figure 3.1, we show a derivation of the following atomic instance of BH2:
((a2 → ((a1 → a0) → a1) → a1) → a2) → a2.

At every step we only show the newly added data; that is, the database in each query is given by the collection of the data along the same branch. At step (∗) we apply the special 2-shifting-restart rule (i.e. the one for BH2). At this step, the history is {(x1, a2), (x3, a1)} and the current position is x4; thus, by restart, we 'shift' a2 to x3 and a1 to x4. The queries on the leaves of the tree immediately succeed, as x3 : a2 and x4 : a1 are in the database.

⊢?2 x0 : ((a2 → ((a1 → a0) → a1) → a1) → a2) → a2, ∅
x1 : (a2 → ((a1 → a0) → a1) → a1) → a2 ⊢?2 x1 : a2, ∅
⊢?2 x1 : a2 → ((a1 → a0) → a1) → a1, {(x1, a2)}
x1 ≤ x2, x2 : a2 ⊢?2 x2 : ((a1 → a0) → a1) → a1, {(x1, a2)}
x2 ≤ x3, x3 : (a1 → a0) → a1 ⊢?2 x3 : a1, {(x1, a2)}
⊢?2 x3 : a1 → a0, {(x1, a2), (x3, a1)}
x3 ≤ x4, x4 : a1 ⊢?2 x4 : a0, {(x1, a2), (x3, a1)}   (∗)
⊢?2 x3 : a2, {(x1, a2), (x3, a1)}      ⊢?2 x4 : a1, {(x1, a2), (x3, a1)}

Figure 3.1. Derivation for Example 3.3.

We prove the soundness and completeness of this family of goal-directed procedures with respect to BHn. We need again the notions of realization and of validity introduced in Chapter 2, Section 5, Definition 2.54, here restricted to tree models of height ≤ n. In particular, we say that a query Γ ⊢?n x : G, H is valid in BHn iff for all models M of height ≤ n and all realizations ρ of Γ in M, we have: either M, ρ(x) ⊨ G, or for some (y, r) ∈ H, M, ρ(y) ⊨ r.

THEOREM 3.4 (Soundness). If Γ ⊢?n x : G, H succeeds, then it is valid in BHn.

Proof. We proceed by induction on the derivation of the query in the hypothesis. We only consider the case when the query Γ ⊢?n x : G, H succeeds by n-shifting
restart. In this case G is an atom r and there are (x1, q1), . . . , (xn, qn) ∈ H such that Γ ⊢ x1 ≤ x2, . . . , Γ ⊢ xn ≤ x, and from

(∗) Γ ⊢?n x : r, H

we step to

(h1) Γ ⊢?n x2 : q1, H ∪ {(x, r)},
. . .
(hn−1) Γ ⊢?n xn : qn−1, H ∪ {(x, r)},
(hn) Γ ⊢?n x : qn, H ∪ {(x, r)}.

Since each of the queries above succeeds by a shorter derivation, we can apply the induction hypothesis and assume that (h1)–(hn) are valid. Suppose now that (∗) is not valid in BHn; then for some model M = (W, w0, ≤, V) and some realization ρ of Γ in M, we have

(1) M, ρ(xn+1) ⊭ r, where xn+1 = x,
(2) for all (y, q) ∈ H, M, ρ(y) ⊭ q,
(3) ρ(xi) ≤ ρ(xi+1) for i = 1, . . . , n.

In particular, from (2) we have

(4) M, ρ(xi) ⊭ qi for every i = 1, . . . , n.

On the other hand, by the validity of (h1)–(hn), (1) and (4), we must have

(5) M, ρ(xi+1) ⊨ qi for every i = 1, . . . , n.

By (3) we have ρ(x1) ≤ ρ(x2) ≤ . . . ≤ ρ(xn) ≤ ρ(xn+1); but every chain in M has length at most n, thus it must be that ρ(xi) = ρ(xi+1) for some i with 1 ≤ i ≤ n. Thus, by (4) and (5) we have a contradiction.

We now turn to completeness. We first notice that for the axiomatization of BHn it is sufficient to add to the Hilbert system for intuitionistic logic only the atomic instances of BHn. In this regard, let Πn be the set of atomic instances of BHn.

LEMMA 3.5 (Gabbay, 1981). We have ⊨BHn A iff Πn ⊢ A in intuitionistic logic.
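The constraint judgements Γ ⊢ x ≤ y defined by rules (≤ 1)–(≤ 3) of Definition 3.2 are just reflexive-transitive reachability over the stored constraints. A minimal sketch (our encoding, with labels as strings):

```python
def entails(constraints, x, y):
    """Decide Γ ⊢ x ≤ y by (≤1) reflexivity, (≤2) membership and
    (≤3) transitivity, i.e. reachability in the constraint graph."""
    seen, stack = {x}, [x]
    while stack:
        z = stack.pop()
        if z == y:          # reflexivity is the case z == y == x
            return True
        for a, b in constraints:
            if a == z and b not in seen:
                seen.add(b)
                stack.append(b)
    return False

# The constraints accumulated in the derivation of Example 3.3:
gamma = [('x0', 'x1'), ('x1', 'x2'), ('x2', 'x3'), ('x3', 'x4')]
print(entails(gamma, 'x1', 'x3'))   # True: side condition of 2-shifting restart
print(entails(gamma, 'x3', 'x1'))   # False: the constraints are not symmetric
```

The positive check `x1 ≤ x3` (via `x2`) is exactly the side condition applied at step (∗) of Example 3.3.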
PROPOSITION 3.6. Let πn ∈ Πn. The query ⊢?n x0 : πn, ∅ succeeds in the proof system for BHn.

Proof. We can assume that πn = ((qn → πn−1) → qn) → qn, where q0, q1, . . . , qn are distinct propositional variables and πi = ((qi → πi−1) → qi) → qi ∈ Πi, for i = 1, . . . , n. We show a derivation of the above query (at each step we only show the new data in the database, if any):

⊢?n x0 : πn, ∅
x0 ≤ x1, x1 : (qn → πn−1) → qn ⊢?n x1 : qn, ∅
⊢?n x1 : qn → πn−1, {(x1, qn)},

and after two implication steps

x1 ≤ x2, x2 : qn, x2 ≤ x3, x3 : (qn−1 → πn−2) → qn−1 ⊢?n x3 : qn−1, {(x1, qn)}
⊢?n x3 : qn−1 → πn−2, {(x1, qn), (x3, qn−1)}.

Proceeding in this way we arrive at

∆ ⊢?n x2n : q0, H,

where

H = {(x1, qn), (x3, qn−1), . . . , (x2n−1, q1)},
∆ = {x2(i+1) : qn−i, x2i+1 : (qn−i → πn−(i+1)) → qn−i, xi ≤ xi+1 | i = 0, . . . , n − 1}.

Since ∆ ⊢ x1 ≤ x3, . . . , ∆ ⊢ x2n−1 ≤ x2n, we can apply n-shifting restart and step to

∆ ⊢?n x3 : qn, H ∪ {(x2n, q0)}
∆ ⊢?n x5 : qn−1, H ∪ {(x2n, q0)}
. . .
∆ ⊢?n x2n : q1, H ∪ {(x2n, q0)}.

All the above queries succeed by the success rule.

The completeness of the procedure is an easy consequence of the admissibility of cut.

THEOREM 3.7 (Admissibility of Cut). If Γ[x : D] ⊢?n y : D1, H1 and ∆ ⊢?n z : D, H2 succeed, then also
Γ[x : D/∆, z] ⊢?n y[x/z] : D1, H1[x/z] ∪ H2 succeeds, where Γ[x : D/∆, z] = (Γ − {x : D})[x/z] ∪ ∆.

Proof. As in the previous cases, we proceed by double induction on the complexity of the cut formula and on the length of the derivation. The proof is very similar to that of Theorem 2.57, and the details are left to the reader.

THEOREM 3.8 (Completeness). Let A be valid in BHn; then for any label x, ⊢?n x : A, ∅ succeeds.

Proof. By Lemma 3.5, if A is valid in BHn, then Π′n ⊢I A holds, where Π′n is a finite set of atomic instances of Πn. By the completeness of the proof procedure for intuitionistic logic, we have that Π′n ⊢? A succeeds. The proof system for BHn is an extension of the intuitionistic one. It is easy to see that, letting Π″n = {x0 : B | B ∈ Π′n}, we have

(i) Π″n ⊢?n x0 : A, ∅ succeeds.

But by Proposition 3.6, we also have

(ii) ⊢?n x0 : B, ∅ succeeds for every B ∈ Π″n.

Since the computation is closed under cut (Theorem 3.7), from (i) and (ii) we get that ⊢?n x0 : A, ∅ succeeds.

2 DUMMETT–GÖDEL LOGIC LC
In this section we give a goal-directed procedure for one of the most important intermediate logics, namely the logic LC [Dummett, 1959]. The system LC extends intuitionistic logic by the axiom (A → B) ∨ (B → A). Semantically, this axiom restricts the class of Kripke models of intuitionistic logic to those which are linearly ordered. The system was studied earlier by Gödel [1932] to show that intuitionistic logic does not admit a characteristic finite matrix. Gödel formulated a particularly simple many-valued semantics for LC, which we recall below. Formulas are interpreted in the real interval [0, 1]: let v : Var → [0, 1] be an evaluation of the propositional variables; v extends to any formula in the language (→, ∧, ¬, ∨) according to the following clauses:

v(A → B) = 1 if v(A) ≤ v(B), and v(A → B) = v(B) otherwise;
v(A ∧ B) = min(v(A), v(B));
v(A ∨ B) = max(v(A), v(B));
v(¬A) = 1 if v(A) = 0, and v(¬A) = 0 otherwise.
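The clauses above can be animated directly; the following is a small sketch (ours, not from the text), with formulas represented as nested tuples over atoms:

```python
def val(f, v):
    """Evaluate formula f in Goedel's [0,1]-valued semantics for LC.

    Atoms are strings; compound formulas are tuples of the form
    ('->', A, B), ('and', A, B), ('or', A, B) or ('not', A).
    v maps atoms to values in [0, 1]."""
    if isinstance(f, str):
        return v[f]
    if f[0] == '->':
        a, b = val(f[1], v), val(f[2], v)
        return 1.0 if a <= b else b          # v(A -> B)
    if f[0] == 'and':
        return min(val(f[1], v), val(f[2], v))
    if f[0] == 'or':
        return max(val(f[1], v), val(f[2], v))
    if f[0] == 'not':
        return 1.0 if val(f[1], v) == 0.0 else 0.0
    raise ValueError('unknown connective: %r' % (f,))
```

For instance, under v(a) = 0.5, v(b) = 0, the linearity axiom (a → b) ∨ (b → a) evaluates to 1, while Peirce's law ((a → b) → a) → a only reaches 0.5, witnessing that Peirce's law is not LC-valid.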
We say that a formula A is valid if v(A) = 1 for every evaluation v on [0, 1]. Because of its many-valued semantics, the logic LC is nowadays considered one of the fundamental formalizations of fuzzy logic (see [Hajek, 1998]). An alternative axiomatization of LC can be given by adding to the propositional fragment of intuitionistic logic either of the following axioms:

(A → B ∨ C) → (A → B) ∨ (A → C),
(A ∧ B → C) → (A → C) ∨ (B → C).

The first axiom allows one to pull a disjunction out of the consequent of an implication. It is easily seen that the converse of each axiom above is provable in intuitionistic logic. The implicational fragment of LC can be axiomatized by adding to the implicational fragment of intuitionistic logic the following axiom:

((A → B) → C) → ((B → A) → C) → C.

We can formulate a proof procedure for LC similar to the one for the systems BHn of the previous section, which makes use of labelled databases and constraints. In place of n-shifting restart we have the following rules:
• (backtracking) from Γ ⊢? x : q, H, with q atomic, if (y, r) ∈ H, step to Γ ⊢? y : r, H ∪ {(x, q)};
• (restart) from Γ ⊢? x : q, H, with q atomic, if (y, r) ∈ H, step to
Γ, y ≤ x ⊢? x : q, H and Γ, x ≤ y ⊢? y : r, H.

The proof system can be extended to the full propositional language as shown in Chapter 2, Section 5, Definition 2.52; in this case the history of a query contains arbitrary formulas rather than atoms. The restart rule performs a restricted form of case analysis enforced by linearity. We give an example of a computation in which disjunction is handled as in Definition 2.52.
EXAMPLE 3.9. We show (Figure 3.2) that ⊢? x : (a → b) ∨ (b → a), ∅ succeeds. Step (1) is obtained by the rule for disjunction, step (3) by the backtracking rule. We abbreviate {(x : b → a), (y : b)} by H. Finally, steps (5) and (6) are obtained by the restart rule, as (y, b) ∈ H. The above procedure can be shown to be sound and complete for LC. In this respect we can develop a syntactic proof as we did in the previous section for BHn
⊢? x : (a → b) ∨ (b → a), ∅
(1) ⊢? x : a → b, {(x : b → a)}
(2) x ≤ y, y : a ⊢? y : b, {(x : b → a)}
(3) x ≤ y, y : a ⊢? x : b → a, {(x : b → a), (y : b)}
(4) x ≤ y, x ≤ z, y : a, z : b ⊢? z : a, H
(5) x ≤ y, x ≤ z, y ≤ z, y : a, z : b ⊢? z : a, H
(6) x ≤ y, x ≤ z, z ≤ y, y : a, z : b ⊢? y : b, H

Figure 3.2. Derivation for Example 3.9.

in three steps: (1) for an axiomatization of LC it is sufficient to add the atomic instances of the axiom (A → B) ∨ (B → A) to I; (2) every such atomic instance succeeds in the proof system for LC; (3) successful derivations are closed under cut. We leave the details to the reader. We will rather show that we can get rid of labels and obtain a simple procedure for the implicational fragment of LC. The unlabelled procedure we define in the next section has a strong relation to the hypersequent calculus for LC developed by Avron [1991a], as we will show at the end of the chapter.
We give an intuitive argument to show how we can eliminate the labels. Let us define, for any labelled database Γ and label x,

Γ ↓ x = {y : A | y : A ∈ Γ ∧ y ≤ x} ∪ {u ≤ v | u ≤ v ∈ Γ ∧ v ≤ x}.

Γ ↓ x represents the set of formulas of Γ which can be used from point x. Moreover, if we start a computation from the empty database, we see that each database Γ occurring in the computation grows like a tree, and Γ ↓ x corresponds to the path of formulas from the root to point x. The idea of the unlabelled procedure is to consider at each step only a single path of formulas. Notice that if we restrict our attention to the intuitionistic implicational fragment (namely, the success, implication, and reduction rules), we obviously have
Γ ⊢? x : A succeeds iff Γ ↓ x ⊢? x : A succeeds.
In the case of LC, we can switch from one point to another by backtracking or restart. We try to reformulate these two rules taking care only of paths of formulas. For backtracking we expect a rule like the following: from Γ ↓ x ⊢? x : q, H, with q atomic, if (y, r) ∈ H, step to Γ ↓ y ⊢? y : r, H ∪ {(x, q)}. For restart, we expect the rule: from Γ ⊢? x : q, H, if (y, r) ∈ H, step to (1) (Γ ∪ {y ≤ x}) ↓ x ⊢? x : q, H, and (2) (Γ ∪ {x ≤ y}) ↓ y ⊢? y : r, H. Observe that the set of formulas in (Γ ∪ {y ≤ x}) ↓ x is the same as the set of formulas in (Γ ∪ {x ≤ y}) ↓ y, namely exactly the formulas in Γ ↓ x ∪ Γ ↓ y: these are the formulas of Γ which can be used in the subderivations of (1) and (2). Since, given Γ ↓ x and Γ ↓ y, we can say which formulas can be used in the subderivations of (1) and (2) without making use of the constraints, we can dispense with the constraints themselves. From this observation we can reformulate the entire procedure, restricting our attention to databases which are paths of formulas. We no longer need the constraints, and hence no longer need the labels.
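The restriction Γ ↓ x is easy to compute: y ≤ x holds when y reaches x through the recorded constraints. A small sketch (ours, with an ad hoc representation of labelled databases as sets of (label, formula) pairs):

```python
def below(leq, x):
    """Labels y with y <= x, taking the reflexive-transitive closure
    of the constraint set leq (pairs (u, v) meaning u <= v)."""
    reach, frontier = {x}, {x}
    while frontier:
        frontier = {u for (u, v) in leq if v in frontier} - reach
        reach |= frontier
    return reach

def restrict(gamma, leq, x):
    """Gamma restricted to x: the labelled formulas usable from point x."""
    down = below(leq, x)
    return {(y, a) for (y, a) in gamma if y in down}
```

On a chain x ≤ y ≤ z, restricting to z keeps everything on the chain, while restricting to y drops the formulas labelled z, exactly as the path intuition suggests.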
2.1 Unlabelled Procedure for the Implicational Fragment of LC
We give here a simplified procedure for the implicational fragment of LC. The procedure can be extended to the →, ∧, ⊥-fragment as shown in the previous chapter. This procedure is also inspired by Avron's hypersequent formulation of LC [Avron, 1991a], as we will see at the end of the chapter. A query has the form Γ ⊢? G, H, where Γ is a set of formulas, G is a formula, and H (the history) is a set of pairs of the form (∆, q), where ∆ is a database and q is an atom. Here are the rules.
DEFINITION 3.10 (Proof system for implicational LC).
• (success) Γ ⊢? q, H succeeds if q ∈ Γ.
• (implication) from Γ ⊢? A → B, H step to Γ, A ⊢? B, H.
• (reduction) from Γ ⊢? q, H, if there is a formula C ∈ Γ with C : A1 → ... → Ak → q, step to Γ ⊢? Ai, H ∪ {(Γ, q)} for i = 1, ..., k.
• (backtracking) from Γ ⊢? q, H, if (∆, r) ∈ H, step to ∆ ⊢? r, H ∪ {(Γ, q)}.
• (restart) from Γ ⊢? q, H, if (∆, r) ∈ H, step to Γ ∪ ∆ ⊢? q, H and Γ ∪ ∆ ⊢? r, H.
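To make Definition 3.10 concrete, here is a small proof-search sketch (ours, not an implementation from the book). Formulas are atoms (strings) or ('->', A, B) tuples; databases and histories are frozensets; a visited-state set plus a depth bound guard against looping, so a False answer is only reliable within the bound:

```python
def imp(a, b):
    """Build the implication A -> B (atoms are plain strings)."""
    return ('->', a, b)

def split(c):
    """Decompose C = A1 -> ... -> Ak -> q into ([A1, ..., Ak], q)."""
    body = []
    while isinstance(c, tuple):
        body.append(c[1])
        c = c[2]
    return body, c

def prove(gamma, goal, hist=frozenset(), seen=frozenset(), depth=30):
    """Goal-directed search for implicational LC (Definition 3.10)."""
    if depth == 0:
        return False
    state = (gamma, goal, hist)
    if state in seen:                                 # loop check
        return False
    seen = seen | {state}
    if isinstance(goal, tuple):                       # (implication)
        return prove(gamma | {goal[1]}, goal[2], hist, seen, depth - 1)
    if goal in gamma:                                 # (success)
        return True
    for c in gamma:                                   # (reduction)
        if isinstance(c, tuple):
            body, head = split(c)
            if head == goal:
                h2 = hist | {(gamma, goal)}
                if all(prove(gamma, a, h2, seen, depth - 1) for a in body):
                    return True
    for (delta, r) in hist:
        # (backtracking)
        if prove(delta, r, hist | {(gamma, goal)}, seen, depth - 1):
            return True
        # (restart): prove both the old and the current goal from the union
        if prove(gamma | delta, goal, hist, seen, depth - 1) and \
           prove(gamma | delta, r, hist, seen, depth - 1):
            return True
    return False

# The characteristic LC axiom of Example 3.11:
lc = imp(imp(imp('a', 'b'), 'c'), imp(imp(imp('b', 'a'), 'c'), 'c'))
```

Calling prove(frozenset(), lc) retraces Example 3.11; the restart clause is the step that takes the search beyond intuitionistic logic, matching the remark below that backtracking alone adds no power.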
The restart rule allows us to combine a previous database with the current one in order to prove both the old goal and the current one. Whereas the backtracking rule does not add any deductive power, it is the restart rule which really leads out of intuitionistic logic.
EXAMPLE 3.11. We show a derivation of the characteristic LC axiom ((a → b) → c) → ((b → a) → c) → c. In the derivation below we let ∆ = {(a → b) → c, (b → a) → c}.

⊢? ((a → b) → c) → ((b → a) → c) → c, ∅
(a → b) → c ⊢? ((b → a) → c) → c, ∅
(a → b) → c, (b → a) → c ⊢? c, ∅
∆ ⊢? a → b, {(∆, c)}   by reduction w.r.t. (a → b) → c
∆ ∪ {a} ⊢? b, {(∆, c)}
∆ ⊢? c, {(∆, c), (∆ ∪ {a}, b)}   by backtracking
∆ ⊢? b → a, {(∆, c), (∆ ∪ {a}, b)}   by reduction w.r.t. (b → a) → c
∆ ∪ {b} ⊢? a, {(∆, c), (∆ ∪ {a}, b)}.

Now we apply restart and step to

∆ ∪ {a, b} ⊢? a, {(∆, c), (∆ ∪ {a}, b)} and ∆ ∪ {a, b} ⊢? b, {(∆, c), (∆ ∪ {a}, b)}.

Both queries immediately succeed.
We next prove some properties of the computation for LC. First, we have monotony of deduction on both databases and histories.
PROPOSITION 3.12. For every pair of databases Γ, ∆, formula A, and histories H, H′, we have: Q = Γ ⊢? A, H succeeds implies Γ, ∆ ⊢? A, H′ ∪ H″ succeeds,
where H″ = {(Σ ∪ ∆, c) : (Σ, c) ∈ H}.
Proof. By induction on the height of a successful derivation of the query Q. We only exemplify the case of restart. Suppose Q succeeds by restart; then A is an atom q and, for some (Σ, r) ∈ H, the following queries succeed by derivations of smaller height: Γ, Σ ⊢? q, H and Γ, Σ ⊢? r, H. By the induction hypothesis, Γ, Σ, ∆ ⊢? q, H′ ∪ H″ and Γ, Σ, ∆ ⊢? r, H′ ∪ H″ succeed; but (Σ ∪ ∆, r) ∈ H″, whence the query Γ, ∆ ⊢? A, H′ ∪ H″ succeeds. ∎
PROPOSITION 3.13. Let A = A1 → ... → Ak → q and B = B1 → ... → Bh → r. Then, for any ∆, Γ, H, we have:
(i) Γ ⊢? A, H ∪ {(∆ ∪ {B1, ..., Bh}, r)} succeeds
if and only if
(ii) ∆ ⊢? B, H ∪ {(Γ ∪ {A1, ..., Ak}, q)} succeeds.
Proof. Since the claim is symmetric, it suffices to show one half. We start from the query (ii) ∆ ⊢? B, H ∪ {(Γ ∪ {A1, ..., Ak}, q)} and step to ∆ ∪ {B1, ..., Bh} ⊢? r, H ∪ {(Γ ∪ {A1, ..., Ak}, q)}; then we apply backtracking and step to
(iii) Γ ∪ {A1, ..., Ak} ⊢? q, H ∪ {(∆ ∪ {B1, ..., Bh}, r), (Γ ∪ {A1, ..., Ak}, q)}.
By the implication rule, every successful derivation of (i) contains a successful derivation of Γ ∪ {A1, ..., Ak} ⊢? q, H ∪ {(∆ ∪ {B1, ..., Bh}, r)}; thus (iii) succeeds by monotony on histories. ∎
The following property shows that we can restrict the application of both backtracking and restart to the case when we cannot proceed by a successful reduction.
PROPOSITION 3.14. Suppose N = ∆ ⊢? q, H succeeds by a derivation D and there is a clause C ∈ ∆ such that C is used to reduce q in some descendant
of N. Then there is a successful derivation D′ of N whose first step is the reduction of q with respect to C.
The following proposition states a sort of 'idempotency' (or contraction) property.
PROPOSITION 3.15. For all ∆ and H, we have: ∆ ⊢? q, H ∪ {(∆, q)} succeeds implies ∆ ⊢? q, H succeeds.
Proof. Let D be a successful derivation of the query (call it N) in the hypothesis. If the first step in D is a reduction step or a backtracking step (using some (Γ, r) ∈ H), then the claim immediately follows: starting from ∆ ⊢? q, H we proceed as in D, and (∆, q) will be added to H after the first step. Suppose instead that the first step of D is a restart step through (Γ, r) ∈ H; that is, from N we step to
∆, Γ ⊢? q, H ∪ {(∆, q)} and ∆, Γ ⊢? r, H ∪ {(∆, q)}.
Then, from ∆ ⊢? q, H, we step to Γ ⊢? r, H ∪ {(∆, q)} by backtracking, and then by restart we generate the same two queries as above. ∎
In order to prove completeness we need to prove that cut is admissible. Since the database may be switched with another database occurring in the history, we must also take into account the occurrences of the cut formula in databases in the history. For this reason the proof of the cut property is slightly more difficult than in the previous cases: we must formulate the cut property in the right generality in order to obtain a working inductive proof. In particular, we must allow cutting on the history.
THEOREM 3.16. Suppose that (1) Γ ⊢? B, H1 succeeds and (2) ∆ ⊢? A, H2 succeeds. Then Γ∗ ⊢? B, H1∗ ∪ H2 succeeds, where Γ∗ = Γ or Γ∗ = Γ[A/∆], and H1∗ = {(Σ∗, r) : (Σ, r) ∈ H1 ∧ (Σ∗ = Σ ∨ Σ∗ = Σ[A/∆])}.
Proof. As usual, we proceed by double induction on cp(A) and on the height h of a successful derivation of (1). We only consider the cases which are not a straightforward reformulation of the corresponding ones for intuitionistic logic, namely those in which query (1) is obtained by backtracking, by restart, or, with cp(A) > 0, by reduction with respect to A.
• Suppose that B is an atom q and (1) succeeds by backtracking. Then from (1) we step to Σ ⊢? p, H1 ∪ {(Γ, q)} (where (Σ, p) ∈ H1), which succeeds by a shorter derivation. By the induction hypothesis we obtain that (∗) Σ∗ ⊢? p, H1∗ ∪ {(Γ∗, q)} ∪ H2 succeeds. Since (Σ∗, p) ∈ H1∗, from Γ∗ ⊢? q, H1∗ ∪ H2 we can step to (∗) and succeed.
• Suppose that B is an atom q and (1) succeeds by restart. Then from (1) we step to Γ ∪ Σ ⊢? p, H1 and Γ ∪ Σ ⊢? q, H1 (where (Σ, p) ∈ H1), and both queries succeed by shorter derivations. We can apply the induction hypothesis and obtain that (Γ ∪ Σ)∗ ⊢? p, H1∗ ∪ H2 and (Γ ∪ Σ)∗ ⊢? q, H1∗ ∪ H2 both succeed. Since (Γ ∪ Σ)∗ ⊆ Γ∗ ∪ Σ∗, by monotony Γ∗ ∪ Σ∗ ⊢? p, H1∗ ∪ H2 and Γ∗ ∪ Σ∗ ⊢? q, H1∗ ∪ H2 both succeed. As (Σ∗, p) ∈ H1∗, from Γ∗ ⊢? q, H1∗ ∪ H2 we can step to the two queries above and succeed.
• Suppose that cp(A) > 0, B is an atom q, and (1) succeeds by reduction with respect to A. Let A : D1 → ... → Dk → q in Γ; from (1) we step to Γ ⊢? Di, H1 ∪ {(Γ, q)} for i = 1, ..., k, each of which succeeds by a shorter derivation. By the induction hypothesis we obtain that, for i = 1, ..., k,
(ci) Γ∗ ⊢? Di, H1∗ ∪ {(Γ∗, q)} ∪ H2 succeeds.
We have two cases. If Γ∗ = Γ, we can still reduce with respect to A and the result easily follows. If Γ∗ ≠ Γ, then A ∈ Γ and Γ∗ = Γ[A/∆]. From hypothesis (2) we have that
(3) ∆ ∪ {D1, ..., Dk} ⊢? q, H2 succeeds.
Since cp(Di) < cp(A), we can repeatedly apply the induction hypothesis, cutting (3) with (c1), the result with (c2), and so on. For instance, at the first step we get that ∆1 ∪ {D2, ..., Dk} ∪ Γ∗ ⊢? q, H1∗ ∪ {(Γ∗, q)} ∪ H2 succeeds, where ∆1 = ∆ − {D1} if D1 ∈ ∆ and ∆1 = ∆ otherwise. Notice that we do not modify H2 (that is, we can let H2∗ = H2). At the end we obtain that ∆k ∪ Γ∗ ⊢? q, H1∗ ∪ {(Γ∗, q)} ∪ H2 succeeds, where ∆k ⊆ ∆ and hence ∆k ⊆ Γ∗ = Γ[A/∆]. We therefore have that Γ∗ ⊢? q, H1∗ ∪ {(Γ∗, q)} ∪ H2 succeeds, so that by Proposition 3.15 we finally obtain that Γ∗ ⊢? q, H1∗ ∪ H2 succeeds. ∎
PROPOSITION 3.17. Suppose that (i) Γ, A → B ⊢? C, H succeeds and (ii) Γ, B → A ⊢? C, H succeeds. Then Γ ⊢? C, H also succeeds.
Proof. To simplify notation, we let A = A1 → ... → An → q with Σ = {A1, ..., An}, B = B1 → ... → Bm → p with ∆ = {B1, ..., Bm}, and C = C1 → ... → Ck → r with Π = {C1, ..., Ck}. It is easy to see that the following query succeeds:
(1) Γ ⊢? A → B, {(Γ ∪ Σ ∪ {B}, q)}.
To see this, from (1) we apply the implication rule and then restart. From (i) and (1), by cut, we get that Γ ⊢? C, H ∪ {(Γ ∪ Σ ∪ {B}, q)} succeeds, and hence also that Γ, Π ⊢? r, H ∪ {(Γ ∪ Σ ∪ {B}, q)} succeeds. By Proposition 3.13, Γ, Σ, B ⊢? q, H ∪ {(Γ ∪ Π, r)} succeeds, which implies that
Γ ⊢? B → A, H ∪ {(Γ ∪ Π, r)} succeeds.
From (ii) and the last query, by cut, we obtain that Γ ⊢? C, H ∪ {(Γ ∪ Π, r)} succeeds, which implies that
Γ, Π ⊢? r, H ∪ {(Γ ∪ Π, r)} succeeds.
By Proposition 3.15, Γ, Π ⊢? r, H succeeds, so that finally Γ ⊢? C, H succeeds. ∎
2.2 Soundness and Completeness
As we mentioned at the beginning of the section, LC is complete with respect to the class of linear Kripke models, i.e. models M = (W, w0, ≤, V) where ≤ is a linear order on W: ∀w, w′ ∈ W (w ≤ w′ ∨ w′ ≤ w). We first show soundness.
THEOREM 3.18 (Soundness). Let Γ = {B1, ..., Bn} and H = {(∆1, q1), ..., (∆k, qk)}, where ∆i = {Ci,1, ..., Ci,ri}. Let ⋀Γ denote the conjunction of the formulas of Γ (and similarly ⋀∆i for ∆i). Suppose that Γ ⊢? A, H succeeds; then
(⋀Γ → A) ∨ (⋀∆1 → q1) ∨ ... ∨ (⋀∆k → qk)
is valid in LC.
Proof. The proof proceeds by induction on the length of derivations. All cases are left to the reader except restart. Suppose Γ ⊢? A, H succeeds by restart; then A is an atom r and, for some (∆i, qi) ∈ H, the derivation steps to Γ ∪ ∆i ⊢? r, H and Γ ∪ ∆i ⊢? qi, H. Both queries succeed by derivations of smaller height. Let
H′ = (⋀∆1 → q1) ∨ ... ∨ (⋀∆i−1 → qi−1) ∨ (⋀∆i+1 → qi+1) ∨ ... ∨ (⋀∆k → qk).
By the induction hypothesis, the following are valid in LC:
(1) (⋀Γ ∧ ⋀∆i → r) ∨ H′ ∨ (⋀∆i → qi),
(2) (⋀Γ ∧ ⋀∆i → qi) ∨ H′ ∨ (⋀∆i → qi).
We must show that
(⋀Γ → r) ∨ H′ ∨ (⋀∆i → qi)
is also valid in LC. Suppose it is not; let M = (W, w0, ≤, V) be a linear Kripke model which falsifies the above formula. Then, in the initial world w0, we have:
(3) M, w0 ⊭ ⋀Γ → r,
(4) M, w0 ⊭ H′,
(5) M, w0 ⊭ ⋀∆i → qi.
On the other hand, since (1) and (2) are valid, they hold in M, so we must have:
(6) M, w0 ⊨ ⋀Γ ∧ ⋀∆i → r,
(7) M, w0 ⊨ ⋀Γ ∧ ⋀∆i → qi.
From (3) and (5) it follows that there are w′, w″ ≥ w0 such that
(8) M, w′ ⊨ ⋀Γ and M, w′ ⊭ r,
(9) M, w″ ⊨ ⋀∆i and M, w″ ⊭ qi.
By linearity, w′ ≤ w″ or w″ ≤ w′. Thus, if we let w∗ = max(w′, w″), we have w∗ = w′ or w∗ = w″, whence by monotony M, w∗ ⊨ ⋀Γ ∧ ⋀∆i, so that, by (6) and (7), M, w∗ ⊨ r ∧ qi, which contradicts either (8) or (9). ∎
We prove completeness, as usual, by means of a canonical model construction. Before going into the proof we notice, as usual, that the computation procedure is well defined also when the databases involved are infinite. Since a successful derivation is a finite tree, only a finite number of formulas can be involved in it. We thus have an immediate compactness property: Γ ⊢? A, ∅ succeeds if and only if there is a finite subset Γ0 of Γ such that Γ0 ⊢? A, ∅ succeeds. We use this property in the completeness proof.
THEOREM 3.19 (Completeness). If ⋀∆0 → G is valid in LC, then ∆0 ⊢? G, ∅ succeeds.
Proof. We prove that if ∆0 ⊢? G, ∅ does not succeed, then there are a linear model M = (W, w0, ≤, V) and a w ∈ W such that M, w ⊭ ⋀∆0 → G. Suppose ∆0 ⊢? G, ∅ does not succeed. Let (Ai, Bi), for i = 1, 2, ..., n, ..., be an enumeration of all pairs of distinct formulas of the language. We define an increasing sequence of databases Γ0 ⊆ Γ1 ⊆ ... ⊆ Γi ⊆ ....
Step 0. We let Γ0 = ∆0.
Step i + 1. We consider the pair (Ai+1, Bi+1) and proceed as follows:
• if Γi, Ai+1 → Bi+1 ⊢? G, ∅ does not succeed, then we let Γi+1 = Γi ∪ {Ai+1 → Bi+1};
• else, if Γi, Bi+1 → Ai+1 ⊢? G, ∅ does not succeed, then we let Γi+1 = Γi ∪ {Bi+1 → Ai+1};
• else we let Γi+1 = Γi.
Finally, we let Γ∗ = ⋃i Γi.
Claim 1. For all i, Γi ⊢? G, ∅ does not succeed. This is easily proved by induction on i.
Claim 2. Γ∗ ⊢? G, ∅ does not succeed. Suppose it succeeds; then, by compactness, there is a finite subset Γ of Γ∗ such that Γ ⊢? G, ∅ succeeds. But then Γ ⊆ Γi for some i, and by monotony we have a contradiction with Claim 1.
Claim 3. For all formulas A, B, either Γ∗ ∪ {A} ⊢? B, ∅ succeeds or Γ∗ ∪ {B} ⊢? A, ∅ succeeds. This is an easy consequence of the fact that either A → B ∈ Γ∗ or B → A ∈ Γ∗, which we now prove. Let A = Ai+1 and B = Bi+1, so that the pair (Ai+1, Bi+1) is considered at step i + 1. If neither Ai+1 → Bi+1 ∈ Γi+1 nor Bi+1 → Ai+1 ∈ Γi+1, then by definition both
Γi, Ai+1 → Bi+1 ⊢? G, ∅ and Γi, Bi+1 → Ai+1 ⊢? G, ∅
succeed. By Proposition 3.17, Γi ⊢? G, ∅ succeeds, against Claim 1.
We now define, for arbitrary formulas:
A ≤Γ∗ B if and only if Γ∗ ∪ {B} ⊢? A, ∅ succeeds;
A ≈Γ∗ B iff A ≤Γ∗ B and B ≤Γ∗ A.
We can easily prove:
Claim 4. ≈Γ∗ is a congruence on formulas.
We consider the quotient W of the language with respect to ≈Γ∗: W = {[A]≈Γ∗ : A ∈ L}. We still denote by ≤Γ∗ the induced relation between equivalence classes: [A]≈Γ∗ ≤Γ∗ [B]≈Γ∗ iff A ≤Γ∗ B. Using Claim 3 we can prove:
Claim 5. ≤Γ∗ is a linear order on W.
To simplify notation, we omit ≈Γ∗ from equivalence classes, writing simply [A] instead of [A]≈Γ∗. We now define a model M by putting M = (W, [A0], ≤Γ∗, V), where A0 is, say, p0 → p0 for some atom p0, and V : W → Pow(Var) is defined as follows:
p ∈ V([A]) ⇔ Γ∗ ∪ {A} ⊢? p, ∅ succeeds.
It is easily seen that this definition is independent of the choice of the representative A, and that V is monotonic, that is, if [A] ≤Γ∗ [B] then V([A]) ⊆ V([B]).
Claim 6. For all formulas B and all [A], we have: M, [A] ⊨ B ⇔ Γ∗ ∪ {A} ⊢? B, ∅ succeeds.
Proof. If B is an atom, the claim holds by definition. Let B = C → D.
(⇒) Suppose M, [A] ⊨ C → D. Either (a) [A] ≤Γ∗ [C] or (b) [C] ≤Γ∗ [A]. In case (a), Γ∗ ∪ {C} ⊢? C, ∅ succeeds, so that by the induction hypothesis M, [C] ⊨ C, and by the hypothesis and (a) we have M, [C] ⊨ D. By the induction hypothesis we obtain that Γ∗ ∪ {C} ⊢? D, ∅ succeeds, whence Γ∗ ∪ {A} ⊢? C → D, ∅ also succeeds. In case (b), by definition Γ∗ ∪ {A} ⊢? C, ∅ succeeds, and hence by the induction hypothesis M, [A] ⊨ C. Since [A] ≤Γ∗ [A], by the hypothesis we have M, [A] ⊨ D, so that by the induction hypothesis Γ∗ ∪ {A} ⊢? D, ∅ succeeds, and therefore Γ∗ ∪ {A} ⊢? C → D, ∅ succeeds.
(⇐) Suppose Γ∗ ∪ {A} ⊢? C → D, ∅ succeeds, and let (1) [A] ≤Γ∗ [E] and (2) M, [E] ⊨ C. By (1) we have
(3) Γ∗ ∪ {E} ⊢? A, ∅ succeeds;
by (2) and the induction hypothesis we have
(4) Γ∗ ∪ {E} ⊢? C, ∅ succeeds.
By the hypothesis we have
(5) Γ∗ ∪ {A, C} ⊢? D, ∅ succeeds.
Now, cutting (5) with (3) and then with (4), we get that Γ∗ ∪ {E} ⊢? D, ∅ succeeds, so that by the induction hypothesis we can conclude M, [E] ⊨ D.
To conclude the proof of the theorem, let A be any formula in Γ∗. Since ∆0 ⊆ Γ∗ and Γ∗ ⊢? G does not succeed, by Claim 6 we get
M, [A] ⊨ B for all B ∈ ∆0, but M, [A] ⊭ G.
Thus ⋀∆0 → G is not valid in M. ∎

3 RELATION WITH AVRON'S HYPERSEQUENTS
Avron has presented a sequent calculus for LC [Avron, 1991a]. His method is based on hypersequents, a generalization of ordinary Gentzen methods in which the formal objects involved in derivations are disjunctions of ordinary sequents. Hypersequents are denoted by
Γ1 ⊢ A1 | ... | Γn ⊢ An
and can be interpreted as
(⋀Γ1 → A1) ∨ ... ∨ (⋀Γn → An).
(Here we are only concerned with single-conclusion hypersequents, although for specific logics a multiple-conclusion version is required [Avron, 1987].) In hypersequent calculi there are initial hypersequents and rules, which are divided into logical and structural rules. The logical rules are essentially the same as in sequent calculi, the only difference being the presence of dummy contexts, called side hypersequents. The structural rules are divided into internal and external rules. The former deal with formulas within components; when present, they are the same as in ordinary sequent calculi. The external rules manipulate whole components within a hypersequent. These are external weakening (EW), external contraction (EC), and external permutation (EP):

(EW) from H infer H | Γ ⊢ A;
(EC) from H | Γ ⊢ A | Γ ⊢ A | H′ infer H | Γ ⊢ A | H′;
(EP) from H | Γ ⊢ A | ∆ ⊢ B | H′ infer H | ∆ ⊢ B | Γ ⊢ A | H′.
In hypersequent calculi one can define new structural rules which act simultaneously on several components of one or more hypersequents. It is this type of rule which increases the expressive power of hypersequent calculi with respect to ordinary sequent calculi. An example of this kind of rule is Avron's communication rule for LC:
from H1 | Γ1, Γ2 ⊢ A and H2 | ∆1, ∆2 ⊢ B infer H1 | H2 | Γ1, ∆1 ⊢ A | Γ2, ∆2 ⊢ B.
We want to show an intuitive mapping between the goal-directed procedure for LC and the hypersequent calculus. To this purpose, a query
Γ ⊢? A, {(∆1, q1), ..., (∆n, qn)}
corresponds to the hypersequent
Γ ⊢ A | ∆1 ⊢ q1 | ... | ∆n ⊢ qn.
Given this mapping, we might show that derivations in the goal-directed procedure correspond to a sort of 'uniform proofs', in the terminology of [Miller et al., 1991], in Avron's calculus, once the notion of uniform proof has been adapted and extended to this setting. Although we will not develop this correspondence formally here, we point out the main connections. In particular, we want to understand the role of the backtracking and restart rules in terms of Avron's calculus. An application of backtracking corresponds to an application of the external contraction rule (EC). Suppose we have a part of a derivation as shown in Figure 3.3. To simplify matters, we suppose that only the implication and reduction rules

Γ ⊢? q, H
...
∆ ⊢? r, H′
↓
Γ ⊢? q, H′ ∪ {(∆, r)}

Figure 3.3. Derivation using backtracking for LC.
are applied on the branch from Γ ⊢? q, H to ∆ ⊢? r, H′. A corresponding hypersequent derivation will contain the branch:

∆ ⊢ r | Γ ⊢ q | H′
...
Γ ⊢ q | Γ ⊢ q | H
and then, by (EC),
Γ ⊢ q | H.

Notice that the direction of the derivation is inverted. Coming to the restart rule, we can see that this rule is equivalent to Avron's communication rule. To see this, we observe the following:
1. The communication rule in Avron's calculus can be restricted to atomic A, B.
2. The communication rule is equivalent to
from H | Γ, ∆ ⊢ q and H | Γ, ∆ ⊢ r infer H | Γ ⊢ q | ∆ ⊢ r.
This follows from the fact that we have weakening and contraction (both internal and external).¹ Clearly, this rule is very similar to our restart rule, once we interpret H as the history. Notice in passing that the above formulation of the communication rule is much more deterministic in backward proof search than Avron's original one: in the above rule, once the lower sequent is fixed, the premises are uniquely determined, whereas in the original rule there are exponentially many choices for the premises.
The relationship with hypersequents may be a source of inspiration for discovering goal-directed formulations of other logics. For example, another well-known intermediate logic (weaker than LC) is the so-called logic of weak excluded middle LQ [Hosoi, 1988; Jankov, 1968], which is obtained by adding to I the following axiom: ¬A ∨ ¬¬A. An equivalent implicational axiom is
((A → ⊥) → B) → (((A → ⊥) → ⊥) → B) → B.
The logic LQ is complete with respect to the class of bounded Kripke models (equivalently, the class of Kripke models with a top element). To obtain a goal-directed proof system for LQ, we consider the basic proof system for I (say, for the fragment (→, ⊥)), enriched with the history book-keeping mechanism and the backtracking rule of LC, and then add the following restart rule:

¹ The present version of the rule was suggested by Mints, as reported in [Avron, 1996], and was formulated independently in [Olivetti, 1995].
if (∆, ⊥) ∈ H, then from Γ ⊢? ⊥, H step to Γ, ∆ ⊢? ⊥, H.
We conjecture that the resulting proof system is sound and complete with respect to LQ, and we leave it to the interested reader to check this. The above rule for LQ closely corresponds to the hypersequent rule for LQ given in [Ciabattoni et al., 1999].
The restart rule for LC is connected to the one for classical logic we have seen in Chapter 2, Section 3.2. The connection can be explained in terms of hypersequent calculi by examining Avron's communication rule. We can make it stronger by discarding one premise:
from H | Γ, ∆ ⊢ A infer H | Γ ⊢ B | ∆ ⊢ A.
The calculus obtained by replacing the communication rule with the one above is cut-free and allows us to derive, for instance, A ∨ (A → B); we hence obtain a calculus for classical logic. In the goal-directed proof system, the corresponding version of restart is obtained from the one for LC by discarding one branch:
if (Γ, q) ∈ H, then from ∆ ⊢? r, H step to Γ, ∆ ⊢? q, H.
Let us call the rule above 'modified restart'. This rule is equivalent to the restart rule for classical logic; that is, the proof system for I plus modified restart gives classical logic. To see this, we simply observe that in this proof system databases never get smaller along a computation: if Γ ⊢? q precedes ∆ ⊢? r in a branch of a derivation, then Γ ⊆ ∆. Thus there is no longer any need to keep track of the databases in the history, and modified restart simplifies to:
if q ∈ H, then from ∆ ⊢? r, H step to ∆ ⊢? q, H,
which is the restart rule for classical logic.
In the literature there are other proposals of calculi for LC and other intermediate logics. In [Avellone et al., 1998] duplication-free tableaux and sequent calculi are defined. Along the same lines, in [Dyckhoff, 1999] terminating calculi for theorems and non-theorems of propositional LC are presented.
These calculi do not need to go beyond the format of standard Gentzen calculi and contain global rules (similar to the standard calculi for modal logic [Goré, 1999]) which may act on several formulas at once.²

² The characteristic rule for LC has the form
from Γ, A1 ⊢ B1, ∆1 and ... and Γ, An ⊢ Bn, ∆n infer Γ ⊢ ∆,
where A1 → B1, ..., An → Bn are all the implicational formulas of ∆ and ∆i is {A1 → B1, ..., Ai−1 → Bi−1, Ai+1 → Bi+1, ..., An → Bn}. In [Dyckhoff, 1999] the application of this rule is restricted to so-called irreducible sequents.
A rather different calculus for LC is presented in [Baaz and Fermüller, 1999], where LC is considered a prominent example of a projective logic. Hypersequent calculi for other intermediate logics (bounded width, bounded cardinality, finite Gödel logics) are studied in [Ciabattoni and Ferrari, 1999].
CHAPTER 4
MODAL LOGICS OF STRICT IMPLICATION
1 INTRODUCTION
The purpose of this chapter is to extend the goal-directed proof methods to strict implication modal logics. We consider this a first step towards extending the goal-directed paradigm to the realm of modal logics. Strict implication, denoted by A ⇒ B, is read as 'necessarily A implies B'. The notions of necessity and of its dual, possibility, are the subject of modal logic. Strict implication can be regarded as a derived notion: A ⇒ B = 2(A → B), where → denotes material implication and 2 denotes modal necessity. However, strict implication can also be considered as a primitive notion, and it was already considered as such at the beginning of the twentieth century in many discussions about the paradoxes of material implication [Lewis, 1912; Lewis and Langford, 1932]. The extension of the goal-directed approach to strict implication and modal logics relies upon the possible-worlds semantics of modal logics, which is mainly due to Kripke. As we have already done in the case of intuitionistic and intermediate logics, we regard a database as a set of labelled formulas xi : Ai equipped with a relation α giving connections between labels. The labels represent worlds, states, or positions; thus xi : Ai means that Ai holds at xi. The goal of a query is always asked with respect to a position/world. The form of databases and goals determines the notion of consequence relation
{x1 : A1, ..., xn : An}, α ⊢ x : A,
whose intended meaning is that if Ai holds at xi (for i = 1, ..., n) and the xi are connected as α prescribes, then A must hold at x. For different logics, α will be required to satisfy different properties, such as reflexivity, transitivity, etc., depending on the properties of the accessibility relation of the system under consideration.
In most of the chapter we will be concerned with implicational modal logics whose language L(⇒) contains all formulas built from a denumerable set Var of propositional variables by applying the strict implication connective: if p ∈ Var then p is a formula of L(⇒), and if A and B are formulas of L(⇒), then so is A ⇒ B. Let us fix an atom p0; we can then define the constant ⊤ ≡ p0 ⇒ p0 and let 2A ≡ ⊤ ⇒ A.
Semantics
We review the standard Kripke semantics for L(⇒).
Table 4.1. Standard properties of the accessibility relation.
Reflexivity: ∀x xRx
Transitivity: ∀x∀y∀z (xRy ∧ yRz → xRz)
Symmetry: ∀x∀y (xRy → yRx)
Euclidean: ∀x∀y∀z (xRy ∧ xRz → yRz)
Finite chains: no infinite sequences x0Rx1, ..., xiRxi+1, ...
A Kripke structure M for L(⇒) is a triple (W, R, V ), where W is a non-empty set (whose elements are called possible worlds), R is a binary relation on W , and V is a mapping from W to sets of propositional variables of L. Truth conditions for formulas (of L(⇒)) are defined as follows: • M, w |= p iff p ∈ V (w); • M, w |= A ⇒ B iff for all w0 such that wRw0 and M, w0 |= A, it holds M, w0 |= B. We say that a formula A is valid in a structure M , denoted by M |= A, if ∀w ∈ W, M, w |= A. We say that a formula A is valid with respect to a given class of structures K, iff it is valid in every structure M ∈ K. We sometimes use the notation |=K A. Let us fix a class of structures K. Given two formulas A and B, we can define the consequence relation A |=K B as ∀M ∈ K ∀w ∈ W if M, w |=K A then M, w |=K B. Different modal logics are obtained by considering classes of structures whose relation R satisfies some specific properties. The properties of the accessibility relations we consider are listed in Figure 4.1. We will take into consideration strict implication ⇒ as defined in systems K, KT,1 K4, S4, K5, K45, KB, KBT, S5 and G .2 Properties of accessibility relation R in Kripke frames, corresponding to these systems are shown in Figure 4.2 Hilbert-style axiomatizations of fragments of strict implication have been given in [Prior, 1961; Prior, 1963; Meredith and Prior, 1964; Corsi, 1987]. Letting S be one of the modal systems above, we use the notation |=S A (and A |=S B) to denote validity in (and the consequence relation determined by) the class of structures corresponding to S. 1 We use the acronym KT rather than the more common T, as the latter is also the name of a subrelevance logic we will meet in Chapter 5. 2 We do not consider here systems containing D : 2A → 3A, which correspond to the seriality of the accessibility relation, i.e. ∀x∃y xRy in Kripke frames. 
The reason is that seriality cannot be expressed in the language of strict implication alone; moreover, it cannot be expressed in any modal language, unless ¬ or ◇ is allowed. We will come back to the treatment of seriality in Section 7.3.
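The truth conditions above are directly executable on finite structures. The following is a minimal sketch (the tuple encoding and the function names `holds` and `valid_in` are ours, not the book's): atoms are strings, and ('=>', A, B) stands for the strict implication A ⇒ B.

```python
# A minimal sketch: evaluating formulas of L(=>) in a finite Kripke
# structure M = (W, R, V).  Atoms are strings; ('=>', A, B) is A => B.

def holds(W, R, V, w, formula):
    """Return True iff M, w |= formula, per the truth conditions above."""
    if isinstance(formula, str):              # propositional variable
        return formula in V[w]
    _, a, b = formula                         # strict implication A => B
    # M, w |= A => B iff every R-successor of w satisfying A satisfies B
    return all(holds(W, R, V, v, b)
               for v in W if (w, v) in R and holds(W, R, V, v, a))

def valid_in(W, R, V, formula):
    """M |= formula iff the formula holds at every world of M."""
    return all(holds(W, R, V, w, formula) for w in W)

# A two-world structure: w0 sees w1, and p holds only at w1.
W = {'w0', 'w1'}
R = {('w0', 'w1')}
V = {'w0': set(), 'w1': {'p'}}

assert holds(W, R, V, 'w0', ('=>', 'p', 'p'))      # p => p holds at w0
assert not holds(W, R, V, 'w0', ('=>', 'p', 'q'))  # w1 refutes p => q at w0
assert valid_in(W, R, V, ('=>', 'q', 'p'))         # vacuously valid here:
                                                   # no reachable q-world
```

Note how a world with no R-successors satisfies every strict implication vacuously; this is the usual behaviour of strict implication in K-style semantics.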
4. MODAL LOGICS OF STRICT IMPLICATION
Table 4.2. Some standard modal logics.

Name   Reflexivity   Transitivity   Symmetry   Euclidean   Finite chains
K
KT        *
K4                       *
S4        *              *
K5                                                 *
K45                      *                         *
KB                                       *
KTB       *                              *
S5        *              *               *         *
G                        *                                      *
2. PROOF SYSTEMS
In this section we present proof methods for all the modal systems mentioned above, with the exception of Gödel logic G.
DEFINITION 4.1. Let us fix a denumerable alphabet A = {x1, . . . , xi, . . .} of labels. A database is a finite graph of formulas labelled by A. We denote a database as a pair (∆, α), where ∆ is a finite set of labelled formulas ∆ = {x1 : A1, . . . , xn : An} and α = {(x1, x′1), . . . , (xm, x′m)} is a set of links. Let Lab(E) denote the set of labels x ∈ A occurring in E, and assume that (i) Lab(∆) = Lab(α), and (ii) if x : A ∈ ∆ and x : B ∈ ∆, then A = B.3 A trivial database has the form ({x0 : A}, ∅). The expansion of a database (Γ, α) by y : C at x, with x ∈ Lab(Γ) and y ∉ Lab(Γ), is defined as follows:
(Γ, α) ⊕x (y : C) = (Γ ∪ {y : C}, α ∪ {(x, y)}).
DEFINITION 4.2. A query Q is an expression of the form
Q = (∆, α) `? x : G, H
where (∆, α) is a database, x ∈ Lab(∆), G is a formula, and H, the history, is a set of pairs H = {(x1, q1), . . . , (xm, qm)}, where the xi are labels and the qi are atoms. We will often omit the parentheses around the two components of a database and write Q = ∆, α `? x : G, H. A query from a trivial database {x0 : A} will be written simply as
x0 : A `? x0 : B, H,
3 We will drop this condition in Section 7, when we extend the language by allowing conjunction.
and if A = ⊤, we sometimes just write `? x0 : B, H.
DEFINITION 4.3. Let α be a set of links; we introduce a family of relation symbols A^S_α(x, y), where x, y ∈ Lab(α). We consider the following conditions:
(K) (x, y) ∈ α ⇒ A^S_α(x, y),
(T) x = y ⇒ A^S_α(x, y),
(4) ∃z(A^S_α(x, z) ∧ A^S_α(z, y)) ⇒ A^S_α(x, y),
(5) ∃z(A^S_α(z, x) ∧ A^S_α(z, y)) ⇒ A^S_α(x, y),
(B) A^S_α(x, y) ⇒ A^S_α(y, x).
For K ∈ S ⊆ {K, T, 4, 5, B}, we let A^S be the least relation satisfying all the conditions in S. Thus, for instance, A^K45 is the least relation such that:
A^K45_α(x, y) ⇔ (x, y) ∈ α ∨ ∃z(A^K45_α(x, z) ∧ A^K45_α(z, y)) ∨ ∃z(A^K45_α(z, x) ∧ A^K45_α(z, y)).
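On a finite link set, the least relation of Definition 4.3 can be computed by a simple fixpoint iteration. The sketch below is ours (the function name `accessibility` and the encoding of S as a set of condition names are assumptions, not the book's notation):

```python
# Computing A^S_alpha as the least relation over the labels of alpha that
# is closed under the conditions in S, for S a subset of {K, T, 4, 5, B}.

def accessibility(alpha, labels, S):
    """Least relation satisfying the conditions in S, by fixpoint iteration."""
    A = set()
    if 'K' in S:
        A |= set(alpha)                               # (K) the links of alpha
    if 'T' in S:
        A |= {(x, x) for x in labels}                 # (T) reflexivity
    changed = True
    while changed:
        changed = False
        new = set(A)
        if '4' in S:   # (4) transitivity
            new |= {(x, y) for (x, z) in A for (z2, y) in A if z == z2}
        if '5' in S:   # (5) euclideanness
            new |= {(x, y) for (z, x) in A for (z2, y) in A if z == z2}
        if 'B' in S:   # (B) symmetry
            new |= {(y, x) for (x, y) in A}
        if new != A:
            A, changed = new, True
    return A

links = {('x0', 'x1'), ('x1', 'x2')}
labs = {'x0', 'x1', 'x2'}
AK = accessibility(links, labs, {'K'})
AK4 = accessibility(links, labs, {'K', '4'})
AK45 = accessibility(links, labs, {'K', '4', '5'})
assert AK == links                 # for K, A^K coincides with alpha itself
assert ('x0', 'x2') in AK4         # added by transitive closure
assert ('x1', 'x1') in AK45        # (5): from (x0,x1) and (x0,x1)
```

For instance, the K45 clause displayed above is exactly what the loop computes when S = {'K', '4', '5'}.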
We will use the standard abbreviations (i.e. A^S5 = A^KT5 = A^KT45).
DEFINITION 4.4 (Modal Deduction Rules). For each modal system S, the corresponding proof system, denoted by P(S), comprises the following rules, parametrized to the predicates A^S:
• (success) ∆, α `? x : q, H immediately succeeds if q is an atom and x : q ∈ ∆.
• (implication) From ∆, α `? x : A ⇒ B, H, step to (∆, α) ⊕x (y : A) `? y : B, H, where y ∉ Lab(∆) ∪ Lab(H).
• (reduction) If y : C ∈ ∆, with C = B1 ⇒ B2 ⇒ . . . ⇒ Bk ⇒ q, with q atomic, then from ∆, α `? x : q, H step to
∆, α `? u1 : B1, H ∪ {(x, q)}, . . . , ∆, α `? uk : Bk, H ∪ {(x, q)},
for some u0, . . . , uk ∈ Lab(α), with u0 = y and uk = x, such that for i = 0, . . . , k − 1, A^S_α(ui, ui+1) holds.
• (restart) If (y, r) ∈ H, then from ∆, α `? x : q, H, with q atomic, step to ∆, α `? y : r, H ∪ {(x, q)}.
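The four rules can be animated directly on finite databases. Below is a hedged sketch for S = {K}, so that A^K_α is just the link set α itself; the tuple encoding, the function names, and the crude depth bound (standing in for a real loop-checking strategy) are all ours, not the book's. The trivial database is represented with a dummy atom 'top' in place of ⊤.

```python
# A sketch of the rules of Definition 4.4 for S = {K}.  Atoms are strings,
# ('=>', A, B) is A => B; a database is a dict from labels to formulas
# plus a set of links.

from itertools import product

def strip(formula):
    """Decompose B1 => ... => Bk => q into ([B1, ..., Bk], q)."""
    body = []
    while not isinstance(formula, str):
        _, b, formula = formula
        body.append(b)
    return body, formula

def prove(db, links, x, goal, hist, depth):
    """Does db, links `? x : goal, hist succeed (within the depth bound)?"""
    if depth == 0:
        return False
    if isinstance(goal, str) and db.get(x) == goal:      # (success)
        return True
    if not isinstance(goal, str):                        # (implication)
        _, a, b = goal
        y = 'y%d' % len(db)                              # fresh label
        db2 = dict(db)
        db2[y] = a
        return prove(db2, links | {(x, y)}, y, b, hist, depth - 1)
    for y, c in db.items():                              # (reduction)
        body, q = strip(c)
        if q != goal or not body:
            continue
        k = len(body)
        for us in product(sorted(db), repeat=k):
            chain = (y,) + us
            # need uk = x and A^K(ui, ui+1), i.e. (ui, ui+1) in links
            if us[-1] == x and all((chain[i], chain[i + 1]) in links
                                   for i in range(k)):
                h2 = hist | {(x, goal)}
                if all(prove(db, links, us[i], body[i], h2, depth - 1)
                       for i in range(k)):
                    return True
    for (y, r) in hist:                                  # (restart)
        if (y, r) != (x, goal) and \
           prove(db, links, y, r, hist | {(x, goal)}, depth - 1):
            return True
    return False

imp = lambda a, b: ('=>', a, b)
# Example 4.7 below: ((p => p) => a => b) => (b => c) => a => c, in K.
f = imp(imp(imp('p', 'p'), imp('a', 'b')), imp(imp('b', 'c'), imp('a', 'c')))
assert prove({'x0': 'top'}, set(), 'x0', f, frozenset(), 8)
assert not prove({'x0': 'top'}, set(), 'x0', imp('a', 'b'), frozenset(), 8)
```

For the other systems one would replace the membership test on `links` by the closure A^S_α of Definition 4.3.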
Since most of the results that follow do not depend on the specific properties of the predicates A^S involved in the definition of a proof system P(S), we will often omit the reference to P(S). The following proposition states some easy properties of the deduction procedure.
PROPOSITION 4.5.
• (Identity) If x : A ∈ Γ, then Γ, α `? x : A, H succeeds.
• (Monotony) If Q = Γ, α `? x : C, H succeeds and Γ ⊆ ∆, α ⊆ β, H ⊆ H′, then also Q′ = ∆, β `? x : C, H′ succeeds. Moreover, any derivation of Q can be turned uniformly into a derivation of Q′ by replacing Γ with ∆, α with β, and H with H′.
• (Increasingness) Let D be any derivation of a given query; if Q1 = Γ1, α1 `? x1 : A1, H1 and Q2 = Γ2, α2 `? x2 : A2, H2 are two queries in D such that Q2 is a descendant of Q1, then Γ1 ⊆ Γ2, α1 ⊆ α2, H1 ⊆ H2.
Restricted restart
As in the case of classical logic, in any deduction of a query Q of the form ∆, α `? x : G, ∅, the restart rule can be restricted to the choice of the pair (y, r) such that r is the uppermost atomic goal that occurred in the deduction and y is the label associated to r (that is, the query in which r appears contains . . . `? y : r). Hence, if the initial query is Q = ∆, α `? x : G, ∅ and G is an atom q, such a pair is (x, q); if G has the form B1 ⇒ . . . ⇒ Bk ⇒ r, then the first pair is obtained by repeatedly applying the implication rule until we reach a query . . . `? xk : r, with xk ∉ Lab(∆). With this restriction, we do not need to keep track of the whole history any more, but only of the first pair. An equivalent formulation is to allow restart from the initial goal (and its relative label) even if it is implicational; but the re-evaluation of an implication causes a redundant increase of the database, which is why we have preferred the above formulation.
PROPOSITION 4.6. If ∆, α `? x : G, ∅ succeeds, then it succeeds by a derivation in which every application of restart is a restricted restart.
Proof.
It suffices to show the following fact: whenever, in a successful derivation, we restart from a pair (x, p) coming from a query Q preceding the current one, we still obtain a successful derivation if we restart from any pair (y, q) coming from a query Q′ preceding Q. This fact implies that if the initial query succeeds, then there is a successful derivation in which every restart application makes use only of the first pair. The proof of this fact is essentially identical to that of the (⇒) half of Proposition 2.30. We omit the details.
(1) `? x0 : ((p ⇒ p) ⇒ a ⇒ b) ⇒ (b ⇒ c) ⇒ a ⇒ c
(2) x1 : (p ⇒ p) ⇒ a ⇒ b, α `? x1 : (b ⇒ c) ⇒ a ⇒ c
(3) x2 : b ⇒ c, α `? x2 : a ⇒ c
(4) x3 : a, α `? x3 : c
(5) α `? x3 : b, (x3, c)
(6) α `? x2 : p ⇒ p, (x3, c)
(7) x4 : p, α ∪ {(x2, x4)} `? x4 : p, (x3, c)
(8) α `? x3 : a, (x3, c)
Figure 4.1. Derivation for Example 4.7.
EXAMPLE 4.7. In Figure 4.1 we show a derivation of ((p ⇒ p) ⇒ a ⇒ b) ⇒ (b ⇒ c) ⇒ a ⇒ c in P(K). By Proposition 4.6, we only record the first pair for restart, which, however, is not used in the derivation. As usual, in each node we only show the additional data, if any; thus the database in each node is given by the collection of the formulas from the root to that node. Here is an explanation of the steps: in step (2) α = {(x0, x1)}; in step (3) α = {(x0, x1), (x1, x2)}; in step (4) α = {(x0, x1), (x1, x2), (x2, x3)}; since A^K_α(x2, x3), by reduction w.r.t. x2 : b ⇒ c we get (5); since A^K_α(x1, x2) and A^K_α(x2, x3), by reduction w.r.t. x1 : (p ⇒ p) ⇒ a ⇒ b we get (6) and (8). The latter immediately succeeds as x3 : a ∈ ∆; from (6) we step to (7), which immediately succeeds.
EXAMPLE 4.8. In Figure 4.2 we show a derivation of ((((a ⇒ a) ⇒ p) ⇒ q) ⇒ p) ⇒ p in P(KTB); we use restricted restart according to Proposition 4.6. In step (2), α = {(x0, x1)}. Step (3) is obtained by reduction w.r.t. x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, as A^KTB_α(x1, x1). In step (4) α = {(x0, x1), (x1, x2)}; step (5) is obtained by restart; step (6) by reduction w.r.t. x2 : (a ⇒ a) ⇒ p, as A^KTB_α(x2, x1); in step (7) α = {(x0, x1), (x1, x2), (x1, x3)} and the query immediately succeeds.
(1) `? x0 : ((((a ⇒ a) ⇒ p) ⇒ q) ⇒ p) ⇒ p (2) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, α `? x1 : p (3) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, α `? x1 : ((a ⇒ a) ⇒ p) ⇒ q, (x1 , p) (4) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, x2 : (a ⇒ a) ⇒ p, α `? x2 : q, (x1 , p) (5) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, x2 : (a ⇒ a) ⇒ p, α `? x1 : p, (x1 , p) (6) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, x2 : (a ⇒ a) ⇒ p, α `? x1 : a ⇒ a, (x1 , p) (7) x1 : (((a ⇒ a) ⇒ p) ⇒ q) ⇒ p, x2 : (a ⇒ a) ⇒ p, x3 : a, α `? x3 : a, (x1 , p)
Figure 4.2. Derivation for Example 4.8.

3. ADMISSIBILITY OF CUT
In this section we prove the admissibility of the cut rule. The cut rule states the following: let x : A ∈ Γ; then if (1) Γ `? y : B and (2) ∆ `? z : A succeed, we can 'replace' x : A by ∆ in Γ and get a successful query from (1). There are two points to clarify. First, we need to define the involved notion of substitution, and this we do in the next definition. Furthermore, the proof systems P(S) depend uniformly on the predicate A^S, and we expect that the admissibility of cut depends on the properties of A^S. It will turn out that the admissibility of cut (proved in Theorem 4.10) holds for every proof system P(S) such that A^S satisfies the following conditions:
(i) A^S is closed under substitution of labels;
(ii) A^S_α(x, y) implies A^S_{α∪β}(x, y);
(iii) A^S_α(u, v) implies ∀x∀y (A^S_{α∪{(u,v)}}(x, y) ↔ A^S_α(x, y)).
We say that two databases (Γ, α), (∆, β) are compatible for substitution,4 if for every x ∈ Lab(Γ) ∩ Lab(∆) and for all formulas C, x : C ∈ Γ ⇔ x : C ∈ ∆.
4 We will drop this condition in Section 7, when we add conjunction.
If (Γ, α) and (∆, β) are compatible for substitution, x : A ∈ Γ, and y ∈ Lab(∆), we denote by
(Γ, α)[x : A/∆, β, y] = ((Γ − {x : A}) ∪ ∆, α[x/y] ∪ β)
the database which results by replacing x : A in (Γ, α) by (∆, β) at point y. We say that a predicate A^S is closed under substitution if, whenever A^S_α(x, y) holds, then A^S_{α[u/v]}(x[u/v], y[u/v]) also holds.
PROPOSITION 4.9. (a) If A^S satisfies condition (i) above and Γ, α `? x : C, H succeeds, then Γ[u/v], α[u/v] `? x[u/v] : C, H[u/v] also succeeds.
(b) If A^S satisfies condition (iii) above, then A^S_α(x, y) together with the success of Γ, α ∪ {(x, y)} `? u : G, H implies that Γ, α `? u : G, H succeeds.
Proof. By induction on derivation length.
THEOREM 4.10 (Admissibility of cut). Let the predicate A^S satisfy conditions (i), (ii), (iii) above. If the following queries succeed in the proof system P(S):
1. Γ[x : A], α `? u : B, H1
2. ∆, β `? y : A, H2
and (Γ, α) and (∆, β) are compatible for substitution, then also
3. (Γ, α)[x : A/∆, β, y] `? u[x/y] : B, H1[x/y] ∪ H2
succeeds in P(S).
Proof. As usual, we proceed by double induction on pairs (h, c), where h is the height of a derivation of 1. and c = cp(A). We only show the most difficult case, that is, when h, c > 0 and the first step in a derivation of 1. is by reduction with respect to x : A. Let A = D1 ⇒ . . . ⇒ Dk ⇒ q; from 1. we step to
Γ[x : A], α `? ui : Di, H1 ∪ {(u, q)},
for some u0, . . . , uk ∈ Lab(α), with u0 = x, uk = u, such that (*) for i = 0, . . . , k − 1, A^S_α(ui, ui+1) holds. By the induction hypothesis, we get that for i = 1, . . . , k,
Qi = (Γ, α)[x : A/∆, β, y] `? ui[x/y] : Di, H1[x/y] ∪ {(u, q)[x/y]} ∪ H2
succeeds.
By 2., we get that
(∆, β) ⊕y (z1 : D1) ⊕z1 . . . ⊕zk−1 (zk : Dk) `? zk : q, H2
succeeds, where we can assume that z1, . . . , zk ∉ Lab(Γ) ∪ Lab(∆) and the zi are all distinct. That is to say,
Q0 = ∆ ∪ {z1 : D1, . . . , zk : Dk}, β ∪ γ `? zk : q, H2
succeeds, with γ = {(y, z1), . . . , (zk−1, zk)}. Notice that (i) cp(Di) < cp(A) for i = 1, . . . , k, and (ii) Q0 and Q1 are compatible for substitution, whence we can apply the induction hypothesis and get that the following query succeeds:
(∆ ∪ {z1 : D1, . . . , zk : Dk}, β ∪ γ)[z1 : D1/(Γ, α)[x : A/∆, β, y], u1[x/y]] `? zk : q, H2[z1/u1[x/y]] ∪ H1[x/y] ∪ H2.
By the hypothesis on the zi, we also have zi ∉ Lab(H2). Thus, by definition of substitution, we have that
Q′1 = (Γ − {x : A}) ∪ ∆ ∪ {z2 : D2, . . . , zk : Dk}, α[x/y] ∪ β ∪ γ[z1/u1[x/y]] `? zk : q, H1[x/y] ∪ H2
succeeds. The compatibility constraint is satisfied by Q′1 and Q2, and since cp(D2) < cp(A), we can apply the induction hypothesis again. By repeating this argument up to Dk, we finally get that
(Γ − {x : A}) ∪ ∆, α[x/y] ∪ β ∪ γ[z1/u1[x/y], . . . , zk/uk[x/y]] `? uk[x/y] : q, H1[x/y] ∪ H2
succeeds, so that, by definition of γ, since u0 = x and uk = u,
Q′′ = (Γ − {x : A}) ∪ ∆, α[x/y] ∪ β ∪ {(y, u1[x/y]), . . . , (uk−1[x/y], u[x/y])} `? u[x/y] : q, H1[x/y] ∪ H2
succeeds. By (*) and conditions (i) and (ii) on A^S, we get:
A^S_{α[x/y]∪β}(y, u1[x/y]), . . . , A^S_{α[x/y]∪β}(uk−1[x/y], u[x/y]).
Hence, by repeatedly applying Proposition 4.9(b) to the query Q′′, we get that
(Γ − {x : A}) ∪ ∆, α[x/y] ∪ β `? u[x/y] : q, H1[x/y] ∪ H2,
that is 3., succeeds.
COROLLARY 4.11. Under the same conditions as above, if x : A `? x : B succeeds and x : B `? x : C succeeds, then x : A `? x : C also succeeds.
COROLLARY 4.12. If K ∈ S ⊆ {K, T, 4, 5, B}, then cut is admissible in the proof system P(S).
Proof. One can easily check that the predicate A^S satisfies the conditions of the previous theorem.

4. SOUNDNESS AND COMPLETENESS
We need to give a semantic meaning to queries. We do this by introducing a suitable notion of realization, similarly to what we did in the cases of intuitionistic and intermediate logics (see Definition 2.54).
DEFINITION 4.13 (Realization). Let A^S be an accessibility predicate. Given a database (Γ, α) and a Kripke model M = (W, R, V), a mapping f : Lab(Γ) → W is called a realization of (Γ, α) in M with respect to A^S if the following hold:
1. A^S_α(x, y) implies f(x)Rf(y);
2. if x : A ∈ Γ, then M, f(x) |= A.
We say that a query Q = Γ, α `? x : G, H is valid in S if for every S-model M and every realization f of (Γ, α), we have either M, f(x) |= G, or for some (y, r) ∈ H, M, f(y) |= r.
THEOREM 4.14 (Soundness). Let Q = Γ, α `? x : G, H succeed in the proof system P(S); then it is valid in S.
Proof. Let M = (W, R, V) be an S-model and f be a realization of (Γ, α) in M; we proceed by induction on the height h of a successful derivation of Q. For the base case, we have that Q immediately succeeds, G is an atom q, and x : q ∈ Γ; by hypothesis M, f(x) |= q. In the induction step, we have several cases. Let the first step in the derivation be obtained by implication; then G = A ⇒ B. Suppose by way of contradiction that M, f(x) ⊭ A ⇒ B and for all (y, r) ∈ H, M, f(y) ⊭ r. We have that for some w ∈ W such that f(x)Rw, M, w |= A, but M, w ⊭ B. From Q we step to Q′ = (Γ, α) ⊕x (u : A) `? u : B, H, with u ∉ Lab(Γ) ∪ Lab(H), which succeeds by a smaller derivation. Let f′(z) = f(z) for z ≠ u, and f′(u) = w. Then f′ is a realization of (Γ, α) ⊕x (u : A), and by the induction hypothesis either M, f′(u) |= B, whence M, w |= B, or for some (y, r) ∈ H, M, f′(y) |= r, whence M, f(y) |= r; in both cases we have a contradiction. Let the first step in the derivation be obtained by reduction; then G is an atom q, there is z : C ∈ Γ, with C = B1 ⇒ . . . ⇒ Bk ⇒ q, and from Q we step to
Γ, α `? u1 : B1, H ∪ {(x, q)}, . . . , Γ, α `? uk : Bk, H ∪ {(x, q)},
for some u0, . . . , uk ∈ Lab(α), with u0 = z, uk = x, such that for i = 0, . . . , k − 1, A^S_α(ui, ui+1) holds. By hypothesis, we have
1. M, f(z) |= B1 ⇒ . . . ⇒ Bk ⇒ q, and
2. f(ui)Rf(ui+1), for i = 0, . . . , k − 1.
By the induction hypothesis, either (a) for some (y, r) ∈ H, M, f(y) |= r, or (b) for i = 1, . . . , k, M, f(ui) |= Bi. In case (a) we are done. Suppose (b) holds. From u0 = z, (1) and (2), we get M, f(ui) |= Bi+1 ⇒ . . . ⇒ Bk ⇒ q, for i = 1, . . . , k − 1, and finally M, f(uk) |= q, that is, M, f(x) |= q. If the first step in the derivation is obtained by restart, then the claim immediately follows by the induction hypothesis.
COROLLARY 4.15. If x0 : A `? x0 : B, ∅ succeeds in P(S), then A |=S B holds. In particular, if `? x0 : A, ∅ succeeds in P(S), then A is valid in S.
THEOREM 4.16 (Completeness). Given a query Q = Γ, α `? x : A, H, if Q is S-valid then Q succeeds in the proof system P(S).
Proof. By contraposition, we prove that if Q = Γ, α `? x : A, H does not succeed in the proof system P(S), then there are an S-model M and a realization f of (Γ, α) such that M, f(x) ⊭ A and, for any (y, r) ∈ H, M, f(y) ⊭ r. The proof is very similar to that of Theorem 2.58 for the completeness of the intuitionistic procedure with disjunction. As usual, we construct an S-model by extending the database, through the evaluation of all possible formulas at every world (each represented by one label) of the database. Since such evaluation may lead, for implication formulas, to the creation of new worlds, we must carry on the evaluation process on these new worlds. Therefore, in the construction we consider an enumeration of pairs (xi, Ai), where xi is a label and Ai is a formula. Assume Γ, α `? x : A, H fails in P(S). We let A be a denumerable alphabet of labels and L be the underlying propositional language.
Let (xi, Ai), for i ∈ ω, be an enumeration of the pairs in A × L, starting with the pair (x, A) and containing infinitely many repetitions, that is:
(x0, A0) = (x, A),
∀y ∈ A ∀F ∈ L ∀n ∃m > n (y, F) = (xm, Am).
Given such an enumeration we define (i) a sequence of databases (Γn, αn), (ii) a sequence of histories Hn, and (iii) a new enumeration of pairs (yn, Bn), as follows:
• (step 0) Let (Γ0, α0) = (Γ, α), H0 = H, (y0, B0) = (x, A).
• (step n+1) Given (yn, Bn), if yn ∈ Lab(Γn) and Γn, αn `? yn : Bn, Hn fails, then proceed according to (a), else according to (b).
(a) If Bn is atomic, then we set
Hn+1 = Hn ∪ {(yn, Bn)}, (Γn+1, αn+1) = (Γn, αn), (yn+1, Bn+1) = (xk+1, Ak+1),
where k = max{t ≤ n | ∃s ≤ n (ys, Bs) = (xt, At)};
else let Bn = C ⇒ D; then we set
Hn+1 = Hn, (Γn+1, αn+1) = (Γn, αn) ⊕yn (xm : C), (yn+1, Bn+1) = (xm, D),
where xm = min{xt ∈ A | xt ∉ Lab(Γn) ∪ Lab(Hn)}.
(b) We set
Hn+1 = Hn, (Γn+1, αn+1) = (Γn, αn), (yn+1, Bn+1) = (xk+1, Ak+1),
where k = max{t ≤ n | ∃s ≤ n (ys, Bs) = (xt, At)}.
LEMMA 4.17. ∀k ∃n ≥ k (xk, Ak) = (yn, Bn).
Proof. By induction on k. If k = 0, the claim holds by definition. Let (xk, Ak) = (yn, Bn). (i) If yn ∉ Lab(Γn), or Γn, αn `? yn : Bn, Hn succeeds, or Bn is atomic, then (xk+1, Ak+1) = (yn+1, Bn+1). (ii) Otherwise, let Bn = C1 ⇒ . . . ⇒ Ct ⇒ r (t > 0); then (xk+1, Ak+1) = (yn+t+1, Bn+t+1).
LEMMA 4.18. For all n ≥ 0, if Γn, αn `? yn : Bn, Hn fails, then ∀m ≥ n, Γm, αm `? yn : Bn, Hm fails.
Proof. By induction on cp(Bn) = c. If c = 0, that is, Bn is an atom, say q, then we proceed by induction on m ≥ n + 1.
• (m = n + 1) We have that Γn, αn `? yn : q, Hn fails; then also Γn, αn `? yn : q, Hn ∪ {(yn, q)} fails, whence, by construction,
Γn+1, αn+1 `? yn : q, Hn+1 fails.
• (m > n + 1) Suppose we have proved the claim up to m ≥ n + 1, and suppose by way of contradiction that Γm, αm `? yn : q, Hm fails, but (i) Γm+1, αm+1 `? yn : q, Hm+1 succeeds. At step m, (ym, Bm) is considered; it must be that ym ∈ Lab(Γm) and (ii) Γm, αm `? ym : Bm, Hm fails. We have two cases, according to the form of Bm. If Bm is an atom r, then, as (yn, q) ∈ Hm, from query (ii) by restart we can step to Γm, αm `? yn : q, Hm ∪ {(ym, r)}, which is the same as Γm+1, αm+1 `? yn : q, Hm+1; this succeeds, and we get a contradiction. If Bm = C1 ⇒ . . . ⇒ Ck ⇒ r, with k > 0, then from query (ii) we step in k steps to Γm+k, αm+k `? ym+k : r, Hm+k, where (Γm+k, αm+k) = (Γm, αm) ⊕ym (ym+1 : C1) ⊕ym+1 . . . ⊕ym+k−1 (ym+k : Ck) and Hm+k = Hm; then, by restart, since (yn, q) ∈ Hm+k, we step to (iii) Γm+k, αm+k `? yn : q, Hm+k ∪ {(ym+k, r)}. Since query (i) succeeds, by monotony query (iii) also succeeds, whence query (ii) succeeds, contradicting the hypothesis. Let cp(Bn) = c > 0, that is, Bn = C ⇒ D. By hypothesis Γn, αn `? yn : C ⇒ D, Hn fails. Then, by construction and by the computation rules, Γn+1, αn+1 `? yn+1 : D, Hn+1 fails, and hence, by the induction hypothesis, ∀m ≥ n + 1, Γm, αm `? yn+1 : D, Hm fails. Suppose by way of contradiction that for some m ≥ n + 1, Γm, αm `? yn : C ⇒ D, Hm succeeds. This implies that, for some z ∉ Lab(Γm) ∪ Lab(Hm),
(1) (Γm, αm) ⊕yn (z : C) `? z : D, Hm succeeds.
Since yn+1 : C ∈ Γn+1 ⊆ Γm, αn+1 ⊆ αm, Hn+1 ⊆ Hm, by monotony we get
(2) Γm, αm `? yn+1 : C, Hm succeeds.
The databases involved in queries (1) and (2) are clearly compatible for substitution; hence by cut we obtain that Γm, αm `? yn+1 : D, Hm succeeds, and we have a contradiction.
LEMMA 4.19. (i) ∀m, Γm, αm `? x : A, Hm fails; (ii) ∀m, if (y, r) ∈ Hm, then Γm, αm `? y : r, Hm fails.
Proof. (i) is immediate by the previous lemma. To prove (ii), suppose it does not hold for some m and (y, r) ∈ Hm, i.e. Γm, αm `? y : r, Hm succeeds. But then we can easily find a successful derivation of Γm, αm `? x : A, Hm, against (i); in such a derivation (y, r) ∈ Hm is used in a restart step.
LEMMA 4.20. If Bn = C ⇒ D and Γn, αn `? yn : C ⇒ D, Hn fails, then there is a y ∈ A such that for k ≤ n, y ∉ Lab(Γk), and ∀m > n:
(i) (yn, y) ∈ αm,
(ii) Γm, αm `? y : C, Hm succeeds,
(iii) Γm, αm `? y : D, Hm fails.
Proof. By construction, we can take y = yn+1, the new point created at step n + 1, so that (i), (ii), (iii) hold for m = n + 1. In particular,
(*) Γn+1, αn+1 `? yn+1 : D, Hn+1 fails.
Since the (Γm, αm) are non-decreasing (w.r.t. inclusion), we immediately have that (i) and (ii) also hold for every m > n + 1. By construction, we know that Bn+1 = D, whence by (*) and Lemma 4.18, (iii) also holds for every m > n + 1.
Construction of the canonical model
We define an S-model M = (W, R, V) as follows:
• W = ⋃n Lab(Γn);
• xRy ≡ ∃n A^S_{αn}(x, y);
• V(x) = {q | ∃n (x ∈ Lab(Γn) ∧ Γn, αn `? x : q, Hn succeeds)}.
LEMMA 4.21. The relation R as defined above has the same properties as A^S; e.g. if S = S4, that is, A^S is transitive and reflexive, then so is R, and the same happens in all the other cases.
Proof. One easily verifies the claim in each case. For instance, we verify that if A^S is Euclidean (condition (5) of Definition 4.3), then so is R. Assume xRy and xRz hold; by definition, there are n and m such that A^S_{αn}(x, y) and A^S_{αm}(x, z). Let k = max{n, m}; since A^S is monotonic, we have both A^S_{αk}(x, y) and A^S_{αk}(x, z), and by condition (5) also A^S_{αk}(y, z), whence yRz holds.
LEMMA 4.22. For all x ∈ W and formulas B,
M, x |= B ⇔ ∃n (x ∈ Lab(Γn) ∧ Γn, αn `? x : B, Hn succeeds).
Proof. We prove both directions by mutual induction on cp(B). If B is an atom, then the claim holds by definition. Thus, assume B = C ⇒ D.
(⇐) Suppose that for some m, Γm, αm `? x : C ⇒ D, Hm succeeds. Let xRy and M, y |= C, for some y. By definition of R, we have that for some n1, A^S_{αn1}(x, y) holds. Moreover, by the induction hypothesis, for some n2, Γn2, αn2 `? y : C, Hn2 succeeds. Let k = max{n1, n2, m}; then we have:
1. Γk, αk `? x : C ⇒ D, Hk succeeds,
2. Γk, αk `? y : C, Hk succeeds,
3. A^S_{αk}(x, y).
So that from 1. we also have:
1′. (Γk, αk) ⊕x (z : C) `? z : D, Hk succeeds (with z ∉ Lab(Γk) ∪ Lab(Hk)).
We can cut 1′. and 2., and obtain that Γk, αk ∪ {(x, y)} `? y : D, Hk succeeds. Hence, by 3. and Proposition 4.9(b), we get that Γk, αk `? y : D, Hk succeeds, and by the induction hypothesis, M, y |= D.
(⇒) Suppose by way of contradiction that M, x |= C ⇒ D, but for all n, if x ∈ Lab(Γn), then Γn, αn `? x : C ⇒ D, Hn fails. Let x ∈ Lab(Γn); then there are m ≥ k > n such that (x, C ⇒ D) = (xk, Ak) = (ym, Bm) is considered at step m + 1, so that we have:
Γm, αm `? ym : C ⇒ D, Hm fails.
By Lemma 4.20, there is a y ∈ A such that (a) for t ≤ m, y ∉ Lab(Γt), and (b) ∀m′ > m:
(i) (ym, y) ∈ αm′,
(ii) Γm′, αm′ `? y : C, Hm′ succeeds,
(iii) Γm′, αm′ `? y : D, Hm′ fails.
By (i), xRy holds; by (ii) and the induction hypothesis, we have M, y |= C. By (a) and (iii), we get that for all n, if y ∈ Lab(Γn), then Γn, αn `? y : D, Hn fails. Hence, by the induction hypothesis, we have M, y ⊭ D, and we get a contradiction.
Proof of the Completeness Theorem 4.16. We are now able to conclude the proof of the completeness theorem. Let f(z) = z for every z ∈ Lab(Γ0), where (Γ0, α0) = (Γ, α) is the original database. It is easy to see that f is a realization of (Γ, α) in M: if A^S_α(u, v) then A^S_{α0}(u, v), hence f(u)Rf(v).
If u : C ∈ Γ = Γ0, then by identity and the previous lemma we have M, f(u) |= C. On the other hand, by Lemma 4.19 and the previous lemma, we have M, f(x) ⊭ A and M, f(y) ⊭ r for every (y, r) ∈ H. This concludes the proof.
COROLLARY 4.23. If A |=S B holds, then x0 : A `? x0 : B, ∅ succeeds in P(S). In particular, if A is valid in the modal system S, then `? x0 : A, ∅ succeeds in P(S).

5. SIMPLIFICATION FOR SPECIFIC SYSTEMS
In this section we show that for most of the modal logics we have considered, the use of labelled databases is not necessary, and we can simplify either the structure of the databases or the deduction rules. We point out that the main results we have obtained for the general formulation with labels (cut-admissibility, completeness, restriction of the restart rule, etc.) can be proved directly for each simplified system. If we want to check the validity of a formula A, we evaluate A from a trivial database: `? x0 : A, ∅. Restricting our attention to computations from trivial databases, we observe that we can only generate databases which have the form of trees.
DEFINITION 4.24. A database (∆, α) is called a tree-database if the set of links α forms a tree.
Let (∆, α) be a tree-database and x ∈ Lab(∆); we define the sub-database Path(∆, α, x) as the list of labelled formulas lying on the path from the root of α, say x0, up to x, that is, Path(∆, α, x) = (∆′, α′), where:
α′ = {(x0, x1), (x1, x2), . . . , (xn−1, xn) | xn = x and for i = 1, . . . , n, (xi−1, xi) ∈ α},
∆′ = {y : A ∈ ∆ | y ∈ Lab(α′)}.
PROPOSITION 4.25. If a query Q occurs in any derivation from a trivial database, then Q = ∆, α `? z : B, H, where (∆, α) is a tree-database. From now on we restrict our consideration to tree-databases.
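Since the links of a tree-database give each label a unique parent, Path(∆, α, x) can be computed by walking from x back to the root. A small sketch (the function name `path` and the dict/set encoding are ours):

```python
# Computing Path(Delta, alpha, x): the branch of a tree database from the
# root down to x, as a (formulas, links) pair.

def path(delta, alpha, x):
    """Return the sub-database (formulas, links) along the branch root -> x."""
    parent = {child: par for (par, child) in alpha}   # tree: unique parents
    branch = [x]
    while branch[-1] in parent:
        branch.append(parent[branch[-1]])
    branch.reverse()                                  # root first
    links = {(branch[i], branch[i + 1]) for i in range(len(branch) - 1)}
    return {y: delta[y] for y in branch}, links

delta = {'x0': 'A', 'x1': 'B', 'x2': 'C', 'x3': 'D'}
alpha = {('x0', 'x1'), ('x1', 'x2'), ('x0', 'x3')}    # x3 is a sibling branch
sub, sublinks = path(delta, alpha, 'x2')
assert set(sub) == {'x0', 'x1', 'x2'}                 # x3 is not on the path
assert sublinks == {('x0', 'x1'), ('x1', 'x2')}
```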
5.1 Simplification for K, K4, S4, KT: Databases as Lists
We show that for the systems K, K4, S4, KT the proof procedure can be simplified, in the sense that: (i) the databases are lists of formulas, (ii) the restart rule is not needed. Given a successful query Q = ∆, α `? z : B, H and a successful derivation D of Q, let R(Q, D) be the number of applications of the restart rule in D.
PROPOSITION 4.26. If Q = ∆, α `? x1 : A1, {(x2, A2), . . . , (xk, Ak)} succeeds (with A2, . . . , Ak atoms) by a derivation D such that R(Q, D) = m, then, for some i = 1, . . . , k, the query Qi = Path(∆, α, xi) `? xi : Ai, ∅ succeeds by a derivation Di with R(Qi, Di) ≤ m.
Proof. Fix a derivation D with R(Q, D) = m; we proceed by induction on the height h of D. If h = 0, then A1 is atomic and x1 : A1 ∈ ∆; since x1 : A1 ∈ Path(∆, α, x1), we have that Q1 = Path(∆, α, x1) `? x1 : A1, ∅ succeeds by a one-step derivation D1 with R(Q1, D1) = 0, and we are done. Let h > 0. We have several cases according to the first deduction step of D. Let H = {(x2, A2), . . . , (xk, Ak)}.
• (implication) In this case A1 = B ⇒ C, and the only child of Q is Q′ = ∆′, α′ `? y : C, H, where (∆′, α′) = (∆, α) ⊕x1 (y : B), for y ∉ Lab(∆). We have that Q′ succeeds by a subderivation D′ with R(Q′, D′) = m; hence, by the induction hypothesis, we have that either
Q′′ = Path(∆′, α′, y) `? y : C, ∅ succeeds by a derivation D′′ with R(Q′′, D′′) ≤ R(Q′, D′), or Q′i = Path(∆′, α′, xi) `? xi : Ai, ∅ succeeds by a derivation D′i with R(Q′i, D′i) ≤ R(Q′, D′). In the former case, it is sufficient to append Path(∆, α, x1) `? x1 : B ⇒ C, ∅ to the top of D′′ to obtain the conclusion, since Path(∆′, α′, y) = Path(∆, α, x1) ⊕x1 (y : B). In the latter case, we immediately conclude, as Path(∆′, α′, xi) = Path(∆, α, xi).
• (restart) In this case A1 is atomic, and the only child of Q is Q′i = ∆, α `? xi : Ai, H ∪ {(x1, A1)}, which succeeds by a subderivation D′i of D. Hence, by the induction hypothesis, for some j = 1, . . . , k, Qj = Path(∆, α, xj) `? xj : Aj, ∅ succeeds by a derivation Dj with R(Qj, Dj) ≤ R(Q′i, D′i) < R(Q, D).
• (reduction) In this case A1 is an atom q and there is z : C ∈ ∆, with C = B1 ⇒ . . . ⇒ Bt ⇒ q; then Q has t children Q′j, j = 1, . . . , t:
Q′j = ∆, α `? zj : Bj, H ∪ {(x1, A1)},
such that A^S_α(z, z1), . . . , A^S_α(zt−1, zt) hold, with zt = x1. Every Q′j succeeds by a shorter derivation D′j, with
R(Q, D) = Σ_{j=1..t} R(Q′j, D′j).
By the induction hypothesis, for each j = 1, . . . , t, we have that either (i) Path(∆, α, zj) `? zj : Bj, ∅ succeeds, or (ii) Path(∆, α, xi) `? xi : Ai, ∅ (i > 1) succeeds, or (iii) Path(∆, α, x1) `? x1 : q, ∅ succeeds, by a derivation D∗j whose R-degree is no greater than that of the derivation of Q′j. Clearly, if for some j either (ii) or (iii) holds, we are done. Thus, suppose that for j = 1, . . . , t, (i) always holds. Call, for each j, Q′′j the query involved in (i). We show that z, z1, . . . , zt−1 must be on the path from the root x0 to x1. Let z0 = z, zt = x1. It is sufficient to prove that for j = 0, . . . , t − 1, zt−j−1 is on the path from x0 to zt−j. Recall that A^S_α(zt−j−1, zt−j) holds. Since α is a tree, this means that either (a) zt−j−1 is the parent of zt−j, or (b)
zt−j−1 = zt−j, or (c) zt−j−1 is an ancestor of zt−j (for K we have (a), for KT (a)+(b), for K4 (c), for S4 (b)+(c)). In all cases the claim follows, since the path from the root to zt−j is unique. An immediate consequence of what we have shown is that for j = 1, . . . , t, Path(∆, α, zj) ⊆ Path(∆, α, x1), so that by monotony, for j = 1, . . . , t, Q′′j = Path(∆, α, x1) `? zj : Bj, ∅ succeeds, and it is easy to see that it does so by a derivation D′′j with an R-degree no greater than that of D∗j. Since z : C ∈ Path(∆, α, x1), we obtain a derivation D1 of Q1 = Path(∆, α, x1) `? x1 : q, ∅ by making a tree with root Q1 and appending to Q1 each D′′j. We observe that
R(Q1, D1) = Σ_{j=1..t} R(Q′′j, D′′j) ≤ Σ_{j=1..t} R(Q′j, D′j) = R(Q, D).
PROPOSITION 4.27. Let Q = ∆, α `? x : A, ∅. If Q succeeds by a derivation D with R(Q, D) = k > 0, then there is a successful derivation D′ of Q with R(Q, D′) < k.
Proof. Suppose k > 0. Inspecting D from the root downwards, we can find a query Q1 = ∆, β `? y : q, H1 such that (y, q) is used in a restart step at some descendant, say Q2, of Q1. We say that Q2 'makes use' of Q1. Moreover, we can choose Q1 such that no query in D makes use of an ancestor of Q1 in a restart step. Let Q2 be a descendant of Q1 which makes use of Q1, and let Q3 be the child of Q2 obtained by restart; we have
Q2 = Σ, γ `? z : r, H2, Q3 = Σ, γ `? y : q, H2 ∪ {(z, r)}.
Let Di (i = 1, 2, 3) be the subderivation of D with root Qi. Since no query in D1 makes use of an ancestor of Q1 in a restart step, we obtain that the query Q′1 = ∆, β `? y : q, ∅ succeeds by a derivation D′1 obtained by removing the history H1 from every node of D1. The derivation D′1 will contain queries Q′2 and Q′3 corresponding to Q2 and Q3:
Q′2 = Σ, γ `? z : r, H2 − H1, Q′3 = Σ, γ `? y : q, (H2 − H1) ∪ {(z, r)}.
Let D′3 be the subderivation of D′1 with root Q′3. We have
(*) R(Q′3, D′3) = R(Q3, D3) < R(Q2, D2) ≤ R(Q1, D1).
By the previous proposition, one of the following (a), (b), (c) holds:
(a) The query Qa = Path(Σ, γ, y) `? y : q, ∅ succeeds by a derivation Da such that R(Qa, Da) ≤ R(Q′3, D′3).
(b) The query Qb = Path(Σ, γ, z) `? z : r, ∅ succeeds by a derivation Db such that R(Qb, Db) ≤ R(Q′3, D′3).
(c) For some (uj, pj) ∈ H2 − H1, the query Qc = Path(Σ, γ, uj) `? uj : pj, ∅ succeeds by a derivation Dc such that R(Qc, Dc) ≤ R(Q′3, D′3). Moreover, there exists a query of the form Q′j = ∆j, γj `? uj : pj, Hj in D′1 on the branch from Q′1 to Q′2, and there exists a corresponding query Qj = ∆j, γj `? uj : pj, Hj ∪ H1 occurring in D along the branch from Q1 to Q2.
In case (a) we have that Path(Σ, γ, y) ⊆ (∆, β); hence, by monotony, Q1 succeeds by a derivation D′a such that (by (*)) R(Q1, D′a) = R(Qa, Da) < R(Q1, D1). In case (b) we obviously have Path(Σ, γ, z) ⊆ (Σ, γ); hence, by monotony, Q2 succeeds by a derivation D′b such that (by (*)) R(Q2, D′b) = R(Qb, Db) < R(Q2, D2). In case (c) we have Path(Σ, γ, uj) ⊆ (∆j, γj); hence, by monotony, Qj succeeds by a derivation D′c such that R(Qj, D′c) = R(Qc, Dc). If we call Dj the subderivation of D with root Qj, we have R(Qj, Dj) > R(Q3, D3), and hence (by (*))
R(Qj, D′c) = R(Qc, Dc) ≤ R(Q′3, D′3) = R(Q3, D3) < R(Qj, Dj).
According to the case (a), (b), (c), we obtain a successful derivation D′ with R(Q, D′) < R(Q, D) by replacing in D either D1 by D′a, or D2 by D′b, or Dj by D′c.
By repeatedly applying the previous proposition, we get:
THEOREM 4.28. If ∆, α `? x : A, ∅ succeeds, then Path(∆, α, x) `? x : A, ∅ succeeds without using restart.
By virtue of this theorem we can reformulate the proof system for the logics from K to S4 as follows. A database is simply a list of formulas A1, . . . , An, which stands for the labelled database ({x1 : A1, . . . , xn : An}, α), where α = {(x1, x2), . . . , (xn−1, xn)}. A query has the form
A1, . . . , An `? B
4. MODAL LOGICS OF STRICT IMPLICATION
which represents {x1 : A1, . . . , xn : An}, α `? xn : B. The history has been omitted since restart is not needed. Letting ∆ = A1, . . . , An, we reformulate the predicates A^S as relations between formulas within a database, A^S(∆, Ai, Aj); in particular we can define:

A^K(∆, Ai, Aj) ≡ i + 1 = j
A^KT(∆, Ai, Aj) ≡ i = j ∨ i + 1 = j
A^K4(∆, Ai, Aj) ≡ i < j
A^S4(∆, Ai, Aj) ≡ i ≤ j

The rules become:

• (success) ∆ `? q succeeds if ∆ = A1, . . . , An and An = q;

• (implication) from ∆ `? A ⇒ B step to ∆, A `? B;

• (reduction) from ∆ `? q step to ∆i `? Di, for i = 1, . . . , k, if there is a formula Aj = D1 ⇒ . . . ⇒ Dk ⇒ q ∈ ∆ and there are integers j = j0 ≤ j1 ≤ . . . ≤ jk = n such that, for i = 1, . . . , k, A^S(∆, A_{j_{i−1}}, A_{j_i}) holds and ∆i = A1, . . . , A_{j_i}.

EXAMPLE 4.29. We show that ((b ⇒ a) ⇒ b) ⇒ c ⇒ (b ⇒ a) ⇒ a is a theorem of S4.

`? ((b ⇒ a) ⇒ b) ⇒ c ⇒ (b ⇒ a) ⇒ a
(b ⇒ a) ⇒ b `? c ⇒ (b ⇒ a) ⇒ a
(b ⇒ a) ⇒ b, c `? (b ⇒ a) ⇒ a
(b ⇒ a) ⇒ b, c, b ⇒ a `? a    reduction w.r.t. b ⇒ a (1)
(b ⇒ a) ⇒ b, c, b ⇒ a `? b    reduction w.r.t. (b ⇒ a) ⇒ b (2)
(b ⇒ a) ⇒ b, c, b ⇒ a `? b ⇒ a
(b ⇒ a) ⇒ b, c, b ⇒ a, b `? a    reduction w.r.t. b ⇒ a
(b ⇒ a) ⇒ b, c, b ⇒ a, b `? b    success
This formula fails in both KT and K4, and therefore also fails in K: reduction at step (1) is allowed in KT but not in K4, whereas reduction at step (2) is allowed in K4 but not in KT.
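The unlabelled system above is directly executable. The sketch below, in Python, implements the three rules for a database given as a list; formulas are encoded as atoms (strings) or nested pairs ('=>', A, B), and the predicates A^S become tests on list positions. The encoding and the depth bound are our own implementation choices, not part of the text.

```python
def strip_clause(f):
    """Split D1 => ... => Dk => q into ([D1, ..., Dk], q)."""
    body = []
    while isinstance(f, tuple):
        body.append(f[1])
        f = f[2]
    return body, f

def index_paths(start, end, k, n, acc):
    """All sequences j0..jk with j0 = start, jk = end and acc(j_{i-1}, j_i)."""
    if k == 0:
        if start == end:
            yield [start]
        return
    for nxt in range(n + 1):
        if acc(start, nxt):
            for rest in index_paths(nxt, end, k - 1, n, acc):
                yield [start] + rest

def prove(db, goal, acc, depth):
    """Goal-directed search for the restart-free systems K, KT, K4, S4."""
    if depth == 0:
        return False
    if isinstance(goal, tuple):                       # (implication) rule
        return prove(db + [goal[1]], goal[2], acc, depth - 1)
    if db and db[-1] == goal:                         # (success) rule
        return True
    n = len(db) - 1
    for j, clause in enumerate(db):                   # (reduction) rule
        body, head = strip_clause(clause)
        if head != goal:
            continue
        for path in index_paths(j, n, len(body), n, acc):
            if all(prove(db[:path[i] + 1], body[i - 1], acc, depth - 1)
                   for i in range(1, len(body) + 1)):
                return True
    return False

ACC = {'K':  lambda i, j: i + 1 == j,
       'KT': lambda i, j: i == j or i + 1 == j,
       'K4': lambda i, j: i < j,
       'S4': lambda i, j: i <= j}

imp = lambda a, b: ('=>', a, b)
# Example 4.29: ((b => a) => b) => c => (b => a) => a
F = imp(imp(imp('b', 'a'), 'b'), imp('c', imp(imp('b', 'a'), 'a')))
```

As just observed in the text, the formula of Example 4.29 succeeds in S4 and fails in K, KT and K4.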
5.2 Simplification for K5, K45, S5: Databases as Clusters

In this section we give an unlabelled formulation of the logics K5, K45, S5. The next two propositions hold for S = K5 and S = K45.

PROPOSITION 4.30. Let Q = Γ, α `? x : G, H be any query which occurs in a deduction from a trivial database x0 : A `? x0 : B, H0; then ∀z ∈ Lab(α), ¬A^S_α(z, x0).

Proof. We give the proof for K5 only; the one for K45 is similar. By Proposition 4.25, α is a tree with root x0; thus if (x0, v) ∈ α, then for every descendant z of x0 (that is, for every z ∈ Lab(α)), we have

(*) (z, x0) ∉ α.

By definition, we know that A^{K5}_α is the least Euclidean closure of α. By a standard argument, we have:

A^S_α(x, y) ⇔ ∃n A^S_{n,α}(x, y), where
A^S_{0,α}(x, y) ≡ (x, y) ∈ α,
A^S_{n+1,α}(x, y) ≡ A^S_{n,α}(x, y) ∨ ∃z (A^S_{n,α}(z, x) ∧ A^S_{n,α}(z, y)).

We show by induction on n that ∀n ¬A^S_{n,α}(z, x0). For n = 0 it holds by (*). Assume the claim holds for n; if A^S_{n+1,α}(y, x0) held for some y, then by the induction hypothesis there would exist some z such that A^S_{n,α}(z, x0) ∧ A^S_{n,α}(z, y), contrary to the induction hypothesis.

PROPOSITION 4.31. Let Q = ∆, β `? x : G, H be any query which occurs in a deduction from a trivial database x0 : A `? x0 : B, H0; if (x0, y1), (x0, y2) ∈ β, then y1 = y2.

Proof. The proof is the same for both K5 and K45. Fix a deduction D. Assume by way of contradiction that (x0, y1), (x0, y2) ∈ β, but y1 ≠ y2, for some query Q = ∆, β `? x : G, H appearing in D. Since at the root of D the graph is empty, the points y1, y2, or better the links (x0, y1), (x0, y2), have been created by the evaluation of some queries which are ancestors of Q. Without loss of generality we can assume that y1 has been introduced before y2. That is, above Q, in the same branch, there is a query Q' = Γ, β' `? x0 : C ⇒ D, H', whose child is Q'' = (Γ, β') ⊕x0 (y2 : C) `? y2 : D, H',
with (x0, y1) ∈ β' and β' ∪ {(x0, y2)} ⊆ β. Query Q is Q'' or one of its descendants. Since β' ≠ ∅, Q' cannot be the root of D; it is easy to see that Q' cannot be obtained either by the implication rule (x0 is not a new point), or by restart (C ⇒ D is not atomic). Hence the only possibility is that Q' has been obtained by reduction w.r.t. some z : E ∈ Γ, with E = F1 ⇒ . . . ⇒ Fk ⇒ q, from a query Γ, β' `? u : q, H''. Thus, there are some ui, for i = 1, . . . , k, such that, letting u0 = z and uk = u, A^S_{β'}(u_{i−1}, u_i) holds, and it must be uj = x0 and Fj = C ⇒ D for some 1 ≤ j ≤ k. But this implies that A^S_{β'}(u_{j−1}, x0) holds, contradicting the previous proposition.

PROPOSITION 4.32. Let Q = ∆, α `? x : G, H be any query which occurs in a P(K5) deduction from a trivial database x0 : A `? x0 : B, H0. Let R^{K5}_α(x, y) be defined as follows:

R^{K5}_α(x, y) ≡ (x = x0 ∧ (x0, y) ∈ α) ∨ (x, y ∈ Lab(α) ∧ x ≠ x0 ∧ y ≠ x0).

Then we have R^{K5}_α(x, y) ≡ A^{K5}_α(x, y).

Proof. (⇒) By induction on |α|, we show that R^{K5}_α(x, y) → A^{K5}_α(x, y). If α = ∅, then R^{K5}_α(x, y) does not hold and the claim trivially follows. Let α = {(x0, v)} with v ≠ x0; then

R^{K5}_α(x, y) ≡ (x = x0 ∧ y = v) ∨ (x = v ∧ y = v).

In the first case, (x = x0 ∧ y = v), we conclude that A^{K5}_α(x, y), since (x, y) ∈ α → A^{K5}_α(x, y); in the latter case, by A^{K5}_α(x0, v) and the Euclidean property we get A^{K5}_α(v, v).

Let |α| > 1, so that α = α' ∪ {(u, v)}, with u ≠ x0; notice that it must also be v ≠ x0. It is easy to see that

R^{K5}_α(x, y) ≡ R^{K5}_{α'}(x, y) ∨ (x, y) = (u, v).

Thus, if R^{K5}_{α'}(x, y) holds, then by the induction hypothesis we get A^{K5}_{α'}(x, y), and by monotony also A^{K5}_α(x, y); if (x, y) = (u, v), we conclude by the fact that (x, y) ∈ α → A^{K5}_α(x, y).

(⇐) First we check that if (x, y) ∈ α then R^{K5}_α(x, y). To this end let (x, y) ∈ α: if x = x0 and (x0, y) ∈ α then R^{K5}_α(x, y); otherwise, if x ≠ x0, by Proposition 4.30 y ≠ x0, so that by definition R^{K5}_α(x, y). Next we check that R^{K5}_α(x, y) and R^{K5}_α(x, z) imply R^{K5}_α(y, z). Assume x = x0; by the previous propositions we get y = z ≠ x0, whence R^{K5}_α(y, z) (by definition); if x ≠ x0, by Proposition 4.30 we get y ≠ x0 and z ≠ x0, so that, by definition, R^{K5}_α(y, z). We finally conclude by the minimality of A^{K5}_α.

COROLLARY 4.33. Under the same conditions as the last proposition, we have: R^{K5}_α(x0, x) and R^{K5}_α(x0, y) implies x = y.
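The characterization of Proposition 4.32 can be checked computationally on small link sets: compute the least Euclidean closure of α by a fixpoint iteration and compare it with the explicit relation R^{K5}_α. A minimal sketch follows; the concrete example tree is ours, chosen so that x0 has a single successor, as Propositions 4.25 and 4.31 guarantee for trees arising in deductions.

```python
def euclidean_closure(alpha):
    """Least Euclidean extension of the link set alpha:
    close under (z, x), (z, y) in rel  =>  (x, y) in rel."""
    rel = set(alpha)
    changed = True
    while changed:
        changed = False
        for (z1, x) in list(rel):
            for (z2, y) in list(rel):
                if z1 == z2 and (x, y) not in rel:
                    rel.add((x, y))
                    changed = True
    return rel

def r_k5(alpha, x0):
    """The explicit relation R^K5 of Proposition 4.32."""
    labels = {v for link in alpha for v in link}
    return {(x, y) for x in labels for y in labels
            if (x == x0 and (x0, y) in alpha)
            or (x != x0 and y != x0)}

# a tree rooted at x0 = 0 in which x0 has exactly one successor
alpha = {(0, 1), (1, 2), (1, 3)}
```

On this α, the closure is {(0, 1)} together with all pairs over {1, 2, 3}, and in particular no pair points back to the root, as Proposition 4.30 states.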
PROPOSITION 4.34. Let Q = ∆, α `? x : G, H be any query which occurs in a P(K45) deduction from a trivial database x0 : A `? x0 : B, H0. Let R^{K45}_α(x, y) be defined as follows:

R^{K45}_α(x, y) ≡ x, y ∈ Lab(α) ∧ y ≠ x0.

Then we have R^{K45}_α(x, y) ≡ A^{K45}_α(x, y).
Proof. Similar to that of the previous proposition, and left to the reader.
PROPOSITION 4.35. Let Q = ∆, α `? x : G, H be any query which occurs in a P(S5) deduction from a trivial database x0 : A `? x0 : B, H0. Let R^{S5}_α(x, y) be defined as follows:

R^{S5}_α(x, y) ≡ x, y ∈ Lab(α).

Then we have R^{S5}_α(x, y) ≡ A^{S5}_α(x, y).
Proof. We observe that A^{S5}_α is the equivalence relation with just one class, namely the class of the labels which are descendants of x0, that is, all labels occurring in α.

From the previous propositions we can reformulate the proof systems for K5, K45 and S5 without making use of labels. For K5 the picture is as follows: either a database contains just one point x0, or there is an initial point x0 which is connected to exactly one other point x1, and any point other than x0 is connected to any other. In the case of K45, x0 is in addition connected to every point other than itself. Thus, in order to get a concrete structure without labels we must keep the initial world distinct from all the others, and we must indicate the current world, that is, the world in which the goal formula is evaluated. In the case of K5 we must also identify the (only) world to which the initial world is connected. We are thus led to consider the following structure. A non-empty database has the form

∆ = B0 | |   or   ∆ = B0 | B1, . . . , Bn | Bi,

where 1 ≤ i ≤ n and B0, B1, . . . , Bn are formulas. We also define

Actual(∆) = B0 if ∆ = B0 | |,
Actual(∆) = Bi if ∆ = B0 | B1, . . . , Bn | Bi.

This rather odd structure is forced by the fact that in K5 and K45 we have reflexivity at all worlds except the initial one; therefore, in contrast to all the other systems we have considered so far, the success of `? x0 : A ⇒ B, which means that A ⇒ B is valid, does not imply the success of
x0 : A `? x0 : B, which means that A → B is valid (material implication).(5) The addition operation is defined as follows:

∆ ⊕ A = B0 | B1, . . . , Bn, A | A   if ∆ = B0 | B1, . . . , Bn | Bi,
∆ ⊕ A = B0 | A | A   if ∆ = B0 | |,
∆ ⊕ A = ⊤ | A | A   if ∆ = ∅.

A query has the form ∆ `? G, H, where H = {(A1, q1), . . . , (Ak, qk)}, with Aj ∈ ∆.

DEFINITION 4.36 (Deduction Rules for K5 and K45). Given ∆ = B0 | B1, . . . , Bn | B, let

A^{K5}(∆, X, Y) ≡ (X = B0 ∧ Y = B1) ∨ (X = Bi ∧ Y = Bj ∧ i, j > 0), and
A^{K45}(∆, X, Y) ≡ X = Bi ∧ Y = Bj with j > 0.
• (success) ∆ `? q, H succeeds if Actual(∆) = q.

• (implication) From ∆ `? A ⇒ B, H step to ∆ ⊕ A `? B, H.

• (reduction) If ∆ = B0 | B1, . . . , Bn | B and C = D1 ⇒ . . . ⇒ Dk ⇒ q ∈ ∆, from ∆ `? q, H step to

B0 | B1, . . . , Bn | Ci `? Di, H ∪ {(B, q)}   for i = 1, . . . , k,

for some C0, . . . , Ck ∈ ∆ such that C0 = C, Ck = B, and A^{K5}(∆, Ci−1, Ci) (respectively A^{K45}(∆, Ci−1, Ci)) holds.

• (restart) If ∆ = B0 | B1, . . . , Bn | Bi and (Bj, r) ∈ H, with j > 0, then from ∆ `? q, H step to B0 | B1, . . . , Bn | Bj `? r, H ∪ {(Bi, q)}.

According to the above discussion, we observe that checking the validity of |= A ⇒ B corresponds to the query ∅ `? A ⇒ B, ∅, which (by the implication rule) is reduced to the query ⊤ | A | A `? B, ∅. This is different from checking the validity of A → B (→ is the material implication), which corresponds to the query given below.

(5) In these two systems the validity of □C does not imply the validity of C, as it does for all the other systems considered in this chapter.
A | | `? B, ∅. The success of the former query does not imply the success of the latter. For instance, in K5 we have ⊭ (⊤ ⇒ p) → p, and indeed ⊤ ⇒ p | | `? p, ∅ fails. On the other hand we have |= (⊤ ⇒ p) ⇒ p, and indeed ⊤ | ⊤ ⇒ p | ⊤ ⇒ p `? p, ∅ succeeds.

The reformulation of the proof system for S5 is similar, but simpler. In the case of S5 there is no need to keep the first formula/world apart from the others. Thus, we may simply define a non-empty database as a pair ∆ = (S, A), where S is a set of formulas and A ∈ S. If ∆ = (S, A), we let Actual(∆) = A and ∆ ⊕ B = (S ∪ {B}, B). For ∆ = ∅, we define ∅ ⊕ A = ({A}, A). With these definitions the rules are similar to those of K5 and K45, with the following simplifications:

• (reduction) If ∆ = (S, B) and C = D1 ⇒ . . . ⇒ Dk ⇒ q ∈ ∆, then from ∆ `? q, H step to (S, Ci) `? Di, H ∪ {(B, q)}, where Ci ∈ S for i = 1, . . . , k and Ck = B.

• (restart) If ∆ = (S, B) and (C, r) ∈ H, then from (S, B) `? q, H step to (S, C) `? r, H ∪ {(B, q)}.

EXAMPLE 4.37. In Figure 4.3 we show a derivation of the following formula in S5:

((a ⇒ b) ⇒ c) ⇒ (a ⇒ d ⇒ c) ⇒ (d ⇒ c).

In the derivation we make use of restricted restart, according to Proposition 4.6. A brief explanation of the derivation: step (5) is obtained by reduction w.r.t. (a ⇒ b) ⇒ c, step (7) by restart, and steps (8) and (9) by reduction w.r.t. a ⇒ d ⇒ c; they both succeed immediately.
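Definition 4.36 can likewise be prototyped. In the sketch below, a cluster database B0 | B1, . . . , Bn | Bi is a Python triple (B0, [B1, ..., Bn], i), with None for the empty database; formulas are atoms or pairs ('=>', A, B), and ⊤ is the string 'TRUE'. As an implementation shortcut of ours, the history records the position of the current world rather than the formula itself, and a depth bound stands in for a termination analysis.

```python
TOP = 'TRUE'

def strip_clause(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1])
        f = f[2]
    return body, f

def oplus(db, a):
    """The addition operation of the text; None encodes the empty database."""
    if db is None:
        return (TOP, [a], 1)
    b0, rest, _ = db
    return (b0, rest + [a], len(rest) + 1)

def acc_k5(x, y):            # A^K5 read off world positions 0..n
    return (x == 0 and y == 1) or (x > 0 and y > 0)

def chains(start, end, k, n, acc):
    """Index chains C0..Ck with C0 = start and Ck = end."""
    if k == 0:
        if start == end:
            yield [start]
        return
    for nxt in range(n + 1):
        if acc(start, nxt):
            for rest in chains(nxt, end, k - 1, n, acc):
                yield [start] + rest

def prove(db, goal, hist, depth, acc=acc_k5):
    if depth == 0:
        return False
    if isinstance(goal, tuple):                        # (implication)
        return prove(oplus(db, goal[1]), goal[2], hist, depth - 1)
    if goal == TOP:
        return True
    if db is None:
        return False
    b0, rest, i = db
    worlds = [b0] + rest
    if worlds[i] == goal:                              # (success)
        return True
    hist2 = hist | {(i, goal)}
    for j, clause in enumerate(worlds):                # (reduction)
        body, head = strip_clause(clause)
        if head != goal:
            continue
        for ch in chains(j, i, len(body), len(rest), acc):
            if all(prove((b0, rest, ch[m]), body[m - 1], hist2, depth - 1)
                   for m in range(1, len(body) + 1)):
                return True
    for (j, r) in hist:                                # (restart), j > 0 only
        if j > 0 and prove((b0, rest, j), r, hist2, depth - 1):
            return True
    return False

F = ('=>', ('=>', TOP, 'p'), 'p')   # (TOP => p) => p
```

With this encoding the two validity checks discussed above come out as in the text: (⊤ ⇒ p) ⇒ p succeeds from the empty database, while ⊤ ⇒ p | | `? p fails.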
6. AN INTUITIONISTIC VERSION OF K5, K45, S5, KB, KBT
We have seen that if we remove the restart rule from the proof systems for K, KT, K4, and S4, the resulting proof systems retain completeness. The same does not hold for the other systems: if we remove the restart rule we obtain weaker proof systems with fewer theorems. On the other hand, even without restart the proof procedures are well defined. From a proof-theoretical point of view they
(1) `? ((a ⇒ b) ⇒ c) ⇒ (a ⇒ d ⇒ c) ⇒ d ⇒ c
(2) {(a ⇒ b) ⇒ c}, (a ⇒ b) ⇒ c `? (a ⇒ d ⇒ c) ⇒ d ⇒ c
(3) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c}, a ⇒ d ⇒ c `? d ⇒ c
(4) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d}, d `? c
(5) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d}, d `? a ⇒ b, (d, c)
(6) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d, a}, a `? b, (d, c)
(7) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d, a}, d `? c, (d, c)
(8) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d, a}, a `? a, (d, c)
(9) {(a ⇒ b) ⇒ c, a ⇒ d ⇒ c, d, a}, d `? d, (d, c)
Figure 4.3. Derivation for Example 4.37.
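The S5 derivation of Figure 4.3 can be replayed mechanically. In the sketch below a database is (frozenset S, actual) or None when empty, formulas are atoms or pairs ('=>', A, B), and we implement the restricted restart of Proposition 4.6 (only the first recorded pair is ever used for restart); the depth bound is our own guard against the looping that restart makes possible. All encoding choices are ours.

```python
from itertools import product

def strip_clause(f):
    body = []
    while isinstance(f, tuple):
        body.append(f[1])
        f = f[2]
    return body, f

def prove(db, goal, first, depth):
    """db = (frozenset S, actual) or None; 'first' is the single pair
    kept by restricted restart, or None before one is recorded."""
    if depth == 0:
        return False
    if isinstance(goal, tuple):                        # (implication)
        s = db[0] if db else frozenset()
        return prove((s | {goal[1]}, goal[1]), goal[2], first, depth - 1)
    if db is None:
        return False
    s, actual = db
    if actual == goal:                                 # (success)
        return True
    if first is None:
        first = (actual, goal)
    for clause in s:                                   # (reduction)
        body, head = strip_clause(clause)
        if head != goal or not body:
            continue
        for mids in product(s, repeat=len(body) - 1):
            worlds = list(mids) + [actual]             # Ck is the actual world
            if all(prove((s, worlds[m]), body[m], first, depth - 1)
                   for m in range(len(body))):
                return True
    c, r = first                                       # (restricted restart)
    return prove((s, c), r, first, depth - 1)

imp = lambda a, b: ('=>', a, b)
# Example 4.37: ((a => b) => c) => (a => d => c) => (d => c)
F = imp(imp(imp('a', 'b'), 'c'),
        imp(imp('a', imp('d', 'c')), imp('d', 'c')))
```

The formula of Example 4.37 succeeds, while a plainly invalid strict implication such as p ⇒ q fails.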
make sense and they enjoy several properties, the most important of which is the admissibility of cut as formulated in Section 3. It is natural to wonder whether there is a 'logic' which corresponds to these proof systems. By a logic, here we intend a semantical characterization of the set of successful formulas according to each proof system. In this section we show that by removing the restart rule we obtain an intuitionistic version of the respective classical modal logics K5, K45, S5, KB, KBT.

We begin with a semantical presentation of these intuitionistic modal logics. The construction we present falls under the general methodology for combining logics presented in [Gabbay, 1996a]. A similar construction, for tense logics, has been presented by Ewald in [1986]. The semantic structure introduced in the next definition can be seen as a generalization of the possible-world semantics of intuitionistic and modal logic.

DEFINITION 4.38. Let S be any modal system; a model M for its intuitionistic version SI has the form

M = (W, ≤, S^w, R^w, h^w)

where W is a non-empty set whose elements are called states, ≤ is a transitive-reflexive relation on W, and for each w ∈ W the triple (S^w, R^w, h^w) is a Kripke model; that is, S^w is a non-empty set, R^w is a binary relation on S^w having the properties of the accessibility relation of the corresponding system S, and h^w maps each x ∈ S^w to a set of propositional variables. We assume the following monotony conditions:

1. if w ≤ w' then S^w ⊆ S^{w'};
2. if w ≤ w' then R^w ⊆ R^{w'};
3. if w ≤ w' and x ∈ S^w then h^w(x) ⊆ h^{w'}(x).

Truth conditions for L(⇒) are defined as follows:

S^w, x |= p if p ∈ h^w(x);
S^w, x |= A ⇒ B if for all w' ≥ w and for all y ∈ S^{w'} such that R^{w'}(x, y) we have: S^{w'}, y |= A implies S^{w'}, y |= B.

We can easily see that for every formula A and x ∈ S^w:

if S^w, x |= A and w ≤ w' then S^{w'}, x |= A.

We say that A is valid in an SI-model M = (W, ≤, S^w, R^w, h^w) if ∀w ∈ W, ∀y ∈ S^w, S^w, y |= A. As usual, a formula is called SI-valid if it is valid in every SI-model.
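The truth condition of Definition 4.38 can be animated on a small finite SI-model. In the sketch below the concrete model is our own example, with two states 0 ≤ 1 and monotony holding by construction; for simplicity we do not enforce the frame condition of any particular system S.

```python
def forces(M, w, x, a):
    """Truth of formula a at point x of the Kripke model at state w.
    M = (leq, S, R, h); formulas are atoms or pairs ('=>', A, B)."""
    leq, S, R, h = M
    if isinstance(a, str):                              # atom
        return a in h[w].get(x, set())
    _, A, B = a                                         # a = A => B
    for w2 in S:                                        # all states w2 >= w
        if (w, w2) in leq:
            for y in S[w2]:                             # all R-successors of x
                if (x, y) in R[w2] and forces(M, w2, y, A) \
                        and not forces(M, w2, y, B):
                    return False
    return True

leq = {(0, 0), (1, 1), (0, 1)}
S = {0: {'a'}, 1: {'a', 'b'}}
R = {0: {('a', 'a')}, 1: {('a', 'a'), ('a', 'b'), ('b', 'b')}}
h = {0: {'a': {'p'}}, 1: {'a': {'p'}, 'b': {'p', 'q'}}}
M = (leq, S, R, h)
```

Here p ⇒ q fails at point a of state 0 (its successor a satisfies p but not q), holds at point b of state 1, and atoms persist from state 0 to state 1 as condition 3 demands.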
As we have said, the above semantical definition generalizes the possible-worlds semantics of both intuitionistic and modal logic. The structures introduced in the previous definition can be seen as models of intuitionistic logic in which single worlds are replaced by Kripke models. Thus, if for every w, S^w contains only one world, say S^w = {x_w}, and R^w = {(x_w, x_w)}, then we have a standard Kripke model of intuitionistic logic (see Definition 2.1); on the other hand, if W contains only one state, we have a standard Kripke model of modal logic as defined at the beginning of the chapter.

It can be shown that the proof procedures without restart are sound and complete for each SI ∈ {K5I, K45I, S5I, KBI, KBTI}.

THEOREM 4.39. A is valid in SI iff `? x0 : A, ∅ succeeds in the proof system P(S) without using restart.

Proof. The soundness is proved as usual by induction on the length of successful derivations. We leave the details to the reader. With regard to completeness, we can define a canonical model M^{SI} as follows:

M^{SI} = (W, ≤, S^w, R^w, h^w),

where W is the set of finite non-empty databases w = (Γ, α), ≤ is ⊆, S^w is Lab(Γ), R^w(x, y) ≡ A^S_α(x, y), and h^w is defined as follows: for x ∈ S^w, h^w(x) = {q : w `? x : q succeeds}. Notice that M^{SI} satisfies all the monotony conditions involved in the semantics. We can easily prove that M^{SI} satisfies the property: for all w ∈ W, x ∈ S^w, and formulas C, we have

w, x |= C ⇔ w `? x : C succeeds.

By this fact completeness follows immediately.
We conclude this section with an observation on the relationship between the classical modal logics and their respective intuitionistic versions. Let us identify each system with the set of its valid formulas. We may observe that although the semantics of the intuitionistic modal logics is a combination of the modal and the intuitionistic possible-world semantics, each intuitionistic modal logic is strictly weaker than the intersection of intuitionistic logic I and the corresponding classical modal logic.

PROPOSITION 4.40. Let S be any one of K5, K45, S5, KB, KBT, and let SI be its intuitionistic version as defined above; then SI ⊂ I ∩ S.
Proof. The claim SI ⊆ I ∩ S easily follows from the previous theorem. To show that the inclusion is proper, consider the formula

(((p ⇒ q) ⇒ q) ⇒ q) ⇒ p ⇒ q.

One can check that this formula is a theorem of I and of every S in the above list; however, it is not a theorem of S5I, and therefore it is not a theorem of any of the other (weaker) SI.
7. EXTENDING THE LANGUAGE
In this section we extend the proof procedures to a broader language. We do the extension in three steps. We first consider a simple extension allowing conjunction; then we consider a richer fragment containing 'local clauses' and (∧, ∨)-combinations of goals. This fragment can be considered as an analogue of Harrop formulas in intuitionistic logic. We finally extend the procedures to the full propositional modal language via a translation into an implicative normal form.
7.1 Conjunction

To handle conjunction in the labelled formulation, we simply drop the condition that a label x may be attached to only one formula; formulas with the same label can thus be thought of as conjoined. In the unlabelled formulation, for those systems enjoying such a formulation, the general principle is to deal with sets of formulas instead of single formulas. A database will be a structured collection of sets of formulas, rather than a collection of formulas. The structure is always the same, but the constituents are now sets. Thus, in the cases of K, KT, K4 and S4, databases will be lists of sets of formulas, whereas in the cases of K5, K45 and S5 they will be clusters of sets of formulas. A conjunction of formulas is interpreted as a set, so that queries may contain sets of goal formulas.

A formula A of the language L(∧, ⇒) is in normal form if it is an atom or has the form

∧_i [S^i_1 ⇒ . . . ⇒ S^i_{n_i} ⇒ q_i]
where the S^i_j are conjunctions of formulas in normal form. In all modal logics considered in this chapter (formulated in L(∧, ⇒)) it holds that every formula has an equivalent one in normal form. We simplify the notation for NF-formulas and replace conjunctions with sets. For example the NF of

((b ⇒ (c ∧ d)) ⇒ (e ∧ f)) ∧ ((g ∧ h) ⇒ (k ∧ u))

is the set containing the following formulas:

{b ⇒ c, b ⇒ d} ⇒ e, {b ⇒ c, b ⇒ d} ⇒ f, {g, h} ⇒ k, {g, h} ⇒ u.
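The normal-form translation just illustrated is a short recursion: a conjunction becomes the union of the clause sets of its conjuncts, and in A ⇒ B the antecedent set distributes over the clauses of B. The encoding below (atoms as strings, pairs ('and', A, B) and ('=>', A, B), sets as frozensets) is our own.

```python
def nf(f):
    """Normal form of an L(and, =>) formula as a frozenset of clauses;
    a clause is an atom or ('=>', S, clause) with S a frozenset of clauses."""
    if isinstance(f, str):                                  # atom
        return frozenset({f})
    op, a, b = f
    if op == 'and':                                         # NF(A and B) = NF(A) u NF(B)
        return nf(a) | nf(b)
    return frozenset({('=>', nf(a), c) for c in nf(b)})     # op == '=>'

AND = lambda a, b: ('and', a, b)
IMP = lambda a, b: ('=>', a, b)
# the formula of the example above
F = AND(IMP(IMP('b', AND('c', 'd')), AND('e', 'f')),
        IMP(AND('g', 'h'), AND('k', 'u')))
```

Applied to the example formula, nf yields exactly the four clauses listed in the text.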
For the deduction procedures all we have to do is to handle sets of formulas. We define, for x ∈ Lab(Γ) ∪ Lab(H), y ∉ Lab(Γ) ∪ Lab(H) and a finite set of formulas S = {D1, . . . , Dt},

(Γ, α) ⊕x y : S = (Γ ∪ {y : D1, . . . , y : Dt}, α ∪ {(x, y)});

then we change the (implication) rule in the obvious way:

from ∆, α `? x : S ⇒ B, H step to (∆, α) ⊕x y : S `? y : B, H,

where S is a set of formulas in NF and y ∉ Lab(∆), and we add a rule for proving sets of formulas:

from (∆, α) `? x : {B1, . . . , Bk}, H step to (∆, α) `? x : Bi, H for i = 1, . . . , k.

Regarding the simplified formulations without labels, the structural restrictions in the rules (reduction and success) are applied to the sets of formulas, which are now the constituents of databases, considered as units; the history H, when needed, becomes a set of pairs (Si, Ai), where Si is a set and Ai is a formula. The property of restricted restart still holds for this formulation.

EXAMPLE 4.41. We show that the following is a theorem of K5, using the unlabelled formulation:

□(b → c) ∧ □(□(□(a → b) → d) → c) → □(a → c).

Letting S0 = {b ⇒ c, ((a ⇒ b) ⇒ d) ⇒ c}, the initial query is S0 | | `? a ⇒ c. In the derivation below, we make use of restricted restart.

S0 | | `? a ⇒ c
S0 | {a} | {a} `? c, ({a}, c)
S0 | {a} | {a} `? (a ⇒ b) ⇒ d, ({a}, c)    by reduction w.r.t. ((a ⇒ b) ⇒ d) ⇒ c ∈ S0
S0 | {a}, {a ⇒ b} | {a ⇒ b} `? d, ({a}, c)
S0 | {a}, {a ⇒ b} | {a} `? c, ({a}, c)    by restart
S0 | {a}, {a ⇒ b} | {a} `? b, ({a}, c)    by reduction w.r.t. b ⇒ c ∈ S0
S0 | {a}, {a ⇒ b} | {a} `? a, ({a}, c)    by reduction w.r.t. a ⇒ b (success)
7.2 Modal Harrop Formulas

We can further extend the language, in a way similar to [Giordano and Martelli, 1994], by allowing local clauses of the form G → q, where → denotes ordinary (material) implication. We call them 'local' since x : G → q can be used only in world x, to reduce the goal x : q, and it is not usable/visible in any other world: it is 'private' to x. The extension we suggest may be significant in developing logic programming languages based on modal logic [Giordano et al., 1992; Giordano and Martelli, 1994]. To this end, we can introduce a sort of modal Harrop fragment distinguishing database formulas (D-formulas) and goal formulas (G-formulas). Below we define the D-formulas, i.e. the ones which may occur in the database, and the G-formulas, i.e. the ones which may occur in goal position. The former are further divided into modal D-formulas (MD) and local D-formulas (LD).

LD := G → q
MD := ⊤ | q | G ⇒ MD
D := LD | MD
CD := D | CD ∧ CD
G := ⊤ | q | G ∧ G | G ∨ G | CD ⇒ G.

We also use □G and □D as syntactic sugar for ⊤ ⇒ G and ⊤ ⇒ D. Notice that atoms are both LD- and MD-formulas (as ⊤ → q ≡ q); moreover, any non-atomic MD-formula can be written as G1 ⇒ . . . ⇒ Gk ⇒ q. Finally, CD-formulas are just conjunctions of D-formulas.

For D- and G-formulas as defined above we can easily extend the proof procedure. We give it in the most general formulation for labelled databases. It is clear that one can derive an unlabelled formulation for the systems which allow it, as explained in the previous section. In the labelled formulation, queries have the form

∆, α `? x : G, H

where ∆ is a labelled set of D-formulas, G is a G-formula, and H = {(x1, G1), . . . , (xk, Gk)}, where the Gi are G-formulas. The additional rules are:

• (true) ∆, α `? x : ⊤, H immediately succeeds.

• (local-reduction) From ∆, α `? x : q, H step to ∆, α `? x : G, H ∪ {(x, q)}, if x : G → q ∈ ∆.
• (and) From ∆, α `? x : G1 ∧ G2, H step to ∆, α `? x : G1, H and ∆, α `? x : G2, H.

• (or) From ∆, α `? x : G1 ∨ G2, H step to ∆, α `? x : G1, H ∪ {(x, G2)} or to ∆, α `? x : G2, H ∪ {(x, G1)}.

EXAMPLE 4.42. Let ∆ be the following database:

x0 : [(□p ⇒ s) ∧ b] → q,
x0 : ([(p ⇒ q) ∧ □a] ⇒ r) → q,
x0 : a → b.

We show that ∆, ∅ `? x0 : q, ∅ succeeds in the proof system for KB; this shows that the formula ∧∆ → q is valid in KB. A derivation is shown in Figure 4.4. The property of restricted restart still holds for this fragment, thus we do not need to record the entire history, but only the first pair (x0, q). At each step we only show the additional data introduced in that step. A quick explanation of the steps: step (2) is obtained by local reduction w.r.t. x0 : ([(p ⇒ q) ∧ □a] ⇒ r) → q, step (4) by restart, steps (5) and (10) by local reduction w.r.t. x0 : [(□p ⇒ s) ∧ b] → q, step (7) by restart, step (8) by reduction w.r.t. x1 : p ⇒ q since, letting α = {(x0, x1), (x0, x2)}, A^{KB}_α(x1, x0) holds; step (9) is obtained by reduction w.r.t. x2 : □p (= ⊤ ⇒ p) since A^{KB}_α(x2, x0) holds; step (11) by local reduction w.r.t. x0 : a → b, and step (12) by reduction w.r.t. x1 : □a (= ⊤ ⇒ a) since A^{KB}_α(x1, x0) holds.

The soundness and completeness results of Section 4 can be extended to this fragment; to this end, let the validity of a query be defined as in that section.

THEOREM 4.43. ∆ `? G, H succeeds in P(S) if and only if it is valid.

Proof. [Sketch] The soundness part is proved by induction on the height of the derivations; we omit the details. In order to prove completeness we need the following properties:

1. Suppose

(a) Γ ∪ {u : G → q}, α `? y : G', H succeeds, and
(b) Γ, α `? u : G, H succeeds implies Γ, α `? u : q, H succeeds;

then Γ, α `? y : G', H succeeds.

2. Suppose
(1) ∆ `? x0 : q, ∅
(2) `? x0 : [(p ⇒ q) ∧ □a] ⇒ r, (x0, q)
(3) x1 : p ⇒ q, x1 : □a, {(x0, x1)} `? x1 : r, (x0, q)
(4) `? x0 : q, (x0, q)    [the derivation branches here into (5) and (10)]
(5) `? x0 : □p ⇒ s, (x0, q)
(6) x2 : □p, {(x0, x1), (x0, x2)} `? x2 : s, (x0, q)
(7) `? x0 : q, (x0, q)
(8) `? x0 : p, (x0, q)
(9) `? x0 : ⊤, (x0, q)
(10) `? x0 : b, (x0, q)
(11) `? x0 : a, (x0, q)
(12) `? x0 : ⊤, (x0, q)
Figure 4.4. Derivation for Example 4.42.
(a) Γ ∪ {u : G1 ⇒ . . . ⇒ Gk ⇒ q}, α `? y : G', H succeeds, and
(b) for every sequence x0, x1, . . . , xk with x0 = u, if A^S_α(x_{i−1}, x_i) holds and Γ, α `? xi : Gi, H succeeds for i = 1, . . . , k, then also Γ, α `? xk : q, H succeeds;
then Γ, α `? y : G', H succeeds.

The proof of these properties is by a simple induction on the length of the derivation of hypothesis (a) in both cases. The completeness part is then proved by contraposition. If a query, say Γ0, α0 `? x0 : G0, H0, does not succeed, we build a model in which it is not valid; the construction is similar to the one in Theorem 4.16 and Theorem 2.58. To this end, we consider an enumeration with infinitely many repetitions of pairs of the form (xi, Gi), where Gi is a G-formula, and we define a sequence of databases, a new sequence of pairs (yn, G*n), where the G*n are G-formulas, and a sequence of histories Hn. The construction proceeds as in Theorem 4.16; in particular we set (y0, G*0) = (x0, G0). At step n > 0 we have the following additional cases for disjunction and conjunction (see also the proof of Theorem 2.58): let (yn, G*n) be the pair considered at step n, with yn ∈ Lab(Γn):

• Let G*n = G' ∨ G''. We let (Γn+1, αn+1) = (Γn, αn) and (yn+1, G*n+1) = (xk+1, Gk+1), where k = max{t ≤ n : ∃s ≤ n (ys, G*s) = (xt, Gt)}, and

Hn+1 = Hn ∪ {(yn, G*n)} if Γn, αn `? yn : G*n, Hn fails;
Hn+1 = Hn ∪ {(yn, G')} or Hn+1 = Hn ∪ {(yn, G'')} if Γn, αn `? yn : G*n, Hn succeeds, but both Γn, αn `? yn : G', Hn and Γn, αn `? yn : G'', Hn fail;
Hn+1 = Hn otherwise.

• Let G*n = G' ∧ G''. We let (Γn+1, αn+1) = (Γn, αn), Hn+1 = Hn, and

(yn+1, G*n+1) = (xk+1, Gk+1), where k = max{t ≤ n : ∃s ≤ n (ys, G*s) = (xt, Gt)}, if Γn, αn `? yn : G*n, Hn succeeds;
(yn+1, G*n+1) = (yn, G') if Γn, αn `? yn : G', Hn fails;
(yn+1, G*n+1) = (yn, G'') otherwise.

From the sequence of databases we define a canonical model M = (W, R, V) as in Theorem 4.16. We then prove, by mutual induction on the complexity of the formulas, the following claims:

• (i) for all x ∈ W and G-formulas G, M, x |= G ⇔ ∃n x ∈ Lab(Γn) ∧ Γn, αn `? x : G, Hn succeeds.
• (ii) for all x ∈ W and D-formulas D, if x : D ∈ Γn then M, x |= D.

From these facts the theorem follows immediately.
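The D/G grammar of this fragment is easy to check mechanically. The recognizer below uses an encoding of our own (atoms as strings, ⊤ as 'TRUE', local clauses as ('->', G, q), strict implications as ('=>', A, B), and ('and', A, B) / ('or', A, B) for the connectives), and classifies the first clause of Example 4.42:

```python
TOP = 'TRUE'

def is_atom(f):
    return isinstance(f, str) and f != TOP

def is_md(f):                      # MD := TOP | q | G => MD
    return f == TOP or is_atom(f) or (
        isinstance(f, tuple) and f[0] == '=>' and is_g(f[1]) and is_md(f[2]))

def is_ld(f):                      # LD := G -> q
    return isinstance(f, tuple) and f[0] == '->' and is_g(f[1]) and is_atom(f[2])

def is_cd(f):                      # CD := D | CD and CD
    if isinstance(f, tuple) and f[0] == 'and':
        return is_cd(f[1]) and is_cd(f[2])
    return is_ld(f) or is_md(f)

def is_g(f):                       # G := TOP | q | G and G | G or G | CD => G
    if f == TOP or is_atom(f):
        return True
    if isinstance(f, tuple) and f[0] in ('and', 'or'):
        return is_g(f[1]) and is_g(f[2])
    return isinstance(f, tuple) and f[0] == '=>' and is_cd(f[1]) and is_g(f[2])

box = lambda a: ('=>', TOP, a)
# first clause of Example 4.42: [(box p => s) and b] -> q
d1 = ('->', ('and', ('=>', box('p'), 's'), 'b'), 'q')
```

As expected, the clause is an LD-formula (hence a D- and CD-formula) but not a G-formula, while its antecedent is a G-formula.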
Disjunction Property

To conclude this section, we observe that the simplifications of the proof procedure for specific systems still work for this larger language. In particular, Proposition 4.26 and the elimination of restart hold for the modal logics from K to S4. It is worth noticing that by means of Proposition 4.26 we can immediately obtain a (well-known) disjunction property [Chagrov and Zakharyaschev, 1997] for Harrop formulas in the modal logics from K to S4. From the computation rules and Proposition 4.26 we get:

if Γ, α `? x : G1 ∨ G2, ∅ succeeds, then either Γ, α `? x : G1, ∅ succeeds, or Γ, α `? x : G2, ∅ succeeds.

By soundness and completeness, if G1, G2 are G-formulas we then have:

if |=S G1 ∨ G2, then either |=S G1 or |=S G2.

Observe that this property holds for the modal logics from K to S4, but it does not hold for the other systems we have considered in this chapter; for instance, in S5 we have |= (p ⇒ q) ∨ ((p ⇒ q) ⇒ r), but ⊭ p ⇒ q and ⊭ (p ⇒ q) ⇒ r for arbitrary p, q, and r.
7.3 Extension to the Whole Propositional Language

In this section we extend the procedure to the whole propositional modal language. The idea is very simple: we take the computation procedure for classical logic of Chapter 2 and combine it with the modal procedure of this chapter. Before presenting it, let us see where the difficulty of such an extension lies. Clearly the trouble is the treatment of the ◇ modality, i.e. the possibility operator, which is dual to □. For a goal-directed procedure, the problem is similar to admitting data of the form ∃xB in a first-order database. Intuitively, a goal `? x : ◇A should succeed if `? y : A succeeds for some y accessible from x. The point is that such a y may not yet exist when x : ◇A is asked, whence the goal x : ◇A has to be suitably delayed. However, taking advantage of the history component of a query, we can think of implementing this rule directly. The real trouble is the treatment of ◇-formulas in the database: x : ◇A `? y : G. What might be the contribution of x : ◇A to the proof of y : G? In this case, we can think of treating x : ◇A as soon as it enters the database, by immediately creating an x-accessible point z with z : A. This would correspond to a sort of Skolemization on worlds, in analogy with the handling of existential data in a first-order language. But what to do with
A → ◇B or A ⇒ ◇B, i.e. □(A → ◇B)? Remember that we want to stick to the goal-directed paradigm. In the procedure we give below, we simply remove the problem by translating ◇A into □(A → ⊥) → ⊥, or equivalently into (A ⇒ ⊥) → ⊥. However, it will turn out that the implicit treatment of the possibility operator does conform to the discussion above.

To minimize the work, we introduce a simple normal form over the set of connectives (⇒, →, ∧, ⊤, ⊥). It is obvious that this set forms a complete base. The normal form is an immediate extension of the normal form for ⇒, ∧.

PROPOSITION 4.44. Every modal formula over the language (→, ¬, ◇, □) is equivalent to a set (conjunction) of NF-formulas of the form

S0 → (S1 ⇒ (S2 ⇒ . . . ⇒ (Sn ⇒ q) . . .))

where q is an atom, ⊤, or ⊥, n ≥ 0, and each Si is a conjunction (set) of NF-formulas.

Proof. Left to the reader.
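The translation that removes ◇ (and ¬) in favour of the base (⇒, →, ∧, ⊤, ⊥) can be written directly: ¬A becomes A → ⊥, ◇A becomes (A ⇒ ⊥) → ⊥ as described above, and □A becomes ⊤ ⇒ A. The encoding is ours, and the output is a pre-normal-form formula to which the currying of Proposition 4.44 can then be applied.

```python
TOP, BOT = 'TRUE', 'FALSE'

def tr(f):
    """Eliminate 'not', 'dia' and 'box' over the base ('->', '=>', 'and')."""
    if isinstance(f, str):
        return f
    if f[0] == 'not':                     # ~A      :=  A -> FALSE
        return ('->', tr(f[1]), BOT)
    if f[0] == 'dia':                     # dia A   :=  (A => FALSE) -> FALSE
        return ('->', ('=>', tr(f[1]), BOT), BOT)
    if f[0] == 'box':                     # box A   :=  TOP => A
        return ('=>', TOP, tr(f[1]))
    return (f[0], tr(f[1]), tr(f[2]))     # '->', '=>', 'and' pass through
```

For instance, tr maps ◇r to (r ⇒ ⊥) → ⊥, matching the example discussed below.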
As usual we omit parentheses, so that the above will be written as

S0 → S1 ⇒ S2 ⇒ . . . ⇒ Sn ⇒ q.

In practice we will replace the conjunction by the set notation as we wish. When we need it, we distinguish two types of NF-formulas: (i) those with non-empty S0, which are written as above, and (ii) those with empty S0, which are simplified to

S1 ⇒ S2 ⇒ . . . ⇒ Sn ⇒ q.

For a quick case analysis, we can also say that type (i) formulas have the form S → D, and type (ii) the form S ⇒ D, where D is always of type (ii). For instance, the NF of p → ◇r is (p ∧ (r ⇒ ⊥)) → ⊥, or equivalently {p, r ⇒ ⊥} → ⊥. This formula has the structure S0 → q, where S0 = {p, r ⇒ ⊥} and q = ⊥. The NF of □(◇a → ◇(b ∧ c)) is given by

{(a ⇒ ⊥) → ⊥, (b ∧ c) ⇒ ⊥} ⇒ ⊥.

We give below the rules for queries of the form

Γ, α `? x : G, H,
where Γ is a labelled set of NF-formulas, G is an NF-formula, α is a set of links (as usual), and H is a set of pairs {(x1, q1), . . . , (xk, qk)}, where each qi is an atom.

DEFINITION 4.45 (Deduction rules for the whole modal language). For each modal system S, the corresponding proof system, denoted by P(S), comprises the following rules, parametrized by the predicates A^S.

• (success) ∆, α `? x : q, H immediately succeeds if q is an atom and x : q ∈ ∆.

• (strict implication) From ∆, α `? x : S ⇒ D, H step to (∆, α) ⊕x (y : S) `? y : D, H, where y ∉ Lab(∆) ∪ Lab(H).

• (implication) From ∆, α `? x : S → D, H step to ∆ ∪ {x : A | A ∈ S}, α `? x : D, H.

• (reduction) If y : C ∈ ∆, with C = S0 → S1 ⇒ S2 ⇒ . . . ⇒ Sk ⇒ q, with q atomic, then from ∆, α `? x : q, H step to

∆, α `? u0 : S0, H'    ∆, α `? u1 : S1, H'    . . .    ∆, α `? uk : Sk, H'

where H' = H if q = ⊥, and H' = H ∪ {(x, q)} otherwise, for some u0, . . . , uk ∈ Lab(α) such that u0 = y, uk = x, and A^S_α(ui, ui+1) holds for i = 0, . . . , k − 1.

• (restart) If (y, r) ∈ H, then from ∆, α `? x : q, H, with q atomic, step to ∆, α `? y : r, H ∪ {(x, q)}.

• (falsity) From ∆, α `? x : q, H, if y ∈ Lab(∆), step to ∆, α `? y : ⊥, H ∪ {(x, q)}.

• (conjunction) From ∆, α `? x : {B1, . . . , Bk}, H step to ∆, α `? x : Bi, H for i = 1, . . . , k.
4. MODAL LOGICS OF STRICT IMPLICATION
If the number of subgoals is 0, i.e. k = 0, the above reduction rule becomes the rule for local clauses of the previous section. On the other hand, if S0 = ∅, then the query with goal S0 is omitted and we have the rule of Section 2. This procedure is actually a minor extension of the one based on strict implication and conjunction. The rule for falsity may be a source of non-determinism, as it can be applied to any label y. Further investigation should clarify to what extent this rule is needed and whether its applications can be restricted to special cases. In any case, the proof procedure is sound and complete, as asserted in the next theorem.

THEOREM 4.46. Γ, γ `? x : G, H succeeds if and only if it is valid.

Proof. The soundness direction is proved by induction on the height of the computation. The completeness direction is comparatively simpler than in the case of the Harrop fragment seen in the previous section. First, one easily extends the cut theorem (Theorem 4.10) to formulas in NF. Notice that such a simple extension was not possible in the Harrop case because clauses and goals (i.e. D- and G-formulas) were of different types. As a second step, we modify in a straightforward way the completeness proof of Theorem 4.16. The only change required is in the construction leading to the canonical model, in the case of the evaluation of a type (i) clause; we hence have the additional case: given (yn, Bn), if yn ∈ Lab(Γn) and Γn, αn `? yn : Bn, Hn fails and Bn = S → D, then we set Hn+1 = Hn, (Γn+1, αn+1) = (Γn ∪ {yn : C | C ∈ S}, αn), (yn+1, Bn+1) = (yn, D). Then everything is almost the same as in the proof of Theorem 4.16. In particular, we easily extend Lemma 4.18 to the case of Bn = S → D, so that we get, similarly to Lemma 4.20: if Γn, αn `? yn : S → D, Hn fails, then ∀m > n, Γm, αm `? yn : S, Hm succeeds and Γm, αm `? yn : D, Hm fails.
This fact is used to prove that the model M satisfies the fundamental property (Lemma 4.22): for all x ∈ W and formulas B, M, x |= B ⇔ ∃n x ∈ Lab(Γn ) ∧ Γn , αn `? x : B, Hn succeeds.
EXAMPLE 4.47. In K5 we have 3p → 23p. This is translated as ((p ⇒ ⊥) → ⊥) → ((p ⇒ ⊥) ⇒ ⊥). Below we show a derivation. Some explanation and remarks: at step (1) we can only apply the rule for falsity, or reduction w.r.t. y : p ⇒ ⊥, since AK5α(x0, y) implies AK5α(y, y). We apply the rule for falsity. The reduction at step (2) is legitimate as AK5α(x0, y) and AK5α(x0, z) imply AK5α(y, z).
`? x0 : ((p ⇒ ⊥) → ⊥) → ((p ⇒ ⊥) ⇒ ⊥)
x0 : (p ⇒ ⊥) → ⊥ `? x0 : (p ⇒ ⊥) ⇒ ⊥
y : p ⇒ ⊥, α = {(x0, y)} `? y : ⊥ (1)
`? x0 : ⊥ (rule for ⊥)
`? x0 : p ⇒ ⊥
z : p, α = {(x0, y), (x0, z)} `? z : ⊥ (2)
`? z : p
success

At each line we display only the data added to the database at that step.
EXAMPLE 4.48. In K we have [22(B → C) ∧ 2(A → 3(B ∨ C)) ∧ 3A] → 33C. Translating this formula into our language, we have to show that ∆, ∅ `? x : G succeeds, where ∆ is:

x : > ⇒ B ⇒ C,
x : {A, ((B → C) → C) ⇒ ⊥} ⇒ ⊥,
x : (A ⇒ ⊥) → ⊥,

and G is (((C ⇒ ⊥) → ⊥) ⇒ ⊥) → ⊥. That is, we must show that ∆ ∪ {x : ((C ⇒ ⊥) → ⊥) ⇒ ⊥}, ∅ `? x : ⊥, ∅ succeeds. We assume that A, B, C are atoms. In Figure 4.5 we show a derivation. At each step we only show the new data inserted in the database at that step (if any). We also omit the history, which is uniquely determined by the branch. Step (1) is obtained by reduction w.r.t. x : (A ⇒ ⊥) → ⊥; notice that no other formula can be used for reduction at this point. Step (3) is obtained by reduction w.r.t. x : ((C ⇒ ⊥) → ⊥) ⇒ ⊥. Steps (5) and (6) are obtained by reduction w.r.t. x : {A, ((B → C) → C) ⇒ ⊥} ⇒ ⊥. Step (8) is obtained by reduction w.r.t. y : C ⇒ ⊥, introduced in (4). Step (9) is obtained by reduction w.r.t. z : (B → C) → C, and steps (11) and (12) by reduction w.r.t. x : > ⇒ B ⇒ C.

Examining the computation we can understand how a formula 3A is dealt with. A goal x : 3A is translated into ∆, α `? x : (A ⇒ ⊥) → ⊥, from which we step to ∆, x : A ⇒ ⊥, α `? x : ⊥. This will succeed using the additional data if y : A is provable for a world y accessible from x (i.e. such that ASα(x, y) holds). This is exactly what we expect from an evaluation of a goal 3A. On the other hand, having x : 3A in the database corresponds to having x : (A ⇒ ⊥) → ⊥. If this is used in any deduction, it will be used as follows: from
∆ ∪ {x : ((C ⇒ ⊥) → ⊥) ⇒ ⊥}, ∅ `? x : ⊥
(1) `? x : A ⇒ ⊥
(2) y : A, {(x, y)} `? y : ⊥
(3) `? y : (C ⇒ ⊥) → ⊥
(4) y : C ⇒ ⊥ `? y : ⊥
The query then branches into (5) and (6):
(5) `? y : A
(6) `? y : ((B → C) → C) ⇒ ⊥
(7) z : (B → C) → C, {(x, y), (y, z)} `? z : ⊥
(8) `? z : C
(9) `? z : B → C
(10) z : B `? z : C
The query then branches into (11) and (12):
(11) `? y : >
(12) `? z : B

Figure 4.5. Derivation for Example 4.48.
x : (A ⇒ ⊥) → ⊥, α `? x : ⊥ we step to x : (A ⇒ ⊥) → ⊥, y : A, α ∪ {(x, y)} `? y : ⊥, that is, we create a new world y, accessible from x, with A holding in it. Again this is what we expect. Notice, however, that because of the unrestricted falsity rule such a step is enabled whenever any atomic goal is reached. Specific restrictions on the application of the falsity rule should be studied.

A final remark. The extension of the goal-directed procedure to full propositional modal logic leads us also to consider systems where the accessibility relation satisfies the seriality property, i.e. ∀x∃y xRy; the treatment of these systems was excluded because of the restriction to the strict implication fragment (see Footnote 2, p. 118). We think that the rule for seriality is the following liberalization of the falsity rule: from ∆, α `? x : q, H step to ∆, α′ `? y : ⊥, H ∪ {(x, q)}, where either y ∈ Lab(∆) and α′ = α, or y ∉ Lab(∆) ∪ Lab(H) and α′ = α ∪ {(x, y)}. We conjecture that if we take this rule in place of the falsity rule, we obtain complete proof systems for modal logics with the axiom D, i.e. 2A → 3A, which corresponds to seriality. The reader is invited to check (an instance of) D, where A is an atom, in the proof system P(K) plus the rule above (its translation in NF is {> ⇒ A, A ⇒ ⊥} → ⊥).

8 A FURTHER CASE STUDY: MODAL LOGIC G

In this section we give a goal-directed procedure for the implicational fragment of modal logic G. Modal logic G was originally introduced as a modal interpretation of the notion of formal provability in Peano Arithmetic (see [Boolos, 1979]). Semantically, G is characterized by the following conditions on the accessibility relation R: (i) R is transitive, and (ii) R does not have infinitely increasing chains w0 Rw1, . . . , wi Rwi+1, . . .. To deal with G we adopt our deduction system for K4 and we modify the rule for implication in order to take care of the finiteness condition.
To explain the idea intuitively, suppose we want to show that A ⇒ B holds in a world w. We can argue by reductio ad absurdum as follows: assume that A ⇒ B is false at w. Then there is a world w′ such that wRw′ in which A ∧ ¬B is true. By the finiteness condition, we can assume that there is a last world w∗ in which A ∧ ¬B is true. In w∗, A holds, B does not hold, but also A ⇒ B holds, since every world
w′′ accessible from w∗ will not satisfy A ∧ ¬B. We have then to show that B cannot be false in w∗.

According to the above argument, we can modify the rule for implication in the following way: to evaluate a goal A ⇒ B from a database ∆, we add to ∆ the formula A ∧ (A ⇒ B), and we ask B.6 We show that this rule is sound and complete for G. We work with the unlabelled formulation of the implicational fragment of K4, and we further notice that we do not need to introduce conjunction, but only to take pairs of formulas as the unit elements of databases. A database ∆ is thus a sequence of pairs of formulas S1, . . . , Sn, where each Si is a pair of formulas (Ai, Bi). For implication we have the following rule:

• (implication) From ∆ `? A ⇒ B step to ∆, (A, A ⇒ B) `? B.

The other two rules, reduction and success, are the same as in the unlabelled version of P(K4), that is, K4 with conjunction, whose rules we review below for clarity:

• (success) S1, . . . , Sn `? q succeeds if q ∈ Sn.

• (reduction) From S1, . . . , Sn `? q step to S1, . . . , Sji `? Di, for i = 1, . . . , k, if there is a formula A = D1 ⇒ . . . ⇒ Dk ⇒ q ∈ Sj, for some j, and integers j < j1 < . . . < jk = n.

EXAMPLE 4.49. We show a derivation of F = ((b ⇒ a) ⇒ a) ⇒ b ⇒ a, which is equivalent (in K) to Löb's axiom: 2(2A → A) → 2A.

`? ((b ⇒ a) ⇒ a) ⇒ b ⇒ a
((b ⇒ a) ⇒ a, F) `? b ⇒ a
((b ⇒ a) ⇒ a, F), (b, b ⇒ a) `? a (1)
((b ⇒ a) ⇒ a, F), (b, b ⇒ a) `? b ⇒ a
((b ⇒ a) ⇒ a, F), (b, b ⇒ a), (b, b ⇒ a) `? a (2)
((b ⇒ a) ⇒ a, F), (b, b ⇒ a), (b, b ⇒ a) `? b
success.

At step (1) a can match only with (b ⇒ a) ⇒ a; at step (2) a can match with b ⇒ a in the second pair from the left.

6 Our modified implication rule corresponds closely to the rule for necessity in G in the tableau formulation: if ¬2A is in a branch then create a new world with ¬A and 2A [Fitting, 1983].
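The three rules of P(G) are simple enough to prototype directly. The sketch below is our own illustration, not from the book: formulas are encoded as strings (atoms) or nested tuples ('=>', A, B), a database is a Python list of pairs, and the search is depth-bounded because a naive reading of the rules can loop. The Löb-style formula of Example 4.49 serves as a test.

```python
from itertools import combinations

def body_head(f):
    """Decompose D1 => ... => Dk => q into ([D1, ..., Dk], q)."""
    body = []
    while isinstance(f, tuple):
        body.append(f[1])
        f = f[2]
    return body, f

def index_choices(p, k, n):
    """All tuples p < j1 < ... < jk = n of 1-based positions."""
    if n <= p:
        return
    for rest in combinations(range(p + 1, n), k - 1):
        yield rest + (n,)

def prove(db, goal, depth):
    """Depth-bounded search for db |-? goal in P(G); db is a list of pairs."""
    if depth == 0:
        return False
    if isinstance(goal, tuple):                  # (implication): add (A, A => B)
        return prove(db + [(goal[1], goal)], goal[2], depth - 1)
    if db and goal in db[-1]:                    # (success): q in the last pair
        return True
    n = len(db)
    for j in range(n):                           # (reduction)
        for clause in db[j]:
            body, head = body_head(clause)
            if head != goal or not body:
                continue
            for js in index_choices(j + 1, len(body), n):
                if all(prove(db[:ji], d, depth - 1)
                       for ji, d in zip(js, body)):
                    return True
    return False

F = ('=>', ('=>', ('=>', 'b', 'a'), 'a'), ('=>', 'b', 'a'))   # Example 4.49
assert prove([], F, 12)                           # the Loeb-style formula succeeds
assert not prove([], ('=>', 'b', 'a'), 12)        # b => a alone is not provable
```

Note how the second query fails for a structural reason, not by depth exhaustion: with a single pair in the database there is no position j strictly below j1 = n, so the reduction rule never fires.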
PROPOSITION 4.50. Let A1, . . . , An, B1, . . . , Bn, C be any formulas; if (A1, B1), . . . , (An, Bn) `? C succeeds, then the formula (A1 ∧ B1) ⇒ (A2 ∧ B2) ⇒ . . . ⇒ (An ∧ Bn) ⇒ C is valid in G. In particular, if ∅ `? A succeeds, then A is valid in G.

Proof. The proof is based on the following fact: for every A and B, |=G A ⇒ B ↔ (A ∧ (A ⇒ B)) ⇒ B. To see this, we have

|=G A ⇒ B ↔ 2(A → B)
|=G A ⇒ B ↔ 2(2(A → B) → (A → B))
|=G A ⇒ B ↔ 2((A ∧ 2(A → B)) → B)
|=G A ⇒ B ↔ (A ∧ (A ⇒ B)) ⇒ B.

Then the proof of the proposition proceeds by induction on the height of a successful derivation, using the previous fact when A is an implication.

In order to prove completeness we make use of the translation of G into K4 proposed by Balbiani and Herzig [1994], which we adapt to the language L(⇒); we can then rely on the completeness of our proof system for K4. The translation is the following:

p+ = p− = p, for any atom p,
(A ⇒ B)+ = (A− ∧ (A ⇒ B)) ⇒ B+,
(A ⇒ B)− = A+ ⇒ B−.

Notice that the translation (A ⇒ B)+ clearly resembles the way we deal with strict implication in G. In [Balbiani and Herzig, 1994] the following fact is proved.

PROPOSITION 4.51 ([Balbiani and Herzig, 1994]). For any formula A, A is a theorem of G iff A+ is a theorem of K4.

LEMMA 4.52. For all databases ∆, Π, sets S, and formulas E in L(⇒, ∧), and for all formulas A ∈ L(⇒), the following are true:

1. if ∆, S ∪ {A+}, Π `? E succeeds in P(K4), then also ∆, S ∪ {A}, Π `? E succeeds in P(K4);
2. if ∆, S ∪ {A}, Π `? E succeeds in P(K4), then also ∆, S ∪ {A−}, Π `? E succeeds in P(K4);
3. if ∆ `? A succeeds in P(K4), then also ∆ `? A+ succeeds in P(K4);
4. if ∆ `? A− succeeds in P(K4), then also ∆ `? A succeeds in P(K4).

Moreover, if the query in the hypothesis of each claim (1)–(4) has a successful derivation of height h, then the query in the thesis has a successful derivation of height no more than h.
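The Balbiani–Herzig translation can be written as a pair of mutually recursive functions. The tuple encoding below (atoms as strings, A ⇒ B as ('=>', A, B), conjunction as an 'and' node) is our own convention, chosen only for illustration:

```python
def pos(f):
    """A+ : (A => B)+ = (A- and (A => B)) => B+ ; p+ = p for atoms."""
    if isinstance(f, str):
        return f
    a, b = f[1], f[2]
    return ('=>', ('and', neg(a), f), pos(b))

def neg(f):
    """A- : (A => B)- = A+ => B- ; p- = p for atoms."""
    if isinstance(f, str):
        return f
    return ('=>', pos(f[1]), neg(f[2]))

pq = ('=>', 'p', 'q')
assert pos('p') == 'p' and neg('p') == 'p'
assert neg(pq) == ('=>', 'p', 'q')               # on atoms, (p => q)- = p+ => q-
assert pos(pq) == ('=>', ('and', 'p', pq), 'q')  # (p => q)+ = (p and (p => q)) => q
```

Observe that pos(pq) reproduces exactly the pair (A, A ⇒ B) that the implication rule of P(G) adds to the database, which is the point of Lemma 4.53 below.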
Proof. Claims (1)–(4) are proved by a simultaneous induction on pairs (c, h), lexicographically ordered, where c = cp(A) and h is the height of a derivation of the query in the hypothesis of each of (1)–(4). We omit the details.

Now we can easily prove the completeness of P(G). We show that the deduction rules for G compute, so to say, the run-time translation of a query into K4.

LEMMA 4.53. For any formulas A1, . . . , An, B1, . . . , Bn, C of the language L(⇒), the following holds: if {A1−, B1}, . . . , {An−, Bn} `? C+ succeeds in P(K4), then (A1, B1), . . . , (An, Bn) `? C succeeds in P(G). In particular, if `? A+ succeeds in P(K4), then `? A succeeds in P(G).

Proof. By induction on the height h of a succeeding derivation of the query in the hypothesis.

• h = 0: then C must be an atom, so that C+ = C, and either C = An− = An, or C = Bn; in both cases the result follows immediately.

• h > 0: we distinguish two cases: C = X ⇒ Y, or C is an atom. In the former case we have: if {A1−, B1}, . . . , {An−, Bn} `? C+ succeeds (in P(K4)) with height h, where C+ = {X−, X ⇒ Y} ⇒ Y+, then

{A1−, B1}, . . . , {An−, Bn}, {X−, X ⇒ Y} `? Y+ succeeds (in P(K4)) with height h − 1;

thus, by the inductive hypothesis, (A1, B1), . . . , (An, Bn), (X, X ⇒ Y) `? Y succeeds in P(G), and therefore also (A1, B1), . . . , (An, Bn) `? X ⇒ Y succeeds in P(G).

Suppose C is an atom q; we have two subcases: (a) q matches with the head of a formula Ai−, or (b) q matches with the head of a formula Bi. In case (a) let Ai = D1 ⇒ . . . ⇒ Dk ⇒ q; then Ai− is D1+ ⇒ . . . ⇒ Dk+ ⇒ q. Then there are j1, . . . , jk, with i < j1 < j2 < . . . < jk = n, such that for l = 1, . . . , k,

{A1−, B1}, . . . , {Ajl−, Bjl} `? Dl+

succeeds in P(K4) with height hl < h. By the inductive hypothesis, we may conclude that (A1, B1), . . . , (Ajl, Bjl) `? Dl succeeds in P(G), for l = 1, . . . , k, so that also (A1, B1), . . . , (An, Bn) `? q succeeds in P(G).

In case (b) let Bi = E1 ⇒ . . . ⇒ Ez ⇒ q. Then there are j1, . . . , jz, with i < j1 < j2 < . . . < jz = n, such that for l = 1, . . . , z,

{A1−, B1}, . . . , {Ajl−, Bjl} `? El

succeeds in P(K4) with height hl < h. By the previous lemma (Lemma 4.52), we have also that

{A1−, B1}, . . . , {Ajl−, Bjl} `? El+

succeeds in P(K4) with height h′l ≤ hl < h. We may apply the inductive hypothesis and conclude that (A1, B1), . . . , (Ajl, Bjl) `? El succeeds in P(G), for l = 1, . . . , z, so that also (A1, B1), . . . , (An, Bn) `? q succeeds in P(G).
THEOREM 4.54. If A is valid in G, then `? A succeeds in P(G).

Proof. Let A = X1 ⇒ X2 ⇒ . . . ⇒ Xn ⇒ q. We have A+ = {X1−, F1} ⇒ {X2−, F2} ⇒ . . . ⇒ {Xn−, Fn} ⇒ q, where Fn = Xn ⇒ q and Fi = Xi ⇒ Fi+1. If A is a theorem of G, then A+ is a theorem of K4. Thus, `? A+ succeeds in P(K4). From Lemma 4.53 we finally get that `? A succeeds in the proof system P(G).

9 EXTENSION TO HORN MODAL LOGICS

The goal-directed proof procedure can easily be generalized to a broader class of logics. We call Horn modal logics the class of modal logics which are semantically characterized by Kripke models in which the accessibility relation is definable by means of Horn conditions. This class has been studied in [Basin et al., 1997a; Viganò, 1999]. All previous examples, with the exception of Gödel logic G, fall under this class.

To show how to obtain this generalization, we have to change the presentation of the proof procedure a bit. In the presentation we have given, a database is a set of labelled formulas together with a set of links (pairs of labels) α. For each specific system, we have introduced predicates ASα which specify closure conditions (reflexive, transitive, etc.) of the relation α. These predicates are external to the language, and we have supposed to have an external mechanism which allows us to prove statements of the form ASα(x, y). As in [Basin et al., 1997a], we can instead define a database as a pair ∆ = ⟨∆F, ∆R⟩, where ∆F is a set of labelled formulas as before and ∆R is a set of Horn formulas in the language L(A, R). A Horn formula has the form

∀(R(t1, t1′) ∧ . . . ∧ R(tn, tn′) → R(t, t′))
where each ti is either a variable Xi or a constant xi ∈ A; the notation ∀ denotes the universal closure, which we will omit henceforth.7 The idea is clearly that the formulas of ∆R define the accessibility relation. By ∆R ` R(t1, t2) we denote standard provability in classical logic for Horn clauses. We rephrase the implication and reduction rules as follows.

• (implication) From ⟨∆F, ∆R⟩ `? x : A ⇒ B, H, step to ⟨∆F ∪ {y : A}, ∆R ∪ {R(x, y)}⟩ `? y : B, H, where y ∈ A and y ∉ Lab(∆) ∪ Lab(H).

• (reduction) If y : C ∈ ∆F, with C = B1 ⇒ B2 ⇒ . . . ⇒ Bk ⇒ q, with q atomic, then from ⟨∆F, ∆R⟩ `? x : q, H step to

⟨∆F, ∆R⟩ `? u1 : B1, H ∪ {(x, q)}, . . . , ⟨∆F, ∆R⟩ `? uk : Bk, H ∪ {(x, q)},

for some u0, . . . , uk ∈ Lab(∆), with u0 = y, uk = x, such that for i = 0, . . . , k − 1, ∆R ` R(ui, ui+1) holds.

The soundness and completeness results can easily be extended to the class of Horn modal logics. Completeness relies on the cut-admissibility property, which holds also for this generalization. In this regard, given ∆ = ⟨∆F, ∆R⟩, let us define

A∆R = {(x, y) | x, y ∈ A ∧ ∆R ` R(x, y)}.

It is almost trivial to see that A∆R satisfies the conditions (i)–(iii) of Theorem 4.10 reformulated in this new setting, i.e.

(i) A∆R[u/v] = (A∆R)[u/v], where (A∆R)[u/v] is the image of A∆R under the mapping φ(u) = v and φ(x) = x, for x ≠ u;
(ii) if ∆R ⊆ ΓR then A∆R ⊆ AΓR;
(iii) if (x, y) ∈ A∆R, then A∆R∪{R(x,y)} = A∆R.

7 We can introduce a further distinction between an extensional part of ∆R, corresponding to the old α, and an intensional part containing the Horn formulas; the former varies for each database, whereas the latter is fixed for each modal logic.
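Deciding ∆R ` R(x, y) over a finite label set is a standard least-fixpoint (forward-chaining) computation. The sketch below is our own illustration, with rules given as (body, head) pairs of variable pairs; the transitivity condition of K4 and the euclidean condition of K5 are used as tests (compare the accessibility checks in Example 4.47).

```python
from itertools import product

def horn_closure(labels, facts, rules):
    """Least set of pairs containing `facts` and closed under all ground
    instances (over `labels`) of the given Horn rules."""
    rel = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            variables = sorted({v for pair in body + [head] for v in pair})
            for values in product(labels, repeat=len(variables)):
                env = dict(zip(variables, values))
                if all((env[a], env[b]) in rel for a, b in body):
                    h = (env[head[0]], env[head[1]])
                    if h not in rel:
                        rel.add(h)
                        changed = True
    return rel

trans = ([('X', 'Y'), ('Y', 'Z')], ('X', 'Z'))   # K4: transitivity
eucl = ([('X', 'Y'), ('X', 'Z')], ('Y', 'Z'))    # K5: euclideanness

k4 = horn_closure({'x', 'y', 'z'}, {('x', 'y'), ('y', 'z')}, [trans])
assert ('x', 'z') in k4

k5 = horn_closure({'x0', 'y', 'z'}, {('x0', 'y'), ('x0', 'z')}, [eucl])
assert ('y', 'y') in k5 and ('y', 'z') in k5     # as used in Example 4.47
```

Here `facts` plays the role of the extensional part of ∆R (the links created by the implication rule), while `rules` is the fixed intensional part characterizing the modal system; the Horn restriction is exactly what makes this naive fixpoint computation sound as a provability test.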
Notice that the above conditions hold even if ∆R is an arbitrary set of first-order formulas containing the relational symbol R. The reason why the restriction to Horn formulas does matter is that A∆R as defined above determines one model of ∆R. More precisely, let us define a first-order structure M∆R for a language which includes the relational symbol R (namely a Herbrand model), with M∆R(R) = A∆R; then we have M∆R |= ∆R. This means that the relation A∆R satisfies the properties of each modal system (the properties are expressed by the non-atomic, non-closed Horn formulas in ∆R). This fact may not hold unless ∆R is equivalent to a set of Horn formulas. To see this, consider the property of linearity L ≡ ∀XY (R(X, Y) ∨ R(Y, X)). Let ∆R = {L, R(u1, u2), R(u1, u3)}; then ∆R ` R(u2, u3) ∨ R(u3, u2), but ∆R ⊬ R(u2, u3) and ∆R ⊬ R(u3, u2). If, for instance, in the database we have u2 : A ⇒ q, u3 : A, and R(u2, u3) holds, we can conclude u3 : q; otherwise we may not be able to conclude that. In other words, we must embody in the computation some form of case analysis to make it work. Notice that in this case A∆R = {(u1, u2), (u1, u3)}, and clearly M∆R ⊭ ∆R, as linearity fails between u2 and u3.

Let us go back to the completeness for Horn modal logics. If we inspect the completeness proof of Theorem 4.16, we only need to check one condition, namely that the properties of the accessibility relation are preserved under countable unions of chains of relations. Given a sequence of databases ∆i = ⟨∆F,i, ∆R,i⟩, i ∈ ω, such that ∆i ⊆ ∆i+1, we define

AR = ⋃i∈ω A∆R,i    and    ∆R = ⋃i∈ω ∆R,i,

and a structure MR with MR(R) = AR. Then we can easily prove that MR |= ∆R. That is to say, the limit (or union) of the A∆R,i satisfies the properties of each modal system. This fact is ensured when each ∆R,i is a set of Horn formulas.

10 COMPARISON WITH OTHER WORK

Many authors have developed analytic proof methods for modal logics (see the fundamental book by Fitting [1983], and Goré [1999] for a recent and comprehensive survey). We cannot even attempt to give a full account of the research in this area. We limit our consideration to some general remarks.

Goal-directed Methods

Giordano, Martelli and colleagues [Giordano et al., 1992; Giordano and Martelli, 1994; Baldoni et al., 1998] have developed goal-directed methods for fragments
of first-order (multi-)modal logics. Their work is motivated by several purposes: introducing scoping constructs (such as blocks and modules) in logic programming, and representing epistemic and inheritance reasoning. The spirit of their work is very close to the material presented in this chapter. In particular, in [Giordano and Martelli, 1994] a family of first-order logic programming languages is defined, based on the modal logic S4, with the aim of representing a variety of scoping mechanisms. If we restrict our consideration to the propositional level, their languages are strongly related to the one defined in Section 7.2 in the case the underlying logic is S4. The largest (propositional) fragment of S4 they consider is the language L4, which contains

G := > | 2q | G ∧ G | 2(D → G),
D := 2(G → 2q) | G → 2q | D ∧ D,

where q ranges over atoms, D over database formulas (D-formulas), and G over goal formulas (G-formulas). This language is very close to the one defined in Section 7.2, although neither one of the two is contained in the other. The (small) difference is that in L4 D-formulas of the form G → 2q are allowed, and they are not in the language of Section 7.2; on the other hand, D-formulas of the form 2(G1 → 2(G2 → q)) are allowed in the latter, but not in L4. Of course it would be easy to extend the language of Section 7.2 with this type of D-formulas (they are explicitly allowed in the general language of Section 7.3). The proof procedure they give for L4 (at the propositional level) is essentially the same as the unlabelled version of P(S4) we have seen in Section 7.2.

Abadi and Manna [1986] have defined an extension of PROLOG, called TEMPLOG, based on a fragment of first-order temporal logic. Their language contains the modalities 3, 2, and the temporal operator ◯ (next). They introduce a notion of temporal Horn clause whose constituents are atoms B possibly prefixed by an arbitrary sequence of next, i.e. B ≡ ◯^k A (with k ≥ 0).
The modality 2 is allowed in front of clauses (permanent clauses) and clause heads, whereas the modality 3 is allowed in front of goals. The restricted format of the rules allows one to define an efficient and simple goal-directed procedure without the need of any syntactic structuring or labelling. An alternative, although related, extension based on temporal logic has been studied in [Gabbay, 1987].

Fariñas [1986] describes MOLOG, a (multi-)modal extension of PROLOG. His proposal is more a general framework than a specific language, in the sense that the framework can support different modalities governed by different logics. The underlying idea is to extend classical resolution by special rules of the following pattern: let B, B′ be modal atoms (i.e. atomic formulas possibly prefixed by modal operators); then if G → B is a clause, B′ ∧ C1 ∧ . . . ∧ Ck is the current goal, and

(*) |=S B ≡ B′

holds, then the goal can be reduced to
G ∧ C1 ∧ . . . ∧ Ck. It is clear that the effectiveness of the method depends on how difficult it is to check (*); in the case of conventional logic programming the test (*) reduces to unification. The proposed framework is exemplified in [Fariñas, 1986] by defining a multimodal language based on S5 with necessity operators such as Knows(a). In this case one can define a simple matching predicate for the test in (*), and hence an effective resolution rule.

Explicit proof methods

In general, we can distinguish two paradigms in proof systems for modal logics. On the one hand we have implicit calculi, in which each proof configuration contains a set of formulas implicitly representing a single possible world; the modal rules encode the shifts of world by manipulating sets of formulas and the formulas therein. On the other hand we have explicit methods, in which the possible-world structure is explicitly represented using labels and relations among them; the rules can create new worlds, or move formulas around them.8 In between there are 'intermediate' proof methods which add some semantic structure to flat sequents, but do not explicitly represent a Kripke model. Our proof methods are closer to the explicit and to the 'intermediate' ones.

The use of labels to represent worlds for modal logics is rather old and goes back to Kripke himself. In the seminal work [Fitting, 1983] formulas are labelled by strings of atomic labels (world prefixes), which represent paths of accessible worlds. The rules for modalities are the same for every system: for instance, if a branch contains σ : 2A, and σ′ is accessible from σ, then one can add σ′ : A to the same branch. For each system, there are some specific accessibility conditions on prefixes which constrain the propagation of modal formulas.
This approach has recently been improved by Massacci [1994]; in his strongly analytic tableau system, modal rules are local (or one-step) rules, and take care of propagating subformulas only to the neighbouring worlds. In other words, every move (for modal formulas) in Fitting's tableaux is split, in Massacci's tableaux, into a sequence of 'atomic' moves. The main advantage is that there is no need for complex accessibility (or closure) conditions on prefixes. Every system in a wide range (including the 15 basic modal logics) can be obtained by taking different combinations of a certain number of propagation rules for the modalities.

Basin, Matthews and Viganò have developed a proof theory for modal logics making use of labels and an explicit accessibility relation [Basin et al., 1997a; Basin et al., 1999; Viganò, 1999]. A related approach was presented in [Gabbay, 1996] and [Russo, 1996]. These authors have developed both sequent and natural deduction systems for several modal logics which are completely uniform. Databases are divided into a relational part ∆R and a logical part ∆F, as explained in Section 9. The class of modal logics they consider is larger than the one considered in this chapter (with the exception of G); they are able to develop proof systems for full modal logics, including the cases of seriality and convergency. In further work [Basin et al., 1997b] they have studied labelled sequent calculi to obtain space-complexity bounds for modal logics. The relation between our work and theirs might be established by giving sequent calculi for the strict implication fragment (as we exemplify for Masini's calculus in the next paragraph). Once we have defined labelled sequent calculi based on strict implication, one might wonder whether our goal-directed proofs correspond to uniform proofs (according to the definition by Miller [1991]) in these calculi. We think so, although we have not investigated this issue further.

8 This distinction is emphasized, for instance, in [Goré, 1999].

Intermediate methods

Masini [1992] develops a cut-free calculus for modal logic KD based on 2-dimensional sequents: each side of a sequent is a vertical succession of sequences of formulas (1-sequences); such a vertical succession is called a 2-sequence; finally, a sequent is a pair of two 2-sequences. We refer to [Masini, 1992] for further terminology. Modal rules act on different levels of the 2-sequences, whereas propositional rules act on formulas on the same level of the 2-sequences which constitute a sequent. Masini also presents an intuitionistic version of his calculus (for logic KD). There is some connection with our systems. If we restrict our consideration to K(⇒, ∧) (which is the same as KD on the same fragment), an unlabelled query of the form S1, . . . , Sn `? A corresponds to an (intuitionistic) 2-sequent of the form

S1
. . .
Sn `n−1 A.
Namely, one can develop a 2-sequent calculus for K(⇒, ∧), containing, for strict implication, the rules of Figure 4.6.9 The axioms and the rules for conjunction are as in Masini's formulation. It is not difficult to prove that (a) the above calculus is sound, and (b) every successful derivation of a query S1, . . . , Sn `? A in our proof system for K(⇒, ∧) can be mapped into a (cut-free) derivation of

S1
. . .
Sn ` A

in the above calculus. Thus, the calculus is also complete. Although Masini has not developed calculi for other modal logics, nor for the strict implication language, we conjecture that the above considerations (the 2-sequent calculus and the mapping with our proof system) can be extended to some other modal systems.

Figure 4.6. 2-Sequent calculus for K-strict implication (the right and left introduction rules for ⇒; the two-dimensional layout of the rules is not reproduced in this copy).

9 The calculus of Figure 4.6 makes another simplification w.r.t. the original formulation: the only formula in the consequent is always at the same level as the maximum level of the 2-sequence in the antecedent, so we can omit the level index k in the consequent.

Another approach which can be considered as part of the 'intermediate' methodology is based on Belnap's display logic. Wansing [1994] develops sequent calculi in the framework of display logic for the basic 15 systems. Sequents are pairs of structures generated from formulas by means of some operators, namely I, ◦, •, and ∗. I represents the empty structure, i.e. truth in antecedent and falsity in consequent position. The operator ◦ is the polyvalent comma of sequents. The unary operator • in antecedent position has the meaning of 'possible in the past', whereas in consequent position it has the meaning of 'necessary'. The operator ∗ represents negation and allows one to move a structure between the antecedent and the consequent. This framework is very general and uniform and, in particular, the rules for modalities are the same for all systems:

(` 2) •X ` A ⇒ X ` 2A
(2 `) A ` Y ⇒ 2A ` Y
(` 3) X ` A ⇒ (•(X∗))∗ ` 3A
(3 `) (•(A∗))∗ ` Y ⇒ 3A ` Y,
where X, Y are structures and A is a formula. These rules define the modalities no matter what the underlying propositional logic is. Thus, for the minimal system K, a certain number of structural rules have to be postulated, as the base is classical logic. Other modal systems are obtained by postulating additional structural rules; for instance, KT and K4 are obtained by adding, respectively:

(KT) X ` •Y ⇒ X ` Y
(K4) X ` •Y ⇒ X ` • • Y.

Display calculi are, in some sense, on the opposite side of our goal-directed procedures: in these calculi all deductive steps are decomposed and made explicit in the form of structural rules, whereas in the goal-directed procedures there are no independent structural rules, as they are incorporated in the logical rules (i.e. for the connectives) and in the database structure.
Natural Deduction

Cerrato [1994] gives a Fitch-style natural deduction formulation of the 15 basic modal systems, where strict implication is the main connective. The central notion is that of a strict subproof, which is a proof of a formula whose main connective is strict implication. Strict implication represents the deducibility link between the hypothesis and the conclusion of a strict subproof. The elimination rule for strict implication in the basic system K is as follows: if A ⇒ B occurs in the parent proof, one can import A → B (material implication) into any immediate strict subproof. Stronger systems are obtained by allowing categorical strict subproofs (system KT), and by adding suitable reiteration rules (for importing a formula from a proof into its strict subproofs) and contraposition rules (systems with D and B). For instance, in K4 one can reiterate any strict implication formula from a proof into its strict subproofs. In the case of K, to give an example, a proof of A1, . . . , An `? B in our calculus seems to correspond to a natural deduction proof of B in the n-times nested subproof with strict hypothesis An (the nested strict subproof at level i has hypothesis Ai). A more precise mapping is still to be investigated.
CHAPTER 5
SUBSTRUCTURAL LOGICS
1 INTRODUCTION

In this chapter we consider substructural implicational logics. As far as the implicational fragment is concerned, these logics put some restrictions on the use of formulas in deductions. The restrictions may require that every formula of the database must be used, or that no formula can be used more than once, or that formulas must be used according to a given ordering of the database. The denomination substructural logics is explained by the fact that these systems restrict (some of) the structural rules of deduction, that is to say, the contraction, weakening and permutation rules, which are allowed by both classical and intuitionistic provability. Substructural logics have received increasing interest in the computer science community because of their potential applications in a number of different areas, such as natural language processing, database update, logic programming, type-theoretic analysis by means of the so-called Curry–Howard isomorphism and, more generally, the categorical interpretation of logics. We refer to [Došen, 1993; Ono, 1998; Ono, 1993; Routley et al., 1982] for a survey. Recently, Routley–Meyer semantics for substructural logics have been re-interpreted as modelling agent interaction [Slaney and Meyer, 1997].

We give a brief survey of the implicational systems we consider in this chapter. The background motivation of relevant logics is the attempt to formalize an intuitive requirement on logical deductions: in any deduction the hypotheses must be relevant to the conclusion. Relevant means that the hypotheses must actually be used to get the conclusion. Thus the first intuition behind relevant logics is to reject the weakening rule

Γ ` A
Γ, B ` A

since B may have nothing to do with the proof of A from Γ. In the object language this rule is reflected by the theorem (of classical, and likewise intuitionistic, logic)

(Irrelevance) A → (B → A).

The implicational fragment of the relevant logic R can be axiomatized by dropping Irrelevance from the axiomatization of intuitionistic implication. A sequent calculus for the implicational fragment of R can be obtained from the sequent formulation of (the implicational fragment of) intuitionistic logic by restricting the identity axiom to single formulas A ` A, and by dropping the above weakening
The implicational fragment of the relevant logic R can be axiomatized by dropping Irrelevance from the axiomatization of intuitionistic implication. A sequent calculus for the implicational fragment of R can be obtained from the sequent formulation of (the implicational fragment of) intuitionistic logic by restricting the identity axiom to single formulas A ` A, and by dropping the above weakening
rule. Whereas the system R takes into account the basic concern about the use of formulas, other systems such as E and T put further restrictions on the order in which formulas can be used in a deduction. These restrictions can be expressed in a sequent formulation as restrictions on the rule of permutation (see below), although in a less natural way.1 The system E combines relevance and necessity. The implication of E can be read at the same time as relevant implication and strict implication. If we want to interpret necessity within the implicational language, we can try to define 2A =def (A → A) → A. With this interpretation the logic R cannot be interpreted as a logic of necessity, since in R we have both A → 2A and 2A → A; thus the modality collapses. The logic E rejects the thesis A → 2A, avoiding the collapse of the modality; the implication → of E can be read as strict implication, as in E we have (A → B) → 2(A → B) and 2(A → B) → (A → B). The interpretation of the modality is that of S4, as in E we have:

    2(A → B) → (2A → 2B), 2A → A, 2A → 22A, and if ` A then ` 2A.

In the system T we cut the relationship between 2A and A by rejecting the thesis 2A → A as well. But the intuition behind T stems from a concern about the use of the two hypotheses in an inference by Modus Ponens: the restriction is that the minor premise A must not be derived 'before' the ticket A → B. This is clarified by the Fitch-style natural deduction formulation of T, for which we refer to [Anderson and Belnap, 1975]; in terms of natural deduction, the requirement on Modus Ponens says that the minor premise must be derived either in the same context as the ticket (that is, from the same assumptions) or in a nested subproof occurring in the proof of the ticket. None of the systems mentioned so far puts any bound on the number of times a formula can be used within a deduction; all that matters is that it is used at least once.
On the contrary, a few of the systems we consider pay attention to the number of times a formula can be used in a deduction. Namely, in some logics, such as

1 A sequent formulation of most relevant logics which does not make use of any additional machinery is the so-called 'merge' formulation presented in [Anderson and Belnap, 1975]. On the other hand, many authors [Anderson et al., 1992; Došen, 1988; Goré, 1998] have developed sequent calculi for substructural logics by putting additional structure into sequents along the lines of Belnap's Display Logic; see Section 9.
BCK and L, a formula cannot be used more than once. From the point of view of a sequent formulation, this constraint on the use of a formula can be expressed by removing the rule of contraction

    Γ, A, A, ∆ ` B
    ──────────────
    Γ, A, ∆ ` B

from the standard sequent system for intuitionistic logic. In this context, formulas may be thought of as representing resources which are consumed in the course of a logical deduction. This motivates an alternative denomination of substructural logics as bounded-resource logics [Gabbay, 1996]. In more detail, BCK is the logic which results from intuitionistic logic by dropping contraction. L rejects both weakening and contraction. The implicational logic L is the implicational fragment of linear logic [Girard, 1987].2 We will also consider contractionless versions of E and T, called E-W and T-W respectively. The weakest system we consider is FL,3 which is related to the right-implicational fragment of the Lambek calculus, proposed by Lambek in [1958] as a calculus of syntactic categories for natural language. This system rejects all the structural rules: in addition to weakening and contraction, it rejects the law of permutation

    Γ, A, B, ∆ ` C
    ──────────────
    Γ, B, A, ∆ ` C

The last system we consider is RM0. This system is an extension of R obtained by adding the so-called Mingle rule

    Γ ` A    ∆ ` A
    ──────────────
    Γ, ∆ ` A

to the sequent calculus for the implicational fragment of R. We will mainly concentrate on the implicational fragment of the systems mentioned. In Section 6, we will extend the proof systems to a fragment similar to Harrop formulas. For the fragment considered, all the logics studied in this chapter are subsystems of intuitionistic logic. However, this is no longer true for the fragment comprising an involutive negation, which can be added (and has been added) to each system. In this chapter we do not consider the treatment of negation; we refer the reader to [Anderson and Belnap, 1975; Anderson et al., 1992] for an extensive discussion. In Figure 5.1 we show the inclusion relations among the systems we consider in this chapter.

2 Linear logic is actually a much more articulated logic. It has an intuitionistic as well as a classical version. In linear logic the weakening and contraction rules are re-introduced in a controlled way by special operators called exponentials. What we call here L is also commonly known as BCI logic.

3 The denomination of the system is taken from [Ono, 1998; Ono, 1993].
[Figure: Hasse diagram of the inclusion lattice, with I (intuitionistic implication) at the top and FL at the bottom; the intermediate systems are RM0, BCK, R, L, E, E-W, T, and T-W.]

Figure 5.1. Lattice of Substructural Logics.
As we have seen in the previous chapters, we can control the use of formulas by labelling data and putting constraints on the labels. In this specific context, by labelling data we are able to record whether they have been used or not, and to express the additional conditions needed for each specific system. The use of labels to deal with substructural logics is not a novelty: it has been used, for instance, by Prawitz [1965] and by Anderson and Belnap [1975] to develop a natural deduction formulation of most relevant logics. Our proof systems are similar to their natural deduction systems in this respect: we do not have explicit structural rules. The structural rules are internalized as restrictions in the logical rules.
2 PROOF SYSTEMS

In this section we develop proof methods for the implicational logics R, BCK, E, T, E-W, T-W, FL.4

DEFINITION 5.1. Let us fix a denumerable alphabet A = {x1, . . . , xi, . . .} of labels. We assume that labels are totally ordered as shown in the enumeration; v0 is the first label. A database is a finite set of labelled formulas ∆ = {x1 : A1, . . . , xn : An}. We assume that

4 The names R, E, T, BCK, etc. in this chapter refer mainly to the implicational fragments of the logical systems known in the literature [Anderson and Belnap, 1975] under the corresponding names. The implicational fragments are usually denoted with the subscript →. Thus, what we call R is denoted in the literature by R→, and so forth; since we are mainly concerned with the implicational systems, we have preferred to minimize the notation, stating explicitly when we make an exception to this convention.
if x : A ∈ ∆ and x : B ∈ ∆, then A = B.5 We use the notation Lab(E) for the set of labels occurring in an expression E, and we finally assume that v0 ∉ Lab(∆). Label v0 will be used for queries from the empty database.

DEFINITION 5.2. A query Q is an expression of the form

    ∆, δ `? x : G

where ∆ is a database, δ is a finite set of labels not containing v0 (moreover, if x ≠ v0 then x ∈ Lab(∆)), and G is a formula. A query from the empty database has the form `? v0 : G. Let max(δ) denote the maximum label in δ according to the enumeration of the labels; by convention, we stipulate that if δ = ∅, then max(δ) = v0. The set of labels δ may be thought of as denoting the set of resources that are available to prove the goal. The label x in front of the goal has a double role: as a 'position' in the database from which the goal is asked, and as an available resource. The rules for success and reduction are parametrized by conditions SuccS and RedS that will be defined below.

• (success) ∆, δ `? x : q succeeds if x : q ∈ ∆ and SuccS(δ, x).

• (implication) From ∆, δ `? x : C → G we step to

    ∆ ∪ {y : C}, δ ∪ {y} `? y : G,

where y > max(Lab(∆)) (hence y ∉ Lab(∆)).

• (reduction) From ∆, δ `? x : q, if there is some z : C ∈ ∆ with C = A1 → . . . → Ak → q, and there are δi and xi for i = 0, . . . , k such that:

1. δ0 = {z}, x0 = z,
2. ⋃_{i=0}^{k} δi = δ,
3. RedS(δ0, . . . , δk, x0, . . . , xk; x),

then for i = 1, . . . , k we step to ∆, δi `? xi : Ai.

5 This restriction will be lifted in Section 6, where conjunction is introduced into the language.

The conditions for success are either (s1) or (s2), according to the system:

(s1) SuccS(δ, x) ≡ x ∈ δ;
(s2) SuccS(δ, x) ≡ δ = {x}.

The conditions RedS are obtained as combinations of the following clauses:

(r0) xk = x;
(r1) for i, j = 0, . . . , k with i ≠ j, δi ∩ δj = ∅;
(r2) for i = 1, . . . , k, xi−1 ≤ xi and max(δi) ≤ xi;
(r3) for i = 1, . . . , k, xi−1 ≤ xi and max(δi) = xi;
(r4) for i = 1, . . . , k, xi−1 < xi, max(δi−1) = xi−1 < min(δi), and max(δk) = xk.

The conditions RedS are then defined according to Table 5.1.

Table 5.1. Restrictions on reduction and success.

    Condition   FL    T-W   T     E-W   E     L     R     BCK
    (r0)        *     *     *     *     *
    (r1)              *           *           *           *
    (r2)                          *     *
    (r3)              *     *
    (r4)        *
    (Success)   (s2)  (s2)  (s2)  (s2)  (s2)  (s2)  (s2)  (s1)
Notice that (r4) ⇒ (r3) ⇒ (r2), and (r4) ⇒ (r1). We give a quick explanation of the conditions SuccS and RedS. Remember that δ is the set of resources which must/can be used. For the success rule, in all cases but BCK, we can succeed if x : q is in the database, x is the only resource left, and q is asked from position x; in the case of BCK, x must be among the available resources, but we do not require that it is the only one left.
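Read as data, the per-system choices in the table admit a compact encoding. The following Python sketch is our own (the names RED and SUCCESS are assumptions, not the book's notation); for each system it records only the minimal conditions, omitting those already implied via (r4) ⇒ (r3) ⇒ (r2) and (r4) ⇒ (r1):

```python
# Minimal reduction restrictions per system (our reading of Table 5.1);
# conditions implied by (r4) => (r3) => (r2) and (r4) => (r1) are omitted.
RED = {
    "FL":  {"r0", "r4"},
    "T-W": {"r0", "r1", "r3"},
    "T":   {"r0", "r3"},
    "E-W": {"r0", "r1", "r2"},
    "E":   {"r0", "r2"},
    "L":   {"r1"},
    "R":   set(),
    "BCK": {"r1"},
}

# Success condition: (s1) for BCK, (s2) for every other system.
SUCCESS = {s: ("s1" if s == "BCK" else "s2") for s in RED}
```

Such a table-driven representation makes it easy to implement all eight systems with a single parametrized proof procedure, as the text's uniform presentation suggests.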
The conditions for the reduction rule can be explained intuitively as follows: the resources δ are split into several δi, for i = 1, . . . , k, and each part δi must be used in a derivation of the subgoal Ai. In the case of logics without contraction we cannot use a resource twice; therefore, by restriction (r1), the δi must be disjoint and z, the label of the formula we are using in the reduction step, is no longer available. Restriction (r2) imposes that successive subgoals are to be proved from successive positions in the database: only positions y ≥ x are 'accessible' from x; moreover, each xi must be accessible from the resources in δi. Notice that the last subgoal Ak must be proved from x, the position from which the atomic goal q is asked. Restriction (r3) is similar to (r2), but it further requires that the position xi be among the available resources δi. Restriction (r4) forces the goals Ai to be proved by using successive disjoint segments δi of δ; moreover z, which labels the formula used in the reduction step, must be the first (or least) resource among the available ones. It is not difficult to see that intuitionistic (implicational) logic is obtained by considering success condition (s1) and no other constraint. More interestingly, we can see that S4-strict implication is given by considering success condition (s1) and restrictions (r0) and (r2) on reduction. We leave it to the reader to prove that the above formulation coincides with the database-as-list formulation of S4 we have seen in the previous chapter. We can therefore consider S4 as a substructural logic obtained by imposing a restriction on the weakening and permutation rules. On the other hand, the relation between S4 and E should be apparent: the only difference is the condition on the success rule, which controls the weakening restriction. A few examples of derivations in each system are contained in the proof of the completeness Theorem 5.22.
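To make the procedural reading concrete, here is a small Python sketch (ours, not from the book) of the computation for the system L: success condition (s2) and the disjointness restriction (r1), with goal positions omitted, a simplification legitimate for L by Proposition 5.7 below. The representation of formulas and the function names are our own assumptions:

```python
from itertools import product

def head_and_args(f):
    # A formula is an atom (a string) or an implication ("->", A, B).
    # A1 -> ... -> Ak -> q is unfolded into head atom q and args [A1, ..., Ak].
    args = []
    while isinstance(f, tuple):
        args.append(f[1])
        f = f[2]
    return f, args

def prove_L(db, delta, goal, fresh=[0]):
    """Goal-directed provability for implicational linear logic L:
    success needs delta = {x} with x labelling the goal atom (s2),
    reduction splits delta - {z} into disjoint parts (r1)."""
    if isinstance(goal, tuple):                       # (implication rule)
        fresh[0] += 1
        y = ("y", fresh[0])                           # fresh label for C
        return prove_L({**db, y: goal[1]}, delta | {y}, goal[2], fresh)
    # goal is an atom q
    if len(delta) == 1 and db[next(iter(delta))] == goal:
        return True                                   # (success), (s2)
    for z in delta:                                   # (reduction)
        head, args = head_and_args(db[z])
        if head != goal or not args:
            continue
        rest = sorted(delta - {z}, key=str)           # z itself is consumed
        k = len(args)
        for assign in product(range(k), repeat=len(rest)):
            parts = [set() for _ in range(k)]         # disjoint split, (r1)
            for lab, i in zip(rest, assign):
                parts[i].add(lab)
            if all(prove_L(db, parts[i], args[i], fresh) for i in range(k)):
                return True
    return False
```

For instance, `prove_L({1: ("->", "p", "q"), 2: "p"}, {1, 2}, "q")` succeeds, while the query corresponding to A → (B → A) fails, since weakening is not available.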
We introduce some notational conventions we will use later on. We use the notation

    Q ⇒Red_z Q1, . . . , Qn

to denote that query Q may generate the queries Q1, . . . , Qn by reduction with respect to some z : D1 → . . . → Dn → q ∈ Γ. Similarly, let Q = Γ, γ `? x : A → B; if from Q we step to Q′ by the implication rule, where Q′ = Γ ∪ {y : A}, γ ∪ {y} `? y : B for some y > max(Lab(Γ)), we write

    Q ⇒Imp_y Q′.
Given a set of labels γ and a label u, let us define:

    pred(γ, u) = max{z ∈ γ | z < u}, and pred(γ, u) = ↑ if there is no z ∈ γ with z < u;
    succ(γ, u) = min{z ∈ γ | z > u}, and succ(γ, u) = ↑ if there is no z ∈ γ with z > u.

By convention, we assume that every inequality in which pred(γ, u) (respectively succ(γ, u)) is one of the two terms trivially holds if pred(γ, u) = ↑ (succ(γ, u) = ↑); similarly, we assume that an inequality such as max{pred(γ, u), x} < y reduces to x < y whenever pred(γ, u) is undefined.

3 BASIC PROPERTIES
In this section we list some properties of the proof systems which allow some simplifications. The proofs are by induction on the height of successful derivations and are mostly omitted. Unless stated otherwise, the propositions hold for any system.

PROPOSITION 5.3. If Γ, γ `? x : B succeeds, u : A ∈ Γ, but u ∉ γ, then Γ − {u : A}, γ `? x : B succeeds.

By this proposition it is apparent that γ specifies usage. Resources which are not included in γ are not relevant to the computation, and the formulas they label can be omitted.

PROPOSITION 5.4. If Γ, γ `? x : B succeeds, then γ ⊆ Lab(Γ).

By the previous proposition, when we consider successful queries we can assume that γ ⊆ Lab(Γ), and hence max(γ) ≤ max(Lab(Γ)). The following proposition generalizes the success rule to the identity property.

PROPOSITION 5.5. If x : A ∈ Γ and SuccS(γ, x), then Γ, γ `? x : A succeeds.

Proof. By induction on the complexity of A. If cp(A) = 0, that is, A is an atom, the claim is just the success rule. Let A = B1 → . . . → Bn → q. A derivation of A will start with Γ, γ `? x : A and then step to

    Q = Γ ∪ {z1 : B1, . . . , zn : Bn}, γ ∪ {z1, . . . , zn} `? zn : q,

where max(Lab(Γ)) < z1 < . . . < zn. Let Γ′ = Γ ∪ {z1 : B1, . . . , zn : Bn} and γ′ = γ ∪ {z1, . . . , zn}. We define

    γ0 = {x} and z0 = x, γ1 = (γ − {x}) ∪ {z1}, γi = {zi} for i > 1.

We can easily see that SuccS(γi, zi). Hence, by the induction hypothesis, we have that Qi = Γ′, γi `? zi : Bi succeeds for i = 1, . . . , n. If we can apply reduction with respect to x : A in Q and step to the Qi, we are done. By definition z0 = x, γ0 = {x}, and ⋃_{i=0}^{n} γi = γ′. Moreover, we leave it to the reader to check that RedS(γ0, . . . , γn, z0, . . . , zn; zn) holds for any S, so that we can perform the above reduction step. This completes the proof.

PROPOSITION 5.6. If

1. Γ, γ `? x : B succeeds and Γ ⊆ ∆, then
2. ∆, γ `? x : B succeeds.

In the case of BCK, if γ ⊆ δ, also

3. ∆, δ `? x : B succeeds.

Moreover, for any successful derivation of 1. there is a corresponding derivation of 2. (viz. 3.) of no greater height.

PROPOSITION 5.7.

• For R, L, BCK: if Γ, γ `? x : B succeeds, then for every y ∈ γ, Γ, γ `? y : B succeeds.
• For R, L, BCK, E, E-W: if Γ, ∅ `? x : B succeeds, then for every y, Γ, ∅ `? y : B succeeds.
• For E, E-W: if Γ, γ `? x : B succeeds and x ∉ γ, then for every y ≥ max(γ), Γ, γ `? y : B succeeds.

Moreover, the height of a successful derivation does not increase. Because of this proposition, in the case of R, L, and BCK we could omit the label x in front of the goal. However, we do not make use of this and other possible simplifications at this stage, since we want to carry on a uniform development.
DEFINITION 5.8. We say that a query Γ, γ `? x : A is S-regular, where S is one of the logics under consideration, if the following holds:

• for R, L, BCK: if γ ≠ ∅, then x ∈ γ;
• for E, E-W: max(γ) ≤ x;
• for T, T-W, FL: max(γ) = x (thus if γ = ∅ then x = v0).

An S-regular derivation is a derivation in which every query is S-regular. Notice that a query from the empty database is S-regular for every system S. Without loss of generality, we can restrict our attention to regular queries and regular derivations.

PROPOSITION 5.9. Let Q be an S-regular query; if Q succeeds, then it succeeds by an S-regular derivation.

From now on we restrict our consideration to regular queries and regular derivations. The main reason is technical: this restriction helps in the proof of cut-admissibility contained in the next section. We conclude this section by showing that successful queries are closed under formula substitution. The property of closure under substitution is not to be taken for granted, as the deduction rules pay attention to the form of the formulas, that is, whether they are atomic or not.

THEOREM 5.10. If Q = Γ, γ `? x : A succeeds, then Q′ = Γ[q/B], γ `? x : A[q/B] succeeds as well.

Proof. By induction on the height h of a successful derivation of the query Q. If h = 0, then the query immediately succeeds. If A ≠ q, the claim is trivial; if A = q, then x : B ∈ Γ[q/B], and the claim follows by Proposition 5.5. Let h > 0. If A = C → D, then we apply the implication rule and step to Γ ∪ {z : C}, γ ∪ {z} `? z : D for a suitable z. By the induction hypothesis, (Γ ∪ {z : C})[q/B], γ ∪ {z} `? z : D[q/B] succeeds, and we can conclude by applying the implication rule to Q′, that is, to Γ[q/B], γ `? x : A[q/B], as A[q/B] is C[q/B] → D[q/B] and (Γ ∪ {z : C})[q/B] is Γ[q/B] ∪ {z : C[q/B]}. Let A be an atom r and let the first rule applied be reduction with respect to z : C ∈ Γ, where C = D1 → . . . → Dn → r, so that we step to

    Qi = Γ, γi `? xi : Di
for some suitable γi and xi such that γ = ⋃_i γi and RedS(γ0, . . . , γn, x0, . . . , xn; x) holds. If r ≠ q, we simply apply the induction hypothesis to the Qi and get that Γ[q/B], γi `? xi : Di[q/B] succeeds; we can then apply reduction to Q′ w.r.t. z : C[q/B] and succeed. If r = q, letting B = B1 → . . . → Bm → p, we have that C[q/B] is D1[q/B] → . . . → Dn[q/B] → B1 → . . . → Bm → p. Then from Q′ we step, by the implication rule, to

    Q′′ = Γ[q/B] ∪ {z1 : B1, . . . , zm : Bm}, γ ∪ {z1, . . . , zm} `? zm : p,

where max(Lab(Γ[q/B])) < z1 < . . . < zm. Let Γ′ = Γ[q/B] ∪ {z1 : B1, . . . , zm : Bm} and γ′ = γ ∪ {z1, . . . , zm}. Let γn+j = {zj} and xn+j = zj for j = 1, . . . , m. It is easy to see that γ′ = ⋃_{i=0}^{n+m} γi, and also that RedS(γ0, . . . , γn+m, x0, . . . , xn+m; zm) holds, by the hypothesis that RedS(γ0, . . . , γn, x0, . . . , xn; x) holds. Thus, from Q′′ we can step to Q′i = Γ′, γi `? xi : Di[q/B] for i = 1, . . . , n, and to Q′n+j = Γ′, γn+j `? xn+j : Bj for j = 1, . . . , m. Since the queries Qi succeed, by the induction hypothesis and Proposition 5.6 the queries Q′i succeed as well. On the other hand, SuccS(γn+j, xn+j) holds and xn+j : Bj ∈ Γ′. Thus by Proposition 5.5 all the Q′n+j succeed as well; we conclude that Q′′ succeeds and so does Q′.

4 ADMISSIBILITY OF CUT
In this section we prove the admissibility of cut for all the logics we have considered so far. We first have to deal with the notion of substitution of formulas and labels by databases. In this case the proof of the admissibility of cut is more complex than in the cases we have seen in the previous chapters. The reason is that the deduction rules are more constrained; in particular, we have to take into account the ordering of formulas in the databases involved in a cut inference step. Since we treat the implicational fragment in this section, we need the notion of compatibility with respect to substitution that we have already met in Chapter 4.

DEFINITION 5.11. Given two databases Γ, with u : A ∈ Γ, and ∆, we say that Γ and ∆ are compatible for substitution if, for every z ∈ Lab(Γ) ∩ Lab(∆), z : C ∈ Γ iff z : C ∈ ∆; whenever they are compatible for substitution, we let:
Γ[u : A/∆] = (Γ − {u : A}) ∪ ∆.

Given two sets of resources γ and δ and a label u, we define:

    γ[u/δ] = (γ − {u}) ∪ δ if u ∈ γ, and γ[u/δ] = γ otherwise.

By cut we mean the following inference rule: if

    Γ[u : A], γ `? x : B and ∆, δ `? y : A

succeed, then

    Γ[u : A/∆], γ[u/δ] `? x[u/y] : B

succeeds as well. However, unless we put some constraints on γ and δ, the above rule does not hold. To give a simple example, consider the case of L (linear implication). We have that

1. {x2 : p → q, x3 : p}, {x2, x3} `? x3 : q and
2. {x1 : (p → q) → p, x2 : p → q}, {x1, x2} `? x2 : p

both succeed, but

3. {x1 : (p → q) → p, x2 : p → q}, {x1, x2} `? x2 : q

fails. Here we want to replace x3 : p occurring in 1. by the database in 2. Since the databases share the resource x2, x2 : p → q must be used twice in a deduction of 3., and this is not allowed by the conditions for L-computation. In this case, the restriction we must impose is that the two resource sets γ and δ (in the two premises of the cut) are disjoint. For other logics, for instance those where the order matters, we must ensure that the substitution of δ in γ respects the ordering of the labels in γ. However, we can always fulfil these compatibility constraints by renaming the labels in a proper way. In the previous example, for instance, we can rename the labels in the first query and get

1′. {y2 : p → q, y3 : p}, {y2, y3} `? y3 : q;

by cut we now obtain

3′. {x1 : (p → q) → p, x2 : p → q, y2 : p → q}, {x1, x2, y2} `? x2 : q,

which succeeds. Hence we have two alternatives: either we accept the above formulation of the cut rule, but restrict its applicability to queries which satisfy specific conditions for each system; or we find a more general notion of cut that holds in every case. As we will see, we can define the more general notion of cut by incorporating the needed re-labelling. However, we will prove the admissibility of cut in the restricted form since, in some sense, it is more informative. For instance, from the
above premises 1. and 2., we can legitimately infer the conclusion 3. in R, T, E, and not only the weaker 3′.

We introduce some notation to simplify the following development. Given two queries Q and Q′, with Q = Γ[u : A], γ `? x : B and Q′ = ∆, δ `? y : A, no matter whether they are compatible or not, the result of cutting Q by Q′ on u : A is uniquely determined, and we denote it by the query Q′′ = CUT(Q, Q′, u).

DEFINITION 5.12 (Compatibility for Cut). Given two queries Q = Γ[u : A], γ `? x : B and Q′ = ∆, δ `? y : A, we say that Q and Q′ are compatible for cut on position/resource u in system S, denoted COMP^S(Q, Q′, u),6 if the following holds:

• Γ and ∆ are compatible for substitution;
• the following combination of the conditions (c1)–(c3) below holds:

(c1) γ ∩ δ = ∅;
(c2) pred(γ, u) < min(δ) and max(δ) < succ(γ, u);
(c3) pred(γ, u) ≤ y, and if u < x, then y ≤ min{succ(γ, u), x};

– for R: nothing;
– for L, BCK: (c1);
– for FL: (c2);
– for T, E: (c3);
– for T-W, E-W: (c1) and (c3).

Here is an explanation of the above conditions. Condition (c1) says that the resources of the two premises are disjoint; this condition must be assumed for systems without contraction. Condition (c2) is stronger and requires that δ, seen as an ordered sequence of labels, must fit in between the predecessor and the successor of the label u that we replace by cut; in other words, u can be replaced by the entire 'segment' δ without mixing γ and δ. Condition (c3) is a weakening of (c2), allowing the merging of γ and δ up to the successor of the cut-formula label u. Condition (c3) says exactly this when succ(γ, u) exists (whence succ(γ, u) ≤ x) and y = max(δ). It is stated in more general terms, since succ(γ, u) may not exist, and in the cases of E and E-W it may be that y ≥ max(δ). For instance, let x1 < x2 < . . . < x9 and let us consider two queries with γ = {x2, x4, x6, x8, x9},

6 As can be seen from the conditions below, compatibility is a relation on (sets of) labels which occur in a query; the formulas do not play any role. In Section 6 we will use the alternative notation COMP^S(Γ, γ, x, u; ∆, δ, y).
u = x6, x = x9, and δ = {x1, x3, x6, x7}, y = x7: these satisfy condition (c3). On the other hand, if δ = {x1, x3, x6, x9} and y = x9, the two queries do not satisfy condition (c3). It should also be clear that if y = max(δ), then condition (c2) implies both condition (c1) and condition (c3). The compatibility conditions preserve the regularity of the queries.

PROPOSITION 5.13. If Q and Q′ are S-regular and COMP^S(Q, Q′, u) holds, then the query Q′′ = CUT(Q, Q′, u) is S-regular.

Proof. We proceed by cases; let us first assume γ[u/δ] ≠ ∅.

• Case of R, L, BCK: we must show that x[u/y] ∈ γ[u/δ]. By regularity of Q and Q′, we have x ∈ γ and y ∈ δ. If x = u, then x[u/y] = y ∈ δ ⊆ γ[u/δ]. If x ≠ u, then x = x[u/y] ∈ γ − {u} ⊆ γ[u/δ].

• Case of E, E-W: we must show that max(γ[u/δ]) ≤ x[u/y]. We have the following cases:

1. Let γ[u/δ] = γ and x[u/y] = x; then the claim follows by the regularity of Q.
2. Let γ[u/δ] = (γ − {u}) ∪ δ and x[u/y] = x. We have that max(γ[u/δ]) = max(γ − {u}) or max(γ[u/δ]) = max(δ). In the former case we conclude by regularity of Q. In the latter case, since u ∈ γ and x ≠ u, by regularity of Q it must be that u < x, so that by COMP^S(Q, Q′, u) and by regularity of Q′ we have max(δ) ≤ y ≤ min{succ(γ, u), x} ≤ x.
3. Let γ[u/δ] = γ and x[u/y] = y, that is, x = u. We have u ∉ γ. It must be that pred(γ, u) ↓, otherwise we have max(γ) < u by regularity of Q, whence γ = ∅, against the hypothesis. Since pred(γ, u) ↓, we have max(γ) = pred(γ, u) ≤ y, by COMP^S(Q, Q′, u).
4. Let γ[u/δ] = (γ − {u}) ∪ δ and x[u/y] = y, that is, x = u. Then max(γ[u/δ]) = max(γ − {u}) or max(γ[u/δ]) = max(δ). In the former case, we have max(γ − {u}) = pred(γ, u) ≤ y, by COMP^S(Q, Q′, u). In the latter case we conclude by the regularity of Q′.

• For all the other systems we check in a similar way that max(γ[u/δ]) = x[u/y]. Details are left to the reader.

• For T, T-W, FL, we check that if γ[u/δ] = ∅, then x[u/y] = v0. If γ[u/δ] = ∅, then either γ = ∅, or γ = {u} and δ = ∅. In the former case, we have x = v0, so that it must be that v0 ≠ u, since v0 ∉ Lab(Γ), and the result follows. In the latter case, it must be that x = u, so that x[u/y] = y = v0, as δ = ∅.
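The two substitution operations used throughout this section are simple enough to state as code. The following Python sketch is ours (the function names are assumptions); it implements γ[u/δ] and x[u/y] exactly as defined above:

```python
def subst_resources(gamma, u, delta):
    """gamma[u/delta]: replace resource u in gamma by the whole set delta;
    if u is not in gamma, leave gamma unchanged."""
    return (gamma - {u}) | delta if u in gamma else set(gamma)

def subst_label(x, u, y):
    """x[u/y]: rename the position label u to y; other labels are unchanged."""
    return y if x == u else x
```

For instance, with γ = {1, 2, 3}, u = 2 and δ = {7, 8}, γ[u/δ] = {1, 3, 7, 8}, matching the case analysis in the proof of Proposition 5.13.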
The proof of the admissibility of cut relies on the lemmas below, which state properties of and relations among COMP^S, RedS, and SuccS. We give an intuitive explanation of the lemmas. Lemma 5.14 deals with the case when the cut formula is an atom and is used for immediate success. Lemmas 5.15 and 5.16 ensure the permutability of cut with the implication rule and with the reduction rule, respectively. Lemma 5.18 deals with the case when the cut formula is the principal formula of a reduction step. To explain the role of Lemmas 5.16 and 5.18, we illustrate the cut-elimination process in some detail in this last case; we do not mention the labels, but focus on the inference steps. Let us have 1. Γ, D → q `? q and
2. ∆ `? D → q.
From 1. we step to 3. Γ, D → q `? D. By cutting 2. and 3. we obtain 4. Γ, ∆ `? D succeeds. By 2. we get: 5. ∆, D `? q succeeds. Now we cut 5. and 4. on D and obtain: 6. Γ, ∆ `? q succeeds. Since, we have labels and constraints, we cannot freely perform cut, we must ensure that the compatibility constraints are satisfied. The meaning of the lemmas is that: by 5.16 if 1. and 2. are compatible for cut, then so are 3. and 2.; by 5.18 if 1. and 2. are compatible for cut, then so are 5. and 4. . The structure of the cut elimination process is fixed and it is exactly the same as in the previous chapters. Much of the effort is to ensure the right propagation of compatibility constraints. We assume that the queries are S-regular. LEMMA 5.14. If ∆, δ `? γ[u/δ] `? y : q succeeds.
y : q succeeds and SuccS (γ, u) holds, then ∆,
Proof. Easy and left to the reader.
LEMMA 5.15. If Q ⇒Imp_z Q1 and COMP^S(Q, Q′, u), then there exists a label z′ such that Q ⇒Imp_{z′} Q1[z/z′], COMP^S(Q1[z/z′], Q′, u), and CUT(Q, Q′, u) ⇒Imp_{z′} CUT(Q1[z/z′], Q′, u).

Proof. Let Q = Γ, γ `? x : C → D. Choose z′ such that max{max(γ), max(δ), x, u, y} < z′, and let Q1 = Γ ∪ {z′ : C}, γ ∪ {z′} `? z′ : D; we have that CUT(Q1, Q′, u) is the query
(Γ ∪ {z′ : C})[u : A/∆], (γ ∪ {z′})[u/δ] `? z′[u/y] : D.

By the choice of z′, z′ ≠ u and z′ ∉ γ ∪ δ, so the above query is the same as

    Γ[u : A/∆] ∪ {z′ : C}, γ[u/δ] ∪ {z′} `? z′ : D.

Thus, it is clear that CUT(Q, Q′, u) ⇒Imp_{z′} CUT(Q1, Q′, u). We still have to check that COMP^S(Q1, Q′, u). As far as condition (c1) is concerned, if γ ∩ δ = ∅, then (γ ∪ {z′}) ∩ δ = ∅ holds as well, since z′ ∉ δ. For condition (c2), the only non-trivial check is that y < succ(γ ∪ {z′}, u) when succ(γ ∪ {z′}, u) = z′, but this is ensured by the choice of z′ > y. Similarly, for condition (c3), the only non-trivial check is y ≤ min{succ(γ ∪ {z′}, u), z′}, which holds again as z′ > y.

LEMMA 5.16. Suppose Q ⇒Red_z Q1, . . . , Qn, where

    Q = Γ[u : A], γ `? x : q,
    Qi = Γ[u : A], γi `? xi : Di, and
    Q′ = ∆, δ `? y : A.

If COMP^S(Q, Q′, u), then

(a) COMP^S(Qi, Q′, u);
(b) z ≠ u implies CUT(Q, Q′, u) ⇒Red_z CUT(Q1, Q′, u), . . . , CUT(Qn, Q′, u).

Proof. We first show that we can assume that xi ∈ γ or xi = x. For all systems but E and E-W this is obvious: by regularity, xi ∈ γi ⊆ γ, unless γi = ∅ (and this is allowed in some systems). But if γi = ∅ is allowed (as it is in systems R, BCK), then whatever the choice of xi, by Proposition 5.7 the query succeeds with the same height; thus we may choose a proper xi ∈ γ, or let xi = x. In the case of E and E-W, if xi ∉ γi ⊆ γ, by Proposition 5.7 we have that for every w ≥ max(γi), Γ, γi `? w : Di succeeds, so that again we can choose a proper xi ∈ γ or take xi = x. We turn to prove (a), for each group of systems.

(Condition c1) We must check that γi ∩ δ = ∅. But this trivially follows from the fact that γ ∩ δ = ∅ and γi ⊆ γ.

(Condition c2) We must check that pred(γi, u) < min(δ) and max(δ) < succ(γi, u). Notice that there is only one i0 such that u ∈ γi0, and by (r4), if pred(γi0, u) ↓ then pred(γi0, u) = pred(γ, u), and the same holds for succ(γi0, u).
(Condition c3) We must check that
(*) pred(γi, u) ≤ y, and if u < xi, then y ≤ min{succ(γi, u), xi}.

Notice that since γi ⊆ γ, we have:

(1) if pred(γi, u) ↓, then pred(γ, u) ↓ and pred(γi, u) ≤ pred(γ, u);
(2) if succ(γi, u) ↓, then succ(γ, u) ↓ and succ(γi, u) ≥ succ(γ, u).

Then (*) easily follows from (1), (2), COMP^S(Q, Q′, u), and the regularity of Qi, which implies, by (2), succ(γi, u) ≤ xi.

We now turn to (b). We must show that if u ≠ z, then the following hold:

1. γ0[u/δ] = {z} and x0[u/y] = x0,
2. γ[u/δ] = ⋃_{i=0}^{n} γi[u/δ],
3. RedS(γ0[u/δ], . . . , γn[u/δ], x0[u/y], . . . , xn[u/y]; x[u/y]).

Condition 1. is obvious, by u ≠ z. For condition 2., we know that γ0 = {z}. Let u ∈ γ (otherwise the claim is trivial); then for some i it must be that u ∈ γi. Let J = {j = 1, . . . , n | u ∈ γj}, and let J^c = {1, . . . , n} − J. Then we have

    γ[u/δ] = (⋃_{i=0}^{n} γi)[u/δ] = (⋃_{i∈J} γi)[u/δ] ∪ (⋃_{j∈J^c} γj[u/δ]);

the result then easily follows from the fact that γj[u/δ] = γj for j ∈ J^c, and (⋃_{i∈J} γi)[u/δ] = ⋃_{i∈J} (γi[u/δ]). For condition 3., we have to check (r0)–(r4). We give the details of the cases (r2) and (r4); the proofs of the others are similar or straightforward.

(r2) max(γi[u/δ]) ≤ xi[u/y] follows by Proposition 5.13. We must prove xi−1[u/y] ≤ xi[u/y] for i = 1, . . . , n.

• Let xi−1[u/y] = xi−1 and xi[u/y] = y, that is, xi = u; then xi−1 < u. We know that xi−1 ∈ γ (it cannot be that xi−1 = x, for otherwise we would have x < u). Hence pred(γ, u) ↓, so that, by COMP^S(Q, Q′, u), we have xi−1 ≤ pred(γ, u) ≤ y.
• Let xi−1[u/y] = y and xi[u/y] = xi, that is, xi−1 = u. By part (a) we have COMP^S(Qi, Q′, u), so that, since u < xi, we have y ≤ min{succ(γi, u), xi} ≤ xi.

(r4) Only one γi and xi, say for i = i0, are affected by the substitution by δ and y (i.e. u ∈ γi for only one i), and since z ≠ u, i0 > 0. In the light of Proposition 5.13, we must prove only that
188
GOAL-DIRECTED PROOF THEORY
max(γi0−1) < min(γi0[u/δ]) and max(γi0[u/δ]) < min(γi0+1).

The first claim is non-trivial only in the case u = min(γi0), but in such a case, by COMPS(Q, Q′, u), we have max(γi0−1) = pred(γ, u) < min(δ) = min(γi0[u/δ]). Similarly, the second claim is non-trivial only in the case u = max(γi0), but in such a case, by COMPS(Q, Q′, u), we have max(γi0[u/δ]) = max(δ) < succ(γ, u) = min(γi0+1).
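The proofs in this section juggle pred, succ, and the substitutions γ[u/δ] and x[u/y] constantly. As a reading aid, here is a small Python sketch of these operations, reconstructed from the way they are used in the text; the function names and the use of None for undefinedness are our own conventions, not the book's.

```python
# Resource sets are finite sets of integer labels.

def pred(gamma, u):
    """pred(gamma, u): greatest label of gamma strictly below u (None if undefined)."""
    below = [z for z in gamma if z < u]
    return max(below) if below else None

def succ(gamma, u):
    """succ(gamma, u): least label of gamma strictly above u (None if undefined)."""
    above = [z for z in gamma if z > u]
    return min(above) if above else None

def subst_set(gamma, u, delta):
    """gamma[u/delta]: replace the label u by the whole set delta, if u occurs."""
    return (gamma - {u}) | set(delta) if u in gamma else set(gamma)

def subst_label(x, u, y):
    """x[u/y]: replace the label x by y when x = u."""
    return y if x == u else x
```

For instance, pred({1, 3, 5}, 3) = 1, while succ({1, 3, 5}, 5) is undefined.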
REMARK 5.17. Notice that the proof of xi−1[u/y] ≤ xi[u/y] for (r2) and (r3) does not depend on the assumption z ≠ u, that is, the claim holds even if u = z. This fact will be used in the proof of Lemma 5.18.

LEMMA 5.18. Let the following hold:

(i) Q = Γ[u : D1 → . . . → Dn → q], γ ⊢? x : q,
(ii) Q′ = ∆, δ ⊢? y : D1 → . . . → Dn → q,
(iii) Q ⇒Red_u Q1, . . . , Qn,
(iv) COMPS(Q, Q′, u),
(v) Q′i = CUT(Qi, Q′, u) for i = 1, . . . , n,
(vi) Q″0 = ∆ ∪ {z1 : D1, . . . , zn : Dn}, δ ∪ {z1, . . . , zn} ⊢? zn : q, where max{max(γ), max(δ), x, y, u} < z1 < z2 < . . . < zn,
(vii) Q″i = CUT(Q″i−1, Q′i, zi) for i = 1, . . . , n.

Then COMPS(Q″i−1, Q′i, zi) holds for i = 1, . . . , n.

Proof. Let A = D1 → . . . → Dn → q. We define

δi = δ ∪ {zi+1, . . . , zn} ∪ ⋃_{j=1}^{i} γj[u/δ], for i = 0, . . . , n

(so that δ0 = δ ∪ {z1, . . . , zn}). Thus we have, for i = 1, . . . , n,

Q′i = Γ[u : A/∆], γi[u/δ] ⊢? xi[u/y] : Di,

and, for i = 0, . . . , n,

Q″i = (Γ − {u : A}) ∪ ∆ ∪ {zi+1 : Di+1, . . . , zn : Dn}, δi ⊢? zn : q

(for the sake of uniformity, by Proposition 5.6, we can replace ∆ by (Γ − {u : A}) ∪ ∆ in Q″0). We must prove COMPS(Q″i−1, Q′i, zi) for i = 1, . . . , n, checking each condition involved in COMPS.
(Condition c1) For systems with (r1): we have u ∉ γi, hence for all i, γi[u/δ] = γi; since δ ∩ γi = ∅, by the choice of the zi we also have, for i = 0, . . . , n − 1,

(δ ∪ {zi+1, . . . , zn} ∪ ⋃_{j=1}^{i} γj[u/δ]) ∩ γi+1 = ∅.

(Condition c2) Observe that u = min(γ) and u ∉ γi, whence for all i, γi[u/δ] = γi. We must check that pred(δi, zi+1) < min(γi+1) and max(γi+1) < succ(δi, zi+1). The second inequality holds by the choice of the zi. For the first inequality, in the case i = 0 we have pred(δ0, z1) = max(δ) < succ(γ, u) = min(γ1), by COMPS(Q, Q′, u). In the case i > 0 we have

pred(δi, zi+1) = max(δ ∪ ⋃_{j=1}^{i} γj) = max(γi) < min(γi+1).

(Condition c3) We must check that pred(δi, zi+1) ≤ xi+1[u/y] and, if zi+1 < zn, then xi+1[u/y] ≤ min{succ(δi, zi+1), zn}. The second inequality holds by the choice of the zi. We prove the first one in the case of system E; for the other systems the proof is almost the same. Let i = 0; then we have pred(δ0, z1) = max(δ) ≤ y. We know that u ≤ x1, thus if u = x1 we are done. Let u < x1; then x1[u/y] = x1. By COMPS(Q, Q′, u) and by Lemma 5.16, we have COMPS(Q1, Q′, u), whence max(δ) ≤ y ≤ min{succ(γ1, u), x1} ≤ x1. Let i > 0; then we have:

pred(δi, zi+1) = max(δ ∪ ⋃_{j=1}^{i} γj[u/δ]).
By Remark 5.17, we know that x1[u/y] ≤ . . . ≤ xn[u/y]. Then we have either

max(δ ∪ ⋃_{j=1}^{i} γj[u/δ]) = max(δ), so that max(δ) ≤ x1[u/y] ≤ xi+1[u/y],

or

max(δ ∪ ⋃_{j=1}^{i} γj[u/δ]) = max(γj[u/δ]) for some j ≤ i. But by Proposition 5.13, max(γj[u/δ]) ≤ xj[u/y] ≤ xi+1[u/y].
THEOREM 5.19 (Admissibility of Cut). Let Q = Γ[u : A], γ ⊢? x : B and Q′ = ∆, δ ⊢? y : A be regular queries. If Q and Q′ succeed and COMPS(Q, Q′, u), then also

Q* = CUT(Q, Q′, u) = Γ[u : A/∆], γ[u/δ] ⊢? x[u/y] : B

succeeds.

Proof. Let Q and Q′ be as in the statement of the theorem. As usual, we proceed by double induction on pairs (c, h), where c is the complexity of the cut formula A and h is the height of a successful derivation of Q. We give the details for the case of immediate success (c = h = 0) and for the case of reduction with respect to u : A (c, h > 0). The other cases follow by the induction hypothesis, using Lemma 5.15 if the first step is by the implication rule, and Lemma 5.16 if the first step is by the reduction rule.

Let c = h = 0. Then Q immediately succeeds, that is, B is an atom q, x : q ∈ Γ, and SuccS(γ, x) holds. If x ≠ u, then x : q is also in Γ[u : A/∆]; moreover, x[u/y] = x and γ[u/δ] = γ. By Propositions 5.3 and 5.6 we have that Q* = Γ − {u : A} ∪ ∆, γ[u/δ] ⊢? x[u/y] : B succeeds. If x = u, then x[u/y] = y; since Q′ succeeds, by Lemma 5.14 we get that Q* succeeds.

Let c, h > 0 and let Q succeed by reduction with respect to u : A. Let A = D1 → . . . → Dn → q. We have Q ⇒Red_u Q1, . . . , Qn, and each Qi succeeds with smaller height; by Lemma 5.16 we have COMPS(Qi, Q′, u), hence by the induction hypothesis we get that Q′i = CUT(Qi, Q′, u) succeeds. Now let

Q*0 = ∆ ∪ {z1 : D1, . . . , zn : Dn}, δ ∪ {z1, . . . , zn} ⊢? zn : q,

where max{max(γ), max(δ), x, y, u} < z1 < z2 < . . . < zn. By Lemma 5.18 we have COMPS(Q*0, Q′1, z1); notice that cp(D1) < cp(A) = c, thus we may apply the induction hypothesis and obtain that

Q*1 = CUT(Q*0, Q′1, z1) = Γ − {u : A} ∪ ∆ ∪ {z2 : D2, . . . , zn : Dn}, γ1[u/δ] ∪ δ ∪ {z2, . . . , zn} ⊢? zn : q

succeeds. By Lemma 5.18 also COMPS(Q*1, Q′2, z2), and since cp(D2) < cp(A), we can cut Q*1 and Q′2 on z2 : D2 and obtain that

Q*2 = Γ − {u : A} ∪ ∆ ∪ {z3 : D3, . . . , zn : Dn}, γ1[u/δ] ∪ γ2[u/δ] ∪ δ ∪ {z3, . . . , zn} ⊢? zn : q

succeeds. By repeating this argument up to n (using Lemma 5.18), we finally get that

Q*n = Γ − {u : A} ∪ ∆, γ1[u/δ] ∪ . . . ∪ γn[u/δ] ∪ δ ⊢? zn[zn/xn[u/y]] : q

succeeds. Since xn = x and u ∈ γ, we have

γ[u/δ] = (⋃_{i=0}^{n} γi)[u/δ] = ⋃_{i=0}^{n} γi[u/δ] ∪ δ.

Hence Q*n is Q* and we are done.
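The CUT operation itself is a purely syntactic transformation on queries. The following Python fragment is a minimal illustrative sketch, not the book's definition verbatim: we model a query Γ, γ ⊢? x : B as a plain tuple, with Γ a set of (label, formula) pairs.

```python
# A query  Gamma, gamma |-? x : B  is modelled as (Gamma, gamma, x, B), where
# Gamma is a set of (label, formula) pairs and gamma a set of integer labels.

def cut(Q, Qp, u, A):
    """CUT(Q, Q', u): replace u : A in Q by the database of Q' = Delta, delta |-? y : A,
    substituting delta for u in the resource set and y for u in the goal label."""
    Gamma, gamma, x, B = Q
    Delta, delta, y, head = Qp
    assert head == A and (u, A) in Gamma
    Gamma2 = (Gamma - {(u, A)}) | Delta                            # Gamma[u : A / Delta]
    gamma2 = (gamma - {u}) | delta if u in gamma else set(gamma)   # gamma[u/delta]
    x2 = y if x == u else x                                        # x[u/y]
    return (Gamma2, gamma2, x2, B)

# The Modus Ponens corollary below is the special case where Q' has an empty
# database: cutting u : A out of  u : A, {u} |-? u : B  with  |-? v0 : A
# yields  |-? v0 : B  (labels: u = 2, v0 = 0).
Q  = ({(2, "A")}, {2}, 2, "B")
Qp = (set(), set(), 0, "A")
print(cut(Q, Qp, 2, "A"))   # (set(), set(), 0, 'B')
```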
COROLLARY 5.20 (Modus Ponens). If ⊢? v0 : A → B and ⊢? v0 : A succeed, then also ⊢? v0 : B succeeds.

Proof. If ⊢? v0 : A → B succeeds, then also Q = u : A, {u} ⊢? u : B (with u > v0) succeeds, by the implication rule. It is easily seen that Q and Q′ = ⊢? v0 : A are compatible for cut in every system. Thus, by cut, we get that ⊢? v0 : B succeeds.

A general form of cut. We have proved the admissibility of cut under specific compatibility conditions for each system. It is easily seen that we can always rename the labels in a query in order to match the compatibility constraints. But we can do more: we can incorporate the necessary relabellings into the definition of cut, in order to achieve a formulation which holds for every system without restrictions. To this end, given Q1 = Γ[u : A], γ ⊢? x : B and Q2 = ∆, δ ⊢? y : A, let us call a pair (ψ1, ψ2) of label substitutions suitable for cutting Q1 by Q2 on u if:

• ψ1 and ψ2 are order-preserving on γ ∪ {u, x} and on δ ∪ {y}, respectively;7
• pred(γψ1, ψ1(u)) < min(δψ2);
• ψ2(y) < min{succ(γψ1, ψ1(u)), ψ1(x)}.

Now the cut rule becomes: if Q1 = Γ[u : A], γ ⊢? x : B and Q2 = ∆, δ ⊢? y : A succeed, then also

Γψ1[ψ1(u) : A/∆ψ2], γψ1[ψ1(u)/δψ2] ⊢? ψ1(x)[ψ1(u)/ψ2(y)] : B

succeeds, where (ψ1, ψ2) is any pair of label substitutions suitable for cutting Q1 by Q2 on u. Given any query Q = Γ, γ ⊢? x : A and a label substitution ψ, Qψ is Γψ, γψ ⊢? ψ(x) : A; thus the query in the conclusion is nothing other than that obtained by cutting (in the sense of Theorem 5.19) Q1ψ1 by Q2ψ2 on ψ1(u). It is easy to see that this rule is admissible by Theorem 5.19. We simply observe that if (ψ1, ψ2) are suitable for cutting Q1 by Q2 on u, then

• Qi succeeds implies Qiψi succeeds, for i = 1, 2;
• COMPS(Q1ψ1, Q2ψ2, ψ1(u)) holds for any system S.

Then the result follows by Theorem 5.19.

7 We say that a label substitution ψ is order-preserving on a set I if z1, z2 ∈ I and z1 < z2 implies ψ(z1) < ψ(z2).
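The suitability conditions on (ψ1, ψ2) are simple arithmetic over finite label sets and can be checked mechanically. Here is a hedged sketch; the encoding is ours (substitutions as Python dicts, an undefined pred or succ treated as a vacuously satisfied bound).

```python
def order_preserving(psi, labels):
    """psi is order-preserving on labels if z1 < z2 implies psi(z1) < psi(z2)."""
    ls = sorted(labels)
    return all(psi[a] < psi[b] for a, b in zip(ls, ls[1:]))

def suitable(psi1, psi2, gamma, u, x, delta, y):
    """Check that (psi1, psi2) is suitable for cutting
    Gamma[u : A], gamma |-? x : B  by  Delta, delta |-? y : A  on u."""
    g1 = {psi1[z] for z in gamma}
    d2 = {psi2[z] for z in delta}
    below = [z for z in g1 if z < psi1[u]]          # pred(gamma psi1, psi1(u))
    above = [z for z in g1 if z > psi1[u]]          # succ(gamma psi1, psi1(u))
    pred_ok = (not below) or (not d2) or max(below) < min(d2)
    succ_bound = min(above + [psi1[x]]) if above else psi1[x]
    return (order_preserving(psi1, gamma | {u, x})
            and order_preserving(psi2, delta | {y})
            and pred_ok
            and psi2[y] < succ_bound)
```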
5 SOUNDNESS AND COMPLETENESS
We prove first the completeness of the proof procedure with respect to the Hilbert axiomatization of each system of Section 2.

DEFINITION 5.21 (Axiomatization of implication). We consider the following list of axioms:

(id) A → A;
(h1) (B → C) → (A → B) → A → C;
(h2) (A → B) → (B → C) → A → C;
(h3) (A → A → B) → A → B;
(h4) (A → B) → ((A → B) → C) → C;
(h5) A → (A → B) → B;
(h6) A → B → B;

together with the following rules:

(MP) from A → B and A, infer B;
(Suff) from A → B, infer (B → C) → A → C.

Each system is axiomatized by taking the closure under modus ponens (MP) and under substitution of the combinations of axioms/rules of Table 5.2.

Table 5.2. Axioms for substructural implication.

Logic   Axioms
FL      (id), (h1), (Suff)
T-W     (id), (h1), (h2)
T       (id), (h1), (h2), (h3)
E-W     (id), (h1), (h2), (h4)
E       (id), (h1), (h2), (h3), (h4)
L       (id), (h1), (h2), (h5)
R       (id), (h1), (h2), (h3), (h5)
BCK     (id), (h1), (h2), (h5), (h6)
I       (id), (h1), (h2), (h3), (h5), (h6)

In the above axiomatization we have not worried about the minimality and independence of the group of axioms for each system. For some systems the corresponding list of axioms given above is redundant, but it quickly shows some
inclusion relations among the systems. We just remark that, in the presence of (h4), (h2) can be obtained from (h1). Moreover, (h4) is a weakening of (h5). The rule (Suff) is clearly obtainable from (h2) and (MP). To have a complete picture we have also included intuitionistic logic I, although the axiomatization above is highly redundant for it (see Chapter 2, Section 1).

We prove that each system is complete with respect to its axiomatization. By Corollary 5.20 and Theorem 5.10, all we have to prove is that any atomic instance of each axiom succeeds in the corresponding proof system. In addition, we need to show the admissibility of the (Suff) rule for FL.

THEOREM 5.22 (Completeness). For every system S, if A is a theorem of S, then ⊢? v0 : A succeeds in the corresponding proof system for S.

Proof. We show a derivation of an arbitrary atomic instance of each axiom of S in the corresponding proof system. In the case of reduction, the condition γ = ⋃i γi will not be explicitly shown, as its truth will be apparent from the choice of the γi. We assume that the truth of the condition for the success rule is evident and we do not mention it. At each step we only show the current goal, the available resources, and the new data introduced into the database, if any. Moreover, we justify the queries obtained by a reduction step by writing the relation RedS(γ0, . . . , γn, x0, . . . , xn; x) (for suitable γi, xi) under them; the database formula used in the reduction step is identified by γ0.

(id) In all systems:
⊢? v0 : a → a

we step to

u : a, {u} ⊢? u : a,

which immediately succeeds in all systems.

(h1) In all systems: ⊢? v0 : (b → c) → (a → b) → a → c. Three steps of the implication rule lead to:

x1 : b → c, x2 : a → b, x3 : a, {x1, x2, x3} ⊢? x3 : c
{x2, x3} ⊢? x3 : b
    RedS({x1}, {x2, x3}, x1, x3; x3)
{x3} ⊢? x3 : a
    RedS({x2}, {x3}, x2, x3; x3).
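The derivations above (and those that follow) can be replayed mechanically. For intuitionistic logic I, the strongest system in Table 5.2, the resource sets impose no constraint, and the goal-directed procedure collapses to a Prolog-like computation. Here is a minimal Python sketch of that collapse, with a crude depth bound in place of a proper loop check; the tuple encoding ('->', A, B) of implications is our own.

```python
def parts(d):
    """Decompose D = G1 -> ... -> Gk -> q into ([G1, ..., Gk], q)."""
    body = []
    while isinstance(d, tuple):
        body.append(d[1]); d = d[2]
    return body, d

def prove(db, goal, depth=10):
    """Goal-directed provability for the implicational fragment of system I."""
    if isinstance(goal, tuple):                   # implication rule: add antecedent
        return prove(db + [goal[1]], goal[2], depth)
    if depth < 0:                                 # crude termination bound
        return False
    for d in db:                                  # reduction rule on an atomic goal
        body, head = parts(d)
        if head == goal and all(prove(db, g, depth - 1) for g in body):
            return True
    return False

imp = lambda a, b: ('->', a, b)
h1 = imp(imp('b', 'c'), imp(imp('a', 'b'), imp('a', 'c')))
print(prove([], h1))                          # True: (h1) is derivable
print(prove([], imp(imp('a', 'b'), 'b')))     # False: (a -> b) -> b is not
```

Atomic instances of (id) and of (h1)–(h6) all succeed under this resource-blind procedure; what distinguishes the weaker systems is precisely the bookkeeping that this sketch omits.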
(h2) In all systems but FL: ⊢? v0 : (a → b) → (b → c) → a → c.
Three steps of the implication rule lead to:

x1 : a → b, x2 : b → c, x3 : a, {x1, x2, x3} ⊢? x3 : c
(*) {x1, x3} ⊢? x3 : b,
    RedS({x2}, {x1, x3}, x2, x3; x3)
{x3} ⊢? x3 : a,
    RedS({x1}, {x3}, x1, x3; x3).

The step (*) is allowed in all systems but those with (r4), namely FL.

(h3) In all systems but those with (r1) or (r4): ⊢? v0 : (a → a → b) → a → b. Two steps of the implication rule lead to:

x1 : a → a → b, x2 : a, {x1, x2} ⊢? x2 : b.

By reduction we step to

{x2} ⊢? x2 : a and {x2} ⊢? x2 : a,

since RedS({x1}, {x2}, {x2}, x1, x2, x2; x2) holds in all systems without (r1) and (r4).

(h4) In all systems but those with (r3) or (r4): ⊢? v0 : (a → b) → ((a → b) → c) → c. Two steps of the implication rule lead to:

x1 : a → b, x2 : (a → b) → c, {x1, x2} ⊢? x2 : c
(*) {x1} ⊢? x2 : a → b
    RedS({x2}, {x1}, x2, x2; x2)
x3 : a, {x1, x3} ⊢? x3 : b
{x3} ⊢? x3 : a
    RedS({x1}, {x3}, x1, x3; x3).

The step (*) is allowed by (r2), but not by (r3) or (r4), since max({x1}) = x1 < x2.

(h5) In L, R, BCK: ⊢? v0 : a → (a → b) → b. Two steps of the implication rule lead to:

x1 : a, x2 : a → b, {x1, x2} ⊢? x2 : b
{x1} ⊢? x1 : a
    RedS({x2}, {x1}, x2, x1; x2).
(h6) In BCK we have:
⊢? v0 : a → b → b.
Two steps of the implication rule lead to

x1 : a, x2 : b, {x1, x2} ⊢? x2 : b,

which succeeds by the success condition of BCK. This formula does not succeed in any other system.

(Suff) We prove the admissibility of the (Suff) rule in FL. Let ⊢? v0 : A → B succeed. Then, for any formula C, we have to show that ⊢? v0 : (B → C) → A → C succeeds. Let C = C1 → . . . → Cn → q. Starting from

⊢? v0 : (B → C1 → . . . → Cn → q) → A → C1 → . . . → Cn → q,

by the implication rule we step to ∆, {x1, . . . , xn+2} ⊢? xn+2 : q, where ∆ =
From the above query we step by reduction to: Q0 = ∆, {x2 } `? x2 : B and Qi = ∆, {xi+2 } `? xi+2 : Ci for i = 1, . . . , n. since the condition for reduction are satisfied. By hypothesis, `? v0 : A → B succeeds, which implies, by the implicational rule, that x2 : A, {x2 } `? x2 : B succeeds, but then Q0 succeeds by monotony. Queries Qi succeed by Proposition 5.5.
5.1 Soundness with Respect to Fine Semantics

We prove the soundness of the proof procedure with respect to a possible world semantics for substructural implicational logics. The first possible world semantics for substructural logics was elaborated by Urquhart [1972]; according to [Anderson and Belnap, 1975], similar ideas were developed by Routley and by Fine at about the same time. The intuitive idea of Urquhart's semantics is to consider possible worlds equipped with a composition operation, that is, forming an algebraic structure. We may think of the elements of the algebraic structure as pieces of information or resources which can be combined along a deduction. In Urquhart's
semantics the algebraic structure is a semilattice. Urquhart's semantics can interestingly be seen as a sort of metalevel codification of the bookkeeping mechanism used to control natural deduction proofs in relevant logics [Anderson and Belnap, 1975]. This semantical construction works well for the implicational fragment of many systems, first of all R, but it cannot cover uniformly all the systems we are considering in this chapter. For instance, in the case of E we need to consider two-dimensional models. Another problem of Urquhart's semantics is that it does not work for the language containing disjunction or negation; we will come back to this point in the next section. The semantics we adopt in this section is a simplification of the one proposed in [Fine, 1974], [Anderson et al., 1992] and elaborated more recently by Došen [1988; 1989].8 Routley and Meyer [1973; 1972a; 1972b] and [Routley et al., 1982] have developed a possible world semantics for substructural logics based on a three-place accessibility relation. Their semantics works uniformly for all systems and the whole propositional language. In the next section we will introduce it to prove the completeness of our procedures for a broader fragment. Although semantics is not our main concern, we observe that all these semantic constructions are related; we refer to [Anderson et al., 1992], paragraph 51.5 in particular, for more details (see also [Došen, 1989] and [Ono, 1993], where a more general semantics based on residuated lattices is developed).

DEFINITION 5.23. Let us fix a language L. A Fine S-structure9 M is a tuple of the form M = (W, ≤, ◦, 0, V), where W is a non-empty set, ◦ is a binary operation on W, 0 ∈ W, ≤ is a partial order relation on W, and V is a function of type W → Pow(Var). In all structures the following properties are assumed to hold:

0 ◦ a = a,
a ≤ b implies a ◦ c ≤ b ◦ c,
a ≤ b implies V(a) ⊆ V(b).
For each system S, an S-structure satisfies a subset of the following conditions, as specified in Table 5.3:

(a1) a ◦ (b ◦ c) ≤ (a ◦ b) ◦ c;
(a2) a ◦ (b ◦ c) ≤ (b ◦ a) ◦ c;
(a3) (a ◦ b) ◦ b ≤ a ◦ b;
(a4) a ◦ 0 ≤ a;

8 Dealing only with the implicational fragment, we have simplified Fine semantics: we do not have prime or maximal elements.
9 We just write S-structure if there is no risk of confusion.
(a5) a ◦ b ≤ b ◦ a;
(a6) 0 ≤ a.

Truth conditions: for a ∈ W, we define

• M, a |= p if p ∈ V(a);
• M, a |= A → B if ∀b ∈ W (M, b |= A ⇒ M, a ◦ b |= B).

We say that A is valid in M (denoted by M |= A) if M, 0 |= A. We say that A is S-valid, denoted by |=Fine_S A, if A is valid in every S-structure.

Table 5.3. Algebraic conditions of Fine semantics.

Logic  (a1)  (a2)  (a3)  (a4)  (a5)  (a6)
FL      *
T-W     *     *
T       *     *     *
E-W     *     *           *
E       *     *     *     *
L       *     *                 *
R       *     *     *           *
BCK     *     *                 *     *
I       *     *     *           *     *
We have included the intuitionistic logic I in Table 5.3 to show its proper place within this framework. Again, this list of semantical conditions is deliberately redundant, in order to show quickly the inclusion relations among the systems. The axiomatization given at the beginning of the section is sound and complete with respect to this semantics. In particular, each axiom (hi) corresponds to the semantical condition (ai).

THEOREM 5.24 ([Anderson et al., 1992; Fine, 1974; Došen, 1989]). |=Fine_S A if and only if A is derivable in the corresponding axiom system of Definition 5.21.

The following lemma (whose proof is left to the reader) shows that the hereditary property extends to all formulas.

LEMMA 5.25. Let M = (W, ≤, ◦, 0, V) be an S-structure and let a, b ∈ W. Then for any formula A: if M, a |= A and a ≤ b, then M, b |= A.

We assume that ◦ associates to the left, so we write
a ◦ b ◦ c = (a ◦ b) ◦ c.

In order to prove the soundness of our procedure, we need to interpret databases and queries in the semantics. As usual, we introduce the notion of realization of a database.

DEFINITION 5.26 (Realization). Given a database Γ and a set of resources γ, an S-realization of (Γ, γ) in an S-structure M = (W, ◦, ≤, 0, V) is a mapping ρ : A → W such that:

1. ρ(v0) = 0;
2. if y : B ∈ Γ, then M, ρ(y) |= B.

In order to define the notion of validity of a query, we need to introduce some further notation. Given an S-realization ρ, γ, and x, we define

ρ(γ) = 0 if γ = ∅,
ρ(γ) = ρ(x1) ◦ . . . ◦ ρ(xn) if γ = {x1, . . . , xn}, where x1 < . . . < xn,
ρ(< γ, x >) = ρ(γ) if x ∈ γ,
ρ(< γ, x >) = ρ(γ) ◦ 0 if x ∉ γ.

Moreover, if γ = {x1, . . . , xn} and δ = {y1, . . . , ym} (ordered as shown), we write

ρ(γ) ◦ ρ(δ) = ρ(x1) ◦ . . . ◦ ρ(xn) ◦ (ρ(y1) ◦ . . . ◦ ρ(ym)).

DEFINITION 5.27 (Valid query). Let Q = Γ, γ ⊢? x : A be an S-regular query. We say that Q is S-valid if for every S-structure M and every realization ρ of Γ in M, we have M, ρ(< γ, x >) |= A.

According to the definition above, the S-validity of the query ⊢? v0 : A means that the formula A is S-valid (i.e. |=Fine_S A).

To prove the soundness of the proof systems, we need to relate the conditions involved in the reduction rule to the algebraic properties of ◦. This is the purpose of the following lemma.

LEMMA 5.28. Given an S-regular query ∆, δ ⊢? x : G, let δi and xi, for i = 0, . . . , k, be such that:

1. δ0 = {x0}, for some x0 ∈ δ;
2. ⋃_{i=0}^{k} δi = δ;
3. RedS(δ0, . . . , δk, x0, . . . , xk; x).
For every S-structure M and every realization ρ of (∆, δ) in M, we have

ρ(< δ0, x0 >) ◦ ρ(< δ1, x1 >) ◦ . . . ◦ ρ(< δk, xk >) ≤ ρ(< δ, x >).

Proof. Each system S requires a separate consideration. The easiest cases are those of R, L, and BCK. By regularity, we have that xi ∈ δi and x ∈ δ, which implies that ρ(< δi, xi >) = ρ(δi) and ρ(< δ, x >) = ρ(δ). In the cases of R, L, BCK, (W, ◦, 0) is a commutative monoid, thus the order of composition of elements does not matter. Hence, for L and BCK, the conditions ⋃_{i=0}^{k} δi = δ and δi ∩ δj = ∅ imply

ρ(δ0) ◦ ρ(δ1) ◦ . . . ◦ ρ(δk) = ρ(δ).

In the case of R, we can regard ρ(α), for any α, as denoting a multiset; then the condition ⋃_{i=0}^{k} δi = δ implies

(∗) ρ(δ) ⊆ ρ(δ0) ◦ . . . ◦ ρ(δk)

(considering ◦ as multiset union). But (∗), by condition (a3),10 implies

ρ(δ0) ◦ ρ(δ1) ◦ . . . ◦ ρ(δk) ≤ ρ(δ).

As regards the other systems, we give a proof of the claim for E; the other cases are similar, indeed simpler. We prove that there are β0, . . . , βk such that β0 = δ0, βk = δ, and for i = 1, . . . , k it holds that:

(*) ρ(βi−1) ◦ ρ(< δi, xi >) ≤ ρ(βi).

To this end we need the notion of ordered merge of two resource sets α, β. We can consider α and β as ordered sequences of labels without repetitions. We use the notation α ∗ x to denote the sequence α followed by x, whenever x does not occur in α and max(α) < x. We denote the empty sequence by ε. The ordered merge of α and β, denoted by mo(α, β), is defined as follows:

mo(α, ε) = mo(ε, α) = α,
mo(α ∗ x, β ∗ y) = mo(α, β ∗ y) ∗ x if y < x,
mo(α ∗ x, β ∗ y) = mo(α ∗ x, β) ∗ y if x < y,
mo(α ∗ x, β ∗ y) = mo(α, β) ∗ x if x = y.

Going back to what we have to show, we let βi = mo(βi−1, δi) for i = 1, . . . , k; notice that βk = δ. We split the proof of (*) into two cases, according to whether xi ∈ δi or not.

1. If xi ∈ δi, then we prove that ρ(βi−1) ◦ ρ(δi) ≤ ρ(βi).

10 Condition (a3), together with 0 ◦ a = a and (a2), implies a ◦ a ≤ a.
2. If xi ∉ δi, then we prove that ρ(βi−1) ◦ (ρ(δi) ◦ 0) ≤ ρ(βi).

Case 1. Since β0 = δ0, it is easy to see that max(βi−1) ≤ xi = max(δi), as xi ∈ δi. We prove the claim by induction on |βi−1| + |δi|. Assume both βi−1 and δi are non-empty (otherwise the argument below simplifies), so that βi−1 = β′ ∗ u and δi = δ′ ∗ xi. We must show

ρ(β′ ∗ u) ◦ ρ(δ′ ∗ xi) ≤ ρ(mo(β′ ∗ u, δ′ ∗ xi)).

(Subcase a) u = xi. We have:

ρ(β′ ∗ xi) ◦ ρ(δ′ ∗ xi)
  = ρ(β′) ◦ ρ(xi) ◦ (ρ(δ′) ◦ ρ(xi))
  ≤ (ρ(δ′) ◦ (ρ(β′) ◦ ρ(xi))) ◦ ρ(xi)     by (a2)
  ≤ ρ(mo(δ′, β′ ∗ xi)) ◦ ρ(xi)            by ind. hyp.
  = (ρ(mo(δ′, β′)) ◦ ρ(xi)) ◦ ρ(xi)
  ≤ ρ(mo(δ′, β′)) ◦ ρ(xi)                 by (a3)
  = ρ(mo(β′ ∗ xi, δ′ ∗ xi)).

(Subcase b) u < xi. We have:

ρ(β′ ∗ u) ◦ ρ(δ′ ∗ xi)
  ≤ ρ(β′) ◦ ρ(u) ◦ (ρ(δ′) ◦ ρ(xi))
  ≤ ρ(β′) ◦ ρ(u) ◦ ρ(δ′) ◦ ρ(xi)
  ≤ ρ(mo(β′ ∗ u, δ′)) ◦ ρ(xi)             by ind. hyp.
  = ρ(mo(β′ ∗ u, δ′ ∗ xi))                since u < xi.

Case 2. If βi−1 = ε, then we have ρ(βi−1) = 0, whence

ρ(βi−1) ◦ (ρ(δi) ◦ 0)
  = 0 ◦ (ρ(δi) ◦ 0)
  = ρ(δi) ◦ 0
  ≤ ρ(δi)                                 by (a4)
  = ρ(mo(βi−1, δi)).

If βi−1 ≠ ε, let βi−1 = β′ ∗ u; we have two cases. If u ≤ max(δi), then

ρ(β′ ∗ u) ◦ (ρ(δi) ◦ 0) ≤ ρ(β′) ◦ ρ(u) ◦ ρ(δi) ◦ 0 ≤ ρ(β′) ◦ ρ(u) ◦ ρ(δi)   by (a4),

and we can conclude as in Case 1. If max(δi) < u, then

ρ(β′ ∗ u) ◦ (ρ(δi) ◦ 0) ≤ (ρ(δi) ◦ (ρ(β′) ◦ ρ(u))) ◦ 0 ≤ ρ(δi) ◦ (ρ(β′) ◦ ρ(u))   by (a4),

and again we can conclude as in Case 1.
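The ordered merge mo is just the merge of two strictly increasing sequences of labels, collapsing duplicates. A direct Python transcription of the four defining clauses (lists standing for sequences; the encoding is ours):

```python
def mo(alpha, beta):
    """Ordered merge of two strictly increasing label sequences."""
    if not alpha:                                  # mo(eps, beta) = beta
        return list(beta)
    if not beta:                                   # mo(alpha, eps) = alpha
        return list(alpha)
    x, y = alpha[-1], beta[-1]                     # alpha = alpha' * x, beta = beta' * y
    if y < x:
        return mo(alpha[:-1], beta) + [x]
    if x < y:
        return mo(alpha, beta[:-1]) + [y]
    return mo(alpha[:-1], beta[:-1]) + [x]         # x == y: keep one copy
```

For example, mo([1, 3], [2, 3]) = [1, 2, 3], as expected of a duplicate-free merge.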
THEOREM 5.29. Let Q = Γ, γ ⊢? x : A be an S-regular query. If Q succeeds in the proof system for S, then Q is S-valid. In particular, if ⊢? v0 : A succeeds in the proof system for S, then |=Fine_S A.

Proof. By induction on the height h of a successful derivation of Q.

• If h = 0, then the query immediately succeeds: x : q ∈ Γ and SuccS(γ, x) holds. Let M be an S-structure and ρ an S-realization of Γ in M. We have M, ρ(x) |= q. In the case of BCK, we have that x ∈ γ; it is easily seen that ρ(x) ≤ ρ(< γ, x >), whence the result follows by the hereditary property. In all the other cases we have γ = {x}, thus the claim follows by ρ(x) = ρ(< γ, x >).

• Let h > 0 and suppose the implication rule is applied to derive Q, that is, A = B → C and from Q we step to

Q′ = Γ ∪ {z : B}, γ ∪ {z} ⊢? z : C,

where z > max(Lab(Γ)). Q′ succeeds by a shorter derivation. Suppose that Q is not S-valid: let M = (W, ◦, 0, V) be an S-structure and ρ a realization of Γ such that M, ρ(< γ, x >) ⊭ B → C. Let ρ(< γ, x >) = a; then there is some b ∈ W such that M, b |= B but M, a ◦ b ⊭ C. Let Γ′ = Γ ∪ {z : B} and γ′ = γ ∪ {z}. By the induction hypothesis, Q′ is S-valid. We can define a realization ρ′ for Γ′ by letting ρ′(z) = b and ρ′(u) = ρ(u) for u ≠ z. We have ρ′(γ′) = ρ(γ) ◦ ρ′(z) = a ◦ b, whence M, ρ′(γ′) ⊭ C, which contradicts the induction hypothesis on Q′.

• Let h > 0 and suppose the first step in a derivation of Q is by reduction. Then A is an atom q, there is z : C ∈ Γ with C = D1 → . . . → Dk → q, and from Q we step to Qi = Γ, γi ⊢? xi : Di, for some γi and xi (for i = 0, . . . , k) such that:

1. x0 = z, γ0 = {z}, and ⋃_{i=0}^{k} γi = γ;
2. RedS(γ0, . . . , γk, x0, . . . , xk; x).

Let M be an S-structure and ρ a realization of Γ in M. We show that M, ρ(< γ, x >) |= q holds. Since each Qi succeeds by a shorter derivation, we get by the induction hypothesis
M, ρ(< γi, xi >) |= Di.

On the other hand, ρ being a realization of Γ, we have

M, ρ(< γ0, x0 >) |= D1 → . . . → Dk → q.

Thus we obtain

M, ρ(< γ0, x0 >) ◦ ρ(< γ1, x1 >) ◦ . . . ◦ ρ(< γk, xk >) |= q.

By Lemma 5.28 we have ρ(< γ0, x0 >) ◦ ρ(< γ1, x1 >) ◦ . . . ◦ ρ(< γk, xk >) ≤ ρ(< γ, x >), so that, by the hereditary property, we obtain M, ρ(< γ, x >) |= q.

6 EXTENDING THE LANGUAGE TO HARROP FORMULAS
In this section we show how we can extend the language with some other connectives. We allow extensional conjunction (∧), disjunction (∨), and intensional conjunction or tensor (⊗). The distinction between ∧ and ⊗ is typical of substructural logics and comes from the rejection of some structural rules: ∧ is the usual lattice-inf connective, while ⊗ is close to a residual operator with respect to →. In the relevant logic literature, ⊗ is often called fusion or cotenability and denoted by ◦.11 The addition of the above connectives presents some semantic options. The most important one is whether distribution (dist) of ∧ over ∨ is assumed or not. The list of axioms/rules below characterizes distributive substructural logics.

DEFINITION 5.30 (Axioms for ∧, ⊗, ∨).

1. A ∧ B → A,
2. A ∧ B → B,
3. (C → A) ∧ (C → B) → (C → A ∧ B),
4. A → A ∨ B,
5. B → A ∨ B,
6. (A → C) ∧ (B → C) → (A ∨ B → C),
7. from A and B, infer A ∧ B,

11 We follow here the terminology and notation of linear logic [Girard, 1987].
8. from A → B → C, infer A ⊗ B → C,
9. from A ⊗ B → C, infer A → B → C.
(e-∧) For E and E-W only: □A ∧ □B → □(A ∧ B), where □C =def (C → C) → C.

(dist) A ∧ (B ∨ C) → (A ∧ B) ∨ C.

As we have said, the addition of distribution (dist) is a semantic choice which may be argued. However, for the fragment of the language we consider in this section, it does not really matter whether distribution is assumed or not. This fragment roughly corresponds to the Harrop fragment of the previous chapter (see Section 7.2); since we do not allow positive occurrences of disjunction, the presence of distribution is immaterial.12 We have included distribution to have a complete axiomatization with respect to the semantics we will adopt.

As in the case of modal logics (see Section 7.2), we distinguish D-formulas, which are the constituents of databases, and G-formulas, which can be asked as goals.

DEFINITION 5.31. Let D-formulas and G-formulas be defined as follows:

D := q | G → D,
CD := D | CD ∧ CD,
G := q | G ∧ G | G ∨ G | G ⊗ G | CD → G.

A database ∆ is a finite set of labelled D-formulas. A database corresponds to a ⊗-composition of conjunctions of D-formulas: formulas with the same label are thought of as ∧-conjoined. Every D-formula has the form G1 → . . . → Gk → q. In the systems R, L, BCK we have the theorems

(A → B → C) → (A ⊗ B → C) and (A ⊗ B → C) → (A → B → C).

Thus, in these systems we can simplify the syntax of (non-atomic) D-formulas to G → q rather than G → D. This simplification is not allowed in the other systems, where we only have the weaker relation

12 In the fragment we consider, we trivially have (for any S) that Γ, α ⊢? x : A ∧ (B ∨ C) implies Γ, α ⊢? x : A ∨ (B ∧ C), where Γ is a set of D-formulas and A, B, C are G-formulas (see below).
⊢ (A → B → C) ⇔ ⊢ A ⊗ B → C.

The extent of Definition 5.31 is shown by the following proposition.

PROPOSITION 5.32. Every formula on (∧, ∨, →, ⊗) without

• positive13 (negative) occurrences of ⊗ and ∨, and
• occurrences of ⊗ within a negative (positive) occurrence of ∧

is equivalent to a ∧-conjunction of D-formulas (G-formulas).

The reason we have put the restriction on nested occurrences of ⊗ within ∧ is that, on the one hand, we want to keep the simple labelling mechanism we have used for the implicational fragment, and on the other, we want to identify a common fragment for all systems to which the computation rules are easily extended. The labelling mechanism no longer works if we relax this restriction. For instance, how could we handle A ∧ (B ⊗ C) as a D-formula? We would have to add x : A and x : B ⊗ C to the database. The formula x : B ⊗ C cannot be decomposed unless we use complex labels: intuitively, we should split x into some y and z, add y : B and z : C, and remember that x, y, z are connected (in terms of Fine semantics, the connection would be expressed as x = y ◦ z). We prefer to accept the above limitation and postpone the investigation of a broader fragment to future work.14 The computation rules can be extended to this fragment without great effort.

DEFINITION 5.33 (Proof system for the extended language). We give the rules for queries of the form ∆, δ ⊢? x : G, where ∆ is a set of D-formulas and G is a G-formula.

• (success) ∆, δ ⊢? x : q succeeds, if x : q ∈ ∆ and SuccS(δ, x).

• (implication) from ∆, δ ⊢? x : CD → G,

13 Positive and negative occurrences are defined as follows: A occurs positively in A; if B#C occurs positively (negatively) in A (where # ∈ {∧, ∨, ⊗}), then B and C occur positively (negatively) in A; if B → C occurs positively (negatively) in A, then B occurs negatively (positively) in A and C occurs positively (negatively) in A. We say that a connective # has a positive (negative) occurrence in a formula A if there is a formula B#C which occurs positively (negatively) in A.

14 In some logics, such as L, we do not need this restriction, since we have the following property: Γ, A ∧ B ⊢ C implies Γ, A ⊢ C or Γ, B ⊢ C. Thus we can avoid introducing extensional conjunctions into the database, and instead introduce only one of the two conjuncts (at choice). This approach is followed by Harland and Pym [1991]. However, the above property does not hold for R and other logics.
if CD = D1 ∧ . . . ∧ Dn, we step to

∆ ∪ {y : D1, . . . , y : Dn}, δ ∪ {y} ⊢? y : G,

where y > max(Lab(∆)) (hence y ∉ Lab(∆)).

• (reduction) from ∆, δ ⊢? x : q, if there is z : G1 → . . . → Gk → q ∈ ∆ and there are δi and xi, for i = 0, . . . , k, such that

1. δ0 = {z}, x0 = z,
2. ⋃_{i=0}^{k} δi = δ,
3. RedS(δ0, . . . , δk, x0, . . . , xk; x),

then, for i = 1, . . . , k, we step to ∆, δi ⊢? xi : Gi.

• (conjunction) from ∆, δ ⊢? x : G1 ∧ G2, step to ∆, δ ⊢? x : G1 and ∆, δ ⊢? x : G2.

• (disjunction) from ∆, δ ⊢? x : G1 ∨ G2, step to ∆, δ ⊢? x : Gi for i = 1 or i = 2.

• (tensor) from ∆, δ ⊢? x : G1 ⊗ G2, if there are δ1, δ2, x1, and x2 such that

1. δ = δ1 ∪ δ2,
2. RedS(δ1, δ2, x1, x2; x),

step to ∆, δ1 ⊢? x1 : G1 and ∆, δ2 ⊢? x2 : G2.
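As with the implicational fragment, the extended rules have a simple resource-free reading in the strongest system I, where the resource side conditions become vacuous and ⊗ behaves like ∧. The following Python sketch implements only that collapse; the encoding and names are ours, and the labelled, resource-sensitive bookkeeping of the rules above is deliberately not modelled.

```python
def dparts(d):
    """Decompose a D-formula G1 -> ... -> Gk -> q into ([G1, ..., Gk], q)."""
    body = []
    while isinstance(d, tuple) and d[0] == 'imp':
        body.append(d[1]); d = d[2]
    return body, d

def conjuncts(cd):
    """Split CD = D1 /\\ ... /\\ Dn into its list of D-formula conjuncts."""
    if isinstance(cd, tuple) and cd[0] == 'and':
        return conjuncts(cd[1]) + conjuncts(cd[2])
    return [cd]

def prove(db, g, depth=12):
    if isinstance(g, str):                        # atomic goal: reduction rule
        if depth < 0:
            return False
        return any(h == g and all(prove(db, b, depth - 1) for b in body)
                   for body, h in map(dparts, db))
    op, a, b = g
    if op == 'imp':                               # implication rule: add CD's conjuncts
        return prove(db + conjuncts(a), b, depth)
    if op in ('and', 'ten'):                      # in I, tensor collapses onto /\
        return prove(db, a, depth) and prove(db, b, depth)
    if op == 'or':                                # disjunction rule: try either disjunct
        return prove(db, a, depth) or prove(db, b, depth)

imp_ = lambda a, b: ('imp', a, b)
db = [imp_(('and', 'e', 'g'), 'd'),               # the database of Example 5.34, read in I
      imp_(('ten', imp_('c', 'd'), ('or', 'a', 'b')), 'p'),
      imp_('c', 'e'), imp_('c', 'g'),
      imp_(imp_('c', 'g'), 'b')]
print(prove(db, 'p'))   # True
```

Run on the database of Example 5.34 below, the sketch confirms that p follows; the genuinely substructural content of the example only shows up in the labelled procedure.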
An easy extension of the method is the addition of the truth constants t and ⊤, which are governed by the following axioms/rules: A → ⊤, and ⊢ t → A iff ⊢ A, plus the axiom (t → A) → A for E and E-W. We can think of t as defined by propositional quantification, t =def ∀p(p → p). Equivalently, given any formula A, we can assume that t is the conjunction of all p → p such that the atom p occurs in A. Based on this definition, it is not difficult to handle t in the goal-directed way, and we leave it to the reader to work out the rules. The treatment of ⊤ is straightforward.

EXAMPLE 5.34. Let ∆ be the following database:

x1 : e ∧ g → d,
x2 : (c → d) ⊗ (a ∨ b) → p,
x3 : c → e,
x3 : c → g,
x4 : (c → g) → b.
In Figure 5.2 we show a successful derivation of ∆, {x1, x2, x3, x4} ⊢? x4 : p in relevant logic E (and stronger systems). We leave it to the reader to justify the steps according to the rules. The success of this query corresponds to the validity of the following formula in E:

(e ∧ g → d) ⊗ ((c → d) ⊗ (a ∨ b) → p) ⊗ ((c → e) ∧ (c → g)) ⊗ ((c → g) → b) → p.
6.1 Routley–Meyer Semantics, Soundness and Completeness

We turn to prove the soundness and the completeness of the procedure for the extended language. In the previous section we introduced a semantics for substructural implicational logics and proved soundness and completeness with respect to it. We would like to extend this correspondence to the Harrop fragment of this section. We first extend Definition 5.23 by giving the truth conditions for the additional connectives.

DEFINITION 5.35. Let M = (W, ◦, 0, V) be an S-structure and let a ∈ W. We stipulate:
∆, {x1, x2, x3, x4} ⊢? x4 : p
  ∆, {x1, x3, x4} ⊢? x4 : (c → d) ⊗ (a ∨ b)
    ∆, {x1, x3} ⊢? x3 : c → d
      ∆, x5 : c, {x1, x3, x5} ⊢? x5 : d
        ∆, x5 : c, {x3, x5} ⊢? x5 : e ∧ g
          ∆, x5 : c, {x3, x5} ⊢? x5 : e
            ∆, x5 : c, {x5} ⊢? x5 : c
          ∆, x5 : c, {x3, x5} ⊢? x5 : g
            ∆, x5 : c, {x5} ⊢? x5 : c
    ∆, {x3, x4} ⊢? x4 : a ∨ b
      ∆, {x3, x4} ⊢? x4 : b
        ∆, {x3} ⊢? x4 : c → g
          ∆, x6 : c, {x3, x6} ⊢? x6 : g
            ∆, x6 : c, {x6} ⊢? x6 : c

Figure 5.2. Derivation for Example 5.34 (each query's children are indented beneath it).
M, a |= A ∧ B iff M, a |= A and M, a |= B,
M, a |= A ∨ B iff M, a |= A or M, a |= B,
M, a |= A ⊗ B iff there are b, c ∈ W such that b ◦ c ≤ a and M, b |= A and M, c |= B.

It is straightforward to extend Theorem 5.29, obtaining the soundness of the proof procedure with respect to this semantics.

THEOREM 5.36. Let Γ, γ ⊢? x : G succeed in the system S, with γ = {x1, . . . , xk} (ordered as shown), and for i = 1, . . . , k let Si = {A | xi : A ∈ Γ}; then

|=Fine_S (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G.

What about completeness? We will show that it holds for this fragment, although we will prove it rather indirectly. We first need to clarify and discuss the basic problem of this semantics. A remarkable point is that the semantics of Definitions 5.23 and 5.35 does not match the axiomatization given at the beginning of this section: it is too strong. In the case of R, E, T (i.e. the systems with contraction) it validates the formula

(A → B ∨ C) ∧ (B → C) → (A → C).

This formula is not a theorem of R, nor, a fortiori, of the other mentioned systems. This means that one cannot extend the semantics of Definition 5.23 with the simple boolean definition of conjunction and disjunction. Historically, this problem was first noticed by Urquhart for his semilattice semantics, and it has led to the formulation of many alternative but related semantics for substructural logics (see the references at the beginning of Section 5.1). In this regard, the definition of S-structures given in 5.23 is a simplification of the original one by Fine, which works for the full propositional language. In the original definition, Fine structures are enriched by a subset of special worlds corresponding to 'prime' (or saturated) theories; these special worlds are needed to interpret disjunctive formulas: they 'decide' disjunctions.
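For a finite S-structure given by explicit tables, the truth conditions just stated can be transcribed directly. The sketch below is ours (function and parameter names are assumptions); the clause for → follows the operational condition of Definition 5.23, i.e. a |= A → B iff every b with b |= A gives a ◦ b |= B:

```python
def fine_holds(worlds, comp, leq, val, a, f):
    """Evaluate M, a |= f in a finite Fine (S-)structure.
    comp[(b, c)] is b ∘ c; leq is the set of pairs (u, v) with u ≤ v;
    val[a] is the set of atoms true at a; formulas are atoms (str) or
    tagged tuples ('and' | 'or' | 'tensor' | 'imp', A, B)."""
    if isinstance(f, str):                       # atom
        return f in val[a]
    op, left, right = f
    if op == 'and':
        return (fine_holds(worlds, comp, leq, val, a, left) and
                fine_holds(worlds, comp, leq, val, a, right))
    if op == 'or':
        return (fine_holds(worlds, comp, leq, val, a, left) or
                fine_holds(worlds, comp, leq, val, a, right))
    if op == 'tensor':                           # exists b, c: b∘c ≤ a, b|=A, c|=B
        return any((comp[(b, c)], a) in leq and
                   fine_holds(worlds, comp, leq, val, b, left) and
                   fine_holds(worlds, comp, leq, val, c, right)
                   for b in worlds for c in worlds)
    if op == 'imp':                              # forall b: b|=A implies a∘b|=B
        return all(not fine_holds(worlds, comp, leq, val, b, left) or
                   fine_holds(worlds, comp, leq, val, comp[(a, b)], right)
                   for b in worlds)
    raise ValueError(op)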
Instead of making a distinction between prime and non-prime worlds, there is another possibility: generalize the semantics by considering a ternary relation rather than a composition operation ◦ as in Fine structures. This option leads to the semantics of Routley and Meyer [1973; 1982], which is now considered the standard semantics for relevant and substructural logics with distribution. In Routley–Meyer structures (RM structures for short) worlds are equipped with a three-place accessibility relation Rabc. If we interpret a, b, c as pieces of information, one can read Rabc as something of the sort 'the combination of a and b is included in c'. With this interpretation, there is an obvious connection with Fine structures: given a, b, c ∈ W, we can define the ternary relation Rabc as a ◦ b ≤ c. Thus, every S-structure determines an RM structure, but not vice versa.15

15 Another reading of Rabc is relative inclusion: b ≤_a c, 'b is included in c, from the point of view of a'.
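As a minimal illustration of this connection, the ternary relation induced by a finite Fine structure can be computed directly (a sketch, with our own naming):

```python
def rm_relation_from_fine(worlds, comp, leq):
    """Ternary relation induced by a finite Fine structure:
    Rabc iff a ∘ b ≤ c. comp[(a, b)] is a ∘ b; leq is the set of
    pairs (u, v) with u ≤ v."""
    return {(a, b, c) for a in worlds for b in worlds for c in worlds
            if (comp[(a, b)], c) in leq}
```

The converse fails: an arbitrary ternary relation need not arise from any composition operation in this way.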
Our plan is the following: we introduce the RM semantics and prove completeness with respect to it. Then we show that, for the Harrop fragment we consider, the RM semantics and the semantics of Definitions 5.23 and 5.35 are equivalent. This essentially means that, as long as positive disjunctions are not allowed in database formulas, the simplified Fine semantics of Definition 5.23, extended by the obvious truth definitions of conjunction and disjunction, is equivalent to the ternary-relation semantics of Routley and Meyer.16 The reason why we prove completeness with respect to the RM semantics is that it is easier to carry out a canonical structure construction for this semantics than for Fine's. Moreover, the correspondence of our deduction procedure with the Routley–Meyer semantics might be of interest in itself. In the canonical structure, the ternary accessibility relation turns out to be defined directly by the constraints involved in the reduction rule.

DEFINITION 5.37. Let us fix a language L. An RM S-structure M for L is a tuple of the form M = (W, R, 0, V), where W is a non-empty set, R is a ternary relation on W, 0 ∈ W, and V is a function of type W → Pow(Var). We define the relation ≤ by stipulating: a ≤ b ≡ R0ab. In all structures the following properties are assumed to hold:

(i) ≤ is transitive and reflexive.
(ii) a′ ≤ a and Rabc implies Ra′bc; c ≤ c′ and Rabc implies Rabc′.
(iii) a ≤ b implies V(a) ⊆ V(b).

The structures may satisfy additional properties. To formulate them we introduce the following notation:

R²(ab)cd ≡ ∃e(Rabe ∧ Recd),
R²a(bc)d ≡ ∃e(Raed ∧ Rbce).

The properties are the following:

(rm1) R²(ab)cd → R²a(bc)d
(rm2) R²(ab)cd → R²b(ac)d
(rm3) Rabc → R²(ab)bc
(rm4) ∀a∃b(Raba ∧ ∀d∀e(Rbde → R0de))

16 We recall once more that our language does not contain negation; if negation is present, this equivalence no longer holds.
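Conditions such as (rm1) are straightforward to check on a finite structure. A sketch (our own naming; R is a set of triples):

```python
def check_rm1(worlds, R):
    """Check (rm1): R²(ab)cd → R²a(bc)d on a finite structure, where
    R²(ab)cd ≡ ∃e(Rabe ∧ Recd) and R²a(bc)d ≡ ∃e(Raed ∧ Rbce)."""
    def r2_left(a, b, c, d):
        return any((a, b, e) in R and (e, c, d) in R for e in worlds)
    def r2_right(a, b, c, d):
        return any((a, e, d) in R and (b, c, e) in R for e in worlds)
    return all(not r2_left(a, b, c, d) or r2_right(a, b, c, d)
               for a in worlds for b in worlds
               for c in worlds for d in worlds)
```

The remaining conditions (rm2)–(rm6) admit entirely analogous checkers; all are decidable on finite structures.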
Table 5.4. Specific conditions for RM semantics.

Logic   (rm1)   (rm2)   (rm3)   (rm4)   (rm5)   (rm6)
FL        *
T-W       *       *
T         *       *       *
E-W       *       *               *
E         *       *       *       *
L         *       *               *       *
R         *       *       *       *       *
BCK       *       *               *       *       *
(rm5) Rabc → Rbac
(rm6) R00a

Truth conditions: for a ∈ W we define

• M, a |= p iff p ∈ V(a);
• M, a |= A → B iff ∀b, c ∈ W (Rabc ∧ M, b |= A ⇒ M, c |= B);
• M, a |= A ∧ B iff M, a |= A and M, a |= B;
• M, a |= A ⊗ B iff there are b, c ∈ W such that Rbca and M, b |= A and M, c |= B;
• M, a |= A ∨ B iff M, a |= A or M, a |= B.

We say that A is valid in M (denoted by M |= A) if M, 0 |= A. We say that A is (RM) S-valid if it is valid in all RM S-structures; validity is denoted by |=RM_S A.

The mapping between logics S and (sets of) conditions is shown in Table 5.4. Notice that each condition (rmi) is closely related to the corresponding condition (ai) of Definition 5.23. Perhaps the only one that is not apparent is the correspondence of (a4) and (rm4). The transcription of (a4) would be Ra0a; but this is too strong, as argued in [Anderson et al., 1992], page 172. (rm4) says that for each a there is a b (which may depend on a) which behaves as a right identity.

In order to prove completeness, we need to ensure a form of closure under cut. Since we have made a distinction between D-formulas and G-formulas, we must pay attention to which formulas we apply cut to. In general, we can prove the cut admissibility property for formulas which are simultaneously G- and D-formulas. But this is not enough for completeness. What we need is a sort of
'closure under cut' with respect to D-formulas. The point is that our proof procedure cannot prove D-formulas. A similar problem arose in the previous chapter for Harrop modal formulas (Section 7.2). We must express closure under cut rather indirectly. The property we need is given in the next proposition, and its meaning will become apparent in the completeness proof. The cut theorem required a compatibility constraint on the queries, which was denoted by COMP_S. We need it again. It can be seen that this predicate only depends on the labels, not on the formulas of the involved databases, so that given Γ, γ, x, u ∈ Lab(Γ), where u is the position of the cut-formula and x the current position on Γ, and similarly, given ∆, δ, y ∈ Lab(∆), we can write COMP_S(Γ, γ, x, u; ∆, δ, y)17 to denote the same set of conditions (c1), (c2), (c3) as in Definition 5.12.

PROPOSITION 5.38. Let (*) Γ[u : D1, . . . , u : Dk], γ ⊢? x : G succeed, and assume that Γ contains no formulas with label u other than D1, . . . , Dk. Let ∆, δ, y be such that the following conditions hold:

1. COMP_S(Γ, γ, x, u; ∆, δ, y);
2. for i = 1, . . . , k,
(a) if Di = qi is atomic, then ∆, δ ⊢? y : qi succeeds;
(b) if Di = G1 → . . . → Gm → q, then for all Σ1, . . . , Σm, σ1, . . . , σm, z1, . . . , zm such that RedS(δ, σ1, . . . , σm, y, z1, . . . , zm; zm) holds and Σi, σi ⊢? zi : Gi succeed, also ∆, Σ1, . . . , Σm, δ ∪ ⋃_i σi ⊢? zm : q succeeds.

Then Γ[u : D1, . . . , u : Dk/∆], γ[u/δ] ⊢? x[u/y] : G succeeds.

Proof. By induction on the length of a derivation of (*). In the cases of reduction and ⊗ we use Lemma 5.16.

The completeness is proved by a canonical structure construction.

DEFINITION 5.39.
Let M = (W, R, 0, V) be the structure defined as follows:

• 0 = (∅, v0);
• W is the set of pairs (Γ, x) (including 0) such that Γ is a database and x is a label, with the following restrictions (compare the definition of regular query, Definition 5.8):
– for R, L, BCK: if Γ ≠ ∅, then x ∈ Lab(Γ);
– for T, T-W, FL: x = max(Lab(Γ)) (thus, if Γ = ∅, then x = v0);

17 This is a slight abuse of notation, but it is necessary, as we do not have two queries to cut.
– for E, E-W: x ≥ max(Lab(Γ)).

• The relation R: R(Γ, x)(∆, y)(Σ, z) holds iff conditions (1) and (2) below hold (or (1) and (2′) for BCK):

(1) RedS(λΓ, λ∆, x, y; z);
(2) Γ ∪ ∆ = Σ, for all systems but BCK;
(2′) Γ ∪ ∆ ⊆ Σ, for BCK.

• V(Γ, x) = {p atom | Γ, λΓ ⊢? x : p succeeds}.

Notice that the ternary relation is defined as follows: let α, β, γ be sets of labels; then Rαβγ holds if and only if the following conditions hold:

• for R: α ∪ β = γ;
• for L: α ∪ β = γ and α ∩ β = ∅;
• for T: α ∪ β = γ and max(α) ≤ max(β);
• for FL: α ∪ β = γ and max(α) < min(β);
• for BCK: α ∪ β ⊆ γ and α ∩ β = ∅.

For E and E-W we must consider pairs (δ, u), where max(δ) ≤ u, and we have R(α, x)(β, y)(γ, z) iff α ∪ β = γ and x ≤ y = z. This definition of the ternary relation essentially summarizes the restrictions on Modus Ponens in the natural deduction formulation [Anderson and Belnap, 1975], where MP takes the form: from α : A → B and β : A infer γ : B, provided Rαβγ holds.

We show that the structure M we have just defined satisfies all the properties of Definition 5.37.

PROPOSITION 5.40. The relation (Γ, x) ≤ (∆, y) ≡ R0(Γ, x)(∆, y) satisfies conditions (i)–(iii) of Definition 5.37.
Proof. For condition (i), i.e. that ≤ is a reflexive and transitive relation, we use the fact that for every (Γ, x) we have RedS(∅, λΓ, v0, x; x). Condition (ii) is easy too, since (Γ, x) ≤ (∆, y) implies λΓ = λ∆ (or λΓ ⊆ λ∆ for BCK) and x = y in the cases where it matters (i.e. all cases except L, R, BCK). We get that, given any (Σ, z), RedS(λ∆, λΣ, y, z; z) implies RedS(λΓ, λΣ, x, z; z). From this (ii) easily follows. For (iii), let Γ, λΓ ⊢? x : p succeed and (Γ, x) ≤ (∆, y). In the case of T, T-W and FL we have that Γ = ∆ and x = y, so the result is obvious. In all other cases, we conclude by Propositions 5.6 and 5.7.

PROPOSITION 5.41. For each system S, the relation R satisfies the specific conditions for S.

Proof. One has to check all the properties for each specific system. As an example, we check properties (rm1) and (rm4); the others are similar and left to the reader.

For (rm1), let R²((Γ, x)(∆, y))(Σ, z)(Π, u). Then there is (Φ, v) such that R(Γ, x)(∆, y)(Φ, v) and R(Φ, v)(Σ, z)(Π, u). We have to show that there is (Ψ, w) such that R(∆, y)(Σ, z)(Ψ, w) and R(Γ, x)(Ψ, w)(Π, u). Among the several cases, we consider that of E and E-W; the others are similar, perhaps simpler. By hypothesis we have Γ ∪ ∆ = Φ and Φ ∪ Σ = Π, Red(λΓ, λ∆, x, y; v) and Red(λΦ, λΣ, v, z; u), therefore y = v and z = u. (We omit the indication of S from RedS.) Let us take (Ψ, w) = (∆ ∪ Σ, z). Since Red(λΦ, λΣ, v, z; u), we have v ≤ z, whence y ≤ z. Thus max(λΨ) ≤ z and (Ψ, z) is well-defined. Clearly,

(*) ∆ ∪ Σ = Ψ and

(**) Γ ∪ Ψ = Π.

From Red(λΦ, λΣ, v, z; u), λ∆ ⊆ λΦ, y = v, z = u, we get Red(λ∆, λΣ, y, z; z), which together with (*) shows R(∆, y)(Σ, z)(Ψ, z). By Red(λΓ, λ∆, x, y; v), we get x ≤ y and hence x ≤ z = u. For the case of E-W, by hypothesis we also have λΓ ∩ λ∆ = ∅ and (λΓ ∪ λ∆) ∩ λΣ = λΦ ∩ λΣ = ∅, so that λΓ ∩ λΨ = λΓ ∩ (λ∆ ∪ λΣ) = ∅. Thus, we can conclude that
Red(λΓ, λΨ, x, z; u), which, together with (**), shows R(Γ, x)(Ψ, z)(Π, u). This concludes the proof of (rm1).

Let us prove (rm4) for E and E-W. Let (Γ, x) be any world and consider the world (∅, x); we have Γ ∪ ∅ = Γ and Red(λΓ, ∅, x, x; x). This shows that R(Γ, x)(∅, x)(Γ, x) holds. Moreover, for any (∆, y), (Σ, z), if R(∅, x)(∆, y)(Σ, z) holds, then ∅ ∪ ∆ = Σ and Red(∅, λ∆, x, y; z), which implies y = z. Since v0 ≤ x, we also have Red(∅, λ∆, v0, y; z). We have shown that R(∅, v0)(∆, y)(Σ, z), that is, R0(∆, y)(Σ, z).

PROPOSITION 5.42. For each (Γ, x) ∈ W and goal G, we have:

(a) M, (Γ, x) |= G iff Γ, λΓ ⊢? x : G succeeds.

(b) If ∆ = {u : D1, . . . , u : Dm}, then M, (∆, u) |= ⋀_{i=1}^{m} Di.
Proof. We prove both directions of (a) and (b) by simultaneous induction on the structure of G. Let us consider (a) first. If G is an atom, then the claim holds by definition. The cases of ∧ and ∨ are easy and left to the reader. We consider the cases of ⊗ and →, for all systems except BCK; we leave it to the reader to modify (easily) the proof to cover BCK.

Let G ≡ G1 ⊗ G2.

(⇒) Suppose M, (Γ, x) |= G1 ⊗ G2. Then there are (∆, y) and (Σ, z) such that R(∆, y)(Σ, z)(Γ, x) and M, (∆, y) |= G1 and M, (Σ, z) |= G2. By definition of R, we have:

(*) ∆ ∪ Σ = Γ, whence λΓ = λ∆ ∪ λΣ, and

(**) RedS(λ∆, λΣ, y, z; x).

By the induction hypothesis we get ∆, λ∆ ⊢? y : G1 and Σ, λΣ ⊢? z : G2, so that from (*), by monotony, we obtain:

(***) Γ, λ∆ ⊢? y : G1 and Γ, λΣ ⊢? z : G2.

By definition, from (*), (**), (***), we have Γ, λΓ ⊢? z : G. Since either x = z, or z ∈ λΓ (in the case of R, L), by Proposition 5.7 we obtain
Γ, λΓ ⊢? x : G.

(⇐) Let Γ, λΓ ⊢? x : G1 ⊗ G2 succeed; by definition there are γ1, γ2, x1 and x2 such that λΓ = γ1 ∪ γ2 and

1. RedS(γ1, γ2, x1, x2; x),
2. either x2 ∈ γ2 (in the cases L, R) or x = x2 (in all other cases),
3. Γ, γi ⊢? xi : Gi succeeds, for i = 1, 2.

Let Γi = {y : D ∈ Γ | y ∈ γi}, for i = 1, 2, so that γi = λΓi and

4. Γ = Γ1 ∪ Γ2;

then by Proposition 5.3 we have Γi, γi ⊢? xi : Gi, for i = 1, 2. From 2. we get that, according to each system, either x ∈ γi (R, L), or max(γi) = xi (FL, T-W, T), or max(γi) ≤ xi (E, E-W). Thus (Γi, xi) ∈ W, and by the induction hypothesis we have

5. M, (Γ1, x1) |= G1 and M, (Γ2, x2) |= G2.

From 1., 2., 4., we can conclude that R(Γ1, x1)(Γ2, x2)(Γ, x) holds, whence by 5. we get M, (Γ, x) |= G1 ⊗ G2.

Let G = (D1 ∧ . . . ∧ Dm) → G′.

(⇒) Suppose M, (Γ, x) |= (D1 ∧ . . . ∧ Dm) → G′. Let ∆ = {u : D1, . . . , u : Dm}, with u > x, and let Σ = Γ ∪ ∆. For all systems we have:

(*) R(Γ, x)(∆, u)(Σ, u).

By (b), we know that

(**) M, (∆, u) |= ⋀_{i=1}^{m} Di.

From the hypothesis of (⇒), by (*) and (**), we get that M, (Σ, u) |= G′, whence by the induction hypothesis Γ ∪ {u : D1, . . . , u : Dm}, λΓ ∪ {u} ⊢? u : G′ succeeds, since Σ = Γ ∪ {u : D1, . . . , u : Dm}. Thus Γ, λΓ ⊢? x : (D1 ∧ . . . ∧ Dm) → G′ succeeds.

(⇐) Let Γ, λΓ ⊢? x : (D1 ∧ . . . ∧ Dm) → G′ succeed. Let u > x; from the hypothesis we have
1. Γ ∪ {u : D1, . . . , u : Dm}, λΓ ∪ {u} ⊢? u : G′ succeeds.

Now let R(Γ, x)(∆, y)(Σ, z) and M, (∆, y) |= ⋀_{i=1}^{m} Di. Thus M, (∆, y) |= Di, for i = 1, . . . , m. It is easy to see that

2. COMP_S(Γ, λΓ, x, u; ∆, λ∆, y) holds.

3. Let Di be an atom qi; then by the induction hypothesis we have that ∆, λ∆ ⊢? y : qi succeeds.

4. Let Di = G1 → . . . → Gm → q. By hypothesis we have M, (∆, y) |= Di. Let Ψj, zj for j = 1, . . . , m be such that:

(i) Ψj, λΨj ⊢? zj : Gj succeeds for j = 1, . . . , m, and RedS(λ∆, λΨ1, . . . , λΨm, y, z1, . . . , zm; zm) holds.

We easily obtain that

(ii) R(∆, y)(Ψ1, z1)(∆ ∪ Ψ1, z1),
R(∆ ∪ Ψ1, z1)(Ψ2, z2)(∆ ∪ Ψ1 ∪ Ψ2, z2),
R(∆ ∪ Ψ1 ∪ Ψ2, z2)(Ψ3, z3)(∆ ∪ Ψ1 ∪ Ψ2 ∪ Ψ3, z3),
. . . ,
R(∆ ∪ Ψ1 ∪ . . . ∪ Ψm−1, zm−1)(Ψm, zm)(∆ ∪ Ψ1 ∪ . . . ∪ Ψm, zm)

hold. By (i) and the induction hypothesis of (a), we have M, (Ψj, zj) |= Gj, whence by (ii) we get M, (∆ ∪ Ψ1 ∪ . . . ∪ Ψm, zm) |= q, so that by the induction hypothesis of (a) again we can conclude that ∆ ∪ ⋃_j Ψj, λ∆ ∪ ⋃_j λΨj ⊢? zm : q succeeds.

By 1.–4., all the hypotheses of Proposition 5.38 are satisfied; thus we obtain that

5. Γ ∪ ∆, λΓ ∪ λ∆ ⊢? y : G′ succeeds.

By hypothesis we have Σ = Γ ∪ ∆ and, whenever it matters, y = z; thus by the induction hypothesis of (a) on 5. we finally obtain M, (Σ, z) |= G′. This concludes the proof of (a).

(Part b). Let ∆ = {u : D1, . . . , u : Dm}; we show that M, (∆, u) |= Di for each Di. If Di is an atom, then ∆, λ∆ ⊢? u : Di succeeds, and the claim follows by the induction hypothesis of (a). Let Di be of the form G1 → . . . → Gk → q. We want to show that M, (∆, u) |= Di. To this end, let

R(∆, u)(Σ1, v1)(Ψ1, z1),
R(Ψ1, z1)(Σ2, v2)(Ψ2, z2),
. . . ,
R(Ψk−1, zk−1)(Σk, vk)(Ψk, zk)

and
(i) M, (Σj, vj) |= Gj for j = 1, . . . , k.

We must show that M, (Ψk, zk) |= q. By definition of R, we easily get:

1. Ψj = ∆ ∪ ⋃_{l=1}^{j} Σl;
2. RedS(λ∆ ∪ ⋃_{l=1}^{j−1} λΣl, λΣj, vj, zj; zj);
3. vj = zj for the systems where this matters.

From 2. and 3. it is not difficult to see that

(ii) RedS(λ∆, λΣ1, . . . , λΣk, u, v1, . . . , vk; vk)

holds. From (i), by the induction hypothesis of (a), we get that Σj, λΣj ⊢? vj : Gj succeeds, so that by monotony and 1.,

(iii) Ψk, λΣj ⊢? vj : Gj succeeds.

Since u : G1 → . . . → Gk → q ∈ Ψk, by (ii) and (iii) we get that Ψk, λ∆ ∪ ⋃_{j=1}^{k} λΣj ⊢? vk : q succeeds, that is to say,

(iv) Ψk, λΨk ⊢? zk : q succeeds.

By (iv) and the induction hypothesis of (a), we finally get M, (Ψk, zk) |= q.
THEOREM 5.43 (Completeness). Let Γ be a labelled set of D-formulas with labels λΓ = {x1, . . . , xk} ordered as shown. Let Si = {D | xi : D ∈ Γ} for i = 1, . . . , k, let x = max(λΓ), and let G be a G-formula. Then: if |=RM_S (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G (equivalently, |=RM_S ⋀S1 → . . . → ⋀Sk → G), then Γ, λΓ ⊢? x : G succeeds.

Proof. By contraposition, suppose that Γ, λΓ ⊢? x : G does not succeed. Then by Proposition 5.42 we have

1. M, (Γ, x) ⊭ G.

Let us consider the following databases Γi ⊆ Γ corresponding to the Si: Γ1 = {x1 : D | D ∈ S1}, . . . , Γk = {xk : D | D ∈ Sk}; then we have (Γi, xi) ∈ W, for i = 1, . . . , k. By Proposition 5.42 we also have M, (Γi, xi) |= D for each D ∈ Si and i = 1, . . . , k,
whence

(*) M, (Γi, xi) |= ⋀Si, for i = 1, . . . , k.

It is easy to see that RedS(λΓ1, λΓ2, x1, x2; x2) holds and that (Γ1 ∪ Γ2, x2) ∈ W, so that R(Γ1, x1)(Γ2, x2)(Γ1 ∪ Γ2, x2) also holds. By (*), we can conclude that M, (Γ1 ∪ Γ2, x2) |= ⋀S1 ⊗ ⋀S2. We can repeat the same argument and show that R(Γ1 ∪ Γ2, x2)(Γ3, x3)(Γ1 ∪ Γ2 ∪ Γ3, x3) holds, so that we get M, (Γ1 ∪ Γ2 ∪ Γ3, x3) |= ⋀S1 ⊗ ⋀S2 ⊗ ⋀S3. Proceeding in this way, we finally get

2. M, (Γ, x) |= ⋀S1 ⊗ . . . ⊗ ⋀Sk.

Since R(∅, v0)(Γ, x)(Γ, x), by 1. and 2., we get

M, 0 ⊭ (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G,
which completes the proof.
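The label-set readings of the canonical ternary relation (Definition 5.39) can be transcribed directly. The sketch below (our own naming) covers the systems whose relation depends only on the label sets, assuming non-empty sets where max/min are taken; E and E-W, which also track a current position, are omitted:

```python
def canonical_R(system, alpha, beta, gamma):
    """Ternary relation on sets of (integer) labels in the canonical
    structure of Definition 5.39, per system. alpha, beta, gamma are
    non-empty Python sets; E/E-W are omitted from this sketch."""
    if system == 'R':
        return alpha | beta == gamma
    if system == 'L':
        return alpha | beta == gamma and not (alpha & beta)
    if system == 'T':
        return alpha | beta == gamma and max(alpha) <= max(beta)
    if system == 'FL':
        return alpha | beta == gamma and max(alpha) < min(beta)
    if system == 'BCK':
        return (alpha | beta) <= gamma and not (alpha & beta)
    raise ValueError(system)
```

As the text notes, these clauses mirror the restrictions on Modus Ponens in the labelled natural deduction formulation.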
We finally show that for G- and D-formulas the RM semantics is equivalent to the Fine semantics of Definitions 5.23 and 5.35. We show that any RM structure determines a Fine structure which is essentially equivalent as far as D- and G-formulas are concerned. The construction is rather standard (cf. [Anderson et al., 1992], page 223). Let M = (W, R, 0, V) be an RM structure. Call a subset X ⊆ W upward-closed if a ∈ X and a ≤ b (i.e. R0ab) implies b ∈ X. We define the Fine structure SM = (WM, ◦M, 0M, ≤, VM) by stipulating:

WM = {X ⊆ W | X is upward-closed},
0M = {a ∈ W | ∀b, c ∈ W (Rabc → R0bc)},
X ◦M Y = {c ∈ W | ∃a ∈ X ∃b ∈ Y Rabc},
X ≤ Y ⇔ Y ⊆ X,
VM(X) = ⋂_{a∈X} V(a).

Notice that ◦M and 0M are well-defined, that is, X ◦M Y is upward-closed, and so is 0M. The reader is invited to check the following.

PROPOSITION 5.44. (i) SM satisfies all the conditions of Definition 5.23 common to all structures. (ii) If M satisfies condition (rmi) of Definition 5.37, then SM satisfies the corresponding condition (ai) of Definition 5.23.
Proof. We exemplify (ii) for the case of condition (a4), namely X ◦M 0M ≤ X; the others are left to the reader as an exercise. We have to show X ⊆ X ◦M 0M. Let a ∈ X. By condition (rm4), there is b such that Raba and, for all c, d ∈ W, Rbcd → R0cd. Thus, by definition of SM, we have b ∈ 0M, so that a ∈ X ◦M 0M.

Our purpose is to prove the equivalence of the two semantics for G- and D-formulas. The equivalence actually holds for a class of formulas a bit more general than the D- and G-formulas defined above. Let the classes of extended D-formulas and extended G-formulas be defined as follows:

D := q | D ∧ D | D ⊗ D | G → D,
G := q | G ∧ G | G ⊗ G | D → G | G ∨ G.

The precise relation between the two semantics is made clear in the next proposition.

PROPOSITION 5.45. Given an RM structure M = (W, R, 0, V), let SM = (WM, ◦M, 0M, ≤, VM) be the corresponding Fine structure as defined above. For any extended G-formula G, any extended D-formula D, and X ∈ WM:

1. ∀a ∈ X, M, a |= D implies SM, X |= D;
2. SM, X |= G implies ∀a ∈ X, M, a |= G.

Proof. The two claims are proved by mutual induction on the complexity of G and D. We exemplify the cases of implication and disjunction.

Let D = G → D1 and assume ∀a ∈ X, M, a |= D. Let SM, Y |= G; by the induction hypothesis of 2. we have

(*) ∀b ∈ Y, M, b |= G.

Let c ∈ X ◦M Y; then for some a ∈ X and b ∈ Y we have Rabc, thus by hypothesis and (*) we get M, c |= D1. We have shown that ∀c ∈ X ◦M Y, M, c |= D1, thus by the induction hypothesis of 1. we get SM, X ◦M Y |= D1.

Conversely, let G = D → G1 and SM, X |= D → G1. Let a ∈ X; we show M, a |= D → G1. To this end, let b, c ∈ W be such that Rabc and M, b |= D. Define Y = {u ∈ W | b ≤ u}. By monotony we get ∀u ∈ Y, M, u |= D. By the induction hypothesis of 1. we can conclude SM, Y |= D. Therefore, by hypothesis, we have SM, X ◦M Y |= G1.
Since c ∈ X ◦M Y, by the induction hypothesis of 2. we obtain M, c |= G1.

Let G = G1 ∨ G2 and assume SM, X |= G; then SM, X |= G1 or SM, X |= G2. Suppose the former holds; then by the induction hypothesis of 2. we get ∀a ∈ X, M, a |= G1, thus also ∀a ∈ X, M, a |= G1 ∨ G2. If SM, X |= G2 we argue similarly.

Notice that the above proof does not work if disjunction of D-formulas is allowed. For instance, the formula (A → B ∨ C) ∧ (B → C) → (A → C), with A, B, C atoms, is not an extended G-formula.

From the previous proposition we easily obtain:

THEOREM 5.46. Let Γ, γ ⊢? x : G be a query with γ = {x1, . . . , xk} (ordered as shown), and let Si = {A | xi : A ∈ Γ}, for i = 1, . . . , k. We have

|=Fine_S (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G if and only if |=RM_S (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G.

Proof. The direction from RM-validity to Fine-validity is proved by noticing that a Fine structure M = (W, ◦, 0, ≤, V) trivially determines an RM structure by defining the ternary relation Rabc ≡ a ◦ b ≤ c. For the converse direction, observe first that F = (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G is an extended G-formula. By the hypothesis, |=Fine_S F. Let M be any RM structure and let SM be the Fine structure built from M as described above. By hypothesis SM, 0M |= F. By the previous proposition, we get M, 0 |= F, as 0 ∈ 0M.

To conclude, we summarize all the results of this section in the next theorem.

THEOREM 5.47. Let Γ, γ ⊢? x : G be a query with γ = {x1, . . . , xk} (ordered as shown), and let Si = {A | xi : A ∈ Γ}, for i = 1, . . . , k. The following are equivalent:

1. Γ, γ ⊢? x : G succeeds in the proof system for S;
2. (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G is valid under the Fine semantics for S;
3. (⋀S1 ⊗ . . . ⊗ ⋀Sk) → G is valid under the RM semantics for S.
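The construction of SM from a finite RM structure can likewise be sketched (our own naming; the distinguished world is assumed to be the one named 0, and upward-closedness of X ◦M Y relies on the structure satisfying condition (ii) of Definition 5.37):

```python
from itertools import chain, combinations

def fine_from_rm(worlds, R):
    """Build the Fine structure S_M of a finite RM structure (W, R, 0):
    its worlds are the upward-closed X ⊆ W (with a ≤ b iff R0ab),
    X ∘M Y = {c | ∃a∈X ∃b∈Y Rabc}, and 0M = {a | ∀b,c (Rabc → R0bc)}.
    R is a set of triples; 0 is assumed to be the world named 0."""
    leq = {(a, b) for a in worlds for b in worlds if (0, a, b) in R}
    def up_closed(X):
        return all(b in X for a in X for (a2, b) in leq if a2 == a)
    subsets = chain.from_iterable(
        combinations(sorted(worlds), r) for r in range(len(worlds) + 1))
    WM = [frozenset(X) for X in subsets if up_closed(X)]
    def comp(X, Y):  # X ∘M Y
        return frozenset(c for c in worlds
                         for a in X for b in Y if (a, b, c) in R)
    zeroM = frozenset(a for a in worlds
                      if all((0, b, c) in R
                             for b in worlds for c in worlds
                             if (a, b, c) in R))
    return WM, comp, zeroM
```

On a two-world structure where 0 ≤ 1, for instance, the singleton {0} is not upward-closed and so is not a world of SM.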
7 A DECISION PROCEDURE FOR IMPLICATIONAL R
For logics without contraction, namely FL, T-W, E-W, L, and BCK, the proof systems we have defined can serve as the basis of decision procedures for the respective systems. To see this, simply observe that:
1. no database formula can be used more than once in a reduction step; this prevents loops;
2. there are only finitely many γi and labels xi to check for the constraints RedS (which are obviously decidable).

Thus a procedure implementing the proof systems will terminate, and this gives us decidability. The situation changes radically for the other systems R, E, T. The implicational fragments of R and E are decidable, whereas the same fragment of T is not known to be decidable. In this section we modify the proof procedure for R in order to obtain a decision procedure. The implicational fragment of R was shown to be decidable by Kripke in a seminal work [Kripke, 1959]. This result has been generalized to the whole of propositional R without distribution. It has been proved in [Urquhart, 1984] that R (as well as E and T) with distributive conjunction and disjunction is undecidable.

We first reformulate the procedure for R with the following modifications:

1. we insert each formula in the database only once, but we keep track of how many copies of each formula are present; thus a database is still a set of labelled formulas, with a 1–1 relation between formulas and labels. On the other hand, we keep track of the multiplicity of the formulas in the goal label by using multisets of labels;

2. we add a loop-checking mechanism which ensures termination of the deduction procedure. The loop-checking mechanism is very similar to the one described in Chapter 2 for intuitionistic implication and is based on the fundamental lemma by Kripke about irredundant sequences of multisets;

3. when we perform a reduction step, we are allowed to reduce the atomic goal with respect to a formula, say z : A1 → . . . → An → q, even if z is not in the goal label, say α; we cancel one occurrence of z from the goal label (if any), and we require that the αi be disjoint, i.e. ⊔i αi = α − [z].
This gives better control over the splitting of the labels, although it is not strictly necessary to ensure termination. With this modification, the set of resources α in a query no longer records the available resources; rather, it records the resources we still have to use in a proof.

We first notice that for R the structure of a query can be simplified: in a regular query Γ, α ⊢? x : G the occurrence of x is not necessary, as x can be chosen arbitrarily in α.18

DEFINITION 5.48 (Ploop). A query has the form:

18 The same simplification can be done in all other systems except E and E-W, since either x can be chosen arbitrarily in α, or x is uniquely determined as the maximum label in α; thus in both cases we do not need to keep track of it.
Γ ⊢? α : G, H, where Γ = {x1 : A1, . . . , xn : An} is a set of labelled formulas with xi = xj ↔ Ai = Aj, α is a multiset of labels, and H is a set of pairs {(α1, q1), . . . , (αn, qn)}, where each αi is a multiset of labels and each qi is an atomic goal. H is called the history.

The deduction rules of Ploop are as follows:

(success) Γ ⊢? α : q, H immediately succeeds if for some label x, α ⊆ [x] and x : q ∈ Γ.

(implication 1) if for no label y, y : A ∈ ∆, from ∆ ⊢? α : A → B, H we step to

∆ ∪ {x : A} ⊢? α ⊔ [x] : B, ∅,

where x ∉ Lab(∆).

(implication 2) if for some label y, y : A ∈ ∆, from ∆ ⊢? α : A → B, H we step to

∆ ⊢? α ⊔ [y] : B, H.

(reduction) from

∆ ⊢? α : q, H,

if there is some y : C ∈ ∆, with C = A1 → . . . → Ak → q, we step, for i = 1, . . . , k, to

∆ ⊢? αi : Ai, H ∪ {(α, q)},

provided the following conditions hold:

1. there is no (β, q) ∈ H such that β ⊆| α;
2. ⊔_{i=1}^{k} αi = α − [y].

To check the validity of A, we search for a derivation of the query ⊢? ∅ : A, ∅. The loop-checking condition is expressed by condition 1. of the reduction rule. The idea of loop-checking is the following: we stop the computation when we perform a reduction step and re-ask the same atomic goal from a database which (possibly) contains more copies of some formulas than the database of the query in which the atomic goal was first asked. Intuitively, if the latter query succeeds (with the bigger label), then the former succeeds, because of contraction; thus there is no reason to carry on such a branch.
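The procedure Ploop can be prototyped quite directly. The sketch below (Python, our own naming) treats only implicational formulas, reads β ⊆| α as 'same underlying label set and β ⊆ α as multisets', and makes no attempt at efficiency:

```python
from collections import Counter
from itertools import product

def sub_bar(beta, alpha):
    # beta ⊆| alpha: same underlying label set and beta ⊆ alpha as
    # multisets (our reading of the inclusion used in the loop check).
    return set(beta) == set(alpha) and all(beta[l] <= alpha[l] for l in beta)

def splits(alpha, k):
    # All ways to write multiset alpha as a disjoint union of k multisets.
    items = list(alpha.elements())
    for assign in product(range(k), repeat=len(items)):
        parts = [Counter() for _ in range(k)]
        for lab, i in zip(items, assign):
            parts[i][lab] += 1
        yield parts

def prove(db, alpha, goal, hist):
    """P_loop for implicational R (Definition 5.48), as a sketch.
    db: dict label (positive int) -> formula, 1-1; alpha: Counter of
    labels still to use; formulas are atoms (str) or ('imp', A, B);
    hist: frozenset of ((label, count) items, atomic goal) pairs."""
    if isinstance(goal, tuple):                    # goal is A -> B
        _, a, b = goal
        for y, f in db.items():                    # (implication 2)
            if f == a:
                return prove(db, alpha + Counter([y]), b, hist)
        x = max(db, default=0) + 1                 # (implication 1): fresh label
        return prove({**db, x: a}, alpha + Counter([x]), b, frozenset())
    n = sum(alpha.values())                        # atomic goal: (success)?
    if n == 0 and goal in db.values():
        return True
    if n == 1 and db.get(next(iter(alpha.elements()))) == goal:
        return True
    if any(q == goal and sub_bar(Counter(dict(b)), alpha) for b, q in hist):
        return False                               # loop check: branch is stuck
    hist2 = hist | {(frozenset(alpha.items()), goal)}
    for y, f in db.items():                        # (reduction)
        args = []
        while isinstance(f, tuple):
            args.append(f[1])
            f = f[2]
        if f == goal and args:
            for parts in splits(alpha - Counter([y]), len(args)):
                if all(prove(db, p, g, hist2) for g, p in zip(args, parts)):
                    return True
    return False
```

On the examples below, the sketch proves contraction (a → a → b) → a → b and the formula of Example 5.51, while weakening-style non-theorems such as a → b fail, and the query of Example 5.50 terminates unsuccessfully thanks to the loop check.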
EXAMPLE 5.49. Let

x : p → p ⊢? [x] : p, ∅.

We step to

x : p → p ⊢? ∅ : p, {([x], p)},

and then to

x : p → p ⊢? ∅ : p, {([x], p), (∅, p)};

at this point the computation is stuck by condition 1. on reduction.

EXAMPLE 5.50. Let

x : (q → p) → p ⊢? [x] : p, ∅.

We step to

x : (q → p) → p ⊢? ∅ : q → p, {([x], p)},

and then to

x : (q → p) → p, y : q ⊢? [y] : p, ∅,

then to

x : (q → p) → p, y : q ⊢? [y] : q → p, {([y], p)},
and then to

x : (q → p) → p, y : q ⊢? [y, y] : p, {([y], p)};

now the computation is stuck, since [y] ⊆| [y, y].

EXAMPLE 5.51. We show that the following formula is a theorem of R:

(a → b → b) → a → a → b → b.

In Figure 5.3 we show a derivation. At every step we show only the additional formulas introduced in the database, if any. We start with ⊢? ∅ : (a → b → b) → a → a → b → b, ∅; after a few steps by the implication rules, we arrive at (5); then we go on by reduction with respect to x1 : a → b → b. Notice that no other rule is applicable. We generate (6) and (7). Query (6) immediately succeeds; we can apply reduction to query (7) with respect to x1 : a → b → b again (no other rule is applicable), and we get (8) and (9), which immediately succeed.

This formula was considered as an efficiency test for proof systems for relevant logic (see [Thistlewaite et al., 1988]). A proof of the above formula by the theorem prover presented in [Ohlbach and Wrightson, 1983], based on a semantical translation of relevant logic into classical logic, took about 10 minutes of CPU time, whereas it took only 1/10 of a second with the program KRIPKE by Thistlewaite, McRobbie and Meyer. In this example some non-atomic data has to be used twice along the same branch. In a naive implementation of the sequent system for R one
(1) ⊢? ∅ : (a → b → b) → a → a → b → b, ∅
(2) x1 : a → b → b ⊢? [x1] : a → a → b → b, ∅
(3) x2 : a ⊢? [x1, x2] : a → b → b, ∅
(4) ⊢? [x1, x2, x2] : b → b, ∅
(5) x3 : b ⊢? [x1, x2, x2, x3] : b, ∅
(6) ⊢? [x2] : a, {([x1, x2, x2, x3], b)}
(7) ⊢? [x2, x3] : b, {([x1, x2, x2, x3], b)}
(8) ⊢? [x2] : a, {([x1, x2, x2, x3], b), ([x2, x3], b)}
(9) ⊢? [x3] : b, {([x1, x2, x2, x3], b), ([x2, x3], b)}

Figure 5.3. Derivation for Example 5.51.

has to use contraction to duplicate the formula a → b → b, and this must be done at the beginning of the proof.19 It is not true that the first time the goal repeats in one branch we are in a loop; but what we can prove is: if there is a loop (that is, a branch which can be continued forever), then eventually we will find a (β, q) and a subsequent (α, q) with β ⊆| α.

In what follows we prove soundness, completeness and termination of Ploop. First we prove that the procedure Ploop without the loop-checking condition, called Pnop, is sound and complete. Then we prove that the loop-checking restriction preserves completeness, i.e. Ploop is complete; finally we show that Ploop always terminates.

19 The decision procedure implemented by the mentioned program KRIPKE is based on the possibility of eliminating contraction by building its effects into the logical rules in a controlled way.
5. SUBSTRUCTURAL LOGICS
THEOREM 5.52. The procedure Pnop is sound and complete for R.

Proof. Let Pold be the proof system of Section 2 (with the obvious simplification introduced in this section). We prove that Pnop is equivalent to Pold. We break the proof into two steps by introducing an intermediate proof system called Pint. Pint works the same as Pold, but the conditions on success and on reduction are changed as follows (the αs are sets in this context):

(Success) α ⊆ {x}, and x : q ∈ Γ;
(Reduction) Let y : C be the formula used for reduction; then ⋃_{i=1}^k αi = α − {y}, αi ∩ αj = ∅, and we do not require y ∈ α.

We first prove the equivalence of Pold and Pint.

(⇐) We prove: if Q = ∆ `? α : G succeeds in Pold, then it succeeds in Pint. The claim is proved by induction on the height of a successful derivation of Q in Pold. The only non-trivial case is the one of reduction. We need the following fact about Pint, whose proof is by induction on derivations and is left to the reader:

(fact) If Σ `? σ : G succeeds and δ ⊆ σ, then also Σ `? δ : G succeeds.

Now let a successful derivation of Q proceed by reduction; then G is an atom q, there is some y : C ∈ ∆ with C = A1 → . . . → Ak → q, and there are αi and xi for i = 0, . . . , k such that:

1. α0 = {y},
2. ⋃_{i=0}^k αi = α, and for i = 1, . . . , k, we step to Qi = ∆ `? αi : Ai.

By the induction hypothesis, the Qi succeed in Pint. Now let α′i = αi − ⋃_{0≤j<i} αj . . .

. . . v > 0. Inspecting D from the root downward, choose a query Qα which violates the loop-checking constraint. Let Qα = Γ `? α : q, H′. If Qα is a violation, then there is (β, q) ∈ H′ such that β ⊆| α. This means that above Qα, on the path from Q to Qα in D, there occurs a query Qβ = ∆ `? β : q, H″. Let Dα be the subtree rooted at Qα and Dβ the subtree rooted at Qβ; let r(Dα) = (hα, vα) and r(Dβ) = (hβ, vβ). We have hα < hβ ≤ h; moreover vα < vβ, whence:
r(Dα) < r(Dβ) ≤ r(D).

Since (β, q) ∈ H′, the history has not been cleared (because of the implication1 rule) along the path from Qβ to Qα; but this entails that no new formulas have been introduced in ∆, that is, Γ = ∆ and H″ ⊆ H′. By the previous lemmas, the query Q*β = Γ `? β : q, ∅ succeeds by a derivation D1 embeddable in Dα of height h′1 ≤ hα < h, whence r(D1) < r(D). We may apply the induction hypothesis to Q*β and D1, obtaining that Q*β succeeds by a derivation D′β of rank (h′β, 0), with h′β ≤ h′1 ≤ hα < hβ; moreover, D′β is embeddable in D1, and hence in Dα. Now let D″β be obtained from D′β by inserting H″ in Q*β; that is, the top query of D″β is Qβ, and the history is propagated accordingly. D″β too is embeddable in Dα, since H″ ⊆ H′. Obviously, the derivation D″β is successful. We have r(D″β) = (h′β, v″β), for some v″β. We show that it must be v″β < vβ. By hypothesis we have vα < vβ; we prove that v″β ≤ vα. To this end, let Qt be a violation in D″β. Since D′β does not contain violations, Qt must be a violation because of the history H″ which has been inserted in Qβ and propagated in accordance with the deduction rules. Qt must have the following form: Qt = Γ `? δ : r, Ht, where H″ ⊆ Ht and there is (γ, r) ∈ H″ with γ ⊆| δ. Notice that the database must be Γ, otherwise the history would have been cleared. Since D″β is embeddable in Dα, there is a corresponding query Q′t in Dα of the form Q′t = Γ `? δ′ : r, H′t, with H″ ⊆ H′t and δ ⊆ δ′. We show that Q′t must be a violation in Dα, that is, γ ⊆| δ′. Suppose this is not the case. First observe that, since (γ, r) ∈ H′t, it must be that (*) δ̄′ ⊆ γ̄; this follows from the fact that, if new labels had been introduced in δ′, the history would have been cleared, and this is not the case. Suppose that γ ̸⊆| δ′; by (*), we then have either δ̄′ ⊂ γ̄ or δ′ ⊂| γ. In the former case we have δ̄′ ⊂ γ̄ = δ̄, against the fact that δ ⊆ δ′. In the latter case, we have δ′ ⊂| γ ⊆| δ, and hence δ′ ⊂| δ, contradicting again the fact that δ ⊆ δ′. We have shown that for each violation in D″β there is a corresponding violation in Dα, whence v″β ≤ vα < vβ.
We obtain a new successful derivation D∗ of the original query Q, by replacing Dβ by Dβ00 in D. Since h0β < hβ and vβ00 < vβ , we have that h(D∗ ) ≤ h and v(D∗ ) < v, whence r(D∗ ) < r(D). To conclude the proof, we apply the induction hypothesis to D∗ . To prove that Ploop always terminates, we need a property of sequences of multisets, the so-called Kripke’s (or Curry’s) lemma [Kripke, 1959] . Let α1 , α2 . . . , . . . , αi ,αi+1 . . . be a sequence of multisets on a finite set S, we say that the sequence is irredundant if, whenever i < j, it is not the case that αi ⊆| αj . LEMMA 5.57. Any irredundant sequence of multisets α1 , α2 , . . . , αi , αi+1 . . . on a finite set is finite. THEOREM 5.58 (Termination). Procedure Ploop always terminates. Proof. Let Q = ∆ `? α : G, ∅ be a query, we show that any derivation D with root Q in Ploop is finite. We argue by absurdity. Suppose D is infinite. Since D is a finitely-branching tree (by K¨onig’s lemma) it has an infinite branch B. Let Q0 = Σ `? γ : G0 , H be any query in B. Since the number of subformulas of ∆ and G is finite, say k, no more than k distinct labels, say S = {x1 , . . . , xk }, can occur in γ. That is to say, γ is a finite multiset on S. Moreover, we can assume also that Lab(∆) ⊆ Lab(Σ) ⊆ S. Let Q0 = Q, Q1 , . . . , Qn , Qn+1 , . . . be the sequence of the query on B. There must be a query Qi = Γi `? αi : Gi , Hi on B, such that: for all Qj = Γj `? γj : Gj , Hj on B with j ≥ i, Γj = Γi . This follows from the fact that, for every Γj , Lab(Γj ) ⊆ S, and from the fact that if Γ `? α : G, H precedes Γ0 `? α0 : G0 , H 0 , then Γ ⊆ Γ0 . But this implies that, from Qi onwards on B, the history will never be cleared, that is: (*) for all j ≥ i, Hj ⊆ Hj+1 . Since B is infinite, after Qi , there must be infinitely many queries Q0j with the same atomic goal q 0 . Let us consider the infinite subsequence of such queries Q0j = Γ `? γj0 : q 0 , Hj0 . Now we are ready to derive a contradiction. By Kripke’s 0 , . . . 
must contain some lemma, the infinite sequence of multisets (on S) γi0 , γi+1 0 0 0 0 γl and γk such that l < k and γl ⊆| γk ; they come from two queries: Q0l = Γ `? γl0 : q 0 , Hl
and
Q0k = Γ `? γk0 : q 0 , Hk .
0 0 : Gl+1 , Hl+1 which follows Q0l in B must be The query Q0l+1 = Γ `? γl+1 obtained from Q0l by reduction (it may be Q0l+1 = Q0k ), hence by (*) we have: 0 = Hl0 ∪ {(γl0 , q 0 )} ⊆ Hk0 . Hl+1
But this implies that (γ′l, q′) ∈ H′k, and hence, by the loop-checking condition, branch B must end with Q′k.

Although the procedure for implicational R terminates, we have not analyzed its complexity. Urquhart [1990; 1997] has proved an exponential-space lower bound for the implicational fragment and an upper bound which is primitive recursive in the Ackermann function. The latter bound is also the lower bound for the fragment with →, ∧. There is, therefore, a big gap between the lower and the upper bound for the implicational fragment. We conjecture that our proof method for R stays within the Ackermann-function upper bound, but we do not know how sharp it is. We only observe that the method cannot be extended as it stands to the →, ∧-fragment. In particular, the idea of recording as multisets the formulas we still have to use, rather than the available formulas, no longer works if conjunction is present. Consider the formula

(A ∧ B → C) ∧ (A → B) → A → C,

which is not a theorem of R. By applying the method of this section extended by the rule for ∧, we get (assume A, B, C are atoms):

x : A ∧ B → C, x : A → B, y : A `? [x, y] : C,
x : A ∧ B → C, x : A → B, y : A `? [y] : A ∧ B.

Now we split into

`? [y] : A and `? [y] : B.

The former succeeds immediately; the latter is reduced (w.r.t. x : A → B) to the identical `? [y] : A and hence succeeds. Thus, the method is no longer sound. To get a correct method we should reformulate the reduction rule in a way similar to the calculus L5 presented in [Thistlewaite et al., 1988].
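Kripke's lemma, used in the termination proof above, concerns irredundant sequences of multisets: no earlier member may be ⊆|-included in a later one. Irredundancy of a finite prefix can be checked mechanically; a sketch, with ⊆| again read as cognate multiset inclusion (the function names are ours):

```python
from collections import Counter
from itertools import combinations

def submult_cognate(a, b):
    """a <=| b: a is a sub-multiset of b with the same underlying set."""
    ca, cb = Counter(a), Counter(b)
    return set(ca) == set(cb) and all(ca[x] <= cb[x] for x in ca)

def irredundant(seq):
    """No earlier multiset of the sequence is <=|-included in a later one."""
    return not any(submult_cognate(seq[i], seq[j])
                   for i, j in combinations(range(len(seq)), 2))

print(irredundant([['x', 'y'], ['x'], ['y']]))   # True
print(irredundant([['x'], ['x', 'x']]))          # False: ['x'] <=| ['x','x']
```

The lemma guarantees that any branch whose goal labels keep passing this test must be finite, which is exactly what the loop check exploits.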
8 A FURTHER CASE STUDY: THE SYSTEM RM0
The system RM0 is an extension of R which formalizes a more liberal discipline on the use of formulas. In R we have that Γ ` A if there is a derivation in which every hypothesis in Γ is used. In RM0 we demand less: for every hypothesis in Γ there must be a derivation of A which makes use of it. It may happen that no single derivation exhausts all formulas of the database, but that alternative derivations taken together jointly exhaust them. Consider the following example:

(a → c) → (b → c) → a → b → c.

Let us try to derive this formula using the procedure for R of Section 5.2. From `? ∅ : (a → c) → (b → c) → a → b → c,
we step to

1. x : a → c, y : b → c, z : a, u : b `? {x, y, z, u} : c.

We can use x : a → c and step to (from now on we omit the database, since it does not change):

2. `? {y, z, u} : a,

and we fail. Alternatively, at step 1 we can choose y : b → c and step to

3. `? {x, z, u} : b,

and we fail again. The above formula is not a theorem of R. According to RM0, we can say that the goal a in 2 succeeds, as z : a is in the database (and it 'consumes' the label z), provided we can 'consume' the remaining labels to prove a previous goal. Thus after 2 we can restart by asking `? {y, u} : c, and by reduction w.r.t. y : b → c we step to `? {u} : b, which succeeds as u : b is in the database. The special rule for RM0 can be informally stated as follows: if the current goal succeeds but leaves some resources unused, then restart the computation from a previous goal (satisfying certain conditions) and try to consume the unused data. We first give a procedure which extends the one for R by allowing the computation to restart in the above sense. Then we present an equivalent procedure which uses an explicit rule to partition the available resources among alternative subproofs, in place of the restart rule. This is the Mingle rule, from which the system takes its name.

A remark on terminology: RM0 is the system obtained by adding to the implicational fragment of R the axiom

A → A → A,

or equivalently

(A → C) → (B → C) → A → B → C.

If we add these axioms to the whole system R (which contains other connectives) we obtain a system called R-Mingle (RM) [Anderson and Belnap, 1975; Anderson et al., 1992], whose implicational fragment is strictly stronger than RM0 and will not be treated here. Databases and labels are defined as in the previous sections. A query has the form

Γ `? α : A, H,
where Γ and α are as in the case of R (Γ is a set of labelled formulas and α is a set of labels) and H, called the history, is a list of triplets of the form (∆i, βi, qi), where ∆i is a database, βi is a set of labels, and qi is an atom. We use the append notation and write H = (∆1, β1, q1) ∗ . . . ∗ (∆n, βn, qn). We adopt a policy similar to the one of Section 7: when we perform a reduction step, we are allowed to reduce the atomic goal w.r.t. a formula, say z : A1 → . . . → An → q, even if z is not in the goal label, say α; then we cancel z from the goal label, and we require that the αi be disjoint. This makes control of the labels more efficient. We limit our discussion to the implicational fragment. The rules are as follows:

• (success) ∆ `? α : q, H succeeds if, for some x, x : q ∈ ∆ and α = {x} or α = ∅.

• (implication) from

∆ `? α : C → G, H

we step to

∆ ∪ {x : C} `? α ∪ {x} : G, H′,

where x is a new label, occurring neither in ∆, nor in α, nor in H, and H′ = H ∗ (∆, α ∪ {x}, G) if G is an atom, H′ = H otherwise.

• (reduction) from

∆ `? α : q, H,

if there is some y : C ∈ ∆ with C = A1 → . . . → Ak → q, then we step, for i = 1, . . . , k, to

∆ `? αi : Ai, Hi,

where

1. αi ∩ αj = ∅, for i ≠ j;
2. ⋃_{i=1}^k αi = α − {y};
3. Hi = H ∗ (∆, αi, Ai) if Ai is an atom, and Hi = H otherwise.

(Notice that we do not require that y ∈ α.)
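An implementation of the reduction step must choose, nondeterministically, how to distribute α − {y} over the subgoals A1, . . . , Ak as pairwise disjoint sets α1, . . . , αk. The candidate splits can be enumerated exhaustively; a sketch (the function name is ours):

```python
from itertools import product

def splits(labels, k):
    """All ways to distribute a set of labels over k pairwise disjoint
    (possibly empty) parts alpha_1, ..., alpha_k."""
    labels = sorted(labels)
    for assignment in product(range(k), repeat=len(labels)):
        parts = [set() for _ in range(k)]
        for lab, i in zip(labels, assignment):
            parts[i].add(lab)
        yield parts

# Distributing {z, u} over k = 2 subgoals yields 2^2 = 4 candidates.
print(len(list(splits({'z', 'u'}, 2))))   # 4
```

Each label goes to exactly one part, so disjointness and the covering condition ⋃αi = α − {y} hold by construction; a real prover would interleave this choice with the search rather than materialize all k^|α| splits.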
• (Restart) from

∆ `? α : q, H,

if H = H1 ∗ (Γ, β, r) ∗ H2 ∗ (∆, α, q) and the following conditions hold:

1. for some x, x : q ∈ ∆;
2. letting α′ = α − {x}, we have α′ ⊆ β;

then we step to

Γ `? α′ : r, H′,

where H′ = H1 ∗ (Γ, α′, r).

The first condition ensures that q succeeds if we ignore the label. The second condition may be explained by saying that if we select (Γ, β, r) in the history, we commit to 'consuming' β. That is why the current label α′ must be included in β: otherwise we would re-try the goal r with extraneous resources not present when it was asked before (represented by β).

EXAMPLE 5.59. We reconsider the previous example. Let ∆ = {x : a → c, y : b → c, z : a, u : b}.

∆ `? {x, y, z, u} : c, (∆, {x, y, z, u}, c)
∆ `? {y, z, u} : a, (∆, {x, y, z, u}, c) ∗ (∆, {y, z, u}, a)
∆ `? {y, u} : c, (∆, {y, u}, c)                          by restart
∆ `? {u} : b, (∆, {y, u}, c) ∗ (∆, {u}, b)               success.
EXAMPLE 5.60. We check `? ∅ : a → a → a, ∅.

x : a `? {x} : a → a, ∅
x : a, y : a `? {x, y} : a, ({x : a, y : a}, {x, y}, a)
x : a, y : a `? {y} : a, ({x : a, y : a}, {y}, a)          by restart
success.
In the example above, the query to which we apply the restart rule is identical to the query from which we restart; that is, if x : q ∈ Γ, then from

Γ `? α : q, H ∗ (Γ, α, q)

we step to

Γ `? α − {x} : q, H ∗ (Γ, α − {x}, q).

We call this limit case of restart self-restart. We will see at the end of this section that we can eliminate this type of restart by changing the labelling discipline.
In the sequent formulation, RM0 is obtained by adding to the calculus for the implicational fragment of R the mingle rule:

Γ ` A    ∆ ` A
--------------
Γ, ∆ ` A

We can formulate the mingle rule in our proof method as follows:

(Mingle) From ∆ `? α : A step to

∆ `? α1 : A and ∆ `? α2 : A,

where α = α1 ∪ α2 and α1 ∩ α2 = ∅.

That is, we can split the label α into two labels and carry on two computations, one consuming α1 and the other consuming α2. Our restart rule can be seen as a linearization of the mingle rule. As we prove below, the mingle rule is equivalent to the restart rule. Given a derivation which uses the mingle rule, we show how to transform it into a derivation which uses the restart rule (and does not use mingle), and vice versa. At the intermediate steps of the transformation (in each direction) the derivation will contain both applications of mingle and restart. For this reason, we consider a (redundant) proof method PRM0 which contains both rules. Since queries in PRM0 carry the history, we reformulate the mingle rule by adding the history bookkeeping:

(Mingle) from ∆ `? α : A, H step to

∆ `? α1 : A, H1 and ∆ `? α2 : A, H2,

where α = α1 ∪ α2, α1 ∩ α2 = ∅, and Hi = H ∗ (∆, αi, A) if A is an atom, Hi = H otherwise.

The way we modify the history is ad hoc; it is just meant to make the transformation simpler. It is clear that in a deduction which only uses mingle the history is not needed.

PROPOSITION 5.61. In the proof system PRM0 we have:

(a) if Γ ⊆ ∆, then Γ `? α : G, H succeeds with height h implies ∆ `? α : G, H succeeds with height h′ ≤ h;
(b) if β ⊆ α, then Γ `? α : G, H succeeds with height h implies Γ `? β : G, H succeeds with height h′ ≤ h;
(c) if Γ `? α : A, H succeeds, then it has a successful derivation in which the mingle rule is restricted to atoms.
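Restricted to atomic goals, as Proposition 5.61(c) allows, the mingle rule yields a tiny complete search: a query succeeds either outright or by splitting its label into two disjoint nonempty parts and proving both halves. A toy prover for this atomic fragment (our sketch, for set labels and atomic databases only):

```python
from itertools import combinations

def proves(db, alpha, q):
    """db: {label: atom}.  Success: alpha is empty or {x} for some
    x : q in db.  Mingle: split alpha into two disjoint nonempty
    parts and prove both halves recursively."""
    if any(p == q and alpha <= {x} for x, p in db.items()):
        return True
    alpha = set(alpha)
    if len(alpha) < 2:
        return False
    pivot = min(alpha)                 # fix one element on the left half
    rest = sorted(alpha - {pivot})
    for r in range(len(rest)):         # r < len(rest) keeps a2 nonempty
        for extra in combinations(rest, r):
            a1 = {pivot, *extra}
            a2 = alpha - a1
            if proves(db, a1, q) and proves(db, a2, q):
                return True
    return False

# Example 5.60: after two implication steps, a -> a -> a reduces to
# proving {x, y} : a from x : a, y : a; mingle splits into {x} and {y}.
print(proves({'x': 'a', 'y': 'a'}, {'x', 'y'}, 'a'))   # True
print(proves({'x': 'a'}, {'x', 'y'}, 'a'))             # False
```

Both recursive calls work on strictly smaller labels, so the search terminates.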
THEOREM 5.62. Let D be a successful PRM0-derivation, using only Mingle, of the query K = Γ `? ψ : A, H; then there is a successful PRM0-derivation D′ of K which only uses restart.

Proof. By Proposition 5.61(c), we can assume that every application of mingle in D is restricted to atomic goals. We transform D by replacing every application of mingle by restart. At each intermediate step the obtained derivation will satisfy the property that no application of restart precedes any application of mingle. More precisely, if restart is applied to N using a previous query N′, and mingle is applied to N″, then N″ cannot be equal to N′, nor be a descendant of N′. The initial derivation D trivially satisfies this property, and the stepwise transformation preserves it. We say that an application of mingle to a query Q in D is maximal if there are no applications of mingle to any descendant of Q. At each step of the transformation the derivation may increase in length, but each transformation decreases the number of maximal applications of mingle without introducing new ones. Thus, the process terminates. We describe a generic transformation step. Suppose D has the form shown in Figure 5.4, where α = α1 ∪ α2 and the subtrees T1 and T2 do not contain any application of the mingle rule.

K
.
.
Γ `? α : q, H ∗ (Γ, α, q)
    Γ `? α1 : q, H ∗ (Γ, α1, q)        Γ `? α2 : q, H ∗ (Γ, α2, q)
    T1                                  T2
    ∆ `? ψ : r, H1*

Figure 5.4. Derivation using Mingle (Proof of Theorem 5.62).

Moreover, if restart is applied in Ti, then the query used by restart must be a descendant of N0 = Γ `? α : q, H ∗ (Γ, α, q). We let N = Γ `? α1 : q, H ∗ (Γ, α1, q). T1 must have a leaf of the form
Nf = ∆ `? ψ : r, H1*, such that for some x, x : r ∈ ∆ and ψ = ∅ or ψ = {x}. Notice that in the latter case x ∉ α2. Moreover, we have H1* = H ∗ (Γ, β, q) ∗ H′1 ∗ (∆, ψ, r), where β = α1 or β = α1 − {z} for some z, since the successor of N in T1 might be obtained by self-restart. Let T′1 be obtained from T1 as follows: first add α2 to the goal-label of every query Q occurring on the path leading from N (included) to Nf, and to the label occurring in the triplet corresponding to Q in the history. The root of T′1 is then N0 : Γ `? α : q, H ∗ (Γ, α, q), and the leaf corresponding to Nf is N′f = ∆ `? α2 ∪ ψ : r, H1, with H1 = H ∗ (Γ, β ∪ α2, q) ∗ H″1 ∗ (∆, α2 ∪ ψ, r), where H″1 is obtained by adding α2 to the label of each triplet in H′1. Since x : r ∈ ∆ and γ = (α2 ∪ ψ) − {x} ⊆ α ∪ β, we can go on by restart from N′f by using N0 or its immediate successor (in case the latter is obtained by self-restart), and step to Q = Γ `? γ : q, H ∗ (Γ, γ, q). We have that either γ = α2 or γ = α2 − {x}. To Q we append the tree T′2, which is T2 if γ = α2, or else is obtained by deleting the label x from all goal labels (and history entries) of the queries in T2. In Figure 5.5 we display the resulting derivation. In this way we have obtained a new derivation D′, which has one (maximal) application of mingle less than the original D. The only difference between the two derivations is below N0: the only difference between T1 and T′1 is that the former has a branch ending with Nf, whereas the latter has a branch going on with N′f and then T′2. If D is successful, then T1 and T2 are successful subtrees. By Proposition 5.61(b), T′2 is successful, and hence so is T′1; we can conclude that D′ is successful.

THEOREM 5.63. Let D be a successful PRM0-derivation of the query K = Γ `? ψ : A, H which uses only restart; then there is a successful PRM0-derivation D′ of K which uses only Mingle.

Proof.
Given D as in the claim of the theorem, we show how to replace every application of the restart rule by an application of the mingle rule. As in the previous theorem, the intermediate derivations will satisfy the restriction that no application of restart precedes any application of mingle. An application of the restart rule is called minimal if it is applied to a query N by using a previous query N′, and no application of restart to any query on any branch going through N′ uses a query which precedes N′. The application is minimal w.r.t. N′ in the sense that no other application of restart uses a query 'older' than N′. Each transformation
K
.
.
Γ `? α : q, H ∗ (Γ, α, q)
(∗) (Γ `? α2 ∪ β : q, H ∗ (Γ, α2 ∪ β, q))
T′1
∆ `? α2 ∪ ψ : r, H1
Γ `? γ : q, H ∗ (Γ, γ, q)
T′2

Figure 5.5. Converted derivation using Restart (Proof of Theorem 5.62). The query (∗) occurs in the case of self-restart.

step described below replaces a minimal application of restart by an application of mingle, and it yields a successful derivation of no greater height, in which there is at least one minimal application of restart less than in the original one and no new applications of restart. Thus the process terminates. Let us describe a generic transformation step. Suppose D has the form shown in Figure 5.6, in which β′ = β − {x} ⊆ α, x : r ∈ ∆, and (1) the subtrees T1 and T2 do not contain any application of the mingle rule, (2) there is no application of restart using a query which is an ancestor of Γ `? α : q, H ∗ (Γ, α, q). Let α = β′ ∪ γ. Then we obtain a transformed derivation D′ from D as shown in Figure 5.7, in which T′1 is obtained from T1 by deleting β′ on every node on the path in T1 leading from the root of T1 (the query Γ `? α : q, H ∗ (Γ, α, q)) to

(**) ∆ `? β : r, H ∗ (Γ, α, q) ∗ H1 ∗ (∆, β, r).
K
.
.
Γ `? α : q, H ∗ (Γ, α, q)
T1
∆ `? β : r, H ∗ (Γ, α, q) ∗ H1 ∗ (∆, β, r)
Γ `? β′ : q, H ∗ (Γ, β′, q)
T2

Figure 5.6. Derivation using Restart (Proof of Theorem 5.63).

Thus the root of T′1 is Γ `? γ : q, H ∗ (Γ, γ, q), and the query corresponding to (**) is N = ∆ `? φ : r, H ∗ (Γ, γ, q) ∗ H′1 ∗ (∆, φ, r), where φ = {x} or φ = ∅ (H′1 is obtained from H1 by deleting β′). Notice that, by the splitting condition in the reduction rule, β′ can occur only on that path. We know that x : r ∈ ∆, and hence N succeeds. The subderivation T′1 differs from T1 only on the path from the root (which is now Γ `? γ : q, H ∗ (Γ, γ, q)) to N; it is easy to see (by Proposition 5.61(b)) that if T1 is successful, then so is T′1. We can therefore conclude that if D is successful, so is D′. But D′ contains one minimal application of restart less than D.

By the two theorems, the restart and mingle rules are equivalent. We now show the soundness and completeness of the procedure Pmin, which makes use only of the mingle rule, since it is technically simpler. In Pmin we no longer need to record the history of the computation, and thus we drop the parameter H from queries. We prove the completeness of the procedure with respect to the semilattice semantics for RM0 introduced by Urquhart. This semantics is similar to the Fine semantics of Section 5.1. Namely, in addition to conditions (1)–(6) of Definition 5.23 satisfied by R, we postulate that
K
.
.
Γ `? α : q, H ∗ (Γ, α, q)
    Γ `? β′ : q, H ∗ (Γ, β′, q)        Γ `? γ : q, H ∗ (Γ, γ, q)
    T2                                  T′1

Figure 5.7. Converted derivation using Mingle (Proof of Theorem 5.63).

(*) x ≤ x ◦ x,

and that the evaluation function satisfies

(M) V(x) ∩ V(y) ⊆ V(x ◦ y).

Since ≤ is a partial order, condition (5) x ◦ x ≤ x and (*) make ◦ an idempotent operation, which can be thought of as a semilattice operation ∪ with identity 0. However, neither x ≤ x ∪ y nor x ∪ y ≤ x is assumed. We can get rid of the partial order and reformulate the semantics as follows.

DEFINITION 5.64. A structure M for RM0 is a tuple M = (W, ∪, 0, V), where

• (W, ∪, 0) is a semilattice with zero (the element 0), and
• V is a function of type W → Pow(Var) satisfying the following condition (M): for all p ∈ Var and x, y ∈ W, p ∈ V(x) ∧ p ∈ V(y) → p ∈ V(x ∪ y).

Truth and validity are defined as in the case of R; we refer to Section 5.1. We notice that condition (M) implies that, for any formula A, M, x |= A and M, y |= A imply M, x ∪ y |= A. We only sketch the proofs of soundness and completeness. With respect to soundness, we observe that we have to state it in a more general form, since the label x of some formula x : A ∈ ∆ actually used in a derivation of ∆ `? α : G need not be in α. Let λΓ, λ∆, etc. denote the set of labels in Γ, ∆, etc.

THEOREM 5.65. Let Γ be a database. For any α and A, if Γ `? α : A succeeds in Pmin, then there is a database ∆ ⊆ Γ such that
(i) ∆ `? λ∆ : A succeeds;
(ii) α ⊆ λ∆ ⊆ λΓ;
(iii) letting ∆ = {y1 : B1, . . . , yk : Bk}, the formula B1 → B2 → . . . → Bk → A is valid in RM0.

Proof. By induction on the height h of a successful derivation of Γ `? α : A. We sketch the cases of Reduction and Mingle.

(Reduction) Let h > 0 and let A = q be an atom. Suppose that the reduction rule is applied; then for some y : C ∈ Γ with C = A1 → . . . → Ak → q, from Γ `? α : q we step to Γ `? αi : Ai, which succeed with height hi < h, for i = 1, . . . , k, and it holds:

1. αi ∩ αj = ∅, for i ≠ j;
2. ⋃_{i=1}^k αi = α − {y}.

By the induction hypothesis, there are ∆i ⊆ Γ, for i = 1, . . . , k, such that, letting λ∆i = γi, we have: ∆i `? γi : Ai succeeds with height h′i ≤ hi, αi ⊆ γi ⊆ λΓ, and, assuming ∆i = {xi,1 : Ci,1, . . . , xi,ri : Ci,ri}, we have |= Ci,1 → . . . → Ci,ri → Ai. We can define

- ∆ = ⋃_i ∆i ∪ {y : C},
- δi = (γi − {y}) − ⋃_{j=i+1}^k γj, for i = 1, . . . , k,
- γ = ⋃_i δi ∪ {y}.

It is immediate to check that (1) y ∉ δi, (2) γ = ⋃_i γi ∪ {y}, (3) δi ⊆ γi, (4) δi ∩ δj = ∅, (5) ⋃_i δi = γ − {y}. Since ∆i ⊆ ∆, by (3) we have that ∆ `? δi : Ai succeeds with height h″i ≤ h′i, so that by (4) and (5) we can conclude that ∆ `? γ : q succeeds by the reduction rule. Since ∆i ⊆ Γ and y : C ∈ Γ, we have ∆ ⊆ Γ. It is also clear that γ = λ∆ ⊆ λΓ. We still have to check that α ⊆ γ. It holds that ⋃_i αi = α − {y}, so that we have:

α ⊆ ⋃_i αi ∪ {y} ⊆ ⋃_i γi ∪ {y} = γ.

This concludes (i) and (ii).

(Part (iii)) We know that for i = 1, . . . , k

(i) |=RM0 Ci,1 → . . . → Ci,ri → Ai.

If y : C ∉ ⋃_i ∆i, we show that E = C1,1 → . . . → C1,r1 → . . . → Ck,1 → . . . → Ck,rk → C → q is valid. Suppose it is not. Then there is a structure M = (W, ∪, 0, V) such that M, 0 ̸|= E; then there are a1,1, . . . , a1,r1, . . . , ak,rk, b ∈ W such that

(ii) M, ai,j |= Ci,j for j = 1, . . . , ri, i = 1, . . . , k, and
(iii) M, b |= A1 → . . . → Ak → q,

and, letting si = ai,1 ∪ . . . ∪ ai,ri, it holds that M, s1 ∪ . . . ∪ sk ∪ b ̸|= q. By (i) and (ii) we can conclude

(iv) M, si |= Ai.

It is easy to see, using (iii) and (iv), that for i = 1, . . . , k − 1 we get M, s1 ∪ . . . ∪ si ∪ b |= Ai+1 → . . . → Ak → q, so that we finally get M, s1 ∪ . . . ∪ sk ∪ b |= q, against the hypothesis. If y : C ∈ ⋃_i ∆i, we show in a similar way that C1,1 → . . . → C1,r1 → . . . → Ck,1 → . . . → Ck,rk → q is valid.

(Mingle) Suppose that the mingle rule is applied to Γ `? α : q, so that we step to

(*) Γ `? α1 : q and Γ `? α2 : q,

with α1 ∩ α2 = ∅ and α1 ∪ α2 = α. Since the queries (*) succeed with height hi < h, we can apply the inductive hypothesis and obtain that there are ∆i (for i = 1, 2) such that ∆i ⊆ Γ, αi ⊆ λ∆i ⊆ λΓ, ∆i `? λ∆i : q succeeds with height h′i ≤ hi, and, letting ∆i = {Ai,1, . . . , Ai,ki}, we have |=RM0 Ai,1 → . . . → Ai,ki → q. We let ∆ = ∆1 ∪ ∆2 = {A1,1, . . . , A1,k1, A2,1, . . . , A2,k2}, so that we easily have that α ⊆ λ∆ ⊆ λΓ and that A1,1 → . . . → A1,k1 → A2,1 → . . . → A2,k2 → q is valid in RM0; by Propositions 5.61(a) and 5.61(b) the two queries ∆ `? λ∆1 : q and ∆ `? (λ∆2 − λ∆1) : q succeed. Thus, from ∆ `? λ∆ : q, we can step to the above two queries and succeed.
Completeness can be proved by first showing the admissibility of the cut rule and then providing a canonical model construction. The proof of this theorem is similar to that of Theorem 5.19, but actually simpler.

THEOREM 5.66. Let Γ = {x1 : A1, . . . , xn : An} and Σ = {y1 : B1, . . . , yk : Bk}, and let α = λΓ, β = λΣ, u ∉ α ∪ β. If

Γ[u : C] `? α : D and Σ `? β : C succeed in Pmin,

then also Γ[u/Σ] `? α[u/β] : D succeeds in Pmin.

The canonical structure M = (W, ∪, ∅, V) is defined as follows: ∪ is set union, ∅ is the empty set, and V is the evaluation function of type W → Pow(Var) defined by the condition below: for ∆ ∈ W,

p ∈ V(∆) ⇔ ∆ `? λ∆ : p succeeds.

It is easy to see that M satisfies all the conditions of Definition 5.64 and that for every formula C:

M, ∆ |= C iff ∆ `? λ∆ : C succeeds,
from which the completeness easily follows.

THEOREM 5.67. If A is valid in RM0, then `? ∅ : A succeeds in Pmin.

At the beginning of the section we saw that self-restart is needed to consume several copies of an atomic formula which immediately succeeds. Self-restart can be avoided if we change the labelling discipline of formulas. It is easy to see that if a formula occurs in a database, the number of its copies is immaterial, in the sense that:

1. ∆, x : A `? α ∪ {x} : B, H succeeds iff
2. ∆, x1 : A, . . . , xk : A `? α ∪ {x1, . . . , xk} : B, H succeeds (with k > 0).

To see this, take a successful derivation of 1.; this is almost a derivation of 2., except that some copies of A (witnessed by the respective labels) may remain unused at some node; in order to consume them, we restart and attach below any such node another copy of the derivation of 1. Self-restart is needed to deal with the above situation when B = A is atomic, whence the query immediately succeeds. The above fact means that we can economize on labels and eliminate self-restart by labelling all copies of A in the database with the same label. That is, a database contains at most one occurrence of any formula. For this purpose we first modify the implication rule, so that we no longer introduce a formula in the database if it already occurs in it. Moreover, it is not difficult to devise a loop-checking condition like the one used in Section 7 to make the proof procedure terminating. We leave it to the reader to work out the details of the loop-checking mechanism.
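The label economy just described (all copies of a formula share one label, so a database contains at most one occurrence of each formula) amounts to a memoizing implication rule. A sketch of the database-update step (names and representation are ours):

```python
def extend(db, formula):
    """Implication step under the one-label-per-formula discipline:
    reuse the existing label if the formula already occurs in the
    database, otherwise allocate a fresh one."""
    for lab, f in db.items():
        if f == formula:
            return db, lab                 # reuse: no second copy
    lab = 'x' + str(len(db) + 1)           # fresh label
    return {**db, lab: formula}, lab

db, l1 = extend({}, 'a')
db, l2 = extend(db, 'a')                   # a second copy of a
print(l1 == l2, len(db))                   # True 1
```

Since the second copy of a formula contributes no new label, the situation that previously required self-restart simply never arises.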
9 RELATION WITH OTHER APPROACHES
Relevance Logics: Tableaux and Goal-directed Proofs

In principle a proof procedure (though not a decision procedure) for R (say, without distribution and negation) can be obtained as follows: adopt any proof method for intuitionistic logic and then add a mechanism to check the usage of formulas. This checking can be done by marking formulas as they are used: every time we use a formula we mark it; if the proof succeeds according to the method for intuitionistic logic, we check that every formula in the database has been marked. If so, we succeed; otherwise we fail. This idea has been employed in [McRobbie and Belnap, 1979] to define tableau procedures for many relevant logics. If we adopt this strategy in a goal-directed procedure, we encounter a problem. A derivation may split into branches (or subderivations). A formula might be used (and marked) in one branch, but not in another. Therefore, we cannot say whether a query in a derivation tree is successful or not unless we inspect the whole derivation tree. Even if this mechanism may be effective, it is not very neat proof-theoretically: whether a query in a leaf of a derivation tree is to be regarded as successful or not comes to depend on what there is on other branches of the derivation. We would like instead that Γ `? A being provable or not depend only on the subtree whose root is Γ `? A. Bollen has developed a goal-directed procedure for a fragment of R [Bollen, 1991] which does not have this problem. His method avoids splitting derivations into branches by maintaining a global proof-state ∆ `? [A1, . . . , An], where all of A1, . . . , An have to be proved (they can be thought of as linked by ⊗). For example, from

a → b → c, a, b `? c,

we step to

(a → b → c)*, a, b `? a, b

and we succeed by marking both a and b:

(a → b → c)*, a*, b* `? a, b.

However, if we want to keep all subgoals together, we must take care that different subgoals Ai may happen to be evaluated in different contexts.
For instance, in C `? D → E, F → G, according to the implication rule we must evaluate E from {C, D} and G from {C, F}. Bollen accommodates this kind of context-dependency by indexing subgoals with a number which refers to the part of the database that can be used to
prove it. More precisely, in the implicational fragment the database is regarded as a tree of formulas (corresponding to the nesting of subderivations), and the index of each goal denotes the path of formulas in the database tree which can be used to prove the goal. Furthermore, a list of numbers is maintained to record usage; in a successful derivation the usage list must contain the numbers of all formulas of the database. Bollen's proof system goes further than ours, as it is defined for a fragment of first-order R; it is a logic programming language for conditional reasoning.
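The marking discipline sketched at the start of this subsection can be made concrete: mark each formula as it is used, and accept only proofs whose marks cover the whole database. A miniature backward chainer over (premises, head) clauses, which is our illustration and not McRobbie and Belnap's tableau procedure (it assumes an acyclic database):

```python
def prove_marking(db, goal, marked=frozenset()):
    """db: {label: (premises, head)}.  Return the set of marked (used)
    formulas on some proof of `goal`, or None if no proof exists."""
    for lab, (premises, head) in db.items():
        if head != goal:
            continue
        used = marked | {lab}
        ok = True
        for p in premises:
            sub = prove_marking(db, p, used)
            if sub is None:
                ok = False
                break
            used = sub
        if ok:
            return used
    return None

db = {'f': (('a', 'b'), 'c'), 'pa': ((), 'a'), 'pb': ((), 'b')}
used = prove_marking(db, 'c')
print(used == set(db))   # True: every formula marked, so c follows relevantly

db['pd'] = ((), 'd')     # an irrelevant extra fact
print(prove_marking(db, 'c') == set(db))   # False: pd stays unmarked
```

The final comparison with set(db) is exactly the relevance check: an intuitionistically successful proof is rejected when some database formula was never used.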
Linear Logic

Many people have developed goal-directed procedures for fragments of linear logic, leading to the definition of logic programming languages based on linear logic. This follows the tradition of the uniform proof paradigm proposed by Miller [Miller et al., 1991]. We notice, in passing, that implicational R can be encoded in linear logic by defining A → B as A ⊸ (!A ⊸ B), where ! is the exponential operator, which enables the contraction of the formula to which it is applied.

The various proposals differ in the choice of the language fragment. Much emphasis is given to the treatment of the exponential !, as it is needed for defining logic programming languages of some utility: in most applications we need permanent resources or data (i.e. data that are not 'consumed' along a deduction); permanent data can be represented by using the ! operator. Some proposals, such as [Harland and Pym, 1991] and [Andreoli and Pareschi, 1991; Andreoli, 1992], take as basis the multi-consequent (or classical) version of linear logic. Moreover, mixed systems have been studied [Hodas and Miller, 1994; Hodas, 1993] which combine different logics into a single language: linear implication, intuitionistic implication and more.20 A detailed comparison with these works is outside the scope of the present chapter.

We notice that the exponential-free (propositional) fragment of linear logic is one of the simplest among the substructural logics met in this chapter. For instance, we have a perfect duality between ∧ and ∨, which is lost if contraction, but not weakening, is present. That is why the addition of ! makes the system more complex, and one has to put significant restrictions on the form of database formulas and goals. In order to achieve an implementable system, we might import solutions and techniques developed in the context of linear logic programming. For instance, in the reduction rule and in the ⊗-rule we have to guess a partition of the goal label.
A similar problem has been discussed in the linear logic programming community [Hodas and Miller, 1994; Harland and Pym, 1997], where one has to guess a split of the sequent (only the antecedent in the intuitionistic version, both the antecedent and the consequent in the 'classical' version).

20. In [Hodas, 1993], Hodas has proposed a language called O which combines intuitionistic, linear, affine and relevant implication. The idea is to partition the 'context', i.e. the database, into several (multi)sets of data corresponding to the different handling of the data according to each implicational logic.
A number of solutions have been provided, most of them based on a lazy computation of the split parts. Perhaps the most general way of handling this problem is proposed in [Harland and Pym, 1997], where the sequent split is represented by means of Boolean constraints expressed by labels attached to the formulas. Different strategies of searching the partitions correspond to different strategies of solving the constraints (lazy, eager and mixed). We conjecture that similar techniques can be used in implementing our proof methods.

To conclude this section, O'Hearn and Pym [1999] have recently proposed an interesting development of linear logic called the logic of bunched implications. Their system has strong semantical and categorical motivations. The logic of bunched implications combines a multiplicative implication, namely that of linear logic, and an additive (or intuitionistic) one. The proof contexts, that is the antecedents of a sequent, are structures built by two operators: ',' corresponds to the multiplicative conjunction ⊗ and ';' corresponds to the additive conjunction ∧. These structures are called bunches. Similar structures have been used to obtain calculi for distributive relevance logics [Dunn, 1986]. The authors also define an interesting extension to the first-order case by introducing intensional quantifiers. Moreover, they develop a goal-directed proof procedure for a Harrop fragment of this logic and show significant applications to logic programming. It is not difficult to extend our machinery to cope with bunched implications.21 To represent the additive implication we conjecture that it is sufficient to add a partial order on the labels, as we have done for I with disjunction (Chapter 2) and the intermediate logics of Chapter 3. The rule for the additive implication should be similar to those cases.
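Returning to the splitting problem: the lazy computation of split parts can be illustrated with a toy leftover-passing interpreter in the style of the input/output model used in the linear logic programming literature. The encoding below is our own sketch, not any of the cited systems: the database holds only atomic resources (reduction on program clauses is omitted to keep the splitting discipline in focus), ⊗ is written '*' and ⊸ is written '-o', and there is no backtracking over alternative splits, which is precisely the incompleteness that the Boolean-constraint technique is designed to avoid.

```python
def prove(db, goal):
    """Return the leftover resources on success, None on failure.
    db: list of atomic resources; goal: atom, ('*', A, B) or ('-o', A, B)."""
    if isinstance(goal, tuple) and goal[0] == '*':
        rest = prove(db, goal[1])
        if rest is None:
            return None
        return prove(rest, goal[2])       # lazy split: B gets whatever A left over
    if isinstance(goal, tuple) and goal[0] == '-o':
        rest = prove(db + [goal[1]], goal[2])
        # the assumed antecedent must be consumed: its count in the leftover
        # must drop back to what the incoming context had
        if rest is None or rest.count(goal[1]) != db.count(goal[1]):
            return None
        return rest
    if goal in db:                        # atomic goal consumes one resource
        out = list(db)
        out.remove(goal)
        return out
    return None

def linear_valid(db, goal):
    """Succeed iff the goal is provable consuming every resource exactly once."""
    return prove(list(db), goal) == []
```

Instead of guessing up front which resources go to each conjunct of A ⊗ B, the first conjunct takes the whole context and hands its leftovers to the second; the top level then checks that nothing remains, which rejects weakening.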
Display Logic

Display logic for substructural logics has been developed mainly by Anderson, Belnap and Dunn [1992], Wansing [1990] and Došen [1988] and, recently, by Goré [1998]. The strong point in favour of display calculi is that they provide a uniform analytic proof theory for a broad class of logics, including substructural logics. The display calculi were originally presented by Belnap for a broad family of non-classical logics, including modal, relevance and intuitionistic logics. Moreover, display calculi have been partially motivated by Dunn's Gaggle theory [Dunn, 1986]. Another related tradition goes back to Lambek's early work [Lambek, 1958] and has evolved recently into the so-called Categorial Type Logics [Moortgat, 1991]. Here we just try to give a flavour of display logic for substructural logics; mainly we follow the presentation of Došen and Belnap, warning the reader that relevant variations/extensions of the basic idea can be found in the work of each author quoted above.

21. The extension is easy as long as we do not allow nesting of ',' within ';', for the same reason that we have put a restriction on the positive occurrences of ⊗ when we defined the Harrop fragment in Section 6.
As we have already remarked in Chapter 4, the main feature of display logic is to express the structural properties by suitable operators/constructors of sequent constituents. There are three levels of objects: the level of formulas, the level of structures, and the level of sequents, which represent an entailment relation. Structures are terms formed by the structural operators, and a sequent is a pair of structures. As Belnap originally proposed, structures are formed from formulas by the constructors (I, ∗, ◦), where I represents the constant t, or the empty structure, ∗ represents negation and ◦ represents an intensional conjunction. One has to postulate a set of basic equivalences (display equivalences) which allow one to make a constituent explicit as a whole antecedent, or as a whole consequent, according to its polarity (display theorem). The rules for the connectives are fixed and common to all systems. What changes are just the structural rules. To give a flavour of the structural rules, we list some of the rules for R:

    W ◦ (X ◦ Y) ⊢ Z        W ◦ (X ◦ Y) ⊢ Z        X ◦ (X ◦ Y) ⊢ Z
   -----------------      -----------------      -----------------
    (W ◦ X) ◦ Y ⊢ Z        X ◦ (W ◦ Y) ⊢ Z          X ◦ Y ⊢ Z

The relation with Fine's semantics should be intuitively clear. Došen [1988] gives a direct completeness proof with respect to his groupoid models. More generally, we can say that display logics are a way of bringing the semantics into the syntax by means of the structural operators and their specific rules. This type of calculus is somewhat dual to the approach we have followed. Display logic hinges on a strong separation between the structural and logical rules, and provides a language to express the former. Conversely, in the goal-directed procedures we have completely internalized the structural rules in the logical rules themselves.
Labelled Deductive Systems

The use of labels we make in our procedures falls under the general methodology of Labelled Deductive Systems (LDS) [Gabbay, 1996]. Labels are used to control derivations by expressing semantical information accompanying formulas. One possible development of LDS theory is the notion of algebraic LDS. In this methodology, formulas are equipped with labels which are terms in an algebra. The specific algebraic laws on the labels are characteristic of each logic. Again, this is a way of bringing the semantics into the syntax, namely through the labelling algebra. The LDS approach is somewhat independent of the form of deduction rules one formulates: one can equally well formulate tableau, natural deduction, or sequent systems based on a labelled deductive system. For instance, in [D'Agostino and Gabbay, 1994] and, more recently, in [Broda et al., 1999],
tableau systems for a family of substructural logics are defined. Again, the strong point of the LDS methodology is uniformity. The weak point may be that one has to be able to reason about the (compound) labels by using the specific algebraic laws. It might turn out that deciding s ≤ t or s = t according to the specific algebraic laws, for two complex labels s and t, is as difficult as the logical deduction itself. In contrast, in the proof systems defined here we do not use the whole power of the LDS methodology. The use we make of labels is elementary, much in the spirit of the natural deduction of [Anderson and Belnap, 1975]. Labels are just ordered sets of atomic labels, and we have considered only Boolean and order constraints on them. This is much simpler than the use of algebraic labels, for which deciding (in)equalities might not be obvious (word problems), even for contraction-less logics where decidability is easy.

One can also give an LDS presentation of substructural logics based on the ternary relation semantics of Routley and Meyer. This is the approach followed by Basin, Viganò and Matthews in [Basin et al., 1999; Viganò, 1999]. Their proof systems (in the form of either natural deduction or sequent calculi) are uniform for a wide range of logics. The logical rules are the same for all systems, whereas the specific properties of the ternary relation are expressed by relational rules. This neat presentation allows one to give uniform proofs of soundness, completeness, and cut-elimination. We observe that for all the systems considered in this chapter (from FL to R), the relational theory cannot be expressed by pure universal sentences (or it can, at the price of introducing Skolem functions). For this reason the decidability of some systems/fragments, such as those without contraction, might not be evident. Moreover, we know that R with distributive ∧, ∨ is undecidable, whereas it is decidable without them.
Since the ternary relation semantics, and therefore also the relational LDS system, uniformly covers the whole of R (this was indeed the main motivation and merit of the semantics!), it is not evident where the border between decidability and undecidability lies. Of course this is not a fault of the LDS calculus itself; rather, it shows that the ternary relation semantics is too general to argue about decidability.
CHAPTER 6
CONCLUSIONS AND FURTHER WORK
1. A MORE GENERAL VIEW OF OUR WORK
This book presents a uniform goal-directed algorithmic proof theory for a variety of logics. The logics involved have a wide range, and small variations in the goal-directed algorithm can take us from one logic to a completely different one. In this chapter we will try to hint at a broader point of view.

In the traditional view, a logical system can be mathematically presented either as a syntactical consequence relation satisfying certain properties, or as a set of axioms in some language generating a consequence relation, or, finally, through its semantics, as a class of mathematical structures giving rise to a consequence relation. Usually one starts with such a mathematical presentation of a logical system and later on studies the problem of deduction for it. Suppose we have fixed a logical system S, with its mathematical presentation, and we have therefore defined a notion of consequence relation relative to S, denoted by ∆ ⊢S A. The consequence relation ⊢S is expected to satisfy certain properties to be considered a logic, namely reflexivity, transitivity and cut. It is convenient to refer to the presentation of a system as described above as the mathematical stage in defining a logic. We mean that ⊢S is defined mathematically, but not necessarily algorithmically. We do not necessarily provide a recipe or an algorithm to check, given ∆ and A, whether ∆ ⊢S A holds or not; nor can we necessarily recursively generate all pairs (∆, A) for which ∆ ⊢S A holds.

We regard the second stage in the presentation of a logic as the algorithmic stage. This means that we actually have an algorithm or a proof theory for generating the pairs (∆, A) such that ∆ ⊢S A holds. We do not mean an algorithm just in the mathematical sense, i.e. any recursive function generating the pairs (∆, A) such that ∆ ⊢S A. We want the steps of the algorithm to have a more or less intuitive logical meaning; in other words, we want some proof theory (Gentzen style, tableaux style, Hilbert axioms style, etc.).
We call this proof theory the algorithmic stage. It need not be a practical automated algorithm which is actually implemented. For any one consequence relation there can be more than one mathematical presentation and many algorithmic presentations. Our claim is that to present a logic we also need to present an algorithmic proof system for it (if it is recursively enumerable), and different algorithmic choices make different logics.
Consider an algorithmic proof system for a logic L1. Let us call it S1. Thus whenever ∆ ⊢L1 A holds, the algorithmic procedure S1 succeeds when applied to ∆ ⊢? A. S1 contains manipulative rules. These rules can be tinkered with, changed, modified and made more efficient. It happens in many cases that by making natural changes in S1 we get a new algorithmic system S2 which defines a new logic L2. L2 may be a well-known logic, already mathematically defined with a completely different motivation. The insight that S2, the result of tinkering with S1, is an algorithmic system for L2 can deepen our understanding of L2. We can thus obtain a network of logics interconnected on many levels, mathematical and algorithmic, where different logics can be obtained in different ways from other logics in the network by making some natural changes. This is abundantly exemplified by our goal-directed methodology. The declarative nature is only one component in the formulation of a logic. This is somewhat of a departure from the traditional point of view.

We can define the notion of a recursively enumerable logical system L as a pair L = (⊢, S), where ⊢ is a mathematically defined consequence relation and S is an algorithmic proof system for ⊢. The algorithmic system is sound and complete for ⊢. Thus different algorithmic systems for the same ⊢ give rise to different logics, according to our definition.

To make this new notion more intuitively acceptable, consider the following example. Take a Gentzen-style formulation of intuitionistic logic. A minor change in the rules will yield classical logic. Another minor change will yield linear logic. Thus from the point of view of the algorithmic proof system (i.e. Gentzen proof theory) linear logic, intuitionistic logic and classical logic are neighbours, or brother and sister. Now consider classical logic from the point of view of the two-valued truth table. It is easy to generalize from two values to Łukasiewicz n-valued logic Ln.
From the truth table point of view, classical logic and Ln are neighbours. However, there is no natural Gentzen formulation for Ln, and so it cannot be directly related to intuitionistic logic. Now consider a Hilbert-style presentation of classical logic, intuitionistic logic and Łukasiewicz n-valued logic. Through the Hilbert presentation, the relationship between the three systems is very clear: some axioms are dropped and/or added to obtain one from the other. Our view is that different algorithmic proof presentations of a logic (in the old sense, i.e. as a set of theorems) give us different logics (in the new sense). Thus we have three distinct logics (all versions of classical logic), namely:

• classical logic in its truth table formulation;
• classical logic in its Hilbert system formulation;
• classical logic in its sequent formulation.

These are also different from the point of view of the kind of information on the logic they highlight. A Gentzen system and a Hilbert system can give us rather
different types and styles of information about classical logic (in its mathematical presentation). The same can be said for other logics. We can look at different styles of algorithmic systems as (algorithmic) proof methodologies.

To summarise, this book introduces another style of presentation of logics, the goal-directed one, and develops it for a variety of logics. A number of issues deserve broader investigation. In the next section we list a few of them, which we hope to address in future work.

2. FUTURE WORK
Relation with other proof systems

It is important to compare the goal-directed methodology with other methodologies in order to understand the relative merits and demerits of each. For logics with well-known proof systems (such as sequent calculi or tableaux) the purpose is to see how the goal-directed formulation helps in proof search. To this end we should be able to define a formal mapping between goal-directed proofs and proofs within other systems. For systems which do not have a well-understood proof theory, the goal-directed approach can suggest how to develop more traditional proof methods such as tableaux or sequent calculi (this is the case for strict implication logics, their intuitionistic variants, and the intermediate logics BHn). Some notions arising naturally in the goal-directed computation seem to be related to notions in other systems, but do not have an exact counterpart there. This is the case for the restart rule. Another example is the notion of 'diminishing resource' computation. The latter corresponds to some limitation of duplication or contraction in sequent systems, but the exact corresponding notion in other proof systems has still to be found.
Interpolation and other properties

We can use goal-directed methods to prove and define properties of logics. In this respect, we have seen in Chapter 2 that we can constructively prove an interpolation property for intuitionistic implicational logic. Can a corresponding property be proved in a similar way for other logical systems? An important property of the goal-directed proof procedures is that the cut rule, suitably formulated, is admissible. This property has the same importance as the cut-elimination property in sequent calculi. It would be interesting to calculate an upper bound on the cut-elimination process and compare it with what is known about sequent calculi for each specific logic.
Decidability and complexity bounds

By definition, goal-directed proof procedures favour proof search. The search for a proof is driven by the goal, and only when we reach atomic constituents do we
really have a non-deterministic step (reduction or restart). However, the simple goal-directed proof procedures are subject to two problems: the first is possible non-termination, and the other is that the computation may contain repeated subproofs. Regarding non-termination, we have exemplified two methodologies: the first is to adopt a 'diminishing resource' policy, the second is to augment the system with a loop-checking mechanism. We have seen both strategies for intuitionistic logic, and the loop-checking one for R. There are logics which are known to be decidable, but for which we have not yet devised a mechanism of either kind. This is the case, for instance, of the strict implication logics. Once we have a terminating procedure we can study further optimizations.1 As for the other issue, the problem of repeated subproofs, we can handle it by storing successful subgoals; this corresponds to (implicitly) applying a form of analytic cut: from Γ, A → B → q ⊢ q, we step to Γ ⊢ A and Γ, A ⊢ B rather than Γ ⊢ B. That is to say, when we try B (the second subgoal) we remember that A has already succeeded.
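The subgoal-storing optimization can be sketched in a toy prover for implicational intuitionistic logic. The encoding is our own, and the fixed depth bound is a crude stand-in for a proper diminishing-resource or loop-checking mechanism, not the book's procedure:

```python
def parts(f):
    """Split a1 -> a2 -> ... -> q into ([a1, a2, ...], q)."""
    body = []
    while isinstance(f, tuple) and f[0] == '->':
        body.append(f[1]); f = f[2]
    return body, f

def prove(db, goal, depth=10):
    """db: frozenset of formulas; atoms are strings, ('->', A, B) is A -> B.
    The depth bound crudely guarantees termination."""
    if depth == 0:
        return False
    if isinstance(goal, tuple) and goal[0] == '->':
        return prove(db | {goal[1]}, goal[2], depth)    # implication rule
    for f in db:                                        # reduction on a matching head
        body, head = parts(f)
        if head != goal:
            continue
        ctx = db
        for b in body:
            if not prove(ctx, b, depth - 1):
                break
            ctx = ctx | {b}      # store the succeeded subgoal: a form of analytic cut
        else:
            return True
    return False
```

After the subgoal A succeeds, it is added to the context used for the remaining subgoals, so a later occurrence of A can be closed immediately by reduction instead of being re-proved.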
Type-theoretic connections

The goal-directed procedures might be used to reason about type inhabitation via the Curry–Howard isomorphism (see [Hindley, 1997] for an introduction in the case of intuitionistic logic). It might also be worth investigating whether the class of λ-terms which represent goal-directed proofs can be characterised in any way.2
Extensions of the language

We have concentrated on the implicational fragment. For the case of intuitionistic logic, we have extended our procedures to the full propositional calculus and to the ∀, →-fragment. Can we make the same extensions for other systems? Here there are a number of problems. Notice that we could obtain an extension with disjunction for intuitionistic logic only by making use of labels. Thus, we expect to need similar machinery for other logics. The other problem is that in some cases (such as relevance logics) the mere addition of the usual distributive lattice connectives makes the respective systems undecidable (actually T is not known to be decidable even in its pure implicational fragment). We do not expect the extension to be easy or computationally favourable.

1. As a preliminary result, in [Gabbay et al., 1999] it is shown that a straightforward refinement of the goal-directed procedure gives an O(n log n)-space procedure for the intuitionistic implication/negation fragment, the optimal space upper bound found so far.
2. In [1999] Galmiche and Pym discuss proof search for type-theoretic languages extensively, including goal-directed search and possible extensions of logic programming.
Starting with the implicational fragment, we have seen that one can easily extend the methods to a broader class of formulas, roughly corresponding to Harrop formulas in intuitionistic logic. On the other hand, it might be interesting to study subclasses of the implicational fragment, for instance to isolate a notion of Horn database and optimize our procedures for this case. In the case of modal logic, the most prominent extension is to treat directly database formulas with the ◇-operator in their heads. This is not expected to be easy. One has a problem similar to that of allowing existentially quantified data in a first-order database.

Another prominent extension is that to first-order languages. Following the line of logic programming (as we did in Chapter 2 for intuitionistic logic), we want to define proof procedures to compute answer substitutions. In general, in most non-classical logics there are several options for the interpretation of quantifiers and terms according to the intended semantics (typically, constant domains, increasing domains, etc.); moreover, one may adopt either a rigid or a non-rigid interpretation of terms. It is likely that in order to develop the first-order extensions we will need to label the data, as we did in several cases. We would like to represent the semantic options we have mentioned by tinkering with the unification and Skolemisation mechanisms. The restrictions on unification and Skolemisation might depend on the labels associated with the data. The treatment of the ∀, →-fragment can be a reasonable starting point.
Deductive databases in non-classical logics

The long-term objective is to use non-classical logics as representation languages and to implement goal-directed procedures as the corresponding query-answering mechanism. Of course this enterprise presupposes that there are areas in which the non-classical logics we have studied in this book may find an application. We believe that the usefulness of non-classical logic programming languages (either extensions of, or alternatives to, the classical Horn one) as specification and representation languages should and might be investigated further. Examples of systems developed so far include some use of intuitionistic, linear, relevant, modal/multi-modal, and higher-order logic.3

Our proof procedures might be considered as 'abstract operational semantics' for non-classical logic programming languages. The entire battery of logic programming concepts and methods may be imported into our contexts. For instance, one can define non-monotonic extensions of the basic procedures, such as the concept of negation by failure. One can also define abductive extensions of the basic proof mechanism to compute abductive explanations of given data. Rather than being an intellectual exercise, the study of these issues should be motivated by possible applications or methodological insight.

3. A (partial) list of works is: [Gabbay and Reyle, 1984; Miller, 1989; McCarty, 1988a; Bollen, 1991; Miller et al., 1991; Hodas and Miller, 1994; Harland and Pym, 1991; Gabbay, 1987; Giordano et al., 1992; Baldoni et al., 1998]. More references are given in the previous chapters.
Extension to other logics

Although our methods seem to cover a broad family of logics, there are some important holes in the landscape of logics we have been able to treat. What matters here is capturing families of logics, rather than specific (and exotic) systems. Perhaps the most important class of logics which we have been unable to capture so far is the class of many-valued logics. The only exception has been Dummett–Gödel logic LC, which we have seen in Chapter 3. All the other systems, notably the infinite-valued ones such as Łukasiewicz and product logic (which, together with LC, are considered the main formalizations of fuzzy logics, see [Hajek, 1998]), are out of our reach for the moment.

There is also much work to be done on extensions of classical logic, for instance multi-modal logics and conditional logics, which have received strong interest for their application in a number of areas.4 To conclude, a further topic is the study of the combination of logics by combining their proof procedures [Beckert and Gabbay, 1999].

The above list of topics is by no means complete. We plan to carry on this research in a later volume.
4. Just to mention a few: reasoning about actions, distributed knowledge, and agent communication (multi-modal logics); and hypothetical and counterfactual reasoning, belief change and theory update (conditional logics).
BIBLIOGRAPHY
[Abadi and Manna, 1986] M. Abadi and Z. Manna. Modal theorem proving. In Proceedings of the 8th International Conference on Automated Deduction, pp. 172–189. LNCS 230, Springer-Verlag, 1986.
[Abadi and Manna, 1989] M. Abadi and Z. Manna. Temporal logic programming. Journal of Symbolic Computation, 8, 277–295, 1989.
[Andreoli, 1992] J. M. Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2, 297–347, 1992.
[Andreoli and Pareschi, 1991] J. M. Andreoli and R. Pareschi. Linear objects: logical processes with built-in inheritance. New Generation Computing, 9, 445–474, 1991.
[Avellone et al., 1998] A. Avellone, M. Ferrari and P. Miglioli. Duplication-free tableau calculi together with cut-free and contraction-free sequent calculi for the interpolable propositional intermediate logics. Journal of the IGPL, 7, 447–480, 1999.
[Anderson and Belnap, 1975] A. R. Anderson and N. D. Belnap. Entailment, The Logic of Relevance and Necessity, Vol. 1. Princeton University Press, New Jersey, 1975.
[Anderson et al., 1992] A. R. Anderson, N. D. Belnap and J. M. Dunn. Entailment, The Logic of Relevance and Necessity, Vol. 2. Princeton University Press, New Jersey, 1992.
[Avron, 1987] A. Avron. A constructive analysis of RM. Journal of Symbolic Logic, 52, 277–295, 1987.
[Avron, 1991a] A. Avron. Hypersequents, logical consequence and intermediate logics for concurrency. Annals of Mathematics and Artificial Intelligence, 4, 225–248, 1991.
[Avron, 1991b] A. Avron. Simple consequence relations. Information and Computation, 92, 105–139, 1991.
[Avron, 1996] A. Avron. The method of hypersequents in the proof theory of propositional nonclassical logics. In Logic: from Foundations to Applications, European Logic Colloquium, W. Hodges et al., eds., pp. 1–32. Oxford Science Publications, Clarendon Press, 1996.
[Baaz and Fermüller, 1999] M. Baaz and C. Fermüller. Analytic calculi for projective logics.
In Proceedings of TABLEAUX99, pp. 36–50. LNCS 1617, Springer-Verlag, 1999.
[Balbiani and Herzig, 1994] P. Balbiani and A. Herzig. A translation from modal logic G into K4. Journal of Applied Non-Classical Logics, 4, 73–77, 1994.
[Baldoni et al., 1998] M. Baldoni, L. Giordano and A. Martelli. A modal extension of logic programming: modularity, beliefs and hypothetical reasoning. Journal of Logic and Computation, 8, 597–635, 1998.
[Basin et al., 1997a] D. Basin, S. Matthews and L. Viganò. Labelled propositional modal logics: theory and practice. Journal of Logic and Computation, 7, 685–717, 1997.
[Basin et al., 1997b] D. Basin, S. Matthews and L. Viganò. A new method for bounding the complexity of modal logics. In Proceedings of the Fifth Gödel Colloquium, pp. 89–102. LNCS 1289, Springer-Verlag, 1997.
[Basin et al., 1999] D. Basin, S. Matthews and L. Viganò. Natural deduction for non-classical logics. Studia Logica, 60, 119–160, 1998.
[Beckert and Gabbay, 1999] B. Beckert and D. M. Gabbay. Fibring logic programs. Journal of Logic Programming, to appear, 1999.
[Bollen, 1991] A. W. Bollen. Relevant logic programming. Journal of Automated Reasoning, 7, 563–585, 1991.
[Boolos, 1979] G. Boolos. The Unprovability of Consistency—An Essay in Modal Logic. Cambridge University Press, 1979.
[Broda et al., 1999] K. Broda, M. D'Agostino and A. Russo. Transformation methods in LDS. In Logic, Language and Reasoning, Essays in Honour of Dov Gabbay. Kluwer Academic Publishers, to appear, 1999.
[Cerrato, 1993] C. Cerrato. Modal sequents for normal modal logics. Mathematical Logic Quarterly, 39, 231–240, 1993.
[Cerrato, 1994] C. Cerrato. Natural deduction based upon strict implication for normal modal logics. Notre Dame Journal of Formal Logic, 35, 471–495, 1994.
[Chagrov and Zakharyaschev, 1997] A. Chagrov and M. Zakharyaschev. Modal Logic. Oxford Logic Guides 35, Oxford University Press, 1997.
[Ciabattoni et al., 1999] A. Ciabattoni, D. Gabbay and N. Olivetti. Cut-free proof systems for logics of weak excluded middle. Soft Computing, 2, 147–156, 1999.
[Ciabattoni and Ferrari, 1999] A. Ciabattoni and M. Ferrari. Hypersequent calculi for some intermediate logics with bounded Kripke models.
Submitted for publication, 1999.
[Corsi, 1987] G. Corsi. Weak logics with strict implication. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 33, 389–406, 1987.
[D'Agostino and Gabbay, 1994] M. D'Agostino and D. Gabbay. A generalization of analytic deduction via labelled deductive systems I: basic substructural logics. Journal of Automated Reasoning, 13, 243–281, 1994.
[Dershowitz and Manna, 1979] N. Dershowitz and Z. Manna. Proving termination with multiset orderings. Communications of the ACM, 22, 465–472, 1979.
[Došen, 1985] K. Došen. Sequent systems for modal logic. Journal of Symbolic Logic, 50, 149–168, 1985.
[Došen, 1988] K. Došen. Sequent systems and groupoid models I. Studia Logica, 47, 353–385, 1988.
[Došen, 1989] K. Došen. Sequent systems and groupoid models II. Studia Logica, 48, 41–65, 1989.
[Došen, 1993] K. Došen. A historical introduction to substructural logics. In Substructural Logics, P. Schroeder-Heister and K. Došen, eds. Oxford University Press, 1993.
[Dummett, 1959] M. Dummett. A propositional calculus with denumerable matrix. Journal of Symbolic Logic, 24, 96–107, 1959.
[Dummett, 1977] M. Dummett. Elements of Intuitionism. Oxford University Press, 1977.
[Dung, 1991] P. M. Dung. Negations as hypotheses: an abductive foundation for logic programming. In Proceedings of ICLP-91 Conference, pp. 3–17, 1991.
[Dung, 1993] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning and logic programming. In Proceedings of IJCAI93, pp. 852–857, 1993.
[Dunn, 1986] J. M. Dunn. Relevance logic and entailment. In Handbook of Philosophical Logic, vol. III, D. Gabbay and F. Guenthner, eds., pp. 117–224. D. Reidel, Dordrecht, 1986.
[Dyckhoff, 1992] R. Dyckhoff. Contraction-free sequent calculi for intuitionistic logic. Journal of Symbolic Logic, 57, 795–807, 1992.
[Dyckhoff, 1999] R. Dyckhoff. A deterministic terminating sequent calculus for Gödel–Dummett logic. Logic Journal of the IGPL, 7, 319–326, 1999.
[Eshghi, 1989] K. Eshghi and R. Kowalski. Abduction compared with negation by failure. In Proceedings of the 6th ICLP, pp. 234–254, Lisbon, 1989.
[Ewald, 1986] W. B. Ewald. Intuitionistic tense and modal logic. Journal of Symbolic Logic, 51, 166–179, 1986.
[Enjalbert and Fariñas, 1989] P. Enjalbert and L. Fariñas del Cerro. Modal resolution in clausal form. Theoretical Computer Science, 65, 1–33, 1989.
[Fariñas, 1986] L. Fariñas del Cerro. MOLOG: a system that extends Prolog with modal logic. New Generation Computing, 4, 35–50, 1986.
[Fine, 1974] K. Fine. Models for entailment. Journal of Philosophical Logic, 3, 347–372, 1974.
[Fitting, 1983] M. Fitting. Proof Methods for Modal and Intuitionistic Logic, vol. 169 of Synthese Library. D. Reidel, Dordrecht, 1983.
[Fitting, 1990] M. Fitting.
First-order Logic and Automated Theorem Proving. Springer-Verlag, 1990. [Fitting, 1990a] M. Fitting. Destructive modal resolution, Journal of Logic and Computation, 1, 83–98, 1990. [Gabbay, 1981] D. M. Gabbay. Semantical Investigations in Heyting’s Intuitionistic Logic. D. Reidel, Dordrecht, 1981. [Gabbay, 1984] D. M. Gabbay. Lecture notes on Elementary Logic - A Procedural Perspective, Imperial College, 1984. [Gabbay, 1985] D. M. Gabbay. N -Prolog Part 2. Journal of Logic Programming, 251–283, 1985. [Gabbay, 1987] D. M. Gabbay. Modal and temporal logic programming. In Temporal Logics and their Applications, A. Galton, ed. pp/ 197–237. Academic Press, 1987.
[Gabbay, 1991] D. M. Gabbay. Algorithmic proof with diminishing resources. In Proceedings of CSL'90, pp. 156–173. LNCS vol. 533, Springer-Verlag, 1991.
[Gabbay, 1992] D. M. Gabbay. Elements of algorithmic proof. In Handbook of Logic in Theoretical Computer Science, vol. 2, S. Abramsky, D. M. Gabbay and T. S. E. Maibaum, eds., pp. 307–408. Oxford University Press, 1992.
[Gabbay, 1993] D. M. Gabbay. General theory of structured consequence relations. In Substructural Logics, P. Schroeder-Heister and K. Došen, eds., pp. 109–151. Studies in Logic and Computation, Oxford University Press, 1993.
[Gabbay, 1996] D. M. Gabbay. Labelled Deductive Systems (vol. I). Oxford Logic Guides, Oxford University Press, 1996.
[Gabbay, 1996a] D. M. Gabbay. Fibred semantics and the weaving of logics. Part I: modal and intuitionistic logic. Journal of Symbolic Logic, 61, 1057–1120, 1996.
[Gabbay, 1998] D. M. Gabbay. Elementary Logic. Prentice Hall, 1998.
[Gabbay and de Queiroz, 1992] D. M. Gabbay and R. de Queiroz. Extending the Curry–Howard interpretation to linear, relevant and other resource logics. Journal of Symbolic Logic, 57, 1319–1365, 1992.
[Gabbay and Kriwaczek, 1991] D. M. Gabbay and F. Kriwaczek. A family of goal-directed theorem-provers based on conjunction and implication. Journal of Automated Reasoning, 7, 511–536, 1991.
[Gabbay and Olivetti, 1997] D. M. Gabbay and N. Olivetti. Algorithmic proof methods and cut elimination for implicational logics, part I: modal implication. Studia Logica, 61, 237–280, 1998.
[Gabbay and Olivetti, 1998] D. Gabbay and N. Olivetti. Goal-directed proof-procedures for intermediate logics. In Proceedings of LD'98, First International Workshop on Labelled Deduction, Freiburg, 1998.
[Gabbay and Reyle, 1984] D. M. Gabbay and U. Reyle. N-Prolog: an extension of Prolog with hypothetical implications, I. Journal of Logic Programming, 4, 319–355, 1984.
[Gabbay and Reyle, 1993] D. M. Gabbay and U. Reyle. Computation with run-time Skolemisation (N-Prolog, Part 3). Journal of Applied Non-Classical Logics, 3, 93–128, 1993.
[Gabbay et al., 1999] D. M. Gabbay, N. Olivetti and S. Vorobyov. Goal-directed proof method is optimal for intuitionistic propositional logic. Manuscript, 1999.
[Gallier, 1987] J. H. Gallier. Logic for Computer Science. John Wiley, New York, 1987.
[Galmiche and Pym, 1999] D. Galmiche and D. Pym. Proof-search in type-theoretic languages. To appear in Theoretical Computer Science, 1999.
[Giordano et al., 1992] L. Giordano, A. Martelli and G. F. Rossi. Extending Horn clause logic with implication goals. Theoretical Computer Science, 95, 43–74, 1992.
[Giordano and Martelli, 1994] L. Giordano and A. Martelli. Structuring logic programs: a modal approach. Journal of Logic Programming, 21, 59–94, 1994.
[Girard, 1987] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50, 1–101, 1987.
[Gödel, 1932] K. Gödel. Zum intuitionistischen Aussagenkalkül. Anzeiger der Akademie der Wissenschaften in Wien, math.-naturwiss. Klasse, 69, pp. 65–66, 1932.
[Goré, 1999] R. Goré. Tableau methods for modal and temporal logics. In Handbook of Tableau Methods, M. D'Agostino et al., eds. Kluwer Academic Publishers, 1999.
[Goré, 1998] R. Goré. Substructural logics on display. Logic Journal of the IGPL, 6, 451–504, 1998.
[Hähnle and Schmitt, 1994] R. Hähnle and P. H. Schmitt. The liberalized δ-rule in free-variable semantic tableaux. Journal of Automated Reasoning, 13, 211–221, 1994.
[Hajek, 1998] P. Hájek. Metamathematics of Fuzzy Logic. Kluwer Academic Publishers, 1998.
[Halpern and Moses, 1990] J. Y. Halpern and Y. Moses. Knowledge and common knowledge in a distributed environment. Journal of the ACM, 37, 549–587, 1990.
[Harland and Pym, 1991] J. Harland and D. Pym. The uniform proof-theoretic foundation of linear logic programming. In Proceedings of the 1991 International Logic Programming Symposium, San Diego, pp. 304–318, 1991.
[Harland, 1997] J. Harland. On goal-directed provability in classical logic. Computer Languages, 23(2–4), 161–178, 1997.
[Harland and Pym, 1997] J. Harland and D. Pym. Resource-distribution by Boolean constraints. In Proceedings of CADE 1997, pp. 222–236, Springer-Verlag, 1997.
[Heuerding et al., 1996] A. Heuerding, M. Seyfried and H. Zimmermann. Efficient loop-check for backward proof search in some non-classical propositional logics. In Tableaux '96, P. Miglioli et al., eds., pp. 210–225. LNCS 1071, Springer-Verlag, 1996.
[Heyting, 1956] A. Heyting. Intuitionism: An Introduction. North-Holland, Amsterdam, 1956.
[Hindley, 1997] J. R. Hindley. Basic Simple Type Theory. Cambridge University Press, 1997.
[Hodas, 1993] J. Hodas. Logic Programming in Intuitionistic Linear Logic: Theory, Design, and Implementation. PhD thesis, University of Pennsylvania, Department of Computer and Information Sciences, 1993.
[Hodas and Miller, 1994] J. Hodas and D. Miller. Logic programming in a fragment of intuitionistic linear logic. Information and Computation, 110, 327–365, 1994.
[Hosoi, 1988] T. Hosoi. Gentzen-type formulation of the propositional logic LQ. Studia Logica, 47, 41–48, 1988.
[Hudelmaier, 1990] J. Hudelmaier. Decision procedure for propositional N-Prolog. In Extensions of Logic Programming, P. Schroeder-Heister, ed., pp. 245–251, Springer-Verlag, 1990.
[Hudelmaier, 1993] J. Hudelmaier. An O(n log n)-space decision procedure for intuitionistic propositional logic. Journal of Logic and Computation, 3, 63–75, 1993.
[Jankov, 1968] V. A. Jankov. The calculus of the weak "law of excluded middle". Mathematics of the USSR, 8, 648–650, 1968.
[Kripke, 1959] S. Kripke. The problem of entailment (abstract). Journal of Symbolic Logic, 24, 324, 1959.
[Lambek, 1958] J. Lambek. The mathematics of sentence structure. American Mathematical Monthly, 65, 154–169, 1958.
[Lewis, 1912] C. I. Lewis. Implication and the algebra of logic. Mind, 21, 522–531, 1912.
[Lewis and Langford, 1932] C. I. Lewis and C. H. Langford. Symbolic Logic. The Century Co., New York, London, 1932. Second edition, Dover, New York, 1959.
[Lincoln et al., 1991] P. Lincoln, A. Scedrov and N. Shankar. Linearizing intuitionistic implication. In Proceedings of LICS'91, G. Kahn, ed., pp. 51–62, IEEE, 1991.
[Lloyd, 1984] J. W. Lloyd. Foundations of Logic Programming. Springer, Berlin, 1984.
[Loveland, 1991] D. W. Loveland. Near-Horn Prolog and beyond. Journal of Automated Reasoning, 7, 1–26, 1991.
[Loveland, 1992] D. W. Loveland. A comparison of three Prolog extensions. Journal of Logic Programming, 12, 25–50, 1992.
[Manna and Pnueli, 1981] Z. Manna and A. Pnueli. Verification of concurrent programs: the temporal framework. In The Correctness Problem in Computer Science, Academic Press, London, 1981.
[Martins and Shapiro, 1988] J. P. Martins and S. C. Shapiro. A model for belief revision. Artificial Intelligence, 35, 25–79, 1988.
[Masini, 1992] A. Masini. 2-Sequent calculus: a proof theory of modality. Annals of Pure and Applied Logic, 59, 115–149, 1992.
[Massacci, 1994] F. Massacci. Strongly analytic tableaux for normal modal logics. In Proceedings of CADE '94, LNAI 814, pp. 723–737, Springer-Verlag, 1994.
[McCarty, 1988a] L. T. McCarty. Clausal intuitionistic logic I. Fixed-point semantics. Journal of Logic Programming, 5, 1–31, 1988.
[McCarty, 1988b] L. T. McCarty. Clausal intuitionistic logic II. Tableau proof procedures. Journal of Logic Programming, 5, 93–132, 1988.
[McRobbie and Belnap, 1979] M. A. McRobbie and N. D. Belnap. Relevant analytic tableaux. Studia Logica, 38, 187–200, 1979.
[Meredith and Prior, 1964] C. A. Meredith and A. N. Prior. Investigations into implicational S5. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 10, 203–220, 1964.
[Miglioli et al., 1993] P. Miglioli, U. Moscato and M. Ornaghi. An improved refutation system for intuitionistic logic. In Proceedings of the Second Workshop on Theorem Proving with Analytic Tableaux and Related Methods, pp. 169–178. Max-Planck-Institut für Informatik, MPI-I-93-213, 1993.
[Miller, 1989] D. A. Miller. A logical analysis of modules in logic programming. Journal of Logic Programming, 6, 79–108, 1989.
[Miller et al., 1991] D. Miller, G. Nadathur, F. Pfenning and A. Scedrov. Uniform proofs as a foundation for logic programming. Annals of Pure and Applied Logic, 51, 125–157, 1991.
[Miller, 1992] D. A. Miller. Abstract syntax and logic programming. In Logic Programming: Proceedings of the 2nd Russian Conference, pp. 322–337, LNAI 592, Springer-Verlag, 1992.
[Moortgat, 1991] M. Moortgat. Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus. Foris, 1991.
[Nadathur, 1998] G. Nadathur. Uniform provability in classical logic. Journal of Logic and Computation, 8, 209–229, 1998.
[O'Hearn and Pym, 1999] P. W. O'Hearn and D. Pym. The logic of bunched implications. Bulletin of Symbolic Logic, 5, 215–244, 1999.
[Ohlbach, 1991] H. J. Ohlbach. Semantics-based translation methods for modal logics. Journal of Logic and Computation, 1, 691–746, 1991.
[Ohlbach and Wrightson, 1983] H. J. Ohlbach and G. Wrightson. Solving a problem in relevance logic with an automated theorem prover. Technical Report 11, Institut für Informatik, University of Karlsruhe, 1983.
[Olivetti, 1995] N. Olivetti. Algorithmic Proof-theory for Non-classical and Modal Logics. PhD thesis, Dipartimento di Informatica, Università di Torino, 1995.
[Ono, 1993] H. Ono. Semantics for substructural logics. In Substructural Logics, P. Schroeder-Heister and K. Došen, eds., pp. 259–291, Oxford University Press, 1993.
[Ono, 1998] H. Ono. Proof-theoretic methods in non-classical logics: an introduction. In Theories of Types and Proofs, M. Takahashi et al., eds., pp. 207–254, Mathematical Society of Japan, 1998.
[Ono and Komori, 1985] H. Ono and Y. Komori. Logics without the contraction rule. Journal of Symbolic Logic, 50, 169–201, 1985.
[Pnueli, 1981] A. Pnueli. The temporal logic of programs. Theoretical Computer Science, 13, 45–60, 1981.
[Prior, 1961] A. N. Prior. Some axiom-pairs for material and strict implication. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 7, 61–65, 1961.
[Prawitz, 1965] D. Prawitz. Natural Deduction. Almqvist & Wiksell, 1965.
[Prior, 1963] A. N. Prior. The theory of implication. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 9, 1–6, 1963.
[Pym and Wallen, 1990] D. J. Pym and L. A. Wallen. Investigations into proof-search in a system of first-order dependent function types. In Proceedings of CADE'90, pp. 236–250. LNCS vol. 449, Springer-Verlag, 1990.
[Rasiowa, 1974] H. Rasiowa. An Algebraic Approach to Non-Classical Logics. North-Holland, 1974.
[Restall, 1999] G. Restall. An Introduction to Substructural Logics. Routledge, to appear, 1999.
[Ritter et al., 1999] E. Ritter, D. Pym and L. Wallen. On the intuitionistic force of classical search. To appear in Theoretical Computer Science, 1999.
[Routley and Meyer, 1973] R. Routley and R. K. Meyer. The semantics of entailment I. In Truth, Syntax and Modality, H. Leblanc, ed., pp. 199–243. North-Holland, Amsterdam, 1973.
[Routley and Meyer, 1972a] R. Routley and R. K. Meyer. The semantics of entailment II. Journal of Philosophical Logic, 1, 53–73, 1972.
[Routley and Meyer, 1972b] R. Routley and R. K. Meyer. The semantics of entailment III. Journal of Philosophical Logic, 1, 192–208, 1972.
[Routley et al., 1982] R. Routley, V. Plumwood, R. K. Meyer and R. Brady. Relevant Logics and their Rivals. Part I: The Basic Philosophical and Semantical Theory. Ridgeview Publishing Company, Atascadero, California, 1982.
[Russo, 1996] A. Russo. Generalizing propositional modal logics using labelled deductive systems. In Frontiers of Combining Systems, F. Baader and K. Schulz, eds., pp. 57–73. Vol. 3 of Applied Logic Series, Kluwer Academic Publishers, 1996.
[Sahlin et al., 1992] D. Sahlin, T. Franzén and S. Haridi. An intuitionistic predicate logic theorem prover. Journal of Logic and Computation, 2, 619–656, 1992.
[Sahlqvist, 1975] H. Sahlqvist. Completeness and correspondence in first- and second-order semantics of modal logics. In Proceedings of the 3rd Scandinavian Logic Symposium, S. Kanger, ed., pp. 110–143, North-Holland, 1975.
[Schroeder-Heister and Došen, 1993] P. Schroeder-Heister and K. Došen, eds. Substructural Logics. Oxford University Press, 1993.
[Scott, 1974] D. Scott. Rules and derived rules. In Logical Theory and Semantical Analysis, S. Stenlund, ed., pp. 147–161, Reidel, 1974.
[Shankar, 1992] N. Shankar. Proof search in the intuitionistic sequent calculus. In Proceedings of CADE'11, pp. 522–536. LNAI 607, Springer-Verlag, 1992.
[Slaney and Meyer, 1997] J. Slaney and R. Meyer. Logics for two: the semantics of distributive substructural logics. In Proceedings of Qualitative and Quantitative Practical Reasoning, pp. 554–567. LNCS 1244, Springer-Verlag, 1997.
[Statman, 1979] R. Statman. Intuitionistic propositional logic is polynomial-space complete. Theoretical Computer Science, 9, 67–72, 1979.
[Thistlewaite et al., 1988] P. B. Thistlewaite, M. A. McRobbie and R. K. Meyer. Automated Theorem Proving in Non-Classical Logics. Pitman, 1988.
[Turner, 1985] R. Turner. Logics for Artificial Intelligence. Ellis Horwood Ltd., 1985.
[Troelstra, 1969] A. S. Troelstra. Principles of Intuitionism. Springer-Verlag, Berlin, 1969.
[Urquhart, 1972] A. Urquhart. The semantics of relevance logics. Journal of Symbolic Logic, 37, 159–170, 1972.
[Urquhart, 1984] A. Urquhart. The undecidability of entailment and relevant implication. Journal of Symbolic Logic, 49, 1059–1073, 1984.
[Urquhart, 1990] A. Urquhart. The complexity of decision procedures in relevance logic. In Truth or Consequences, J. M. Dunn and A. Gupta, eds., pp. 77–95, Kluwer Academic Publishers, 1990.
[Urquhart, 1997] A. Urquhart. The complexity of decision procedures in relevance logic II. Technical Report, University of Toronto, submitted for publication, 1997.
[van Dalen, 1986] D. van Dalen. Intuitionistic logic. In Handbook of Philosophical Logic, vol. 3, D. Gabbay and F. Guenthner, eds., pp. 225–339. D. Reidel, Dordrecht, 1986.
[Viganò, 1999] L. Viganò. Labelled Non-Classical Logics. Kluwer Academic Publishers, 1999, to appear.
[Wallen, 1990] L. A. Wallen. Automated Deduction in Nonclassical Logics. MIT Press, 1990.
[Wallen, 1999] L. A. Wallen. Tableaux for intuitionistic logic. In Handbook of Tableau Methods, M. D'Agostino et al., eds., pp. 255–296. Kluwer Academic Publishers, Dordrecht, 1999.
[Wansing, 1990] H. Wansing. Formulas as types for a hierarchy of sublogics of intuitionistic propositional logic. Technical Report 9/90, Free University of Berlin, 1990.
[Wansing, 1994] H. Wansing. Sequent calculi for normal propositional modal logics. Journal of Logic and Computation, 4, 125–142, 1994.
INDEX
Ackermann function, 231
Admissibility of cut, 14, 210
  for intuitionistic implication, 29
  for disjunctive databases, 69
  for (∀ →)-fragment, 85
  for LC, 105
  for modal logics, 123
  for substructural logics, 181
Backtracking rule
  unlabelled version, 103
  labelled version, 100
Bounded height Kripke models, logics BHn, 94
Bounded Kripke models, logic LQ, 113
Classical logic C, 23, 25, 47
Contraction rule, 14, 171
Contraction-free sequent calculi for intuitionistic logic, 46
Display logic, 167, 172
Dummett–Gödel logic LC, 99
Formula, 16
  complexity, 16
  disjunctive D- and G-form, 64
  Harrop, 17, 146, 202
  Horn, 17, 162
  substitution, 16
Fuzzy logic, 100
Horn modal logics, 162
Hypersequent calculi, 101, 111
Intuitionistic logic
  axiomatization, 21
  consequence relation, 24
  Kripke semantics, 23
  Kripke semantics for L(∀,→,∧), 88
Irrelevance axiom, 171
Kripke's lemma, 230
Labelled deductive systems, 247
Lambek calculus, 173
Linear logic L, 12, 39, 173, 245
Linearity, 94
Logic programming, 1, 4, 27, 78, 165
  in linear logic, 245
Loop-checking, 34, 221, 222
Many-valued semantics, 99
Mingle rule, 173, 235
Modal logic
  intuitionistic version, 142
  Kripke semantics, 117
Modal logic G, 118, 158
Modal logic K, 118, 133
Modal logic KB, 10, 118
Modal logic K5, 118, 138
Modal logic K4, 118, 133
Modal logic K45, 118, 138
Modal logic KT, 118, 133
Modal logic KBT, 118
Modal logic S5, 118, 138
Modal logic S4, 118, 133, 172, 177
Multiset, 18
Natural deduction, 2
  labelled, 166, 248
Peirce's axiom, 13, 23, 94
Permutation rule, 171
Query
  labelled, 64, 119, 175
  S-regular, 180
Realization, 69, 96, 126, 198
Relevant logic E, 172
  contractionless version E-W, 173
Relevant logic RM0, 173, 231
Relevant logic R, 171
Relevant logic T (Ticket Entailment), 13, 172
  contractionless version T-W, 173
Resolution, 2, 165
Restart, 13
  n-shifting restart, 95
  bounded, 39
  for RM0, 234
  for LC, labelled, 100
  for LC, unlabelled, 103
  for classical logic, 47
  for modal logics, 120
Run-time Skolemization, 79
Sequent calculi, 2, 167
  labelled, 166
Skolem functions, 78
Strict implication, 117
Substitution, 78, 81
Substructural logic BCK, 173
Substructural logic FL, 173
Substructural logics
  axiomatization, 192, 202
  Fine semantics, 195, 218
  Routley–Meyer semantics, 206
  Urquhart semilattice semantics, 196, 239
Tableaux, 2, 159, 166, 244, 248
Undecidability of relevant logics, 221
Uniform proof, 5, 112, 167
Weakening rule, 14, 171