Handbook of Philosophical Logic 2nd Edition Volume 4

edited by Dov M. Gabbay and F. Guenthner

CONTENTS

Editorial Preface, Dov M. Gabbay (vii)
Conditional Logic, D. Nute and C. B. Cross (1)
Dynamic Logic, D. Harel, D. Kozen, and J. Tiuryn (99)
Logics for Defeasible Argumentation, H. Prakken and G. Vreeswijk (219)
Preference Logic, S. O. Hansson (319)
Diagrammatic Logic, E. Hammer (395)
Index (423)

PREFACE TO THE SECOND EDITION

It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 has described the first edition as `the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good!

The first edition was the second handbook published for the logic community. It followed the North Holland one volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand and to provide the theoretical basis for the computer program constructs on the other. The result was that the Handbook of Philosophical Logic, which covered most of the areas needed from logic for these active communities, became their bible.

The increased demand for philosophical logic from computer science, artificial intelligence and computational linguistics accelerated the development of the subject directly and indirectly. It directly pushed research forward, stimulated by the needs of applications. New logic areas became established and old areas were enriched and expanded. At the same time, it socially provided employment for generations of logicians residing in computer science, linguistics and electrical engineering departments, which of course helped keep the logic community thriving. In addition to that, it so happens (perhaps not by accident) that many of the Handbook contributors became active in these application areas and took their place, as time passed, among the most famous leading figures of applied philosophical logic of our times. Today we have a handbook with a most extraordinary collection of famous people as authors!

The table below will give our readers an idea of the landscape of logic and its relation to computer science, formal language and artificial intelligence. It shows that the first edition is very close to the mark of what was needed. Two topics were not included in the first edition, even though


they were extensively discussed by all authors in a 3-day Handbook meeting. These are:

a chapter on non-monotonic logic

a chapter on combinatory logic and λ-calculus

We felt at the time (1979) that non-monotonic logic was not ready for a chapter yet and that combinatory logic and λ-calculus was too far removed.1 Non-monotonic logic is now a very major area of philosophical logic, alongside default logics, labelled deductive systems, fibring logics, multi-dimensional, multimodal and substructural logics. Intensive reexaminations of fragments of classical logic have produced fresh insights, including at times decision procedures and equivalence with non-classical systems.

Perhaps the most impressive achievement of philosophical logic as arising in the past decade has been the effective negotiation of research partnerships with fallacy theory, informal logic and argumentation theory, attested to by the Amsterdam Conference in Logic and Argumentation in 1995, and the two Bonn Conferences in Practical Reasoning in 1996 and 1997. These subjects are becoming more and more useful in agent theory and intelligent and reactive databases.

Finally, fifteen years after the start of the Handbook project, I would like to take this opportunity to put forward my current views about logic in computer science, computational linguistics and artificial intelligence. In the early 1980s the perception of the role of logic in computer science was that of a specification and reasoning tool and that of a basis for possibly neat computer languages. The computer scientist was manipulating data structures and the use of logic was one of his options. My own view at the time was that there was an opportunity for logic to play a key role in computer science and to exchange benefits with this rich and important application area and thus enhance its own evolution. The relationship between logic and computer science was perceived as very much like the relationship of applied mathematics to physics and engineering. Applied mathematics evolves through its use as an essential tool, and so we hoped for logic. Today my view has changed.
As computer science and artificial intelligence deal more and more with distributed and interactive systems, processes, concurrency, agents, causes, transitions, communication and control (to name a few), the researcher in this area has more and more in common with the traditional philosopher who has been analysing such questions for centuries (unrestricted by the capabilities of any hardware). The principles governing the interaction of several processes, for example, are abstract and similar to principles governing the cooperation of two large organisations. A detailed, rule-based, effective but rigid bureaucracy is very much similar to a complex computer program handling and manipulating data. My guess is that the principles underlying one are very much the same as those underlying the other. I believe the day is not far away in the future when the computer scientist will wake up one morning with the realisation that he is actually a kind of formal philosopher!

The projected number of volumes for this Handbook is about 18. The subject has evolved and its areas have become interrelated to such an extent that it no longer makes sense to dedicate volumes to topics. However, the volumes do follow some natural groupings of chapters.

I would like to thank our authors and readers for their contributions and their commitment in making this Handbook a success. Thanks also to our publication administrator Mrs J. Spurr for her usual dedication and excellence and to Kluwer Academic Publishers for their continuing support for the Handbook.

1 I am really sorry, in hindsight, about the omission of the non-monotonic logic chapter. I wonder how the subject would have developed, if the AI research community had had a theoretical model, in the form of a chapter, to look at. Perhaps the area would have developed in a more streamlined way!

Dov Gabbay
King's College London


Logic / IT

Temporal logic
  Natural language processing: Expressive power of tense operators. Temporal indices. Separation of past from future.
  Program control specification, verification, concurrency: Expressive power for recurrent events. Specification of temporal control. Decision problems. Model checking.
  Artificial intelligence: Planning. Time dependent data. Event calculus. Persistence through time (the Frame Problem). Temporal query language. Temporal transactions. Belief revision. Inferential databases.
  Logic programming: Extension of Horn clause with time capability. Event calculus. Temporal logic programming.

Modal logic. Multi-modal logics
  Natural language processing: Generalised quantifiers.
  Program control: Action logic.
  Artificial intelligence: New logics. Generic theorem provers.
  Logic programming: Negation by failure and modality.

Algorithmic proof
  Natural language processing: Discourse representation. Direct computation on linguistic input.
  Artificial intelligence: General theory of reasoning. Non-monotonic systems.
  Logic programming: Procedural approach to logic.

Non-monotonic reasoning
  Natural language processing: Resolving ambiguities. Machine translation. Document classification. Relevance theory.
  Program control: Loop checking. Non-monotonic decisions about loops. Faults in systems.
  Artificial intelligence: Intrinsic logical discipline for AI. Evolving and communicating databases.
  Logic programming: Negation by failure. Deductive databases.

Probabilistic and fuzzy logic
  Natural language processing: Logical analysis of language.
  Program control: Real time systems.
  Artificial intelligence: Expert systems. Machine learning.
  Logic programming: Semantics for logic programs.

Intuitionistic logic
  Natural language processing: Quantifiers in logic.
  Program control: Constructive reasoning and proof theory about specification design.
  Artificial intelligence: Intuitionistic logic is a better logical basis than classical logic.
  Logic programming: Horn clause logic is really intuitionistic. Extension of logic programming languages.

Set theory, higher-order logic, λ-calculus, types
  Natural language processing: Montague semantics. Situation semantics.
  Program control: Non-wellfounded sets.
  Artificial intelligence: Hereditary finite predicates.
  Logic programming: λ-calculus extension to logic programs.


Temporal logic
  Imperative vs. declarative languages: Temporal logic as a declarative programming language. The changing past in databases. The imperative future.
  Database theory: Temporal databases and temporal transactions.
  Complexity theory: Complexity questions of decision procedures of the logics involved.
  Agent theory: An essential component.
  A look to the future: Temporal systems are becoming more and more sophisticated and extensively applied.

Modal logic. Multi-modal logics
  Imperative vs. declarative languages: Dynamic logic.
  Database theory: Database updates and action logic.
  Complexity theory: Ditto.
  Agent theory: Possible actions.
  A look to the future: Multimodal logics are on the rise. Quantification and context becoming very active.

Algorithmic proof
  Imperative vs. declarative languages: Types. Term rewrite systems. Abstract interpretation.
  Database theory: Abduction, relevance.
  Complexity theory: Ditto.
  Agent theory: Agents' implementation relies on proof theory.

Non-monotonic reasoning
  Database theory: Inferential databases. Non-monotonic coding of databases.
  Complexity theory: Ditto.
  Agent theory: Agents' reasoning is non-monotonic.
  A look to the future: A major area now. Important for formalising practical reasoning.

Probabilistic and fuzzy logic
  Database theory: Fuzzy and probabilistic data. Database transactions. Inductive learning.
  Complexity theory: Ditto.
  Agent theory: Connection with decision theory.
  A look to the future: Major area now.

Intuitionistic logic
  Imperative vs. declarative languages: Semantics for programming languages. Martin-Löf theories.
  Complexity theory: Ditto.
  Agent theory: Agents' constructive reasoning.
  A look to the future: Still a major central alternative to classical logic.

Set theory, higher-order logic, λ-calculus, types
  Imperative vs. declarative languages: Semantics for programming languages. Abstract interpretation. Domain recursion theory.
  Complexity theory: Ditto.
  A look to the future: More central than ever!


Classical logic. Classical fragments: Basic background language. Program synthesis. Relational databases. Logical complexity classes. The workhorse of logic. The study of fragments is very active and promising.

Labelled deductive systems: Extremely useful in modelling. A unifying framework. Context theory. Labelling allows for context and control. Annotated logic programs. A basic tool. Essential tool. The new unifying framework for logics.

Resource and substructural logics: Lambek calculus. Linear logic. Agents have limited resources.

Fibring and combining logics: Dynamic syntax. Modules. Combining languages. Combining features. Linked databases. Reactive databases. Agents are built up of various fibred mechanisms. The notion of self-fibring allows for self-reference. Becoming part of the notion of a logic.

Truth maintenance systems: Extensively used in AI.

Logics of space and time: Potentially applicable.

Fallacy theory: Widely applied here. Fallacies are really valid modes of reasoning in the right context.

Logical Dynamics: A dynamic view of logic. On the rise in all areas of applied logic. Promises a great future.

Argumentation theory games: Game semantics gaining ground. Very important for agents.

Object level/metalevel: Ditto. Important feature of agents.

Mechanisms: abduction, default, relevance: Ditto. Always central in all areas.

Connection with neural nets: Of great importance to the future. Just starting. A new kind of model.

Time-action-revision models: A new theory of logical agent.

DONALD NUTE AND CHARLES B. CROSS

CONDITIONAL LOGIC

Prior to 1968 several writers had explored the conditions for the truth or assertability of conditionals, but this work did not result in an attempt to provide formal models for the semantical structure of conditionals. It had also been suggested that a proper logic for conditionals might be provided by combining modal operators with material conditionals in some way, but this suggestion never led to any widely accepted formal logic for conditionals.1 Then Stalnaker [1968] provided both a formal semantics for conditionals and an axiomatic system of conditional logic. This important paper effectively inaugurated that branch of philosophical logic which we today call conditional logic. Nearly all the work on the logic of conditionals for the next ten years, and a great deal of work since then, has either followed Stalnaker's lead in investigating possible worlds semantics for conditionals or posed problems for such an approach. But in 1978, Peter Gärdenfors [1978] initiated a new line of inquiry focused on the use of conditionals to represent policies for belief revision. Thus, two main lines of development appeared, one an ontological approach concerned with truth or assertability conditions for conditionals and the other an epistemological approach focused on conditionals and change of belief.

With these two major lines of development, the material which has appeared on conditionals is prodigious. Consequently, we have had to focus upon certain aspects of conditional logic and to give other aspects less attention. We have followed the trend set in the literature and given the most attention to the analysis of so-called subjunctive conditionals as they are used in ordinary discourse and to triviality results for the Ramsey test. Accordingly, our discussion of conditionals and belief revision will be more heavily technical than our discussion of subjunctive conditionals. Other topics are discussed in less detail. Some of the important papers which it has not been possible to review are included in the accompanying bibliography, but the bibliography itself is far from complete.

1 ONTOLOGICAL CONDITIONALS

1.1 Introduction

Conditional logic is, in the first place, concerned with the investigation of the logical and semantical properties of a certain class of sentences occurring in a natural language. We will draw our examples from English, but much of what we have to say can be applied, with due caution, to other natural languages. Paradigmatically, a conditional declarative sentence in English is one which contains the words `if' and `then'. Examples include

1. If it is raining, then we are taking a taxi.

and

2. If I were warm, then I would remove my jacket.

We could delete the occurrences of `then' in (1) and (2) and we would still have perfectly acceptable sentences of English. In the case of (2), we can omit both `if' and `then' if we change the word order. Example (2) surely says the same thing as

3. Were I warm, I would remove my jacket.

Other conditionals in which neither `if' nor `then' occurs include

4. When I find a good man, I will praise him.

and

5. You will need my number should you ever wish to call me.

Notice that all of these examples involve two component sentences or clauses, one expressing some sort of condition and another expressing some sort of claim which in some way depends upon the condition. The conditional or `if' part of a conditional sentence is called the antecedent, and the main or `then' part its consequent, even when `if' and `then' do not actually occur. Notice that the antecedent precedes the consequent in (1)-(4), but the consequent comes first in (5). These examples should give the reader a fair idea of the types of sentences with which conditional logic is concerned.

While the verbs in (1) are in the indicative mood, those in (2) are in the subjunctive mood. Researchers often rephrase (2), forming a new conditional in which the verbs contained in antecedent and consequent are in the indicative mood. This practice implicitly assumes that (2) has the same content as

6. If it were the case that I am warm, then it would be the case that I remove my jacket.

Even without the rephrasing, it is sometimes said that `I am warm' is the antecedent of both (2) and (6).

1 Another suggestion which has never been fully developed (but see Hunter [1980; 1982]) is that an adequate theory of ordinary conditionals may be derived from relevance logic. We will say no more about this suggestion than that it seems to us that conditional logic and relevance logic are concerned with very different problems, and it would be a tremendous coincidence if the correct logic for the conditionals of ordinary usage should turn out to resemble some version of relevance logic at all closely.
Thus the mood of the verbs in the grammatical antecedent and consequent of (2) is taken logically to be a component of the conditional construction, while the logical antecedent and consequent are viewed as containing verbs in the indicative mood. Seen in this way, the conditional constructions in (1) and (2) look quite different, and investigators have as a consequence made a distinction between indicative conditionals like (1) and subjunctive conditionals like (2). This distinction is important because it appears that these two kinds of conditionals have different logical and semantical properties.

Much of the work done in conditional logic has focused on conditionals having antecedents and consequents which are false. Such conditionals are called counterfactuals. In actual practice, little distinction is made between counterfactuals and subjunctive conditionals which have true antecedents or consequents. Authors frequently refer to conditionals in the subjunctive mood as counterfactuals regardless of whether their antecedents or consequents are true or false. Another special kind of conditional is the so-called counterlegal conditional, whose antecedent is incompatible with physical law. An example is

7. If the gravitational constant were to take on a slightly higher value in the immediate vicinity of the earth, then people would suffer bone fractures more frequently.

Also recognized are counteridenticals like

8. If I were the pope, I would support the use of the pill in India.

and countertemporals like

9. If it were 3.00 a.m., it would be dark outside.

Analysis of these special conditionals may involve special difficulties, but we can say very little about these special problems in a paper of this length.

Two other interesting conditional constructions are the even-if construction used in

10. It would rain even if the shaman did not do his dance.

and the might construction used in

11. If you don't take the umbrella, you might get wet.

We might paraphrase (10) using the word `still' to get

12. It would still rain if the shaman did not do his dance.

Even-if and might conditionals have somewhat different properties from those of other conditionals.
It is believed by many, though, that these two kinds of conditionals can be analyzed in terms of subjunctive conditionals once we have an acceptable analysis of these. The strategy in this paper will be to concentrate on the many proposals for subjunctive conditionals, returning later (briefly) to the topics of indicative, even-if and might conditionals.

We will use two different symbols to represent indicative and subjunctive conditionals. For indicative conditionals we will use the double arrow ⇒, and for the subjunctive conditional we will use the corner >. (Where context makes our intention clear, we will sometimes use symbols and formulas autonymously to refer to themselves.) With these devices we may represent (1) as

13. It is raining ⇒ I am taking a taxi.

and represent (2) as

14. I am warm > I remove my jacket.

Frequently we will have no particular antecedent or consequent in mind as we discuss one or the other of these two kinds of conditionals and as we examine forms which arguments involving these conditionals may take. In these cases we will use standard notation for classical first-order logic augmented by our symbols for indicative and subjunctive conditionals to represent the forms of sentences and arguments under discussion. We assume, as have nearly all investigators, that conditionals have truth values and may therefore appear as arguments for truth-functional operators.

Students in introductory symbolic logic courses are normally taught to treat English conditionals as material conditionals. By material conditionals we mean certain truth-functional compounds of simpler sentences. A material conditional φ → ψ is true just in case φ is false or ψ is true. There can be little doubt that neither material implication nor any other truth function can be used by itself to provide an adequate representation of the logical and semantical properties of English conditionals or, presumably, the conditionals of any other language. Consider the following two examples.

15. If I were seven feet tall, then I would be over two meters tall.

16. If I were seven feet tall, then I would be less than two yards tall.
In fact one of the authors is more than two yards tall but less than two meters tall, so for him the common antecedent and the two consequents of (15) and (16) are all false. Yet surely (15) is true while (16) is false. When both the antecedent and the consequent of an English subjunctive conditional are false, the conditional may be either true or false. Now consider two more examples.

17. If I were eight feet tall, I would be less than seven feet tall.

18. If I were seven feet tall, I would be over six feet tall.

Here we have two conditionals each of which has a false antecedent and a true consequent, but the first of these conditionals is false and the second is true. The moral of these examples is that when the antecedent of an English subjunctive conditional is false, the truth value of the conditional is not determined by the truth values of the antecedent and the consequent of the conditional alone. Some other factors must be involved in determining the truth values of such conditionals.

But what about English conditionals with true antecedents? It is generally accepted that any conditional with a true antecedent and a false consequent is false, but the situation is more controversial where conditionals with true antecedents and true consequents are concerned. Some researchers have maintained that all such conditionals are true while others have claimed that such conditionals are sometimes false. Later we will consider some of the issues involved in this controversy. For now we simply recognize that there are some very good reasons for rejecting the view that all English conditionals can be represented adequately by material implication or by any other truth function.
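The point about truth-functionality can be made concrete with a small sketch (our own illustration, not from the text): the material conditional is implemented as a truth function, and since (15) and (16) present the very same pair of truth values, any truth function is forced to assign them the same value, contrary to intuition.

```python
def material(antecedent: bool, consequent: bool) -> bool:
    """The material conditional: true just in case the antecedent is
    false or the consequent is true."""
    return (not antecedent) or consequent

# For the author in question, "I am seven feet tall", "I am over two
# meters tall", and "I am less than two yards tall" are all false, so
# (15) and (16) hand the same input to any truth function.
value_15 = material(False, False)  # intuitively true
value_16 = material(False, False)  # intuitively false
print(value_15, value_16)
```

Both calls return the same value, so no truth-functional connective can register the intuitive difference between (15) and (16).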

1.2 Cotenability theories of conditionals

Chisholm [1946], Goodman [1955], Sellars [1958], Rescher [1964] and others have proposed accounts of conditionals which share some important features. Borrowing a term from Goodman, we can call these proposals cotenability theories of conditionals. The basic idea which these proposals share is that the conditional φ > ψ is true in case φ, together with some set of laws and true statements, entails ψ. A crucial problem for such an analysis is that of determining the appropriate set of true statements to involve in the truth condition for a particular conditional. If the antecedent of the conditional is false, then of course its negation is true. But any proposition together with its negation will entail anything. The set of true statements upon which the truth of the conditional is to depend must at least be logically compatible with the antecedent of the conditional, or the conditional will turn out to be trivially true on such an account. But logical compatibility is not enough either. We can have a true proposition χ such that χ and φ are logically compatible but such that φ > ¬χ is also true. Then we should not wish to include χ in the set of propositions upon which the evaluation of φ > ψ depends. Goodman said of such a χ that it is not cotenable with φ. So Goodman's ultimate position is that φ > ψ is true just in case ψ is entailed by φ together with the set of all physical laws and the set of all true propositions cotenable with φ, i.e. with the set of all true propositions such that no member of that set counterfactually implies the negation of φ and the negation of no member of that set is counterfactually implied by φ.

Such an account is obviously circular, since the truth conditions for counterfactuals are given in terms of cotenability, while cotenability is defined in terms of the truth values of various counterfactual conditionals. Although this is certainly a serious problem, it is not the only problem which theories of this type encounter. As a result of the role which law plays in such a theory, all counterlegal conditionals are counted as trivially true, and this is counterintuitive. Furthermore, even if we could provide a noncircular account of cotenability, another problem arises for conditionals which are not counterlegal. Suppose two true propositions χ and θ are each cotenable with φ, but that χ ∧ θ is not. In selecting the set of propositions upon which the evaluation of φ > ψ shall rest we must omit either χ or θ, since otherwise our conditional will be trivially true once again. But which of these two propositions shall we omit?

Most recent work in conditional logic is compatible with cotenability theory even though no attempt is made to define and use the notion of cotenability. We might view the resultant theories at least in part as attempts to determine, without ever specifying exactly what cotenability is, the logical and semantical properties which conditionals must have if the cotenability approach is essentially correct for conditionals without counterlegal antecedents. Indeed, the vagueness deliberately built into many of these recent theories suggests that our notion of cotenability, if we have one, varies according to our purposes and the context in which we use a conditional.2

1.3 Strict Conditionals

We have seen that the truth value of a conditional is not always determined by the actual truth values of its antecedent and consequent, but perhaps it is determined by the truth values which its antecedent and consequent take in some other possible worlds. One way such an analysis might be developed is suggested by the role laws play in the cotenability theories. Perhaps we should look not only at the truth values of the antecedent and the consequent in the actual world, but also at their truth values in all possible worlds which have the same laws as does our own. When two worlds obey the same physical laws, we can say that each is a physical alternative of the other. The proposal, then, is that φ > ψ is true if ψ is true at every physical alternative to the actual world at which φ is true.

Suppose we say a proposition is physically necessary if and only if it is true at every physical alternative to the actual world, and suppose we express the claim that a proposition φ is physically necessary by □φ. Then the proposal we are considering is that the following equivalence always holds:

19. (φ > ψ) ↔ □(φ → ψ).

Another way of arriving at (19) is the following. English subjunctive conditionals are not truth-functional because they say more than that the antecedent is false or the consequent is true. The additional content is a claim that there is some sort of connection between the antecedent and the consequent. The kind of connection which seems to occur to people most readily in this context is a physical or causal connection. How can we represent this additional content in our formalization of English subjunctive conditionals? One way is to interpret φ > ψ as involving the claim that it is physically impossible that φ be true and ψ false. Once again we come up with (19). A proposal resembling the one we have outlined can be found in [Burks, 1951], although we do not wish to suggest that Burks arrived at his account by exactly the same line of reasoning as we have suggested.

We can generalize the proposal represented by (19). We might suppose that the basic form of (19) is correct but that the sort of necessity involved in English subjunctive conditionals is not pure physical necessity. One reason for suspecting this is that the notion of cotenability has been ignored. It is not simply a consequence of physical law that Jane would develop hives if she were to eat strawberries; it is also in part a consequence of her having a particular physical make-up. In evaluating the claim that Jane would become ill if she were to eat strawberries, we do not count the fact that in some worlds which share the same physical laws as our own but in which Jane has a radically different physical make-up, she is able to eat strawberries with impunity, as a legitimate reason for rejecting this claim.

2 Bennett [1974] and Loewer [1978] arrive at opposite conclusions concerning the question whether Lewis's semantics is compatible with cotenability theory. Their discussions are instructive for other semantics as well.
Another reason for seeking a different kind of necessity for the analysis of conditionals is that some conditionals may be true because of connections between their antecedents and consequents which are not physical connections at all. Consider, for example, conditionals such as `If you deserted your family you would be a cad', which seems to be founded on normative rather than physical connections.

The general theory we are considering, then, is that English subjunctive conditionals are strict conditionals of some sort, i.e. that their logical form is given by the equivalence (19). There remains the problem of determining which kind of necessity is involved in these conditionals. Regardless of the kind of necessity we choose in such an analysis of conditionals, we should expect our modal logic to have certain minimal properties. By a modal logic we mean any set L of sentences formed from the symbols of classical sentential logic together with the symbol □ in the usual ways, provided that L contains all tautologies and is closed under the rule modus ponens. We should expect that for any tautology φ our modal logic will contain □φ. We should also expect our modal logic to contain all substitution instances of the following thesis:

20. □(φ → ψ) → (□φ → □ψ).

But when we define our conditionals according to (19), our logic will then also contain all substitution instances of the following theses:

Transitivity: [(φ > ψ) ∧ (ψ > χ)] → (φ > χ)
Contraposition: (φ > ¬ψ) → (ψ > ¬φ)
Strengthening Antecedents: (φ > ψ) → [(φ ∧ χ) > ψ].

But none of these theses seems to be reliable for English subjunctive conditionals. As a counterexample to Transitivity, consider the following conditionals:

21. If Carter had not lost the election in 1980, Reagan would not have been President in 1981.

22. If Carter had died in 1979, he would not have lost the election in 1980.

23. If Carter had died in 1979, Reagan would not have been President in 1981.

(21) and (22) are true, but it is far from clear that (23) is true. As a counterexample to Contraposition, consider:

24. If it were to rain heavily at noon, the farmer would not irrigate his field at noon.

25. If the farmer were to irrigate his field at noon, it would not rain heavily at noon.

And finally, for Strengthening Antecedents, consider:

26. If the left engine were to fail, the pilot would make an emergency landing.

27. If the left engine were to fail and the right wing were to shear off, the pilot would make an emergency landing.

Since even very weak modal logics will contain all substitution instances of these three theses, and since most speakers of English find counterexamples of the sort we have considered convincing, most investigators are convinced that English conditionals are not a variety of strict conditional.
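Why the strict analysis is stuck with these theses can be checked mechanically. The sketch below (our own toy construction: a hypothetical four-world frame with every world accessible from every world) reads φ > ψ as □(φ → ψ), i.e. ψ holds at every world at which φ holds, and verifies by brute force over all valuations that Strengthening Antecedents can then never fail, which is exactly why counterexamples like (26)-(27) tell against the strict reading.

```python
from itertools import product

WORLDS = range(4)  # a toy frame: four mutually accessible worlds

def strict(p, q):
    """phi > psi read as box(phi -> psi): q holds at every world where p holds."""
    return all(q[w] for w in WORLDS if p[w])

# Check every valuation of phi, psi, chi over the four worlds.
vals = list(product([False, True], repeat=4))
for phi, psi, chi in product(vals, repeat=3):
    phi_and_chi = tuple(a and b for a, b in zip(phi, chi))
    if strict(phi, psi):
        # The (phi and chi)-worlds are a subset of the phi-worlds, so the
        # consequent still holds at all of them: the thesis cannot fail.
        assert strict(phi_and_chi, psi)
print("Strengthening Antecedents held in all", len(vals) ** 3, "valuations")
```

The same exhaustive check can be run for Transitivity and Contraposition; on the strict reading all three come out valid in every frame.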

CONDITIONAL LOGIC

9

1.4 Minimal Change Theories

While treating ordinary conditionals as strict conditionals does not seem too promising, investigators have still found the possible worlds semantics often associated with modal logic very attractive. The basic intuition, that a conditional is true just in case its consequent is true at every member of some set of worlds at which its antecedent is true, may yet be salvageable. We can avoid Transitivity, etc., if we allow that the set of worlds involved in the truth conditions for different conditionals may be different. But we do not wish to allow that this set of worlds be chosen arbitrarily for a given conditional. Stalnaker [1968] proposes that the conditional φ > ψ is true just in case ψ is true at the world most like the actual world at which φ is true. According to Stalnaker, in evaluating a conditional we add the antecedent of the conditional to our set of beliefs and modify our set of beliefs as little as possible in order to accommodate the new belief tentatively adopted. Then we consider whether the consequent of the conditional would be true if this revised set of beliefs were all true. In the ideal case, we would have a belief about every single matter of fact before and after this operation of adding the antecedent of the conditional to our stock of beliefs. Possible worlds correspond to these epistemically ideal situations. Stalnaker's assumption, then, is that at least when the antecedent of a conditional is logically possible, there is always a unique possible world at which the antecedent is true and which is more like the actual world than is any other world at which the antecedent is true. We will call this Stalnaker's Uniqueness Assumption. On some fairly reasonable assumptions about the notion of similarity of worlds, Stalnaker's truth conditions generate a very interesting logic for conditionals.
Essentially these assumptions are that any world is more similar to itself than is any other world, that the φ-world closest to world i (that is, the world at which φ is true which is more similar to i than is any other world at which φ is true) is always at least as close as the (φ ∧ ψ)-world closest to i, and that if the φ-world closest to i is a ψ-world and the ψ-world closest to i is a φ-world, then the φ-world closest to i and the ψ-world closest to i are the same world. The model theory Stalnaker develops is complicated by his use of the notion of an absurd world, a world at which every sentence is true. This invention is motivated by the need to provide truth conditions for conditionals with impossible antecedents. Stalnaker's semantics can be simplified by omitting this device and adjusting the rest of the model theory accordingly. When we do this, we produce what could be called simplified Stalnaker models. Such a model is an ordered quadruple ⟨I, R, s, [ ]⟩ where I is a set of possible worlds, R is a binary reflexive (accessibility) relation on I, s is a partial world selection function which, when defined, assigns to a sentence φ and a world i in I a world s(φ, i) (the φ-world closest to i), and [ ] is a
function which assigns to each sentence φ a subset [φ] of I (all those worlds in I at which φ is true). Stalnaker's assumptions about the similarity of worlds become a set of restrictions on the items of these models:

(S1) s(φ, i) ∈ [φ];

(S2) ⟨i, s(φ, i)⟩ ∈ R;

(S3) if s(φ, i) is not defined then for all j ∈ I such that ⟨i, j⟩ ∈ R, j ∉ [φ];

(S4) if i ∈ [φ] then s(φ, i) = i;

(S5) if s(φ, i) ∈ [ψ] and s(ψ, i) ∈ [φ], then s(φ, i) = s(ψ, i);

(S6) i ∈ [φ > ψ] if and only if s(φ, i) ∈ [ψ] or s(φ, i) is undefined.
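Restrictions (S1)-(S6) are concrete enough to experiment with on a finite model. The sketch below is a hypothetical illustration, not part of the text: the worlds, the valuation, and the selection function are all invented. It encodes a three-world simplified Stalnaker model built around the Carter examples above and uses clause (S6) to evaluate conditionals, exhibiting the failure of Transitivity that the selection-function semantics is designed to permit.

```python
# A finite "simplified Stalnaker model": worlds, a partial world selection
# function s, and sentences identified with the sets of worlds at which
# they hold.  All names below are illustrative.

def make_model():
    worlds = {"w0", "w1", "w2"}     # w0 plays the role of the actual world
    # Atomic sentences as sets of worlds (the function [ ] of the text):
    A = {"w2"}                      # "Carter died in 1979"
    B = {"w1", "w2"}                # "Carter did not lose in 1980"
    C = {"w1"}                      # "Reagan was not President in 1981"
    # Partial selection function: s[(antecedent, world)] = closest antecedent-world.
    s = {
        (frozenset(A), "w0"): "w2",  # closest A-world to w0
        (frozenset(B), "w0"): "w1",  # closest B-world to w0
    }
    return worlds, s, A, B, C

def conditional(s, ant, cons, i):
    """phi > psi is true at i iff s(phi, i) is undefined or lies in [psi] (S6)."""
    sel = s.get((frozenset(ant), i))
    return sel is None or sel in cons

worlds, s, A, B, C = make_model()
p21 = conditional(s, B, C, "w0")  # if Carter had not lost, Reagan not President
p22 = conditional(s, A, B, "w0")  # if Carter had died, he would not have lost
p23 = conditional(s, A, C, "w0")  # if Carter had died, Reagan not President
print(p21, p22, p23)              # Transitivity fails: True True False
```

The selection function, not any global accessibility relation, does the work: the closest A-world and the closest B-world to w0 simply differ.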

Until otherwise indicated, we will understand by a conditional logic any set L of sentences which can be constructed from the symbols of classical sentential logic together with the symbol >, provided that L contains all tautologies and is closed under the inference rule modus ponens. The conditional logic determined by Stalnaker's model theory is the smallest conditional logic which is closed under the two inference rules

RCEC: from ψ ↔ χ, to infer (φ > ψ) ↔ (φ > χ)

RCK: from (ψ₁ ∧ … ∧ ψₙ) → χ, to infer [(φ > ψ₁) ∧ … ∧ (φ > ψₙ)] → (φ > χ), n ≥ 0

and which contains all substitution instances of the theses

ID: φ > φ

MP: (φ > ψ) → (φ → ψ)

MOD: (¬φ > φ) → (ψ > φ)

CSO: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) ↔ (ψ > χ)]

CV: [(φ > ψ) ∧ ¬(φ > ¬χ)] → [(φ ∧ χ) > ψ]

CEM: (φ > ψ) ∨ (φ > ¬ψ)

Together with modus ponens and the set of tautologies, these rules and theses can be viewed as an axiomatization of Stalnaker's logic, which he calls C2. While Stalnaker supplies a rather different axiomatization for C2, these rules and theses enjoy the advantage that they allow easy comparison of C2 with other conditional logics. Several of these rules and theses are due to Chellas [1975]. It can be shown that a sentence is a member of C2 if and only if that sentence is true at every world in every simplified
Stalnaker model. Thus we say that the class of simplified Stalnaker models determines or characterizes the conditional logic C2. None of Transitivity, Contraposition, and Strengthening Antecedents is contained in C2. A variation of the semantics developed by Stalnaker treats the function s as taking sets of worlds rather than sentences as arguments and values. In this variation, s is a function which assigns to each subset A of I and each member i of I a subset s(A, i) of I. Then φ > ψ will be true at i just in case s([φ], i) ⊆ [ψ]. By setting our semantics up in this way, we ensure that we can substitute one antecedent for another in a conditional provided that the two antecedents are true at exactly the same worlds, and we can do this without any additional restrictions on the function s. Since many authors have called sets of worlds propositions, we could call Stalnaker's original semantics a sentential semantics and the present variation on Stalnaker's semantics a propositional semantics to represent this difference in the kind of argument the function s takes. As we look at alternatives to Stalnaker's semantics we will always consider the sentential forms of these semantics although equivalent propositional forms will often be available. Equivalence of the two versions of a particular semantics is guaranteed so long as the conditional logic characterized by the sentential version is closed under substitution of provable equivalents, i.e. so long as it is closed under both RCEC and

RCEA: from φ ↔ ψ to infer (φ > χ) ↔ (ψ > χ).

C2 is closed under RCEA as is any conditional logic closed under RCK and containing all substitution instances of CSO. The difference between sentential and propositional formulations of a particular kind of model theory becomes important if we wish to consider conditional logics which are not closed under RCEA. Reasons for considering such `non-classical' logics are discussed in Section 1.7 below.
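The point about substitution of co-extensional antecedents can be seen in a small computational sketch (our own, with invented worlds and an invented similarity ranking): when the selection function takes a set of worlds as its argument, two antecedents with the same extension cannot be told apart, so RCEA-style substitution holds with no further restrictions on s.

```python
# Propositional variant of Stalnaker's semantics: the selection function
# takes a SET of worlds (a proposition) as its first argument, so any two
# antecedents true at exactly the same worlds are interchangeable.
# Finite toy model; all names are illustrative.

def s(prop, i):
    """Pick the 'closest' world in prop to i -- here just a fixed ranking."""
    for w in ("w0", "w1", "w2"):     # a hypothetical similarity ordering from i
        if w in prop:
            return w
    return None                      # undefined for the empty proposition

def cond(ant_prop, cons_prop, i):
    sel = s(frozenset(ant_prop), i)
    return sel is None or sel in cons_prop

# Two syntactically different antecedents with the same extension:
ext1 = {"w1", "w2"}      # e.g. [phi]
ext2 = {"w2", "w1"}      # e.g. [psi], where phi and psi are co-extensional
Q = {"w1"}

print(cond(ext1, Q, "w0") == cond(ext2, Q, "w0"))   # True, automatically
```

Because the function only ever sees the extension, no restriction analogous to CSO is needed to secure this substitution property.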
For parallel development of sentential and propositional versions of certain kinds of model theories for conditional logics, see [Nute, 1980b]. Lewis [1973b; 1973c] questions Stalnaker's assumptions about the similarity of worlds and thus his semantics for conditionals. It is Stalnaker's Uniqueness Assumption which Lewis rejects. Lewis argues that there may be no unique φ-world which is closer to i than is any other φ-world. As an example, Lewis asks us to consider a straight line printed in a book and to suppose that this line were longer than it is. No matter what greater length we choose for the line, there is a shorter length which is still greater than the actual length of the line. The conclusion is that worlds which differ from the actual world only in the length of the sample line may be more and more like the actual world as the length of the line in those worlds comes closer to the line's actual length. But none of these worlds is the closest world at which the line is longer. In fact, examples of this sort can also be offered against an assumption about similarity of worlds which is
weaker than Stalnaker's Uniqueness Assumption. This assumption, which Lewis calls the Limit Assumption, is that, at least for a sentence φ which is logically possible, there is always at least one φ-world which is as much like i as is any other φ-world. Both the Uniqueness Assumption and the weaker Limit Assumption are highly suspect. If we follow Lewis's advice and drop the Uniqueness Assumption, we must give up Conditional Excluded Middle (CEM). But this is exactly the feature of Stalnaker's logic which is most often cited as objectionable. Both disjuncts in CEM will be true if φ is impossible and hence s is not defined for φ and the actual world. On the other hand, if φ is possible, then ψ must be either true or false at the nearest φ-world. Lewis ([1973b], p. 80) offers the following as a counterexample to CEM:

28a It is not the case that if Bizet and Verdi were compatriots, Bizet would be Italian; and it is not the case that if Bizet and Verdi were compatriots, Bizet would not be Italian; nevertheless, if Bizet and Verdi were compatriots, Bizet either would or would not be Italian.

Lewis [1973b] admits that (28a) sounds, offhand, like a contradiction, but he insists that the cost of respecting this offhand opinion is too high:

However little there is to choose for closeness between worlds where Bizet and Verdi are compatriots by both being Italian and worlds where they are compatriots by both being French, the selection function still must choose. I do not think it can choose--not if it is based entirely on comparative similarity, anyhow. Comparative similarity permits ties, and Stalnaker's selection function does not.3

Van Fraassen [1974] has employed the notion of supervaluation in defense of CEM. The suggestion is that in actual practice we do not depend upon a single world selection function s in evaluating conditionals. Instead we consider a number of different ways in which we might measure the similarity of worlds, each with its appropriate world selection function.
3 [Lewis, 1973b], p. 80.

Each world selection function provides a way of evaluating conditionals. A sentence can also have the property that it is true regardless of which world selection function we use. We can call such a sentence supertrue. If we accept Stalnaker's semantics together with a multiplicity of world selection functions, it turns out that every instance of CEM is supertrue even though it may be the case that neither disjunct of some instance of CEM is supertrue. In fact, all the members of C2 are supertrue when we apply Van Fraassen's method of supervaluation, and the method mandates the following reinterpretation of the Bizet-Verdi example:

28b `If Bizet and Verdi were compatriots, Bizet would be Italian' is not supertrue; and `If Bizet and Verdi were compatriots, Bizet would not be Italian' is not supertrue; nevertheless, `If Bizet and Verdi were compatriots, Bizet either would or would not be Italian' is supertrue. (The relevant instance of CEM is also supertrue: `Either Bizet would be Italian if Bizet and Verdi were compatriots, or Bizet would not be Italian if Bizet and Verdi were compatriots.')

In the Bizet-Verdi example, what Lewis accounts for as a tie in comparative world similarity, the method of supervaluation accounts for as a case of indeterminacy in the choice of a closest compatriot-world. Lewis [1973b] admits that offhand opinion seems to favor CEM, but Stalnaker [1981a] shows that there is systematic intuitive evidence for CEM: the apparent absence of scope ambiguities in conditionals where Lewis' theory predicts we should find them. Consider the following dialogue (see [Stalnaker, 1981a], pp. 93-95):

X: President Carter has to appoint a woman to the Supreme Court.

Y: Who do you think he has to appoint?

X: He doesn't have to appoint any particular woman; he just has to appoint some woman or other.

There is a clear scope ambiguity in X's statement, and this scope ambiguity explains why X's response to Y makes sense: Y reads X as having intended `a woman' to have wide scope, and X's response corrects Y by making it clear that X intended `a woman' to have narrow scope. Now compare this dialogue to another, in which necessity is replaced by the past-tense operator:

X: President Carter appointed a woman to the Supreme Court.

Y: Who do you think he appointed?

X: He didn't appoint any particular woman; he just appointed some woman or other.

In this case X's response does not make sense. There is no semantically distinct narrow scope reading that X could have had in mind, so there is no room for Y to have misunderstood X's statement. Finally, consider a dialogue involving a conditional instead of a necessity or past tense statement:

X: President Carter would have appointed a woman to the Supreme Court last year if there had been a vacancy.

Y: Who do you think he would have appointed?

X: He wouldn't have appointed any particular woman; he just would have appointed some woman or other.

If Lewis' analysis of counterfactuals is correct, then in this dialogue, as in the first dialogue, one should perceive an ambiguity in the scope of `a woman' in X's statement, and X's response should make sense as a correction of Y's misinterpretation. In fact there is no room for Y to have misunderstood X's statement, and X's response simply doesn't make sense. In this respect, the third dialogue parallels the second dialogue, not the first, and the apparent lack of a scope ambiguity in X's statement in the third dialogue is evidence for CEM.4 If Stalnaker's example does not convince one to accept CEM, it is quite possible to formulate a logic and a semantics for conditionals which resembles Stalnaker's but which does not include CEM. Lewis [1971; 1973b; 1973c] suggests more than one way of doing this. The first way is to replace the Uniqueness Assumption with the weaker Limit Assumption. Instead of looking at the closest antecedent-world, we look at all closest antecedent-worlds. These functions might better be called class selection functions rather than world selection functions. It is also not necessary to incorporate the accessibility relation into our models for conditionals if we use class selection functions since, if we make a certain reasonable assumption, we can define such a relation in terms of our class selection function. The assumption is that if φ is possible at all at i, then there is at least one closest φ-world for our selection function to pick out. Our models are then ordered triples ⟨I, f, [ ]⟩ such that I and [ ] are as before and f is a function which assigns to each sentence φ and each world i in I a subset of I (all the φ-worlds closest to i). By restricting these models appropriately, we can characterize a logic very similar to Stalnaker's C2.
This logic, which Lewis calls VC, is the smallest conditional logic which is closed under the same rules as those listed for C2 and which contains all those theses used in defining C2 except that we replace CEM with

CS: (φ ∧ ψ) → (φ > ψ).

CS is contained by C2 although CEM is not contained by VC.5

4 A different sort of argument for CEM can be found in [Cross, 1985], which adopts Bennett's [1982] analysis of `even if' conditionals and argues for the validity of CEM based on the intuitive validity of the following formulas: (φ e> ψ) → (φ > ψ) and (ψ ∧ ¬(φ > ¬ψ)) → (φ e> ψ), where (φ e> ψ) means `Even if φ, ψ'. The argument turns on the fact that in any system of conditional logic that includes classical propositional logic and RCEC, CEM is a theorem iff (ψ ∧ ¬(φ > ¬ψ)) → (φ > ψ) is a theorem.

5 This and other independence results cited in this paper are provided in Nute [1979; 1980b].

A sentence
is a member of VC if and only if it is true at every world in every class selection function model which satisfies the following restrictions:

(CS1) if j ∈ f(φ, i) then j ∈ [φ];

(CS2) if i ∈ [φ] then f(φ, i) = {i};

(CS3) if f(φ, i) is empty then f(ψ, i) ∩ [φ] is also empty;

(CS4) if f(φ, i) ⊆ [ψ] and f(ψ, i) ⊆ [φ], then f(φ, i) = f(ψ, i);

(CS5) if f(φ, i) ∩ [ψ] ≠ ∅, then f(φ ∧ ψ, i) ⊆ f(φ, i);

(CS6) i ∈ [φ > ψ] iff f(φ, i) ⊆ [ψ].
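To see how replacing a world selection function by a class selection function gives up CEM, consider the following toy model (our illustration; the Bizet-Verdi worlds and the tie in f are invented). With two equally close antecedent-worlds, clause (CS6) makes both disjuncts of the relevant CEM instance false.

```python
# Class selection function semantics: f(phi, i) is the SET of closest
# phi-worlds (illustrative finite model; names are made up).

COMPAT = frozenset({"wI", "wF"})   # worlds where Bizet and Verdi are compatriots
ITALIAN = {"wI"}                   # worlds where Bizet is Italian
NOT_ITALIAN = {"w0", "wF"}         # worlds where Bizet is not Italian

f = {(COMPAT, "w0"): {"wI", "wF"}}   # a tie: two equally close worlds

def cond(f, ant, cons, i):
    """phi > psi true at i iff f(phi, i) is a subset of [psi]  (CS6)."""
    closest = f.get((frozenset(ant), i), set())
    return closest <= set(cons)

cem_instance = cond(f, COMPAT, ITALIAN, "w0") or cond(f, COMPAT, NOT_ITALIAN, "w0")
print(cem_instance)   # False: with ties allowed, CEM is no longer valid
```

In the single-world Stalnaker models the value of f is always a singleton, so one of the two disjuncts must hold; it is precisely the ties that break CEM.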

Although Lewis endorses VC as the proper logic for subjunctive conditionals, he finds the Limit Assumption and, hence, the version of class selection function semantics we have developed, to be no more satisfactory than the Uniqueness Assumption. Consequently, Lewis proposes an alternative semantics for subjunctive conditionals. This alternative is also based on the similarity of worlds. The difference is in the way Lewis uses similarity in giving the truth conditions for conditionals. A conditional φ > ψ with a logically possible antecedent is true at a world i, according to Lewis, if there is a (φ ∧ ψ)-world which is closer to i than is any (φ ∧ ¬ψ)-world. Lewis uses nested systems of spheres in his models to indicate the relative similarity of worlds. A system-of-spheres model is an ordered triple ⟨I, $, [ ]⟩ such that I and [ ] are as before and $ is a function which assigns to each i in I a nested set $ᵢ of subsets of I (the spheres about i). If there is some sphere S about i such that j is in S but k isn't in S, then j is closer to or more similar to i than is k. To characterize the logic VC, we must adopt the following restrictions on system-of-spheres models:

(SOS1) {i} ∈ $ᵢ;

(SOS2) i ∈ [φ > ψ] if and only if ∪$ᵢ ∩ [φ] is empty or there is an S ∈ $ᵢ such that S ∩ [φ] is not empty and S ∩ [φ] ⊆ [ψ].
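Clause (SOS2) can likewise be checked mechanically on a finite system of spheres (a made-up model, for illustration only): a conditional is true iff either no sphere contains an antecedent-world, or some sphere contains an antecedent-world and all of its antecedent-worlds are consequent-worlds.

```python
# System-of-spheres evaluation (SOS2) in a finite model.  Spheres, worlds,
# and sentence extensions below are all invented for illustration.

spheres = [{"w0"}, {"w0", "w1"}, {"w0", "w1", "w2"}]   # $ about w0, nested
P = {"w1", "w2"}       # [phi]
Q = {"w1"}             # [psi]

def sos_cond(spheres, ant, cons):
    """(SOS2): true iff no sphere meets [phi], or some sphere S meets [phi]
    with S intersect [phi] contained in [psi]."""
    union = set().union(*spheres)
    if not (union & set(ant)):
        return True                      # vacuous: antecedent inaccessible
    return any(S & set(ant) and S & set(ant) <= set(cons) for S in spheres)

print(sos_cond(spheres, P, Q))   # True: the smallest P-permitting sphere
                                 # is {w0, w1}, and w1 is a Q-world
```

Because the spheres are nested, the existential clause in (SOS2) amounts, in a finite model, to inspecting the smallest sphere that admits the antecedent.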

While Lewis rejects the Limit Assumption, it should be noted that in those cases in which there is a closest φ-world to i the conditions for a conditional with antecedent φ being true at i are exactly the same for system-of-spheres models as for the type of class selection function model we examined earlier. For this reason we classify Lewis's semantics as a minimal change semantics to contrast it with other accounts which lack this feature. Pollock [1976] also develops a minimal change semantics for conditionals. In fact, Pollock's semantics is a type of class selection function semantics. There are two primary reasons why Pollock rejects Lewis's semantics and the
conditional logic VC. First, Pollock rejects the thesis CV, a thesis which is unavoidable in Lewis's semantics. Second, Pollock embraces the Generalized Consequence Principle:

GCP: If Γ is a set of sentences such that φ > ψ is true for each ψ ∈ Γ, and if Γ entails χ, then φ > χ is true.
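Here is a small finite-model illustration (ours; the worlds and extensions are invented) of why GCP is safe in class selection function semantics: every conditional φ > ψ with ψ ∈ Γ is evaluated at the single set f(φ, i), so whatever Γ entails in the model holds throughout that set as well.

```python
# Why GCP holds under class selection semantics: every member of Gamma is
# true throughout the single set f(phi, i), so anything true at all worlds
# where Gamma holds is true there too.  Finite toy model, invented names.

WORLDS = {"w0", "w1", "w2"}
f_phi_i = {"w1", "w2"}                      # f(phi, i)
gamma = [{"w1", "w2"}, {"w0", "w1", "w2"}]  # extensions of the members of Gamma
chi = {"w1", "w2"}                          # in this model, entailed by Gamma

# phi > psi holds for each psi in Gamma:
assert all(f_phi_i <= ext for ext in gamma)
# Gamma entails chi in the model: every world satisfying all of Gamma is a chi-world
gamma_worlds = set.intersection(*gamma)
assert gamma_worlds <= chi
# Hence phi > chi holds as well:
print(f_phi_i <= chi)   # True
```

This is a demonstration on one model, not a proof; the general argument runs through the same two subset facts for an arbitrary class selection function model.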

GCP does not hold in all system-of-spheres models, but it does hold in all class selection function models.6 The conditional logic SS which Pollock favors is the smallest conditional logic closed under the rules listed for VC and containing all those theses used in defining VC except that we replace CV with

CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ].

This again gives us a weaker system since CA is contained by VC while CV is not contained by SS. Obviously, SS is not determined by the class of class selection function models which satisfy conditions (CS1)-(CS6) since this class of models characterizes the logic VC. Let's replace the condition (CS5) with

(CS5′) f(φ ∨ ψ, i) ⊆ f(φ, i) ∪ f(ψ, i).

Then SS is determined by the class of all class selection function models which satisfy this new set of conditions. One reason for Pollock's lack of concern for Lewis's counterexamples to the Limit Assumption may be that Pollock conceives of what would count as a minimal change quite differently from the way Lewis does. Pollock [1976] offers a detailed account of the notion of a minimal change, an account

6 To see how GCP might fail in Lewis's semantics, consider the example Lewis uses to show that for a particular φ there may be no φ-world closest to i. The example, which we considered earlier, involves a line printed on a page of [Lewis, 1973b]. Lewis invites us to consider worlds in which this line is longer than its actual length, which we will suppose to be exactly one inch. If the only way in which these worlds differ from the actual world is in the length of Lewis's line, then it is plausible that we rank these worlds in their similarity to the actual world according to how close to one inch Lewis's line is in each of these worlds. But no matter how close to one inch the line is, so long as it is longer than one inch there will be another such world in which it is closer to an inch in length. This means that for any length m greater than one inch, there is a world in which the line is longer than one inch and in which the line does not have length m which is nearer the actual world than is any world in which the line has length m.
But then Lewis's truth conditions for conditionals dictate that if the line were longer than one inch, its length would not be m, and this is true for any length m greater than the actual length of the line. Then consider the set Γ of sentences of the form `Lewis's line is not length m' where m ranges over every length greater than the actual length of Lewis's line. But Γ entails the sentence `Lewis's line is not greater than one inch in length'. Applying GCP, we conclude that if Lewis's line were greater than one inch in length, then it would not be greater than one inch in length. This conclusion is not intuitively reasonable nor is it true at any world in the system-of-spheres model which Lewis describes in his discussion.

which is later modified in [Pollock, 1981]. The later view, which avoids many problems of the earlier view, will be discussed here. While Stalnaker, Lewis and others maintain that the notions of similarity of worlds and of minimal change are vague notions which may change given different purposes and contexts, thus accounting for the vagueness we often find in the use of conditionals, Pollock claims that the similarity relation is not vague but quite definite. Pollock's account rests upon his use of two epistemological notions, that of a subjunctive generalisation and that of a simple state of affairs. Subjunctive generalisations are statements of the form `Any F would be a G'. The truth of some subjunctive generalisations like `Anyone who drank from Chisholm's bottle would die' depends upon contingent matters of fact, in this case the fact that Chisholm's bottle contains strychnine and the fact that people have a certain physical makeup. Other subjunctive generalisations like `Any creature with a physical make-up like ours who drank strychnine would die' do not depend for their truth on contingent matters of fact in the same way. Pollock calls the former `weak' subjunctive generalisations and the latter `strong' subjunctive generalisations. Some subjunctive generalisations are supposed by Pollock to be directly confirmable by their instances, and these he calls basic. The problem of confirmation is discussed in [Pollock, 1984]. The second crucial ingredient in Pollock's analysis is the notion of a simple state of affairs. A state of affairs is simple if it can be known non-inductively to be the case without first coming to know some other state(s) of affairs which entail(s) it. The actual world is supposed by Pollock to be determined by the set of true basic strong subjunctive generalisations together with the set of true simple states of affairs.
The justification conditions for a subjunctive conditional φ > ψ are stated in terms of making minimal changes in these two sets in order to accommodate φ. The first step is to generate all maximal subsets of the set of true basic strong subjunctive generalisations which are consistent with φ. For each such maximally φ-consistent set N of true basic strong subjunctive generalisations, we then generate all sets of true simple states of affairs which are maximally consistent with N ∪ {φ}. Finally, we consider every possible world at which φ, every member of some maximally φ-consistent set N of true basic strong subjunctive generalisations, and every member of some set S of true simple states of affairs maximally consistent with N ∪ {φ} are all true. If ψ is true at all such worlds, then φ > ψ is true at the actual world. The set of worlds determined by this procedure serves as the value of a class selection function. If we try to define a relative similarity relation for worlds based upon Pollock's analysis of minimal change, we come up with a partial order rather than the `complete' order assumed by Lewis and, apparently, by Stalnaker. Because we can have two worlds j and k such that their similarity to a
third world i is incomparable, the thesis CV does not hold for Pollock's semantics.7 A simple model of Pollock's sort which rejects CV as well as another thesis which has been attributed to Pollock's conditional logic SS is developed in [Mayer, 1981]. Several authors have proposed theories which resemble Pollock's in important respects. One of these is Blue [1981], who suggests that we think of subjunctive conditionals as metalinguistic statements about a certain semantic relation between an antecedent set of sentences in an object language and another sentence of the object language viewed as a consequent. A theory (set of sentences of the object language) and the set of true basic (atomic and negations of atomic) sentences of the language play roles similar to those played by laws (true basic strong subjunctive generalisations) and simple states of affairs in Pollock's account. One problem with Blue's proposal is that treating conditionals metalinguistically as he does prevents iteration of conditionals without climbing a hierarchy of metalanguages. Another problem concerns the role which temporal relations between the basic sentences play in his theory, a problem for other theories as well. (This problem is discussed in Section 1.8 below.) For a more detailed discussion of Blue's view, see [Nute, 1981c]. The similarity of an account like Pollock's or Blue's to the cotenability theories of conditionals should be obvious. A conditional is true just in case its consequent is entailed (Blue uses a somewhat different relation) by its

7 Pollock has offered various counterexamples to CV, the most recent of which involves a circuit having among its components two light bulbs L1 and L2, three simple switches A, B, and C, and a power source. These components are supposed to be wired together in such a way that bulb L1 is lit exactly when switch A is closed or both switches B and C are closed, while bulb L2 is lit exactly when switch A is closed or switch B is closed.
At the moment, both bulbs are unlit and all three switches are open. Then the following conditionals are true:

(5a) ¬(L2 > ¬L1)

(5b) ¬[(L2 ∧ L1) > ¬(B ∧ C)]

The justification for (5a) is that one way to bring it about that L2 (i.e. that bulb L2 is lit) is to bring it about that A (i.e. that switch A is closed), but A > L1 is true. The justification for (5b) is that one way to make L1 and L2 both true is to close both B and C. Pollock claims that the following counterfactual is also true:

(5c) L2 > ¬(B ∧ C)

If Pollock is correct, then these three counterfactuals comprise a counterexample to CV. Pollock's argument for (5c) is that L2 requires only A or B, and to also make C the case is a gratuitous change and should therefore not be allowed. But this is an oversimplification. It is not true that only A, B, and C are involved. Other changes which must be made if L2 is to be lit include the passage of current through certain lengths of wire where no current is now passing, etc. Which path would the current take if L2 were lit? We will probably be forced to choose between current passing through a certain piece of wire or switch C being closed. It is difficult to say exactly what the choices may be without a diagram of the kind of circuit Pollock envisions, but without such a diagram it is also difficult to judge whether closing switch C is gratuitous in the case of (5c) as Pollock claims.

antecedent together with some subset of the set of laws or theoretical truths and some (cotenable) set of simple states of affairs or basic sentences. Veltman [1976] and Kratzer [1979; 1981] also propose theories of conditionals which resemble Pollock's in important respects. We will discuss Kratzer's view, although the two are similar. Kratzer suggests what can be called a premise or a partition semantics for subjunctive conditionals. Like Pollock, she associates with each world i a set Hᵢ of propositions or states of affairs which uniquely determines that world. The set Hᵢ is called a partition for i or a precise set for i. Kratzer proposes that we evaluate a subjunctive conditional φ > ψ by considering φ-consistent subsets of Hᵢ. φ > ψ is true at a world i if and only if each φ-consistent subset X of Hᵢ is contained in some φ-consistent subset Y of Hᵢ such that Y ∪ {φ} entails ψ. Kratzer points out that if every φ-consistent subset of Hᵢ is contained in some maximally φ-consistent subset of Hᵢ, then this truth condition is equivalent to the condition that φ > ψ is true at i just in case ψ is entailed by X ∪ {φ} for every maximally φ-consistent subset X of Hᵢ. If we assume that every φ-consistent subset of Hᵢ is contained in some maximally φ-consistent subset of Hᵢ, the specialized version of Kratzer's semantics we obtain looks very much like Pollock's. Lewis [1981a] notes that this assumption plays the same role in premise semantics that the Limit Assumption plays in class selection function semantics. In fact, Lewis shows that on this assumption Kratzer's premise semantics is formally equivalent to Pollock's semantics. Given this equivalence, these two semantics will determine exactly the same conditional logic SS.
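The specialized, Limit-Assumption version of Kratzer's truth condition is easy to prototype when premises are literals and entailment is read off syntactically. The sketch below is our own toy implementation (the premise set, the literal encoding, and the membership test standing in for entailment are all simplifying assumptions, not Kratzer's apparatus): φ > ψ holds iff ψ belongs to X ∪ {φ} for every maximal φ-consistent subset X of the premise set.

```python
from itertools import combinations

# Premise semantics, specialized (Limit-Assumption) version: phi > psi is
# true iff psi lies in X | {phi} for every maximal phi-consistent subset X
# of the premise set H_i.  Premises are literals like "p" and "-p";
# entailment is just membership here.  All names are illustrative.

def consistent(lits):
    """A set of literals is consistent iff no atom occurs with both polarities."""
    return not any("-" + l in lits for l in lits if not l.startswith("-"))

def maximal_consistent_subsets(H, phi):
    """All subsets X of H with X | {phi} consistent, maximal under inclusion."""
    H = list(H)
    cands = [set(c) for n in range(len(H), -1, -1)
             for c in combinations(H, n) if consistent(set(c) | {phi})]
    return [X for X in cands if not any(X < Y for Y in cands)]

def premise_cond(H, phi, psi):
    return all(psi in X | {phi} for X in maximal_consistent_subsets(H, phi))

H = {"p", "q", "r"}                 # premise set for the actual world
print(premise_cond(H, "-p", "q"))   # True: q survives every maximal revision by -p
print(premise_cond(H, "-p", "p"))   # False: p must be given up
```

With a genuine entailment relation in place of membership, the same loop over maximal φ-consistent subsets yields the Pollock-style evaluation the text describes as formally equivalent.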
Even if we assume that the required maximal sets always exist and adopt a version of premise semantics which is formally equivalent to Pollock's semantics, Kratzer's position would still differ radically from Pollock's since she does not assign to laws and simple states of affairs a privileged role in her analysis. Nor does she prefer Blue's object language theory and basic sentences for such a role. The set of premises which we associate with a world and use in the evaluation of conditionals varies, according to Kratzer, as the purposes and circumstances of the language users vary. Thus Kratzer reintroduces the vagueness which so many investigators have observed in ordinary usage and which Pollock and Blue would deny or at least eliminate. Apparently Kratzer does not accept the Limit Assumption, in her case the assumption that the required maximal sets always exist. Yet in [Kratzer, 1981] she describes what she calls the most intuitive analysis of counterfactuals, saying that

The truth of counterfactuals depends on everything which is the case in the world under consideration: in assessing them, we have to consider all the possibilities of adding as many facts to the antecedent as consistency permits.

This certainly suggests maximal antecedent-consistent subsets of a premise set (the Limit Assumption) and a minimal change semantics. But if the Limit Assumption is unacceptable, this initial intuition must be modified. Kratzer's modification takes the form of the truth condition reported earlier. Besides the Limit Assumption, Kratzer's semantics also fails to support the GCP. One principle which does remain, a principle common to all the semantics discussed in this section, is the thesis CS. Beginning with some sort of minimalist intuition, all of these authors claim subjunctive conditionals with true antecedents have the same truth values as their consequents. When the antecedent of the conditional is true, the actual world is the unique closest antecedent world and hence the only world to be considered in evaluating the conditional. If Lewis's counterexamples to the Limit Assumption are conclusive, we must conclude that all the semantics for subjunctive conditionals which we have discussed in this section must be inadequate except for Lewis' system-of-spheres semantics and the general version of Kratzer's premise semantics. And if the GCP is a principle which we wish to preserve, then Lewis's semantics and Kratzer's semantics are also inadequate. Besides these difficulties, minimal change theories have been criticized because they endorse the thesis CS. As was mentioned in Section 1.1, many researchers claim that some conditionals with true antecedent and consequent are false. For an excellent polemic against the minimal change theorists on this issue, see [Bennett, 1974].

1.5 Small Change Theories

Åqvist [1973] presents a very interesting analysis of conditionals in which the conditional operator is defined in terms of material implication and some unusual monadic operators. Simplifying a bit, Åqvist's semantics involves ordered quintuples ⟨I, i, R, f, [ ]⟩ such that I and [ ] are as in other models we have discussed, i is a member of I, R is an accessibility relation on I, and f is a function which assigns to each sentence φ a subset f(φ) of [φ] such that for every member j of f(φ), ⟨i, j⟩ ∈ R. A sentence ★φ whose primary connective is the monadic star operator is true at a world j in I just in case j ∈ f(φ). The usual truth conditions are provided for a necessity operator, so that □φ is true at j in I just in case φ is true at every world k such that ⟨j, k⟩ ∈ R, i.e. just in case every such k ∈ [φ]. Finally, a conditional φ > ψ is true just in case □(★φ → ψ) is true at i. Åqvist modifies this semantics in an appendix. The modification involves a set of models of the sort described, each with the same set of possible worlds, the same accessibility relation, and the same valuation function, but each with its own designated world and selection function. The resulting semantics turns out once again to be equivalent to a version of class selection semantics.
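Read this way, the truth condition can be sketched directly. The tiny model below (worlds, accessibility, valuation, and selection function all invented) evaluates a conditional as the necessitated material conditional over the worlds selected for the antecedent:

```python
# Toy Aqvist-style model: worlds I, designated world i, accessibility
# R, valuation val for [ ], and a selection function f with f(phi) a
# subset of [phi], every selected world accessible from i.

I = {0, 1, 2, 3}
i = 0
R = {(0, 1), (0, 2), (0, 3)}
val = {"p": {1, 2, 3}, "q": {1, 2}, "r": {1}}
f = {"p": {1, 2}}                     # f("p") is a subset of val["p"]

def conditional(ante, cons):
    """phi > psi as Box(star(phi) -> psi) at i: every R-accessible
    world that f selects for phi must be a psi-world."""
    return all(j in val[cons]
               for j in I
               if (i, j) in R and j in f[ante])
```

Here `conditional("p", "q")` succeeds because both selected p-worlds are q-worlds, while `conditional("p", "r")` fails at world 2.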

CONDITIONAL LOGIC


While the interesting formal details of Åqvist's theory are quite different from those of other investigators, the most significant feature of his account may be his suggestion that a class selection function might properly pick out for a sentence φ and a world i all those φ-worlds which are `sufficiently' similar to i rather than only those φ-worlds which are `most' similar to i. By changing the intended interpretation of the class selection function, we avoid the trivialisation of the truth conditions for conditionals in all those cases where the Limit Assumption in either of its forms fails. At the same time, class selection function semantics supports the GCP. Åqvist's suggestion looks very promising. A similar approach is taken by Nute [1975a; 1975b; 1980b], but the semantics Nute proposes is explicitly a version of class selection function semantics. This model theory differs from the versions of class selection function semantics we examined earlier in two important ways. First, the intended interpretation is different, i.e. there is a different informal explanation to be given for the role which the class selection functions play in the models. Second, the restriction (CS2) is replaced by the weaker restriction

(CS2′) if i ∈ [φ] then i ∈ f(φ, i).

The second change is related to the first. Surely any world is more similar to itself than is any other. Thus, if f picks out for φ and i the φ-worlds closest to i, and if i is itself a φ-world, then f will pick out i and nothing else for φ and i. The objection to the thesis CS, though, can be thought of as a claim that there may be other worlds sufficiently similar to the actual world so that in some cases we should consider these worlds in evaluating conditionals with true antecedents. When we modify our earlier semantics for Lewis's system VC by replacing (CS2) with (CS2′), the resulting class of models characterizes the logic which Lewis [1973b] calls VW.
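The effect of weakening (CS2) to (CS2′) can be seen in a small model (all values invented): the selection function may pick out worlds besides i even when i is itself an antecedent-world, and the thesis CS then fails.

```python
# Class selection semantics: phi > psi is true at i iff f(phi, i) is
# a subset of [psi]. This model satisfies (CS2') -- i is selected
# whenever i is a phi-world -- but not (CS2), since another
# "sufficiently similar" world is selected as well.

val = {"p": {0, 1}, "q": {0}}

def f(ante, i):
    """Select the antecedent-worlds counted as sufficiently similar;
    here every p-world is close enough, so (CS2') holds trivially."""
    return set(val[ante])

def conditional(ante, cons, i):
    return f(ante, i) <= val[cons]

# At world 0 both p and q are true, yet p > q is false there, because
# f also selects world 1, a p-world where q fails: CS is invalid here.
```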
VW is the smallest conditional logic which is closed under all the rules and contains all the theses listed for VC except for the thesis CS. By weakening our semantics further we can characterize a logic which is closed under all the rules and contains all the theses of VW except CV. This, of course, would give us a logic of which Pollock's SS would be a proper extension. Although many count it as an advantage of small change class selection function semantics that such theories allow us to avoid CS, it should be noted that such semantics do not commit us to a rejection of CS. As we have seen, both Lewis's VC and Pollock's SS are characterized by classes of class selection function models. For those who favor CS, it is still possible to avoid the difficulties of the Limit Assumption and embrace the GCP by adopting one of these versions of the class selection function semantics while giving a small change interpretation of the selection functions upon which such a semantics depends. It is also possible to avoid CS within the restrictions of a minimal change semantics. We can do this by `coarsening' our similarity relation, to use


Lewis's phrase, counting worlds as equally similar to some third world despite fairly large differences between these worlds. For example, we might count some worlds other than i as being just as similar to i as is i itself. When we do this for a minimal change version of class selection function semantics, the formal results are exactly the same as those proposed earlier in this section, and the resulting logic is VW. Of course, we must still cope with Lewis's objections to the Limit Assumption. But it is even possible to avoid CS within Lewis's system-of-spheres semantics. All we need do is replace the restriction (SOS1) with the following:

(SOS1′) i ∈ ⋂sᵢ.

The class of all those system-of-spheres models which satisfy (SOS1′) and (SOS2) determines the conditional logic VW. While such a concession to the critics of CS is possible within the confines of Lewis's semantics, Lewis does not favor such a move. We should also remember that the resulting semantics still does not support the GCP. Since we can formulate a kind of minimal change semantics which avoids the controversial thesis CS, the only advantage we have shown for small change theories is that they avoid the problems of the Limit Assumption while giving support for the GCP. But this advantage may be illusory. Loewer [1978] shows that for many versions of class selection function semantics we can always view the selection function as picking out closest worlds. For a model of such a semantics, we can define a relative similarity relation R between the worlds of the model in terms of the model's selection function f. It can then be shown that for a sentence φ and a world i, j ∈ f(φ, i) if and only if j is a φ-world which is at least as close to i with respect to R as is any other φ-world. Consider such a model and consider a proposition φ which is true at a world j just in case there are infinitely many worlds closer to i with respect to R than is j. There will be no φ-world closest to i.
Consequently f(φ, i) will be empty, and any conditional with φ as antecedent will be trivially true. How seriously we view this example depends upon our attitude toward the assumption that there exists a proposition which has the properties attributed to φ. If we take propositions to be sets of worlds, then the existence of such a proposition is very plausible. We should also note that this argument involves a not so subtle change in our semantics. Until now we have been thinking of our selection functions as taking sentences as arguments rather than propositions. If we restrict ourselves to sentences, it is very unlikely that our language for conditional logic will contain a sentence which expresses the troublesome proposition. Nevertheless, it is not entirely clear that every small change version of class selection function semantics will automatically avoid the problems associated with the Limit Assumption. There is another advantage which can be claimed for small change theories, one which doesn't involve the logic of conditionals. If, for example, VW


is the correct logic for conditionals, we have seen that it is possible to take either the minimal change or the small change approach to semantics for conditionals and still provide a semantics which determines VW. But even if agreement is reached about which sentences are valid, these two approaches are still likely to result in different assignments of truth values to contingent conditional sentences. Suppose, for example, that Fred's lawn is just slightly too short to come into contact with the blades of his lawnmower. Thus his lawnmower will not cut the grass at present. Suppose further that the engine on Fred's lawnmower is so weak that it will only cut about a quarter of an inch of grass. If the height of the grass is more than a quarter of an inch greater than the blade height, the mower will stall. Then is the following sentence true or false?

29. If the grass were higher, Fred's mower would cut it.

On the minimal change approach, whether we use class selection function semantics or system-of-spheres semantics, the answer to this question must be `yes', for there will be worlds at which the lawn is higher than the blade height, but no more than a quarter inch higher, which are closer to the actual world than is any world at which the grass is more than a quarter inch higher than the blade height. But the correct answer to the question would seem to be `no'. If someone were to assert (29) we would likely object, `Not if the grass were much higher'. This shows that we are inclined to consider changes which are more than minimally small in our evaluations of conditionals. We might avoid particular examples of this sort by `coarsening' the similarity relation, but it may be possible to generate such examples for any similarity relation no matter how coarse. All of the small change theories we have considered propose semantics which are at least equivalent to some version of class selection function semantics.
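The lawnmower example can be put in miniature numerical form (all figures invented): worlds are grass heights, similarity is distance from the actual height, and the two approaches select different sets of antecedent-worlds.

```python
# Worlds are grass heights in inches. The blade sits at 3.0, the grass
# is at 3.0 (just short of contact), and the weak engine stalls once
# the grass exceeds blade height by more than a quarter inch.

ACTUAL, BLADE, MARGIN = 3.0, 3.0, 0.25
worlds = [3.0 + 0.05 * k for k in range(41)]        # heights 3.00 .. 5.00

def higher(h): return h > ACTUAL                    # antecedent of (29)
def cuts(h):   return BLADE < h <= BLADE + MARGIN   # consequent of (29)

def minimal_change(ante):
    """Closest antecedent-worlds only."""
    d = min(abs(h - ACTUAL) for h in worlds if ante(h))
    return [h for h in worlds if ante(h) and abs(h - ACTUAL) == d]

def small_change(ante, radius=2.0):
    """All antecedent-worlds within a stipulated similarity radius."""
    return [h for h in worlds if ante(h) and abs(h - ACTUAL) <= radius]

def would(select, ante, cons):
    return all(cons(h) for h in select(ante))
```

On the minimal change reading `would(minimal_change, higher, cuts)` comes out true, since the closest taller-grass world is only 0.05 inch taller; on the small change reading it comes out false, since some admitted worlds have grass tall enough to stall the mower.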
There is, however, at least one small change theory which does not share this feature. Warmbrod [1981] presents what he calls a pragmatic theory of conditionals. This theory is based on similarity of worlds, but in a radically different way from any of the theories we have yet examined. According to Warmbrod, the set of worlds we use in evaluating a conditional is determined not by the antecedent of that particular conditional but rather by all the antecedents of conditionals occurring in the piece of discourse containing that particular conditional. Thus a conditional is always evaluated relative to a piece of discourse rather than in isolation. For any piece of discourse D and world i we select a set of worlds S which satisfies the following conditions:

(W1) if φ > ψ occurs in D and φ is logically possible, then some world j in S is a φ-world;

(W2) for some φ > ψ occurring in D, j ∈ S if and only if j is at least as close to i as are the closest φ-worlds to i.


Condition (W1) ensures that S is what Warmbrod calls normal for D, and (W2) ensures that S is what Warmbrod calls standard for some antecedent occurring in D. (Warmbrod formulates his theory in terms of an accessibility relation, but the semantics provided here is formally equivalent.) Then a conditional φ > ψ is true at i with respect to D if and only if φ → ψ is true at every world in S. The resulting semantics resembles both class selection function semantics and an analysis of conditionals as strict conditionals, but it differs from each of these approaches in important respects. Like other proposals which treat subjunctive conditionals as strict conditionals, Warmbrod's theory validates Transitivity, Contraposition, and Strengthening Antecedents. Warmbrod argues that the evidence against these theses can be explained away. Apparent counterexamples to Transitivity, for example, depend according to Warmbrod on the use of different sets S in the evaluation of the sentences involved in the putative counterexamples. Consider the example (21)-(23) in Section 1.3 above. According to Warmbrod, this example can be a counterexample to Transitivity only if there is some set of worlds S which contains worlds at which Carter did not lose in 1980, contains some worlds at which Carter died in 1979, which is normal for these two antecedents, and for which the material conditionals corresponding to (21) and (22) are true at all members of S while the material conditional corresponding to (23) is false at some world in S. But this, Warmbrod claims, is exactly what does not happen. The apparent counterexample depends upon an equivocation, a shift of the set S during the course of the argument. Warmbrod's theory has a certain attraction. It is certainly true that Transitivity and the other controversial theses are harmless in many contexts, and it is certainly true that these theses are frequently used in ordinary discourse.
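Warmbrod's diagnosis of the transitivity counterexamples can be illustrated in a toy model (worlds, valuation, and discourse all invented): relative to one fixed set S the strict readings validate Transitivity, and the apparent failure emerges only if S shifts mid-argument.

```python
# Worlds 1..5, ordered by closeness to the world of evaluation
# (1 is closest). The valuation is an invented example.

worlds = {1, 2, 3, 4, 5}
val = {"a": {2, 5}, "b": {2, 3}, "c": {2, 3, 4}}

def strict(ante, cons, S):
    """phi > psi relative to a discourse whose score fixed S: the
    material conditional phi -> psi holds at every world in S."""
    return all(j in val[cons] for j in S if j in val[ante])

# One set S serving the whole discourse; it is normal for the
# antecedents "a" and "b" since it contains a world for each (world 2).
S = {1, 2}

# Within this fixed S, Transitivity goes through ...
premises_ok = strict("a", "b", S) and strict("b", "c", S)
conclusion  = strict("a", "c", S)

# ... and the "counterexample" appears only if S is quietly enlarged
# mid-argument (an equivocation, on Warmbrod's diagnosis).
shifted = strict("a", "c", worlds)
```

With S fixed, both premises and the conclusion hold; evaluating the conclusion against the enlarged set instead makes it fail, mirroring the shift of S Warmbrod blames for the putative counterexamples.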
The problem is to provide an account of the difference between those situations in which such a thesis is reliable and those in which it is not. Warmbrod's strategy is to count the thesis as always reliable and then to provide a way of falsifying the premises in the unhappy cases. An alternative approach is to count these theses as invalid and then to look for those features of context which sometimes allow us to use them with impunity. We think the second strategy is safer. It is probably better to occasionally overlook a good argument than to embrace a bad one. Or, to put it a bit differently, it is better to make the argument bear the burden of proof than to consider it sound until proven unsound. Another problem with Warmbrod's theory is that it suggests that we should find apparent counterexamples to certain theses which have until now been considered uncontroversial. For example, we should find apparent counterexamples to CA. (See [Nute, 1981b] for details.) Warmbrod's semantics also runs into difficulty with the Limit Assumption. The requirement (W2) that S be standard for some antecedent in D involves the Limit Assumption explicitly. Although Warmbrod's semantics


may tolerate small, non-minimal changes for some of the antecedents in a piece of discourse, it demands that only minimal changes be considered for at least one such antecedent. Of course, we might be able to modify (W2) in such a way as to avoid this problem. There remains, though, the nagging suspicion that none of the small change theories we have considered will in the end be able to escape the Limit Assumption, with all its difficulties, in some form or other.

1.6 Maximal Change Theories

Both minimal change theories and small change theories of conditionals are based on the premise that a conditional φ > ψ is true at i just in case ψ is true at those φ-worlds satisfying certain conditions. The difference, of course, is that on the one approach it is sufficient that ψ be true at all closest φ-worlds, while the other requires that ψ be true at all φ-worlds which are reasonably or sufficiently close to i. There is a third type of theory which shares the same basic premise as these two but which does not require that the worlds upon which the evaluation of φ > ψ at i depends be very close or similar to i at all. According to this way of looking at conditionals, all that is required is that the relevant worlds resemble i in certain very minimal respects. Otherwise the relevant worlds may differ from i to any degree whatever. We might even think of this approach as requiring us to consider worlds which differ from i maximally except for the narrowly defined features which must be shared with i. One theory of this sort is developed by Gabbay [1972]. To facilitate comparison, we will simplify Gabbay's account of conditionals rather drastically. When we do this, Gabbay's semantics for conditionals resembles the class selection function semantics we have discussed, but there are some very important differences. A simplified Gabbay model is an ordered triple ⟨I, g, [ ]⟩ such that I and [ ] are as in earlier models, and g is a function which assigns to sentences φ and ψ and a world i in I a subset g(φ, ψ, i) of I. A conditional φ > ψ is true at i in such a model just in case g(φ, ψ, i) ⊆ [φ → ψ]. The difference between this and class selection function semantics of the sort we have seen previously is obvious: the selection function g takes both antecedent and consequent as arguments. This means that quite different sets of worlds might be involved in the truth conditions for two conditionals having exactly the same antecedent.
This change in the formal semantics reflects a difference in Gabbay's attitude toward conditionals and toward the way in which we evaluate them. When we evaluate φ > ψ, we are not concerned to preserve as much as we can of the actual world in entertaining φ; instead we are concerned to preserve only those features of the actual world which are relevant to the truth of ψ, or perhaps to the effect φ would have on the truth of ψ. In actual practice the kind of similarity which is required is supposed by Gabbay to be determined by φ, by ψ, and


also by general knowledge and particular circumstances which hold in i at the time when the conditional is uttered. What this involves is left vague, but it is no more vague than the notions of similarity assumed in earlier theories. When we modify Gabbay's semantics in this way, we must impose three restrictions on the resulting models:

(G1) i ∈ g(φ, ψ, i);

(G2) if [φ] = [φ′] and [ψ] = [ψ′] then g(φ, ψ, i) = g(φ′, ψ′, i);

(G3) g(φ, ψ, i) = g(φ, ¬ψ, i) = g(¬φ, ψ, i).

With these restrictions Gabbay's semantics determines the smallest conditional logic which is closed under RCEC and the following two rules:

RCEA: from φ ↔ ψ, to infer (φ > χ) ↔ (ψ > χ).
RCE: from φ → ψ, to infer φ > ψ.

We will call this logic G. At the end of [Gabbay, 1972], a conjectured axiomatisation of G is presented, but it was later shown to be unsound and incomplete in [Nute, 1977], where the axiomatisation of G presented here was conjectured to be sound and complete (see [Nute, 1980b]). Working independently, David Butcher [1978] also disproved Gabbay's conjecture, and proved the soundness and completeness of G for the Gabbay semantics (see [Butcher, 1983a]). It is obvious that G is the weakest conditional logic we have yet considered. We can characterize a stronger logic if we place additional restrictions on our Gabbay models, but we may not be able to guarantee a sufficiently strong logic without restricting our models to the point where they become formally equivalent to models we examined earlier. Consider, for example, the theses

CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]
CM: [φ > (ψ ∧ χ)] → [(φ > ψ) ∧ (φ > χ)].

To ensure that our conditional logic contains CC and CM, we could impose the following restriction on Gabbay's semantics:

(G4) g(φ, ψ, i) = g(φ, χ, i).

Once we do this we have eliminated the most distinctive feature of Gabbay's semantics. According to David Butcher [1983a], it is possible to ensure CC and CM by adopting conditions weaker than (G4). However, Butcher has indicated that these conditions are problematic for other reasons.8

8 Many of these issues are also discussed in Butcher [1978; 1983a].
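The distinctive two-place selection function makes it easy to see why CC can fail without (G4). In the invented model below, g looks at the consequent, so the worlds checked for φ > ψ and φ > χ need not be the worlds checked for φ > (ψ ∧ χ):

```python
# A simplified Gabbay model: g takes antecedent, consequent, and a
# world. The valuation and the table for g are invented; (G1) holds
# since the world of evaluation 0 is always selected.

val = {"p": {0, 1, 2}, "q": {0, 1, 3}, "r": {0, 2, 3}}

def holds(sentence, j):
    if "&" in sentence:                     # crude conjunction support
        left, right = sentence.split("&")
        return holds(left, j) and holds(right, j)
    return j in val[sentence]

g = {("p", "q"): {0, 1}, ("p", "r"): {0, 2}, ("p", "q&r"): {0, 1, 2}}

def conditional(ante, cons, i=0):
    """phi > psi true at i iff g(phi, psi, i) is a subset of
    [phi -> psi]."""
    return all(holds(cons, j) for j in g[(ante, cons)] if holds(ante, j))

# p > q and p > r both hold, yet p > (q & r) fails: CC is invalid here,
# because g supplies a larger set of worlds for the conjunctive consequent.
```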


A rather different and very specialized maximal change theory has been developed in two different forms, by Fetzer and Nute [1979; 1980] and by Nute [1981a]. Both forms of this theory are intended not as analyses of ordinary subjunctive conditionals as they are used in ordinary discourse, but rather as analyses of scientific, nomological, or causal conditionals, i.e. of subjunctive conditionals as they are used in the very special circumstances of scientific investigation. Formally, the two theories propose class selection function semantics for scientific conditionals, but the intended interpretation is quite different from that of any theory we have yet considered. In the version of the theory developed by Fetzer and Nute, the selection function f is intended to pick out for a sentence φ and a world i the set of all those φ-worlds at which all the individuals mentioned in φ possess, in so far as the truth of φ will allow, all those dispositional properties which they permanently possess in i. This forces us to ignore all features of worlds except those assumed by the underlying theory of causality to affect the causal efficacy of the situations, events, etc., described in φ. We are forced, in other words, to consider worlds which preserve only these features and otherwise differ maximally from the world at which the scientific conditional is being evaluated. In this way we can ensure that the conditional in question is true if but only if the antecedent and the consequent are related causally or nomologically in an appropriate manner. Physical law statements are then analysed as universal generalisations, or sets of universal generalisations, of such scientific conditionals. The view subsequently developed in [Nute, 1981a] departs a bit from the requirement of maximal specificity which we seek in our scientific pronouncements, and in doing so comes closer to representing a kind of conditional used in ordinary discourse.
Nute suggests that the selection function f selects for a sentence φ and a world i all those φ-worlds at which all the individuals mentioned in φ possess, so far as the truth of φ allows, not only all those dispositional properties which they permanently possess in i but also all those dispositional properties which they accidentally, or as a matter of particular fact, possess in i. For example, a particular piece of litmus paper permanently possesses the tendency to turn red when dipped in an acidic solution, since it could not lose this tendency and still be litmus paper, but it only accidentally possesses the tendency to reflect blue light, since it could certainly lose this disposition through being dipped in acid and yet still be litmus paper. Where it is impossible to accommodate φ without giving up some of the dispositional properties possessed by individuals mentioned in φ, preference is given to those dispositions which are possessed permanently. On this account, but not on the account developed by Fetzer and Nute conjointly, the following conditional is true where x is a piece of litmus paper which is in fact blue:

30. If x were cut in half, it would be blue.


Nute [1981a] suggests that many ordinary conditionals may have such truth conditions, or may be abbreviations of other more explicit conditionals which have such truth conditions. Each of the theories presented in this section is in fact only a fragment of a more complex theory. It is impossible to discuss the larger theories in any greater detail here, and the reader is encouraged to consult the original publications. What allows us to consider them under a single category is their departure from the premise that the truth of a conditional depends upon what happens in antecedent-worlds which are very much like the actual world. Each of these theories assumes and even requires that the divergence from the actual world be rather larger than minimal or small change theories would indicate.

1.7 Disjunctive Antecedents

One thesis in particular has caused considerable controversy among investigators of conditional logic. This thesis is Simplification of Disjunctive Antecedents:

SDA: [(φ ∨ ψ) > χ] → [(φ > χ) ∧ (ψ > χ)].

The intuitive plausibility of SDA has been suggested in [Fine, 1975], in [Nute, 1975b], and in [Ellis et al., 1977]. Unfortunately, any conditional logic which contains SDA and which is also closed under substitution of provable equivalents will also contain the objectionable thesis Strengthening Antecedents. If we add SDA to any of the logics we have discussed, then Transitivity and Contraposition will be contained in the extended logic as well.9 Ellis et al. suggest that the evidence for SDA is so strong, and the problems involved in trying to incorporate SDA into any account of conditionals based upon possible worlds semantics are so great, that the possibility of an adequate possible worlds semantics for ordinary subjunctive conditionals is quite eliminated. With all the problems which the various theories encounter, the possible worlds approach has still proven to be a powerful tool for the investigation of the logical and semantical properties of conditionals, and we should be unwilling to abandon it without first trying to defend it against such a charge. The first line of defence has been a `translation lore' approach to the problem of disjunctive antecedents. It is first noted that, despite the intuitive appeal of SDA, there are examples from ordinary discourse which show that SDA is not entirely reliable. The following sentences comprise one such example:

9 As further evidence of the problematic character of SDA, David Butcher [1983b] has shown that any logic containing SDA and CS will contain □φ → φ, where □φ is defined as ¬φ > φ.


31a. If the United States devoted more than half of its national budget to defence or to education, it would devote more than half of its national budget to defence.

31b. If the United States devoted more than half of its national budget to education, it would devote more than half of its national budget to defence.

Contrary to what we should expect if SDA were completely reliable, it looks very much as if (31a) is true even though (31b) cannot be true. Fine [1975], Loewer [1976], McKay and van Inwagen [1977] and others have suggested that those examples which we take to be evidence for SDA actually have a quite different logical form from that which supporters of SDA suppose them to have. While a sentence like (31a) really does have the form (φ ∨ ψ) > χ, a sentence like

32. If the world's population were smaller or agricultural productivity were greater, fewer people would starve.

has the quite different logical form (φ > χ) ∧ (ψ > χ). According to this suggestion, the word `or' represents wide scope conjunction rather than narrow scope disjunction in (32). Since we can obviously simplify a conjunction, this confusion about the logical form of sentences like (32) results in the mistaken commitment to a thesis like SDA. This would be a neat solution to the problem if it worked, but the translation lore approach has a serious flaw. According to the translation lorist, the two sentences (31a) and (32) have different logical forms even though they share the same surface or grammatical structure. We can point to an obvious difference in surface structure, since one of the apparent disjuncts in the antecedent of (31a) is also the consequent of (31a), a feature which (32) lacks. But we can easily produce examples where this is not the case. Suppose that after asserting (31a) a speaker went on to assert

33. So if the United States devoted over half its national budget to defence or education, my Lockheed stock would be worth much more than it is.
It would be very reasonable to accept this conditional but at the same time to reject the following conditional:

34. So if the United States devoted over half of its national budget to education, my Lockheed stock would be worth much more than it is.

The occurrence of the same component sentence in antecedent and consequent is not a necessary condition for the failure of SDA and cannot be used as a criterion for distinguishing those cases in which English conditionals with `or' in their antecedents are of the logical form (φ ∨ ψ) > χ from


those in which they are of the logical form (φ > χ) ∧ (ψ > χ). We cannot decide on purely syntactic grounds which of the two possible symbolisations is proper for an English conditional with `or' in its antecedent. Loewer [1976] suggests that this decision may be made on pragmatic grounds, but it is difficult to see what the distinguishing criterion is to be, except that English conditionals with disjunctive antecedents are to be symbolized as (φ > χ) ∧ (ψ > χ) when simplification of their disjunctive antecedents is legitimate and as (φ ∨ ψ) > χ when such simplification is not legitimate. Until Loewer's suggestion concerning the pragmatic pressures which prompt one symbolisation rather than another can be worked out in sufficient detail, the translation lore account of disjunctive antecedents does not provide us with an adequate solution to our problem. We find an interesting variation on the translation lore solution in [Humberstone, 1978] and in [Hilpinen, 1981]. Both suggest the use of an antecedent-forming operator like Åqvist's ★. We will discuss Hilpinen's theory here since it differs the most from Åqvist's view. Hilpinen's analysis utilizes two separate operators which we can represent as If and Then. The If operator attaches to a sentence φ to produce an antecedent If φ. The Then operator connects an antecedent and a sentence ψ to form a conditional If φ Then ψ. The role of the dyadic truth-functional connectives is expanded so that ∨, for example, can connect two antecedents If φ and If ψ to form a new antecedent If φ ∨ If ψ. An important difference between Hilpinen's If operator and Åqvist's ★ is that for Åqvist ★φ is a sentence or proposition bearing a truth value while for Hilpinen If φ is not. Finally, Hilpinen proposes that sentences like (31a) be symbolized as If (φ ∨ ψ) Then χ while sentences like (32) be symbolized as (If φ ∨ If ψ) Then χ. Hilpinen then accepts a rule similar to SDA for sentences having the latter form but not for sentences having the former.
This proposal allows us to incorporate a rule like SDA into our conditional logic while avoiding Strengthening Antecedents, etc., and, unlike other versions of the translation lore approach, Hilpinen's proposal seems to suggest how it might be possible for sentences like (31a) and (32) to have a legitimate scope ambiguity in their syntactical structure, like the scope ambiguity in `President Carter has to appoint a woman'. In fact, however, the ambiguity postulated by Hilpinen's proposal does not seem simply to be a scope ambiguity. The sentence `President Carter has to appoint a woman' is ambiguous with respect to the scope of the phrase `a woman', but the phrase `a woman' has the same syntactical function and the same semantics on both readings of the sentence. The same cannot be said of the word `or' in Hilpinen's account of disjunctive antecedents: on one resolution of the ambiguity, what `or' connects in examples like (31a) and (32) are sentences; on the other resolution of the ambiguity, `or' connects phrases that are not sentences. It is difficult to see how the ambiguity in (31a) and (32) can be simply a scope ambiguity if `or' does not have the same syntactical role in both readings of a given sentence.


Another approach to disjunctive antecedents is developed by Nute [1975b; 1978b; 1980b]. Formally, the problem with SDA is that together with substitution of provable equivalents it results in Strengthening Antecedents and other unhappy results. The translation lorist's suggestion is that we abandon SDA. Nute's suggestion, on the other hand, is that we abandon substitution of provable equivalents, at least for antecedents of subjunctive conditionals. One fairly strong logic which does not allow substitution of provably equivalent antecedents is the smallest conditional logic which is closed under RCEC and RCK and contains ID, MP, MOD, CV, and SDA. Logics of this sort have been called `non-classical' or `hyperintensional' to contrast them with those intensional logics which are closed under substitution of provable equivalents. Classical logics (those closed under substitution of provable equivalents) are preferred by most investigators. Besides the fact that non-classical logics are much less elegant than classical logics, Nute's proposal has other very serious difficulties. First, substitution of certain provable equivalents within antecedents appears to be perfectly harmless. For example, we can surely substitute ψ ∨ φ for φ ∨ ψ in (φ ∨ ψ) > χ with impunity. How are we to decide which substitutions are to be allowed and which are not? Non-classical conditional logics which allow extensive substitutions are developed in Nute [1978b] and [1980b]. But these systems are extremely cumbersome, and there remains the extra-formal problem of justifying the particular choice of substitutions which are to be allowed in the logic. Second, we are still left with the apparent counterexamples to SDA like (31a). Nute suggests a pluralist position, maintaining that there are actually several different conditionals in common use. For some of these conditionals SDA is reliable while for others it is not.
The conditional involved in (31a), it is claimed, is unusual and should not be represented in the same way as other subjunctive conditionals. While there is good reason to admit a certain pluralism, to admit, for example, the distinction between subjunctive and indicative conditionals, Nute's proposal is little more than a new translation lore in disguise. The translation lore we discussed earlier at least has the virtue that it attempts to explain the perplexities surrounding disjunctive antecedents in terms of a widely accepted set of logical operators without requiring the recognition of any new conditional operators. Non-classical logic appears to be a dead end so far as the problem of disjunctive antecedents is concerned. A completely different solution is suggested in [Nute, 1980a], a solution based upon the account of conversational score keeping developed in [Lewis, 1979b]. Basically, the proposal concerns the way in which the class selection function (or the system-of-spheres if Lewis-style semantics is employed) becomes more and more definite as a linguistic exchange proceeds. During a conversation, the participants tend to restrict the selection function which they use to interpret conditionals in such a way as to accommodate claims made by their fellow participants. This growing set of restrictions on the selection function forms part of what Lewis calls the score of the conversation at any given stage. Some accommodations, of course, will not be forthcoming, since some participant will be unwilling to evaluate conditionals in the way which these accommodations would require. Each restriction on the selection function which the participants implicitly accept will also rule out other restrictions which might otherwise have been allowed. Nute's suggestion is that our inclination is to restrict the selection function in such a way as to make SDA reliable, but that this inclination can be overridden in certain circumstances by our desire to accommodate the utterance of another speaker. When we hear the utterance of a sentence like (31a), for example, we restrict our selection function so that SDA becomes unreliable for sentences which have `the United States devotes more than half its national budget to defence or education' as antecedent. Once (31a) is accommodated in this way, this restriction on the selection function remains in effect so long as the conversational context does not change. Nute completes his account by formulating some `accommodation' rules for class selection functions. By offering a pragmatic account of the way in which the selection function becomes restricted during the course of a conversation, and by paying attention to the inclination to restrict the selection function in such a way as to make SDA reliable whenever possible, it may be possible to explain the fact that SDA is usually reliable while at the same time avoiding the many difficulties involved in accepting SDA as a thesis of our conditional logic. This proposal is similar in certain respects to Loewer's [1976]. Like Loewer, Nute is recognising the important role which pragmatic features play in our use of conditionals with disjunctive antecedents.
However, Nute's use of Lewis's notion of conversational score keeping results in an account which provides more details about what these pragmatic features might be than does Loewer's account. We also notice that Nute's suggestions might provide the criterion which Loewer needs to distinguish those conditionals which should be symbolized as (φ ∨ ψ) > χ from those which should be symbolized as (φ > χ) ∧ (ψ > χ). But once the distinction is explained in terms of the evolving restrictions on class selection functions, there is no need to require that these conditionals be symbolized differently. The point of Nute's theory is that all such conditionals have the same logical form, but the reliability of SDA will depend on contextual features. There is also considerable similarity between Nute's second proposal and Warmbrod's semantics for conditionals which was discussed in Section 1.5. In fact, Warmbrod's semantics is offered at least in part as an alternative to Nute's proposed solution to the problem of disjunctive antecedents. The important similarity between the two approaches is that both recognize that the interpretation of a conditional is a function not of the conditional alone but also of the situation within which the conditional is used. The important difference is that Warmbrod's semantics makes SDA, Transitivity, Contraposition, Strengthening Antecedents, etc. valid and uses pragmatic
considerations to explain and guard us from those cases where it seems to be a mistake to rely upon these principles, while Nute ultimately rejects all of these principles but uses pragmatic considerations to explain why it is perfectly reasonable to use at least one of these theses, SDA, in many situations. Warmbrod also offers a translation lore as part of his account. His suggestion about the way in which we should symbolize English conditionals with disjunctive antecedents is essentially that of Fine, Lewis, Loewer, and others, but he offers purely syntactic criteria for determining which symbolisation is appropriate in a particular case. His semantics is offered as a justification for his translation lore in an attempt to make his rules for symbolisation appear less ad hoc. Warmbrod points out some difficulties with Nute's rules of accommodation for class selection functions, and his translation rules might be used as a model for improving the formulation of Nute's rules. Nute's theory of disjunctive antecedents in terms of conversational score might also be proposed as an alternative justification for Warmbrod's translation rules.
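The score-keeping mechanism just described can be illustrated with a toy model. In the sketch below, which is entirely our own invention and not Nute's formal apparatus, the conversational score is a set of admissible similarity rankings over a tiny universe of worlds; accommodating an asserted conditional discards the rankings that falsify it, after which a disjunctive-antecedent conditional can be settled by the score while its SDA-style strengthened instance is not:

```python
from itertools import permutations

# Toy Lewis-style score keeping. Worlds are frozensets of true atoms; a
# "ranking" is a similarity order on worlds (most similar first); the
# conversational score is the set of rankings still admissible.
WORLDS = [frozenset(s) for s in ({"d"}, {"e"}, {"d", "w"}, set())]

def holds(ranking, antecedent, consequent):
    """A > C is true on a ranking iff the closest antecedent-world is a C-world."""
    for w in ranking:                 # scan from most to least similar
        if antecedent(w):
            return consequent(w)
    return True                       # vacuously true: no antecedent-worlds

def accommodate(score, antecedent, consequent):
    """Restrict the score to the rankings that verify the asserted conditional."""
    return [r for r in score if holds(r, antecedent, consequent)]

def settled(score, antecedent, consequent):
    """A conditional is settled by the score if every admissible ranking verifies it."""
    return all(holds(r, antecedent, consequent) for r in score)

score = list(permutations(WORLDS))    # initially: anything goes
# A speaker asserts a (31a)-style conditional: "if d or e, then w."
d_or_e = lambda w: "d" in w or "e" in w
has_w = lambda w: "w" in w
score = accommodate(score, d_or_e, has_w)
print(settled(score, d_or_e, has_w))                  # the asserted conditional
print(settled(score, lambda w: "e" in w, has_w))      # its strengthened instance
```

After accommodation the disjunctive conditional is settled but the instance with the strengthened antecedent is not, mirroring the context-shift by which, on Nute's proposal, an utterance like (31a) makes SDA unreliable for that antecedent.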

1.8 The Direction of Time

We turn now to a problem alluded to in Section 1.4, a problem which concerns the role temporal relations play in the truth conditions for subjunctive conditionals. Actually, there are two different sets of problems to be considered. One of these involves the use of tensed language in conditionals and the other does not depend essentially on the use of tense and conditionals together. We will consider the latter set of problems in this section and save problems concerning tense for the next section. A particularly thorny problem for logicians working on conditionals has to do with so-called backtracking conditionals, i.e. conditionals having antecedents concerned with events or states of affairs occurring or obtaining at times later than those involved in the consequents of the conditional. It is widely held that such conditionals are rarely true, and that when they are true they usually involve much more complicated antecedents and consequents than do the more usual true non-backtracking conditionals. Consider, for example, the two conditionals:

35. If Hinckley had been a better shot, Reagan would be dead.

36. If Reagan were dead, Hinckley would have been a better shot.

The first of these two conditionals is an ordinary non-backtracking conditional, while the second is a backtracking conditional. The first is very plausible and perhaps true, while the second is surely false. The problem with (36) which makes it so much less plausible than (35) is that Reagan might have died subsequent to the assassination attempt from any number
of causes which would not involve an improvement of Hinckley's aim. The problem for the logician or semanticist is to explain why non-backtracking conditionals are more often true than are backtracking conditionals. The primary goal of Lewis [1979a] is to explain this phenomenon. Lewis's proposal makes explicit, extensive use of the technical notion of a miracle. In a certain sense miracles do not occur at all in Lewis's analysis: rather, a miracle occurs in one world relative to another world. No event ever occurs in any world which violates the physical laws of that world, but events can certainly occur in one world which violate the physical laws of some other world. These are the kinds of miracles Lewis relies upon. Assuming complete determinism, which Lewis does at least for the sake of argument, any world which shares a common history with the actual world up to a certain point in time but which diverges from the actual world after that time cannot obey the same physical laws as does the actual world. Basically Lewis proposes that the worlds most similar to the actual world in which some counterfactual sentence φ is true are those worlds which share their history with the actual world up until a brief transitional period beginning just prior to the times involved in the truth conditions for φ. In the case of (35) this might mean that everything happens exactly as it did except that Hinckley miraculously aimed better than he actually did. This might only require something as small as a neuron firing at a slightly different time than it actually did. This is about as small a miracle as we could hope for. Once this miracle occurs, events are assumed by Lewis to once again follow their lawful course with the result, perhaps, that Reagan is mortally wounded.
In the case of (36), on the other hand, Reagan might be dead if the FBI agent miraculously failed to jump in front of Reagan, if Reagan miraculously moved in such a way that the bullet struck him differently, or even if Reagan miraculously had a massive stroke at any time after the assassination attempt. Even if events followed their lawful course after any of these miracles, Hinckley's aim would not be improved. Lewis notes that the vagueness of conditionals requires that there may be various ways of determining the relative similarity of worlds, different ways being employed on different occasions. There is one way of resolving vagueness which Lewis considers to be standard, and it is this way which provides us with the explanation of (35) and (36) given above. This standard resolution of vagueness is expressed in the following guidelines for determining the relative similarity of worlds:

(L1) It is of the first importance to avoid big, complicated, varied, widespread violations of law.

(L2) It is of the second importance to maximize the spatio-temporal region throughout which perfect match of particular fact prevails.
(L3) It is of the third importance to avoid even small, simple, localized violations of law.

(L4) It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly.

Lewis would maintain that application of these guidelines together with his system-of-spheres semantics for subjunctive conditionals will have the desired result of making (35) at least plausible while making (36) clearly false. One major objection to Lewis's account is that once we allow miracles in order to produce a world which diverges from the actual world, there is nothing in Lewis's guidelines to prevent us from allowing another small miracle in order to get the worlds to converge once again. Since Lewis's guidelines place a higher priority on maximising the area of perfect match of particular fact over the avoidance of small, localized violations of law, we should prefer a small convergence miracle to a future which is radically different. Lewis's response to such a suggestion is that divergence miracles tend to be much smaller than convergence miracles or, what amounts to the same thing, that past events are overdetermined to a greater extent than are future events. If correct, Lewis's guidelines would place greater importance on avoidance of a large convergence miracle than on maximising the area of perfect match of particular fact. Careful consideration of examples indicates that Lewis's suggestion is at least plausible, although no conclusive argument has been provided. In [Nute, 1980b] examples of very simple worlds are given in which convergence miracles could be quite small and in which Lewis's guidelines would thus dictate that for some counterfactual antecedents the nearest antecedent worlds are those in which such small convergence miracles occur. In these examples, we get the (intuitively) wrong result when we apply Lewis's standard method for resolving the vagueness of conditionals.
Lewis [1979a] warns that his guidelines might not work for very simple worlds, though, so the force of Nute's examples is uncertain. Lewis's guidelines may give an adequate explanation for our use of conditionals in the context of a complex world like the actual world, and since our intuitions are developed for such a world they may be unreliable when applied to very simple worlds. If we consider Lewis's proposal in the context of a probabilistic world, we discover that we no longer need employ the troublesome notion of a miracle. Instead of a miracle, we can accommodate a counterfactual antecedent in a probabilistic world by going back to some underdetermined states of affairs among the causal antecedents of the events or states of affairs which must be eliminated if the antecedent is to be true and changing them accordingly. Since these states of affairs were underdetermined to begin with, they could have been otherwise without any violation of the probabilistic laws governing the
universe. But if we do this, Lewis's emphasis on maximising the spatiotemporal area of perfect match of particular fact would require that we always change a more recent rather than an earlier causal antecedent when we have a choice. This consequence is very much like the Requirement of Temporal Priority in [Pollock, 1976], a principle which is superseded by the more complex account to be discussed below. Such a principle is unacceptable. Suppose, for example, that Fred left his coat unattended in a certain room yesterday. Today he returned to the room and found the coat had not been disturbed. Suppose that both yesterday and earlier today a number of people have been in the room who had an opportunity to take the coat. Then a principle like Lewis's L2 or Pollock's RTP will dictate that if the coat had been taken, it would have been taken today rather than yesterday. Other things being equal, the later the coat is taken the greater the area of perfect match of particular fact. But this is counterintuitive. (In fact, experience teaches that unguarded objects tend to disappear earlier rather than later.) While Lewis's theory is intended to explain why many backtracking conditionals are false, a consequence of the theory is that some very unattractive backtracking conditionals turn out to be true. In fact, this particular problem plagues Lewis's analysis whether the world is determined or probabilistic. As it is presented, Lewis's account does rely upon miracles. As a result, Lewis in effect treats all counterfactual conditionals as also being counterlegals. This is the feature of his account which most writers have found objectionable. Pollock, Blue, and others place a much higher priority on preservation of all law than on preservation of particular fact, no matter how large the divergence of particular fact might be.
Given such priorities, and given a deterministic world of the sort Lewis supposes, any change in what happens will result in a world which is different at every moment in the past and every moment in the future. If we adopt such a position, how can we hope to explain the asymmetry between normal and backtracking counterfactual conditionals? Probably the most sophisticated attempt to deal with these problems within the framework of a non-miraculous analysis of counterfactuals is that developed by John Pollock [1976; 1981]. Pollock has refined his account between 1976 and 1981, but we will try to explain what we take to be his latest position on conditionals and temporal relations. Pollock says that a state of affairs P has historical antecedents if there is a set Γ of true simple states of affairs such that all times of members of Γ are earlier than the time of P and Γ nomically implies P. Γ nomically implies P just in case Γ together with the set of universal generalisations of material implications corresponding to Pollock's true strong subjunctive generalisations entails P (or entails a sentence which is true just in case P obtains). Pollock next defines a nomic pyramid, which is supposed to be a set of states of affairs which contains every historical antecedent of each of its members. Then P
undercuts another state of affairs Q if and only if for every set Γ of true states of affairs such that Γ is a nomic pyramid and Q ∈ Γ, Γ nomically implies that P does not obtain. In revising his set S of true simple states of affairs to accommodate a particular counterfactual antecedent P, Pollock tells us that we are to minimize the deletion of members of S which are not undercut by P. (We hope the reader will forgive the vacillation here since Pollock talks about entailment and other logical relations holding between states of affairs where most authors prefer to speak of sentences or propositions.) Perhaps this procedure will give us the correct results for backtracking and non-backtracking conditionals as Pollock suggests it will if the world is deterministic, but problems arise if we allow the possibility that there may be indeterministic states of affairs which lack historical antecedents. Consider a modified version of an example taken from [Pollock, 1981]. Suppose that protons sometimes emit photons when subjected to a strong magnetic field under a set of circumstances C, but suppose also that protons never emit photons under circumstances C if they are not also subjected to a strong magnetic field. As a background condition, let us assume that circumstances C obtain. Now let φ be true just in case a certain proton is subjected to a strong magnetic field at time t and let ψ be true just in case the same proton emits a photon shortly after t. Suppose that both φ and ψ are true. Assuming that no other states of affairs nomologically relevant to ψ obtain, we would intuitively say that ¬φ > ¬ψ is true, i.e. if the proton hadn't been subjected to the magnetic field at t, then it would not have emitted a photon shortly after t. But Pollock cannot say this. Since ψ has no historical antecedents in Pollock's sense, it cannot be undercut by ¬φ.
Because Pollock does not recognize historical antecedents of states of affairs when the nomological connection involved is merely probable, he must say that ¬φ > ψ is true. Pollock's earlier account, which included the Requirement of Temporal Priority, and Lewis's account with its principle L2, in either its original miraculous formulation or the probabilistic, non-miraculous version, both tend to make objectionable backtracking conditionals true when they are intended to explain why such conditionals should be false. Blue [1981] includes a feature in his analysis which produces the same result in much the same way. While Pollock's latest theory of counterfactuals avoids examples like that of the unattended coat, it nevertheless encounters new problems with backtracking conditionals in the context of a probabilistic universe. It makes certain backtracking counterfactuals false which our intuitions say are true while making others true which appear to be false. Yet these are the only positive proposals known to the authors at the time of this writing. Other work in the area such as [Nute, 1980b] and [Post, 1981] is essentially critical. An adequate explanation of the role the temporal order plays in the truth conditions for conditionals is still a very live issue.
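Lewis's standard resolution of vagueness amounts to a lexicographic comparison of candidate worlds. The following sketch is our own toy encoding of guidelines (L1)-(L3), with two hypothetical Hinckley-worlds chosen to dramatize the convergence-miracle objection discussed above; the scoring fields are invented for illustration:

```python
# A toy lexicographic reading of Lewis's guidelines (L1)-(L4): candidate
# antecedent-worlds are compared by (big miracles, extent of perfect match,
# small miracles); approximate similarity of fact (L4) is simply ignored.

def lewis_key(world):
    return (world["big_miracles"],    # (L1) avoid big, widespread violations
            -world["perfect_match"],  # (L2) maximize region of perfect match
            world["small_miracles"])  # (L3) avoid small, localized violations

# One world diverges by a tiny miracle and then evolves lawfully; the other
# reconverges to actuality via a second small miracle, buying a larger
# region of perfect match at the cost of one extra small violation of law.
diverge = {"big_miracles": 0, "perfect_match": 50, "small_miracles": 1}
reconverge = {"big_miracles": 0, "perfect_match": 90, "small_miracles": 2}

closest = min([diverge, reconverge], key=lewis_key)
print(closest is reconverge)
```

Because (L2) outranks (L3), the reconvergence world wins, which is exactly the objection in the text: unless convergence miracles are in fact big (raising their count under (L1)), nothing in the guidelines blocks a second small miracle that reconverges the worlds.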

1.9 Tense

There are relatively few papers among the large literature on conditionals which attempt an account of English sentences which involve both tense and conditional constructions. Two of the earliest are [Thomason and Gupta, 1981] and [Van Fraassen, 1981]. Both of these papers attempt the obvious, a fairly straightforward conjunction of tense and conditional operators within a single formal language. Basic items in the semantics for this language are a set of moments, an earlier-than relation on the set of moments which orders moments into tree-like structures, and an equivalence relation which holds between two moments when they are `co-present'. A branch on one of these trees plays the role of a possible world in the semantics. Such a branch is called a history, and sentences of the language are interpreted as having truth values at a moment-history pair, i.e. at a moment in a history. Note that a moment is not a clock time but rather a time-slice belonging to each history that passes through it. The tense operators in the language include two past-tense operators P and H, two future-tense operators F and G, and a `settledness' or historical necessity operator S. Pφ is true at moment i in history h just in case φ is true at some moment j in h where j is earlier than i. Hφ is true at a moment i in h if and only if φ is true at j in h for every moment j in h which is earlier than i. Fφ is true at i in h if φ is true at a moment later than i in h, and Gφ is true at i in h if φ is true at every moment later than i in h. Sφ is true at i in h if and only if φ is true at i in every history h′ which contains i. For a further discussion of semantics for such tense operators, see Burgess [1984] (Chapter 2.2 of this Handbook). In both of these papers, that part of the semantics which is used to interpret conditionals is patterned after the semantics of Stalnaker.
A conditional φ > ψ is true at a moment i in a history h just in case ψ is true at the pair ⟨i′, h′⟩ at which φ is true and which is closest or most similar to the pair ⟨i, h⟩. Much of the discussion in the two papers is devoted to the effort to assure that certain theses which the authors favor are valid in their model theories. The measures needed to ensure some of the desired theses within the context of a Stalnakerian semantics are quite complicated, but the set of theses that represents the most important contribution of the account of [Thomason and Gupta, 1981], namely the doctrine of Past Predominance, turns out to be quite tractable model theoretically. According to Past Predominance, similarities and differences with respect to the present and past have lexical priority over similarities and differences with respect to the future in any evaluation of how close ⟨i, h⟩ is to ⟨i′, h′⟩, where i and i′ are co-present moments. This doctrine affects the interaction between the settledness operator S and the conditional. For example, Past Predominance implies the validity of the following thesis: (¬S¬φ ∧ Sψ) → (φ > ψ).
This thesis is clearly operative in the reasoning that leads to the two-box solution to Newcomb's Problem: `If it's not settled that I won't take both boxes but it is settled that there is a million dollars in the opaque box, then if I take both boxes there will (still) be a million dollars in the opaque box' (see [Gibbard and Harper, 1981]). Cross [1990b] shows that since, concerning the selection of a closest moment-history pair, Past Predominance places no constraints on what is true at past or future moments, Past Predominance can be formalized and axiomatized in terms of settledness and the conditional using ordinary possible worlds models in which relations of temporal priority between moments are not represented. The issue of how the conditional interacts with tense operators, such as P, H, F and G, is more problematic. The accounts presented by Thomason and Gupta and by Van Fraassen adopt the hypothesis that English sentences involving both tense and conditional constructions can be adequately represented in a formal language containing a conditional operator and the tense operators mentioned above. Nute [1983] argues that this is a mistake. Consider an example discussed in [Thomason and Gupta, 1981]:

37. If Max missed the train he would have taken the bus.

According to Thomason and Gupta, this and other English sentences of similar grammatical form are of the logical form P(φ > Fψ). Nute argues that this is not true. To see why, consider a second example. Suppose we have a computer that upon request will give us a `random' integer between 1 and 12. Suppose further that what the computer actually does is increment a certain location in memory by a certain amount every time it performs other operations of certain sorts. When asked to return a random number, it consults this memory location and uses the value stored there in its computation. Thus the `random' number one gets depends upon when one requests it. We just now left the keyboard to roll a pair of dice.
If anyone cares, we rolled a 9. Consider the following conditional:

38. If we had used the computer instead of dice, we would have got a 5 instead of a 9.

It is certainly true that there is a time in the past such that if we had used the computer at that time we would have got a 5, so a sentence corresponding to (38) of the form P(φ > Fψ) is certainly true. Yet (38) itself is not true. Depending upon when we used the computer and what operations the computer had performed before we used it, we could have obtained any integer from 1 to 12. Perhaps we are simply using the wrong combination of operators. Instead of P(φ > Fψ), perhaps sentences like (37) and (38) are of the form H(φ > Fψ). A problem with this suggestion is that such conditionals do
not normally concern every time prior to the time at which they are uttered but only certain times or periods of time which are determined by context. Suppose in a football game Walker carries the ball into the end zone for a touchdown. During the course of his run, he came very close to the sideline. Consider the conditional

39. If Walker had stepped on the sideline, he would not have scored.

Can this sentence be of the form H(φ > Fψ)? Surely not, for Walker could have stepped on the sideline many times in the past, and probably did, yet he did score on this particular play. Perhaps we can patch things up further by introducing a new tense operator H′ which has truth conditions similar to H except that it only concerns times going a certain distance into the past, the distance to be determined by context. Once again, Nute argues, this will not work. Consider the conditional

40. If Fred had received an invitation, he would have gone to the party.

This sentence might very well be accepted even though Fred would not have gone to the party had he received an invitation five minutes before the party began. The period of time involved does not begin with the present moment and extend back to some past moment determined by context. Indeed if this were the case, for (40) to be true it would even have to be true that Fred would have gone to the party if he had received an invitation after the party ended. It would seem, then, that if a context-dependent operator is to be the solution to the problem Nute describes, then the contextually determined period of time involved in the truth conditions for English sentences of the sort we have been investigating must be some subset of past times, but one that need not be a continuous interval extending back from the present moment. This is the solution suggested by Thomason [1985]. (The following example may be linguistic evidence for this sort of context-dependence in tensed constructions not involving conditionals: a dean, worried about faculty absenteeism, asks a department chair, `Was Professor X always in his classroom last term?' The correct answer may be `Yes' even though Professor X was not in his classroom at times last term when his classes were not scheduled to meet.) Nute [1991] argues for a different approach: the introduction of a new tensed conditional operator, i.e.
an operator which involves in its truth conditions both differences in time and differences in world. Using a class selection function semantics for this task, we could let our selection function f pick out for a sentence φ, a moment or time i, and a history or world h a set f(φ, i, h) of pairs ⟨i′, h′⟩ of times and histories at which φ is true and which are otherwise similar enough to ⟨i, h⟩ for our consideration. We would introduce into our formal language a new conditional operator, say iPFi, and sentences of the form φ iPFi ψ would be true in an appropriate model at ⟨i, h⟩ if and only if for every pair ⟨i′, h′⟩ ∈
f(φ, i, h) such that there is a time j in h′ which is copresent with i and later than i′, ψ is true at ⟨j, h′⟩. It appears that three more operators of this sort will be needed, together with appropriate truth conditions. These operators may be represented as iPPi, iFFi, and iFPi. These operators would be used to represent sentences like

41. If Fred had gone to the party, he would have had to have received an invitation.

42. If Fred were to receive an invitation, he would go to the party.

43. If Fred were to go to the party, he would have to have received an invitation.

Notice that (41) and (43) are types of backtracking conditionals. Since such conditionals are rarely true, we may use the operators iPPi and iFPi infrequently. This may also account for the cumbersomeness of the English locution which we must use to clearly express what is intended by (41) and (43). A number of other interesting problems concerning tense and conditionals occur to us. One of these is the way in which the consequent may affect the times included in the pairs picked by a class selection function. Consider the sentences

44. If he had broken his leg, he would have missed the game.

45. If he had broken his leg, the mend would have shown on his X-ray.

The times at which the leg might have been broken vary in the truth conditions for these two conditionals. This suggests that a semantics like Gabbay's which makes both antecedent and consequent arguments for the class selection function might after all be the preferred semantics. Another possibility is that despite its awkwardness we must introduce some sort of context-dependent tense operator like the operator H′ discussed earlier. When we represent (44) as H′(φ > Fψ), H′ has the whole of the conditional within its scope and can consider the consequent in determining which times are appropriate.
A third possibility is that the consequent does not figure as an argument for the selection function but it does figure as part of the context which determines the selection function which is, in fact, used during a particular piece of discourse. This sort of approach utilizes the concept of conversational score discussed in Section 1.7 of this paper. One piece of evidence in favor of this approach is the fact that it would be unusual to assert both (44) and (45) in the same conversation. Whichever of these two sentences was asserted first, the antecedent of the other would likely be modified in some appropriate way to indicate that a change in the times to be considered was required. Besides these interesting puzzles, we need
also to explain the fact that we maintain the distinction between indicative and subjunctive conditionals involving present and past tense much more carefully than we do where the future tense is concerned. These topics are considered in more detail in [Nute, 1982] and [Nute, 1991].
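The truth clauses for P, H, F, G, and S given at the start of this section are straightforward to realize over a finite tree of moments. The sketch below is our own toy illustration; the frame, valuation, and prefix string encoding of formulas are invented for the example, not taken from Thomason and Gupta:

```python
# Minimal model checker for the tense operators P, H, F, G, S over a finite
# tree of moments. Histories are maximal root-to-leaf branches; formulas are
# evaluated at a moment-history pair.
CHILDREN = {"m0": ["m1", "m2"], "m1": [], "m2": []}  # m0 branches into m1, m2
ROOT = "m0"
VAL = {"rain": {"m1"}}                               # atom 'rain' true at m1

def histories():
    """All maximal branches (root-to-leaf paths) through the tree."""
    def extend(path):
        kids = CHILDREN[path[-1]]
        if not kids:
            return [tuple(path)]
        return [h for k in kids for h in extend(path + [k])]
    return extend([ROOT])

def true_at(phi, i, h):
    op, arg = (phi[0], phi[1:]) if phi[0] in "PHFGS" else (None, None)
    t = h.index(i)
    if op == "P":   # arg was true at some earlier moment of h
        return any(true_at(arg, h[j], h) for j in range(t))
    if op == "H":   # arg was true at every earlier moment of h
        return all(true_at(arg, h[j], h) for j in range(t))
    if op == "F":   # arg will be true at some later moment of h
        return any(true_at(arg, h[j], h) for j in range(t + 1, len(h)))
    if op == "G":   # arg will be true at every later moment of h
        return all(true_at(arg, h[j], h) for j in range(t + 1, len(h)))
    if op == "S":   # settled: arg true at i on every history through i
        return all(true_at(arg, i, g) for g in histories() if i in g)
    return i in VAL.get(phi, set())                  # atomic case

h1 = next(h for h in histories() if "m1" in h)       # the branch through m1
print(true_at("Frain", "m0", h1))    # future rain on this branch
print(true_at("SFrain", "m0", h1))   # but not settled at m0 that it will rain
```

The last line illustrates the branching-time picture: Frain holds at m0 relative to the history through m1, but SFrain fails there because the other history passes through the rainless moment m2.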

1.10 Other Conditionals

Besides the subjunctive conditionals we have been considering, we also want an analysis for the might conditionals, the even-if conditionals, and the indicative conditionals mentioned in Section 1.1. It is time we took another look at these important classes of conditionals. Most authors who discuss the might and the even-if conditional constructions propose that their logical structure can be defined by reference to subjunctive conditionals. Lewis [1973b] and Pollock [1976] suggest that English sentences having the form `If φ were the case, then ψ might be the case' should be symbolized as ¬(φ > ¬ψ). Stalnaker [1981a] presents strong linguistic evidence against this suggestion, but the suggestion has achieved wide acceptance nonetheless. Pollock [1976] also offers a symbolisation of even-if conditionals. English sentences of the form `ψ even if φ', he suggests, should be symbolized as ψ ∧ (φ > ψ). The adequacy of this suggestion may depend upon our choice of conditional logic and particularly upon whether we accept the thesis CS. If we accept both CS and Pollock's proposal, then `ψ even if φ' will be true whenever both φ and ψ are true. An alternative analysis of even-if conditionals is developed in [Gärdenfors, 1979]. Gärdenfors's objection to Pollock's proposal seems to be that a person who knows that both φ and ψ are true might still reject an assertion of the sentence `ψ even if φ'. Normally, says Gärdenfors, one does not assert `ψ even if φ' when one knows that φ is true; an assertion of `ψ even if φ' presupposes that ψ is true and φ is false. Even when the presupposition that φ is false turns out to be incorrect, Gärdenfors argues that there is a presumption that the falsity of φ would not interfere with the truth of ψ. Consequently, Gärdenfors suggests that `ψ even if φ' has the same truth conditions as (φ > ψ) ∧ (¬φ > ψ). Another suggestion comes from Jonathan Bennett [1982].
Bennett gives a comprehensive account of even-if conditionals, fitting them into the context of uses of `even' that don't involve `if', and uses of `if' that don't involve `even'. That is, Bennett rejects the treatment of `even if' as an idiom with no internal structure.

The first of three proposals we will consider concerning the analysis of indicative conditionals, which can be found in [Lewis, 1973b; Jackson, 1987] and elsewhere, is that indicative conditionals have the same truth conditions as do material conditionals, paradoxes of implication and problems with Transitivity, Contraposition, and Strengthening Antecedents notwithstanding. It is difficult and perhaps impossible to find really persuasive

CONDITIONAL LOGIC


counterexamples to Transitivity and Strengthening Antecedents using only indicative conditionals, but apparent counterexamples to Contraposition are easy to construct. Consider, for example, the following two sentences:

46. If it is after 3 o'clock, it is not much after 3 o'clock.

47. If it is much after 3 o'clock, it is not after 3 o'clock.

It is easy to imagine situations in which (46) would be true or appropriate, but are there any situations in which (47) would be true or appropriate? Another problem with this analysis concerns denials of indicative conditionals. Stalnaker [1975] offers an interesting example:

48. If the butler didn't do it, then Fred did it.

Being quite sure that Fred didn't do it, we would deny this conditional. At the same time, we may believe that the butler did it, and therefore when we hear someone say what we would express by

49. Either the butler did it or Fred did it.

we might respond, "Yes, one of them did it, but it wasn't Fred". Yet (48) and (49) are equivalent if (48) has the same truth conditions as the corresponding material conditional. One possible response to these criticisms is that we must distinguish between the truth conditions for an indicative conditional and the assertion conditions for that conditional. It may be that a conditional is true even though certain conventions make it inappropriate to assert the conditional. This might lead us to say that (47) is true even though it would be inappropriate to assert it. We might also attempt to explain away the paradoxes of implication in this way, relying on the assumed convention that it is misleading and therefore inappropriate to assert a weaker sentence φ when we are in a position to assert a stronger sentence which entails φ. For example, it is inappropriate to assert φ ∨ ψ when one knows that φ is true. Just so, the argument goes, it is inappropriate to assert φ ⇒ ψ when one is in a position to assert either ¬φ or ψ.
And in general we may reject other putative counterexamples to the proposal that indicative conditionals have the same truth conditions as material conditionals by saying that in these cases not all the assertion conditions are met for some conditional, rather than admit that the truth conditions for the conditional are not met. This line of defence is suggested, for example, by [Grice, 1967; Lewis, 1973b; Lewis, 1976] and by [Clark, 1971].

A second proposal is that indicative conditionals are Stalnaker conditionals, i.e. that Stalnaker's world selection function semantics is the correct semantics for indicative conditionals and Stalnaker's conditional logic C2 is the proper logic for these conditionals. This suggestion is found in [Stalnaker, 1975] and in [Davis, 1979]. While both Stalnaker and Davis propose


DONALD NUTE AND CHARLES B. CROSS

the same model theory for indicative and subjunctive conditionals, both also suggest that the properties of the world selection function appropriate to indicative conditionals are different from those of the world selection function appropriate to subjunctive conditionals. The difference for Stalnaker has to do with the presuppositions involved in the utterance of the conditional. During the course of a conversation, the participants come to share certain presuppositions. In evaluating an indicative conditional φ ⇒ ψ, Stalnaker says that we look for the closest φ-world at which all of these presuppositions are true. In the case of a subjunctive conditional, on the other hand, we may look outside this `context set' for the closest φ-world. Of course the overall closest φ-world may not be a world at which all of the presuppositions are true, since making φ true could tend to make one of the presuppositions false. This means that different worlds may be chosen by the selection function used to evaluate indicative conditionals and the selection function used to evaluate subjunctive conditionals. While accepting Stalnaker's model theory for both indicative and subjunctive conditionals, Davis offers a different distinction between the world selection function appropriate to indicative conditionals and that appropriate to subjunctive conditionals. In fact, Davis claims that Stalnaker's analysis of subjunctive conditionals is actually the correct analysis of indicative conditionals. To evaluate an indicative conditional φ ⇒ ψ, Davis says we look at the φ-world which bears the greatest overall similarity to the actual world to see if it is a ψ-world. For a subjunctive conditional φ > ψ, we look at the φ-world which most resembles the actual world up until just before what Davis calls the time of reference of φ. Apparently, the time of reference of φ is the time at which the events reported by φ occur, or the states of affairs described by φ obtain, etc.
A third proposal, due to Adams [1966; 1975b; 1975a; 1981], holds that indicative conditionals lack truth conditions altogether. They do, however, have probabilities, and these probabilities are just the corresponding standard conditional probabilities. Thus pr(φ ⇒ ψ) = pr(φ ∧ ψ)/pr(φ), at least in those cases where pr(φ) is non-zero. We must remember that Adams does not identify the probability of a conditional with the probability that that conditional is true, since he rejects the very notion of truth values for conditionals. Adams proposes that an argument involving indicative conditionals is valid just in case its structure makes it possible to ensure that the probability of the conclusion exceeds any arbitrarily chosen value less than 1 by ensuring that the probabilities of each of the premises exceed some appropriate value less than 1. In other words, we can push the probability of the conclusion arbitrarily high by pushing the probabilities of the premises suitably high. When an argument is valid in this sense, Adams says that the conclusion of the argument is `p-entailed' by its premises and the argument itself is `p-sound'.

Since Adams rejects truth values for conditionals, conditionals can certainly not occur as arguments for truth functions. Given his identification of the probability of a conditional with the corresponding standard conditional probability, this further entails that conditionals may not occur within the scope of the conditional operator. Adams attempts to justify this consequence of his theory by suggesting that we don't really understand sentences which involve the embedding of one conditional within another in any case. This claim, though, is far from obvious. Such sentences as

50. If this glass will break if it is dropped on the carpet, then it will break if it is dropped on the bare wooden floor.

seem absolutely ordinary and at least as comprehensible as most other indicative conditionals. The inability to handle such conditionals must count as a disadvantage of Adams's theory. In [Adams, 1977] it is shown that p-soundness is equivalent to soundness in Lewis's system-of-spheres semantics. This implies that the proper logic for indicative conditionals is the `first-degree fragment' of Lewis's VC. By the first-degree fragment of VC we mean the set of all those sentences in VC within which no conditional operator occurs within the scope of any other operator. Since the logic Adams proposes for indicative conditionals can be supported by a semantics which also allows us to interpret sentences involving iterated conditional operators, we will need very strong reasons to accept Adams's account with its restrictions rather than some possible worlds account like Lewis's. In fact, it may be possible to reconcile Lewis's view that the truth conditions for indicative conditionals are the same as those for the corresponding material conditionals with Adams's work on the probabilities of conditionals and p-entailment. Lewis [1973b] suggests that the truth conditions for φ ⇒ ψ are given by φ → ψ while the assertion conditions for φ ⇒ ψ are given by the corresponding standard conditional probability. Jackson [1987] also entertains such a possibility.
If we accept this, then we might accept Adams's theory as a basis for an adequate account of the logic of assertion conditions for indicative conditionals. Since we would be assuming that conditionals have truth values as well as probabilities, we could also overcome the restrictions of Adams's theory and assign probabilities to conditionals which have other conditionals embedded in them. One problem with this approach, though, is that it would seem to require that we identify the probability of a conditional with the probability that the conditional is true. When we do this and also take the probability of a conditional to be the corresponding standard conditional probability, serious problems arise, as is shown in [Lewis, 1976] and in [Stalnaker, 1976]. These difficulties will be discussed briefly in Section 3.
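Adams's thesis can be made concrete with a small computation. The sketch below is ours, not from the text: the two atoms (`dropped', `breaks') and the probability numbers are invented for illustration. The final line contrasts the conditional probability with the probability of the material conditional, which is why the two readings come apart.

```python
# Toy illustration (ours, with invented numbers) of Adams's thesis that the
# probability of an indicative conditional phi => psi is the standard
# conditional probability pr(phi & psi) / pr(phi), not the probability of
# the material conditional.

# Worlds assign truth values to two atoms: d ("dropped"), b ("breaks").
# pr gives each world an invented probability.
pr = {
    ("d", "b"): 0.18,          # dropped and breaks
    ("d", "not-b"): 0.02,      # dropped, does not break
    ("not-d", "b"): 0.00,      # not dropped, breaks anyway
    ("not-d", "not-b"): 0.80,  # not dropped, does not break
}

def prob(sentence):
    """Probability of a sentence, given as a predicate on worlds."""
    return sum(p for w, p in pr.items() if sentence(w))

def prob_conditional(ant, cons):
    """Adams: pr(ant => cons) = pr(ant & cons) / pr(ant), when pr(ant) > 0."""
    pa = prob(ant)
    if pa == 0:
        raise ValueError("antecedent has probability zero; Adams's thesis is silent here")
    return prob(lambda w: ant(w) and cons(w)) / pa

dropped = lambda w: w[0] == "d"
breaks = lambda w: w[1] == "b"

# The indicative "if it is dropped, it breaks": 0.18 / 0.20 = 0.9.
print(prob_conditional(dropped, breaks))
# The material conditional "not dropped, or breaks": 0.98.
print(prob(lambda w: not dropped(w) or breaks(w)))
```

With these numbers the conditional probability is 0.9 while the material conditional's probability is 0.98; on Adams's view only the former measures the assertability of the indicative conditional.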


2 EPISTEMIC CONDITIONALS

The idea that there is an important connection between conditionals and belief change seems to have been inspired by this suggestion of Frank Ramsey's:

If two people are arguing "If p will q?" and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q.12

The issue of how, precisely, to formalize Ramsey's suggestion and extend it from the case where p is in doubt to the general case has received a great deal of attention, too much to permit an exhaustive survey here. We will focus here on the Gardenfors triviality result for the Ramsey test (see [Gardenfors, 1986]) and related results, and the implications of these results for the project of formalizing the Ramsey test for conditionals. Despite the narrowness of this topic, our discussion will not mention all worthy contributions to the subject. Sections 2.1 and 2.2 provide a general framework for formalizing belief change and the Ramsey test. Section 2.3 makes connections between this framework and the literature on belief change and the Ramsey test. Section 2.4 presents the Ramsey test itself, and Section 2.5 presents versions of several triviality results found in the literature, including a version of Gardenfors' 1986 result that subsumes several of the definitions of triviality found in the literature. Section 2.6 examines how triviality can be avoided, and Section 2.7 examines systems of conditional logic associated with the Ramsey test. We will provide proofs for some of the results stated below and in other cases refer the reader to the literature.
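Ramsey's suggestion can be sketched computationally. The toy model below is our own illustration, not part of the formal framework developed in this section: belief states are sets of possible worlds, and the crude revision rule (restrict to the antecedent-worlds within the state if there are any, otherwise fall back to all antecedent-worlds) merely stands in for a genuine revision operator.

```python
# Illustrative sketch (ours) of the Ramsey test over a toy possible-worlds
# model: to evaluate "if phi then psi", hypothetically revise the belief
# state by phi and check whether psi holds throughout the revised state.

from itertools import product

ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=2)]

def revise(K, phi):
    """Hypothetically add phi 'to the stock of knowledge' (a crude stand-in
    for a real revision operator)."""
    inside = [w for w in K if phi(w)]
    return inside if inside else [w for w in WORLDS if phi(w)]

def ramsey_accepts(K, phi, psi):
    """phi > psi is accepted in K iff psi holds in every world of K revised by phi."""
    return all(psi(w) for w in revise(K, phi))

# Belief state: the agent believes q but is in doubt about p.
K = [w for w in WORLDS if w["q"]]

print(ramsey_accepts(K, lambda w: w["p"], lambda w: w["q"]))      # p > q accepted
print(ramsey_accepts(K, lambda w: not w["q"], lambda w: w["p"]))  # not-q > p not accepted
```

The point of the sketch is only that acceptance of a conditional is computed from hypothetical revision, exactly as in Ramsey's remark; everything turns on what the real revision operator is, which is the topic of the sections that follow.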

2.1 Languages

By a Boolean language we will mean any logical language containing at least the propositional constant ⊥, the binary operator ∧, and the unary operator ¬. We will assume that ⊤ is defined as ¬⊥ and that any other needed Boolean operators are defined. We do not assume anything at this stage about how the operators and propositional constant of a Boolean language are interpreted, but it will turn out that in most cases ∧, ¬, and ⊥ will receive classical truth-functional interpretations. We will use the symbol ⊢ as a variable ranging over logical inference relations. Effective immediately we will cease using quotes when mentioning formulas and logical symbols.

We define a language (whether Boolean or nonBoolean) to be of type L0 iff it contains the propositional constants ⊤ and ⊥ but does not contain the binary conditional operator >. We next define two language-types for doing conditional logic. A language, whether Boolean or nonBoolean, is of type L1 iff it contains ⊤ and ⊥ and the only >-conditionals allowed as formulas are first-degree or "flat" conditionals, i.e. conditionals φ > ψ where φ, ψ are conditional-free. We define a language, whether Boolean or nonBoolean, to be of type L2 (a "full" conditional language) iff it contains ⊤ and ⊥ and allows arbitrary nesting of conditionals in formulas.

12 [Ramsey, 1990], p. 155.
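The three language types can be illustrated with a small syntax checker. The representation below is our own (the class names Atom, Neg, And, and Cond are invented): a formula is admissible in a type-L1 language just in case every conditional in it is flat.

```python
# Illustrative sketch (ours): representing >-conditional formulas and
# classifying them by the L0/L1/L2 language types described above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Neg:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Cond:          # phi > psi
    ant: object
    cons: object

def conditional_free(f):
    """True iff f contains no > operator (admissible in a type-L0 language)."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Neg):
        return conditional_free(f.sub)
    if isinstance(f, And):
        return conditional_free(f.left) and conditional_free(f.right)
    return False     # Cond

def flat(f):
    """True iff every conditional in f has conditional-free components,
    i.e. f is admissible in a type-L1 language."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Neg):
        return flat(f.sub)
    if isinstance(f, And):
        return flat(f.left) and flat(f.right)
    return conditional_free(f.ant) and conditional_free(f.cons)

p, q, r = Atom("p"), Atom("q"), Atom("r")
print(flat(Cond(p, q)))            # first-degree: allowed in L1
print(flat(Cond(Cond(p, q), r)))   # nested conditional: L2 only
```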

2.2 A general framework for belief change

We will describe belief change using a framework that is related to the AGM (Alchourron, Gardenfors, Makinson) framework for belief revision.13 Our framework extends that of AGM and is adapted (with further enrichment) from the notion of an enriched belief revision model introduced in [Cross, 1990a]. For a given language L containing ⊤, ⊥ as formulas, let wffL be the set of all formulas of L and let KL be P(wffL) − {∅} (where P(wffL) is the powerset of wffL). For a given inference relation ⊢ and set Γ of formulas of L, define Cn⊢(Γ) (the ⊢-consequence set for Γ) to be {φ : Γ ⊢ φ}, and let TL,⊢ = {Γ : Γ ⊆ wffL and Cn⊢(Γ) = Γ} be the set of all theories in L with respect to ⊢. A set Γ is ⊢-consistent iff Γ ⊬ ⊥. We next define the notion of a belief change model:

(DefBCM) A belief change model on a language L containing ⊤, ⊥ as formulas is an ordered septuple

⟨K, I, ⊢, K⊥, −, ∗, s⟩

whose components are as follows:

1. K ⊆ KL and ⊢ is a subset of P(wffL) × wffL;
2. I and K⊥ are sets of formulas meeting the following requirements:
(a) K⊥ ∈ K;
(b) ⊤, ⊥ ∈ I;
(c) K⊥ is the set of all formulas of L or a fragment of L;
(d) I is the set of all formulas of L or a fragment of L, and I ⊆ K⊥;
(e) K ⊆ K⊥ for all K ∈ K.
3. − and ∗ are binary functions mapping each K ∈ K and each φ ∈ I to sets K − φ and K ∗ φ, respectively, where K − φ ⊆ K⊥ and K ∗ φ ⊆ K⊥;
4. s is a function taking values in P(wffL), where K ⊆ dom(s) ⊆ P(wffL).

A classical belief change model is a belief change model defined on a Boolean language whose logical consequence relation ⊢ includes all classical truth-functional entailments and respects the deduction theorem for the material conditional. A deductively closed belief change model is a belief change model for which K = Cn⊢(K) ∩ K⊥ and K − φ = Cn⊢(K − φ) ∩ K⊥ and K ∗ φ = Cn⊢(K ∗ φ) ∩ K⊥ for all K ∈ K and all φ ∈ I. Note that in a deductively closed belief change model on language L, belief sets are theories in the fragment of L represented by K⊥ and not necessarily theories in L itself.

Informally, the items in a belief change model can be described as follows. K represents the set of all possible belief states recognized by the model; often K will be a subset of TL,⊢ but not always. I represents the set of all formulas eligible to serve as inputs for contraction and revision. ⊢ is an inference relation defined on L and will in most cases be an extension of truth-functional propositional logic. K⊥ contains all of the formulas of that fragment of L from which the belief sets in K are constructed and represents the absurd belief state; thus every belief set in K is a subset of K⊥. For each K ∈ K and each φ ∈ I, K − φ represents the result of contracting K to remove φ (if possible), whereas K ∗ φ represents the result of revising K to include φ as a new belief. Revision is normally assumed to involve not only adding the given formula to the given belief set but also resolving any inconsistencies thereby created. For the sake of generality, we have not stipulated that K − φ, K ∗ φ ∈ K, though this will usually be the case. Finally, s is the support function for the model, which determines for each belief state K (and perhaps for other sets as well) the set of formulas of L supported by K. For belief sets K in belief change models for which the Ramsey test holds, s(K) will contain Ramsey test conditionals even if K does not.

13 See [Alchourron et al., 1985] and [Gardenfors, 1988].
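A concrete semantic instance of these operations may help. In the sketch below (ours, with an invented plausibility ranking), belief states are sets of worlds ordered by a system-of-spheres-style ranking: expansion is intersection, contraction adds the most plausible counter-worlds, and revision selects the most plausible antecedent-worlds. The final loop checks the Levi identity K ∗ φ = (K − ¬φ) + φ, a standard AGM connection between contraction and revision.

```python
# A small semantic sketch (ours, not from the text) of expansion,
# contraction, and revision in a system-of-spheres-style model. Belief
# states and propositions are sets of worlds, so "adding a formula" is
# intersecting with its truth set; the ranking RANK plays the role of
# Grove's spheres and is invented for illustration.

from itertools import combinations

WORLDS = range(4)
RANK = {0: 0, 1: 0, 2: 1, 3: 2}          # invented plausibility ranking
K = {w for w in WORLDS if RANK[w] == 0}  # beliefs = most plausible worlds

def expand(K, prop):
    return K & prop                       # K + phi

def contract(K, prop):
    """K - phi: stop believing phi by adding the best worlds where it fails."""
    counter = set(WORLDS) - prop          # worlds where phi is false
    if not counter:                       # phi is a tautology; nothing to do
        return set(K)
    best = min(RANK[w] for w in counter)
    return K | {w for w in counter if RANK[w] == best}

def revise(K, prop):
    """K * phi: the most plausible phi-worlds."""
    if not prop:
        return set()                      # revising by a contradiction
    best = min(RANK[w] for w in prop)
    return {w for w in prop if RANK[w] == best}

# Check the Levi identity for every consistent proposition over this world set.
props = [set(c) for r in range(5) for c in combinations(WORLDS, r)]
for p in props:
    if p:
        assert revise(K, p) == expand(contract(K, set(WORLDS) - p), p)
print("Levi identity holds in this model")
```

The check succeeds for every proposition because the most plausible counter-worlds added by contraction are exactly the worlds selected by revision once the antecedent is expanded back in; this is the semantic content of the Levi identity.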

2.3 Comparisons

With an eye toward our presentation of the basic triviality result for the Ramsey test, we will briefly review differing positions about the elements making up a belief change model. The list of authors we mention here is not exhaustive but constitutes a representative sample of the diversity of positions taken with respect to belief change models and their elements in discussions of the Ramsey test.

Belief states: the language of the model and the set K

Segerberg places no restrictions on the language in his discussion of the triviality result in [Segerberg, 1989]. Gardenfors ([1988] and elsewhere),


Rott [1989], and Cross [1990a] all adopt a type-L2 language for their Ramsey test belief change models, whereas Makinson [1990], Morreau [1992], Hansson ([1992], section III), Arlo-Costa [1995], and Levi ([1996] and elsewhere) restrict themselves to a type-L1 language. Hansson, Arlo-Costa, and Levi allow only the type-L0 formulas of a type-L1 language to belong to the sets that individuate belief states. Makinson and Morreau, like Gardenfors, Rott, Segerberg, and Cross, do not restrict the membership of belief-state-individuating sets to type-L0 formulas. Most authors on the Ramsey test follow Gardenfors in representing the set of all possible belief states as a set of theories. One exception is Hansson [1992], who takes each possible belief state to be represented by a pair consisting of a set of formulas and a revision operator that defines the dynamic properties of the belief state. For Hansson, the set of formulas in question is a belief base, a set of conditional-free formulas that need not be deductively closed. The belief base of a belief state is a (not necessarily finite) axiom set for the belief state, the idea being to allow different belief states to be associated with the same deductively closed theory. A belief state in Hansson's model can still be individuated by means of its belief base, however, because the revision operator of a belief state is a function of that belief state's belief base. Morreau [1992] also gives a two-component analysis of belief states, but in Morreau's analysis the two components are a set of "worlds" (truth-value assignments to atomic formulas) and a selection function (of the Stalnaker-Lewis variety) that determines which conditionals are believed in the belief state.
A third exception is Rott [1991], who identifies belief states with epistemic entrenchment relations and notes that a nonabsurd belief set can be recovered from an epistemic entrenchment relation that supports at least one strict entrenchment: the belief set will be the set of all formulas strictly more entrenched than ⊥. Among those authors who take belief states to be deductively closed theories, most follow Gardenfors in assuming that not every theory corresponds to a possible belief state. On this issue Segerberg and Makinson are exceptions. In their respective extensions of Gardenfors' basic triviality result, Segerberg and Makinson assume that revision is defined on all theories in a given language rather than on a nonempty subset of the set of all theories for that language.14 We note above that Hansson, Arlo-Costa, and Levi allow only conditional-free formulas into the sets that individuate belief states.15 Why exclude conditionals from these sets? In Levi's view, the formulas eligible for membership in the theories that individuate belief states are precisely those statements about which agents can be concerned to avoid error. Levi argues against including conditionals in the theories that individuate belief

14 See [Segerberg, 1989] and [Makinson, 1990].
15 See, for example, [Hansson, 1992], [Arlo-Costa, 1995], [Levi, 1996], and [Arlo-Costa and Levi, 1996].


states because in his view conditionals do not have truth conditions or truth values and so are not sentences about which agents can be concerned to avoid error.16 On Levi's view, a conditional φ > ψ in a type-L1 language is acceptable relative to a belief set K in a type-L0 language iff ¬ψ is not epistemically possible relative to the result of revising K to include φ, and the negated conditional ¬(φ > ψ) is acceptable relative to K iff ¬ψ is epistemically possible relative to the revision of K to include φ. The part of this view governing negated conditionals, the negative Ramsey test, will be discussed later.17 An important consequence of Levi's view is the thesis that conditionals are "parasitic" on conditional-free statements in the following sense: the set of conditionals supported by a given belief state is determined by the conditional-free formulas accepted in that belief state or a subset thereof. Hansson [1992] shows, however, that it is possible to motivate a parasitic account of conditionals without taking a position on whether conditionals have truth conditions or truth values. Gardenfors [1988] criticizes Levi's view of conditionals on the grounds that it fails to account for iterated conditionals, a species of conditional about which Levi has expressed skepticism, but Levi [1996] and Hansson [1992] show that iterated conditionals can be accounted for (if necessary) even if conditionals do not have truth conditions or truth values. Levi [1996] points out, however, that axiom schema (MP) fails to be valid in the sense he favors if iterated conditionals are allowed.18 In this connection Levi exploits examples like the following, which was described by McGee [1985] as a counterexample to modus ponens: opinion polls taken just before the 1980 election showed the Republican Ronald Reagan decisively ahead of the Democrat Jimmy Carter, with the other Republican in the race, John Anderson, a distant third.
Those apprised of the poll results believed, with good reason:

If a Republican wins the election, then if it's not Reagan who wins it will be Anderson.

A Republican will win the election.

Yet they will not have good reason to believe

If it's not Reagan who wins, it will be Anderson.19

16 See [Levi, 1988] and [Levi, 1996], for example. As Arlo-Costa and Levi [1996] point out, Ramsey agreed that conditionals lack truth conditions and truth values: this is clear from the context of the quote from Ramsey with which we began Section 2.
17 For the most recent account of Levi's views on this topic, see [Levi, 1996].
18 See [Levi, 1996], pp. 105-112.
19 [McGee, 1985], p. 462.
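McGee's example can be given an Adams-style probabilistic gloss. The poll numbers below are invented for illustration: both premises come out highly probable (the first maximally so), while the conclusion's conditional probability stays low.

```python
# An Adams-style numerical gloss (ours, with invented poll numbers) on
# McGee's counterexample: both premises get high probability while the
# conclusion's probability stays low, so probabilistic modus ponens fails
# for the conditional-probability reading of the indicative conditional.

pr = {
    "Reagan": 0.60,    # Republican, decisively ahead
    "Carter": 0.35,    # Democrat
    "Anderson": 0.05,  # Republican, distant third
}

def given(outcomes, condition):
    """Conditional probability pr(outcomes | condition)."""
    num = sum(p for o, p in pr.items() if o in outcomes and o in condition)
    den = sum(p for o, p in pr.items() if o in condition)
    return num / den

republicans = {"Reagan", "Anderson"}

# Premise 1: "If a Republican wins, then if it's not Reagan it will be
# Anderson."  pr(Anderson | Republican and not Reagan) = 1.0.
premise1 = given({"Anderson"}, republicans - {"Reagan"})

# Premise 2: "A Republican will win."  pr = 0.65.
premise2 = sum(pr[o] for o in republicans)

# Conclusion: "If it's not Reagan, it will be Anderson."
# pr(Anderson | not Reagan) = 0.05 / 0.40 = 0.125.
conclusion = given({"Anderson"}, {"Carter", "Anderson"})

print(premise1, premise2, conclusion)
```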


Arlo-Costa [1998] embraces iterated conditionals and uses McGee's example to argue, via the Ramsey test, against the following principle of invariance for iterated supposition:

(K INV) If φ ∈ K ≠ K⊥, then K ∗ φ = K.

Supposition, i.e. hypothetical revision of belief "for the sake of argument", is the notion of revision that Arlo-Costa [1998] and Levi [1996] both associate with Ramsey test conditionals. Since (K INV) holds in any deductively closed classical belief change model that satisfies (K∗3) and (K∗4), Arlo-Costa takes McGee's example as evidence against (K∗4) as a principle governing supposition.

Contraction and revision inputs: the set I

Gardenfors does not exclude conditionals from the class of formulas eligible to be inputs for belief change in the models he formulates, but Morreau, Arlo-Costa, and Levi do. In Levi's case this restriction clearly follows from his view that conditionals have neither truth conditions nor truth values, and Arlo-Costa appears to agree with this view. Morreau's exclusion of conditionals as revision inputs appears to be an artifact of the nontriviality theorem he proves for the Ramsey test ([Morreau, 1992], THEOREM 14, p. 48) rather than indicative of a philosophical position about the status of conditionals.

Logical consequence and support: ⊢ and s

Most authors on the Ramsey test follow Gardenfors in assuming a compact background logic ⊢ that includes all truth-functional propositional entailments while respecting the deduction theorem for the material conditional, but there has been research on the Ramsey test in frameworks where the background logic is nonclassical or not necessarily classical. For example, Segerberg's triviality result in [Segerberg, 1989] assumes only the minimal constraints of Reflexiveness, Transitivity, and Monotony for ⊢,20 and in [Gardenfors, 1987] Gardenfors credits Peter Lavers with having established in an unpublished note a triviality result for the Ramsey test in which ⊢ is defined to be minimal logic21 instead of an extension of classical truth-functional logic. Also, Cross and Thomason [1987; 1992] investigate a four-valued system of conditional logic that is motivated by an application of the Ramsey test in the context of the nonmonotonic logic of multiple inheritance with exceptions in semantic networks.

20 See the definition of a Segerberg belief change model near the end of section 2.3.
21 Minimal logic has modus ponens as its only inference rule and every instance of the following schemata as axioms:


Levi [1988] introduces the function RL, which maps a conditional-free belief set to a conditional-laden belief set via the Positive and Negative Ramsey tests. Cross [1990a] formulates a version of the triviality result proved in [Gardenfors, 1986] in a framework where an extension ⊢ of classical logic is coupled with a not-necessarily-monotonic consequence operation cl. Makinson [1990] does the same, calling his not-necessarily-monotonic consequence operation C. Hansson [1992] makes use of a function s which maps each belief state to the set of all formulas the belief state "supports". Support functions are also adopted by Arlo-Costa [1995] and by Arlo-Costa and Levi [1996]. Our view is that Levi's RL, Cross' cl, Makinson's C, and Hansson's s should be regarded as variations on the same theoretical construct, and we will follow Hansson in calling this construct a support function and in using s to represent it. More on this in Section 2.6 below. The following postulates are examples of requirements that might be imposed on s. Assume a belief change model on a language L, and assume that Γ ranges over dom(s), which always includes K as a subset:

(Identity over K) s(K) = K for all K ∈ K.
(Monotonicity over K) For all H, K ∈ K, if H ⊆ K then s(H) ⊆ s(K).
(Reflexivity) Γ ⊆ s(Γ).
(Closure) Cn⊢[s(Γ)] = s(Γ).
(Consistency) If Γ is ⊢-consistent then s(Γ) is ⊢-consistent.
(Superclassicality) Cn⊢(Γ) ⊆ s(Γ).
(Transitivity) s(Γ) = s[s(Γ)].
(Reasoning by Cases) s(Γ ∪ {φ}) ∩ s(Γ ∪ {¬φ}) ⊆ s(Γ) for all Γ, φ such that Γ ∪ {φ} ∈ dom(s) and Γ ∪ {¬φ} ∈ dom(s).
(Conservativeness) L has type-L0 fragment L0 and for all φ ∈ wffL0, φ ∈ s(Γ) iff φ ∈ Cn⊢(Γ).

None of Gardenfors, Morreau, or Segerberg uses the notion of a support function: they assume, in effect, that s(K) = K for all K ∈ K.

21 (continued)
1. (φ ∧ ψ) → φ.
2. (φ ∧ ψ) → ψ.
3. φ → (φ ∨ ψ).
4. ψ → (φ ∨ ψ).
5. (φ → χ) → [(ψ → χ) → ((φ ∨ ψ) → χ)].
6. (φ → ψ) → [(φ → χ) → (φ → (ψ ∧ χ))].
7. [φ → (ψ → χ)] → [(φ → ψ) → (φ → χ)].
8. φ → (ψ → φ).
The formula ¬φ is defined to be φ → ⊥. This axiomatization is found in [Segerberg, 1968].
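As a sanity check on the support-function postulates listed above, the sketch below (our own construction) takes the simplest candidate support function, s = Cn itself, computed semantically over a toy space of three worlds, and verifies Reflexivity, Closure, Superclassicality, Transitivity, and Reasoning by Cases by brute force. Conservativeness and the two K-relative postulates require a concrete belief change model and are omitted.

```python
# A toy check (ours) of several support-function postulates for the simplest
# candidate support function: s = Cn, computed semantically. "Formulas" are
# propositions (sets of worlds) and Cn(G) is the set of propositions entailed
# by G, i.e. those including the intersection of G.

from itertools import combinations

WORLDS = frozenset({0, 1, 2})
PROPS = [frozenset(c) for r in range(4) for c in combinations(sorted(WORLDS), r)]

def meet(gamma):
    """Set of worlds satisfying every proposition in gamma (its models)."""
    acc = set(WORLDS)
    for p in gamma:
        acc &= p
    return frozenset(acc)

def cn(gamma):
    """Semantic consequence: everything true in all models of gamma."""
    return frozenset(p for p in PROPS if meet(gamma) <= p)

s = cn  # the candidate support function

gammas = [frozenset(c) for r in range(3) for c in combinations(PROPS, r)]
for g in gammas:
    assert g <= s(g)                       # Reflexivity
    assert cn(s(g)) == s(g)                # Closure
    assert cn(g) <= s(g)                   # Superclassicality
    assert s(s(g)) == s(g)                 # Transitivity
    for p in PROPS:                        # Reasoning by Cases
        assert s(g | {p}) & s(g | {WORLDS - p}) <= s(g)
print("all checked postulates hold for s = Cn")
```

Of course s = Cn trivializes the construct; the interesting support functions of Levi, Cross, Makinson, and Hansson go beyond Cn precisely in order to support conditionals, as Section 2.6 discusses.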


Contraction and revision: postulates for belief change


Since − and ∗ are to represent functions legitimately describable as contraction and revision, respectively, it is appropriate to consider additional conditions on these functions. Which additional conditions should be imposed is a matter of dispute, and some of the additional postulates that will be under consideration are listed below. In the case of postulates (K+1), (K−1) through (K−8), (K∗1) through (K∗8), (K∗L), (K∗M), and (K∗P) we follow the labeling used in [Gardenfors, 1988]. Please note that we have not adopted any of the postulates given below in the definition of belief change model. In each postulate, the variable K is understood to range over K; also φ, ψ are understood to range over I. We begin with a definition of a third important belief change operation: expansion.

Definition of and postulate for expansion

(Def+) K + φ = Cn⊢(K ∪ {φ}) ∩ K⊥.
(K+1) K + φ ∈ K. (K + φ is a belief set.)

Postulates for contraction

(K−1) K − φ ∈ K. (K − φ is a belief set.)
(K−2) K − φ ⊆ K.
(K−3) If φ ∉ K, then K − φ = K.
(K−4) If ⊬ φ, then φ ∉ K − φ.
(K−4w) If ⊬ φ and K ≠ K⊥, then φ ∉ K − φ.
(K−5) If φ ∈ K, then K ⊆ (K − φ) + φ.
(K−6) If ⊢ φ ↔ ψ, then K − φ = K − ψ.
(K−7) (K − φ) ∩ (K − ψ) ⊆ K − (φ ∧ ψ).
(K−8) If φ ∉ K − (φ ∧ ψ), then K − (φ ∧ ψ) ⊆ K − φ.

Postulates for revision

(K∗1) K ∗ φ ∈ K. (K ∗ φ is a belief set.)
(K∗2) φ ∈ K ∗ φ.
(K∗3) K ∗ φ ⊆ K + φ.
(K∗4) If ¬φ ∉ K, then K + φ ⊆ K ∗ φ.
(K∗4s) If ¬φ ∉ K, then K + φ = K ∗ φ.
(K∗4ss) If K + φ ≠ K⊥, then K + φ = K ∗ φ.
(K∗4w) If φ ∈ K ≠ K⊥, then K ∗ φ ⊆ K.
(K∗5) K ∗ φ = K⊥ iff ⊢ ¬φ.
(K∗5w) If K ∗ φ = K⊥, then ⊢ ¬φ.
(K∗5ws) If K ∗ φ = K⊥, then Cn⊢({φ}) = K⊥.
(K∗C) If K ≠ K⊥ and K ∗ φ = K⊥, then ⊢ ¬φ.
(K∗6) If ⊢ φ ↔ ψ, then K ∗ φ = K ∗ ψ.
(K∗6s) If φ ∈ K ∗ ψ and ψ ∈ K ∗ φ, then K ∗ φ = K ∗ ψ.
(K∗7) K ∗ (φ ∧ ψ) ⊆ (K ∗ φ) + ψ.
(K∗7′) (K ∗ φ) ∩ (K ∗ ψ) ⊆ K ∗ (φ ∨ ψ).
(K∗8) If ¬ψ ∉ K ∗ φ, then (K ∗ φ) + ψ ⊆ K ∗ (φ ∧ ψ).
(K∗L) If ¬(φ > ¬ψ) ∈ K, then (K ∗ φ) + ψ ⊆ K ∗ (φ ∧ ψ).
(K∗M) If s(K) ⊆ s(K′), then K ∗ φ ⊆ K′ ∗ φ.
(K∗IM) If K ≠ K⊥ ≠ K′ and s(K) ⊆ s(K′), then K′ ∗ φ ⊆ K ∗ φ.
(K∗T) If K ≠ K⊥, then K ∗ ⊤ = K.
(K∗P) If ¬φ ∉ K, then K ⊆ K ∗ φ.
(K∗PI) If ¬φ ∉ K, then K ∩ I ⊆ (K ∗ φ) ∩ I.
(LI) K ∗ φ = (K − ¬φ) + φ.

A few other postulates will be identified as needed. Our treatment of contraction and revision is not general enough to include every treatment of contraction and revision as a special case. For example, in the formalization of belief revision in [Morreau, 1992], the revision operation is nondeterministic, i.e. its value for a given belief set K and proposition φ is a set of belief sets rather than a belief set. We will not attempt to formalize nondeterministic contraction or revision. Also, for


Levi, contraction and revision are to be evaluated by means of a measure of informational value, which we do not explicitly formalize. The contraction operation, which we include in every belief change model, is not often used in the presentation of triviality results for the Ramsey test. Exceptions to this pattern include Cross [1990a] and Makinson [1990], who involve contraction explicitly in their respective formulations of triviality results for the Ramsey test.

A catalog of belief change models

We conclude our discussion of comparisons by defining several categories of belief change model that illustrate how the framework defined above can be made to reflect the differing assumptions of a subset of authors who have written on belief revision and the Ramsey test. In associating a name with a class of belief change models we do not claim that the person named defined this class of models; rather, we claim that the belief change models associated with this name are the appropriate counterpart in our framework of models that the named person did define in the context of work on the Ramsey test. Note that postulates on contraction and revision are not part of these definitions.

1. By a Gardenfors belief change model (see, for example, [Gardenfors, 1986], [Gardenfors, 1987], and [Gardenfors, 1988]) we will mean a deductively closed classical belief change model ⟨K, I, ⊢, K⊥, −, ∗, s⟩ defined on a type-L2 language L where I = wffL = K⊥, dom(s) = K, and s satisfies Identity over K.

2. By a Segerberg belief change model (see [Segerberg, 1989]) we will mean a belief change model ⟨K, I, ⊢, K⊥, −, ∗, s⟩, defined on any language, such that the following hold: K = TL,⊢; I = wffL = K⊥; dom(s) = K; s satisfies Identity over K; and Cn⊢ meets the following requirements, for all Γ, Δ ⊆ wffL:

(Reflexivity for ⊢) Γ ⊆ Cn⊢(Γ).
(Monotonicity for ⊢) If Δ ⊆ Γ, then Cn⊢(Δ) ⊆ Cn⊢(Γ).
(Transitivity for ⊢) Cn⊢(Γ) = Cn⊢[Cn⊢(Γ)].

3. By a Makinson belief change model (see [Makinson, 1990]) we will mean a belief change model ⟨K, I, ⊢, K⊥, −, ∗, s⟩ defined on a type-L1 language L and satisfying the following: K = {Γ : Γ ⊆ wffL and s(Γ) = Γ}; I = wffL = K⊥; ⊢ is classical propositional consequence; dom(s) = P(wffL); and s satisfies Superclassicality, Transitivity, and Reasoning by Cases.22

22 Note that in a Makinson belief change model s satisfies both Reflexivity and Closure. Closure holds since Superclassicality and Transitivity for s imply that for each Γ ⊆ wffL, we have s(Γ) ⊆ Cn⊢[s(Γ)] ⊆ s[s(Γ)] = s(Γ).

56

DONALD NUTE AND CHARLES B. CROSS

4. By a Morreau belief change model (see [Morreau, 1992]) we will mean a deductively closed classical belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: I = wff_L0; K⊥ = wff_L; dom(s) = 𝕂; and s satisfies Identity over 𝕂.

5. By a Hansson belief change model (see [Hansson, 1992], Section 3) we will mean a classical belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: 𝕂 ⊆ P(wff_L0); I = wff_L0 = K⊥; dom(s) = 𝕂; and s satisfies Reflexivity, Conservativeness, and Closure.

6. By an Arló-Costa/Levi belief change model (see [Arló-Costa, 1990], [Arló-Costa, 1995], [Arló-Costa and Levi, 1996], and [Levi, 1996]) we will mean a deductively closed classical belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: I = wff_L0 = K⊥; dom(s) = 𝕂; and s satisfies Reflexivity, Conservativeness, and Closure.

As we have already noted, our belief change models do not capture every feature of every belief revision model appearing in the literature on the Ramsey test, and the models we associate with the names of authors in some cases omit some of the structure that these authors include in their own respective accounts of what constitutes a belief revision model. On the other hand, we have stipulated more detail for the models we associate with certain authors than do the authors themselves. For example, none of Gärdenfors, Morreau, or Segerberg uses the notion of a support function s in the sources cited above, and neither Gärdenfors, nor Makinson, nor Segerberg restricts the applicability of contraction and revision to a subset I of the set of formulas of the language on which the model is defined.

Finally, as was pointed out earlier, the contraction operation, which we include in every belief change model, is not often discussed in connection with the Ramsey test. In general, the stipulation of extra detail will serve to highlight tacit assumptions and make comparisons easier.

2.4 The Ramsey test for conditionals

Ramsey's original suggestion can be put as follows: if an agent's beliefs entail neither φ nor ¬φ, then the agent's beliefs support φ > ψ iff his or her initial beliefs together with φ entail ψ, i.e.

(RTR) For all K ∈ 𝕂 and all φ ∈ I such that φ, ¬φ ∉ K and all ψ ∈ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K+_φ.
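In a finite propositional setting, (RTR) can be illustrated directly, since expansion K+_φ is just the closure of K ∪ {φ}. The Python sketch below is not part of the chapter's formalism: it represents a belief set semantically, as the set of possible worlds compatible with it, so that expansion amounts to discarding the worlds where the input fails. All names here (believes, expand, rtr_conditional) are illustrative inventions.

```python
from itertools import product

# A world is a dict assigning True/False to atoms; a belief set K is
# modeled semantically as the set of worlds compatible with the agent's beliefs.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def believes(K, phi):
    """K entails phi iff phi holds at every world compatible with K."""
    return all(phi(w) for w in K)

def expand(K, phi):
    """Expansion K+_phi: keep only the worlds where phi holds."""
    return [w for w in K if phi(w)]

def rtr_conditional(K, phi, psi):
    """Ramsey test (RTR): phi > psi is supported iff psi in K+_phi,
    applicable when K entails neither phi nor its negation."""
    assert not believes(K, phi) and not believes(K, lambda w: not phi(w))
    return believes(expand(K, phi), psi)

# Example: the agent believes "if p then q" but is undecided about p.
K = [w for w in WORLDS if (not w["p"]) or w["q"]]
p = lambda w: w["p"]
q = lambda w: w["q"]
print(rtr_conditional(K, p, q))   # p > q is supported: adding p yields q
print(rtr_conditional(K, q, p))   # q > p is not: adding q leaves p open
```

On this representation, a smaller set of worlds corresponds to a logically stronger belief set, which is why expansion shrinks the set.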

CONDITIONAL LOGIC

57

This suggestion covers only the case in which the epistemic status of φ is undetermined. What about the case in which the agent's initial beliefs entail φ and the case in which the agent's initial beliefs entail ¬φ? Stalnaker [1968] suggests the following rule for evaluating a conditional in the general case: first, add the antecedent (hypothetically) to your stock of beliefs; second, make whatever adjustments are required to maintain consistency (without modifying the hypothetical belief in the antecedent); finally, consider whether or not the consequent is true.[23] Stalnaker's proposal handles the general case by substituting the operation of revision for that of expansion in Ramsey's original proposal. In our framework Stalnaker's suggestion amounts to the following:

(RT) For all K ∈ 𝕂 and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K*_φ.

Revision postulates (K*3) and (K*4) jointly entail

(K*4s) If ¬φ ∉ K, then K+_φ = K*_φ.

Hence, if (K*3) and (K*4) are assumed, then (RT) agrees with (RTR) in the case where neither φ nor ¬φ belongs to K. That is, if (K*3) and (K*4) hold, then (RT) can be considered an extension of Ramsey's original proposal. In [Gärdenfors, 1978] and in later writings Gärdenfors adopts Stalnaker's version of the Ramsey test for type-L2 languages and assumes, in addition, the following: every formula of a type-L2 language L is an eligible input for revision and an eligible member of a belief set, i.e. I = wff_L = K⊥, and a conditional, like any other formula, is accepted with respect to (supported by) a belief set K iff it belongs to K, i.e. s(K) = K for all K ∈ 𝕂. We have already noted that Levi, in contrast to Gärdenfors, excludes conditionals as revision inputs and as members of belief sets. Levi's view is that the conditional φ > ψ in a type-L1 language expresses the attitude of an agent for whom ¬ψ is not epistemically possible relative to K*_φ, and the negated conditional ¬(φ > ψ) expresses the attitude of an agent for whom ¬ψ is epistemically possible relative to K*_φ. Assuming a type-L1 language L with type-L0 fragment L0, and assuming that 𝕂 ⊆ T_{L0,⊢} and I = wff_L0 = K⊥, Levi's view amounts in our framework to the conjunction of the following:

(PRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ 𝕂 such that K ≠ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K*_φ.

(NRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ 𝕂 such that K ≠ K⊥, ¬(φ > ψ) ∈ s(K) iff ψ ∉ K*_φ.

[23] [Stalnaker, 1968], p. 44. (The page reference is to [Harper et al., 1981], where [Stalnaker, 1968] is reprinted.)


Note that in both (PRTL) and (NRTL), unlike in (RT), K is restricted to ⊢-consistent members of 𝕂. Note also that the adoption of (RT) (or of (PRTL) without (NRTL)) places no constraints on how negated conditionals are related to belief change. Other versions of the Ramsey test appearing in the literature include the following, due to Hans Rott, who, like Gärdenfors, assumes a language L of type L2 and no restrictions on which formulas can appear as members of belief sets or as revision inputs (i.e. I = wff_L = K⊥):

(R1) For all K ∈ 𝕂 and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ K*_φ and ψ ∉ K−_φ.

(R2) For all K ∈ 𝕂 and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ K*_φ and ψ ∉ K−_¬φ.

(R3) For all K ∈ 𝕂 and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ (K−_ψ)*_φ.

Here we follow the labeling in [Gärdenfors, 1987]. The interest of (R1)-(R3) stems in part from the fact that whereas (RT) can be used with (K*3) and (K*4) to derive the following thesis (U), none of (R1)-(R3) can be so used:

(U) If φ ∈ K and ψ ∈ K, then φ > ψ ∈ K.

Thesis (U) is related to the strong centering axiom CS of VC, and Rott [1986] suggests that (U) should be rejected. Since none of (R1)-(R3) entails (K*M), one of the assumptions of Gärdenfors' 1986 triviality result for the Ramsey test, (R1)-(R3) might seem worth investigating as alternatives to (RT), but Gärdenfors [1987] shows that (R1)-(R3) do not avoid the problem faced by (RT). Consider the Weak Ramsey Test:

(WRT) For all K ∈ 𝕂 and all φ ∈ I and all ψ ∈ K⊥ such that φ ∨ ψ ∉ K, φ > ψ ∈ K iff ψ ∈ K*_φ.

Each of (R1)-(R3) entails (WRT), and Gärdenfors [1987] proves a triviality result that holds for any version of the Ramsey test which entails (WRT), including (R1)-(R3) and (RT).[24]

2.5 Triviality results for the Ramsey test

The basic result

Many versions of the basic triviality result for the Ramsey test have appeared in the literature, all of them variations on the result proved by Gärdenfors [1986]. All proofs of the basic triviality result we know of exploit the same maneuver, however, one which Hansson [1992] makes explicit:

[24] See also [Gärdenfors, 1988], Chapter 7, Corollary 7.15.


in terms of our framework, the finding of forking support sets within a belief change model.

(DefFORK) A belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ will be said to contain forking support sets iff there exist H, J, K ∈ 𝕂 such that H = Cn⊢(H) ∩ K⊥, and J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥, and H ∩ I ⊄ J, and J ∩ I ⊄ H, and s(H) ⊆ s(K), and s(J) ⊆ s(K).

For Gärdenfors and Segerberg belief change models this condition can be stated in the form in which Hansson originally formulated it:

PROPOSITION 1. A Gärdenfors or Segerberg belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ contains forking support sets iff there exist H, J, K ∈ 𝕂, where H, J ⊆ K ≠ K⊥, H ⊄ J, and J ⊄ H.

This proposition follows from the fact that in Gärdenfors and Segerberg belief change models (i) s(K) = K = Cn⊢(K) for all K ∈ 𝕂, and (ii) I and K⊥ both exhaust the formulas of the language of the model. Next we present the main lemmas for the basic triviality result:

LEMMA 2. If (RT) holds in a belief change model, then so does (K*M).

Proof. Trivial; left to reader.

Postulate (K*M) is a postulate of monotonicity for belief revision. We discuss Gärdenfors' argument against (K*M) in Section 2.6 below.

LEMMA 3. No classical belief change model containing forking support sets satisfies (K*2), (K*C), (K*P), and (K*M).

Proof. Assume for reductio that ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ is a classical belief change model that contains forking support sets and satisfies (K*2), (K*C), (K*P), and (K*M). For clarity, we follow the example of [Rott, 1989] in numbering the steps in the reductio argument.

(1) H = Cn⊢(H) ∩ K⊥, J = Cn⊢(J) ∩ K⊥, K = Cn⊢(K) ∩ K⊥ ≠ K⊥, H ∩ I ⊄ J, J ∩ I ⊄ H, and s(H), s(J) ⊆ s(K), for some H, J, K ∈ 𝕂   [(DefFORK)]
(2) φ ∈ (H ∩ I) − J, for some φ   [(1)]
(3) ψ ∈ (J ∩ I) − H, for some ψ   [(1)]
(4) ¬(φ ∧ ψ) ∈ I   [(2), (3), (DefBCM)]
(5) ¬¬(φ ∧ ψ) ∉ H   [(3), classicality of ⊢, fact that H = Cn⊢(H) ∩ K⊥]
(6) H ⊆ H*_¬(φ∧ψ)   [(4), (5), (K*P)]


(7) φ ∈ H*_¬(φ∧ψ)   [(2), (6)]
(8) ¬¬(φ ∧ ψ) ∉ J   [(2), classicality of ⊢, fact that J = Cn⊢(J) ∩ K⊥]
(9) J ⊆ J*_¬(φ∧ψ)   [(4), (8), (K*P)]
(10) ψ ∈ J*_¬(φ∧ψ)   [(3), (9)]
(11) H*_¬(φ∧ψ), J*_¬(φ∧ψ) ⊆ K*_¬(φ∧ψ)   [(1), (4), (K*M)]
(12) φ, ψ ∈ K*_¬(φ∧ψ)   [(7), (10), (11)]
(13) ¬(φ ∧ ψ) ∈ K*_¬(φ∧ψ)   [(4), (K*2)]
(14) K*_¬(φ∧ψ) is ⊢-inconsistent   [(12), (13), classicality of ⊢]
(15) K is ⊢-consistent   [classicality of ⊢, fact that K = Cn⊢(K) ∩ K⊥ ≠ K⊥]
(16) ⊢ ¬¬(φ ∧ ψ)   [(14), (15), (K*C)]
(17) ⊬ ¬¬(φ ∧ ψ)   [(5), classicality of ⊢, fact that H = Cn⊢(H) ∩ K⊥]

Since (17) contradicts (16), this completes the proof.

Lemmas 2 and 3 suffice to prove the following:

THEOREM 4. No classical belief change model defined on a language of type L1 or type L2 and containing forking support sets satisfies (K*2), (K*C), (K*P), and (RT).

Note that we have not assumed that 𝕂 is a set of theories either in the language of the model or in the fragment thereof represented by K⊥. We have not even assumed (K*1): that the sets produced by revision always belong to 𝕂. It is however required that the belief sets H, J, and K used in the proof be theories in the fragment of the language represented by K⊥. Do we have a triviality result? Not yet: we do not yet have a criterion of triviality. The following criteria have appeared in the literature:

1. A belief change model is Gärdenfors nontrivial iff there is a K′ ∈ 𝕂 and φ, ψ, χ ∈ I such that ¬φ, ¬ψ, ¬χ ∉ Cn⊢(K′) and ⊢ ¬(φ ∧ ψ) and ⊢ ¬(ψ ∧ χ) and ⊢ ¬(φ ∧ χ).

2. A belief change model is Rott nontrivial iff there is a K′ ∈ 𝕂 and φ, ψ ∈ I such that ⊬ φ and ⊬ ψ and φ ∨ ψ, ¬φ ∨ ψ, φ ∨ ¬ψ, ¬φ ∨ ¬ψ ∉ Cn⊢(K′).

3. A belief change model is Segerberg nontrivial iff there exist φ, ψ, χ ∈ I such that φ ⊬ ψ and ψ ⊬ φ and Cn⊢({φ, ψ, χ}) = K⊥ and Cn⊢({φ}), Cn⊢({ψ}), Cn⊢({φ, ψ}) ∈ 𝕂.

Recall that a support function s is monotone over 𝕂 iff s(H) ⊆ s(K) for all H, K ∈ 𝕂 such that H ⊆ K. A support function can be monotone


over 𝕂 even if it is a nonmonotonic consequence operation, provided that 𝕂 does not exhaust dom(s). For example, s is monotone over 𝕂 (but not necessarily over dom(s)) in all Makinson belief change models, since in a Makinson belief change model s(K) = K for all K ∈ 𝕂. Recall that the operation of expansion is defined by (Def+); it turns out that if (K+1) and the monotonicity of s over 𝕂 are assumed, then nontriviality by any of the above criteria will imply the existence of forking support sets:

LEMMA 5. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over 𝕂 and it is Gärdenfors nontrivial.[25]

Proof. Suppose that the model is Gärdenfors nontrivial; we will show that it contains forking support sets. Let K′, φ, ψ, and χ be as in the definition of Gärdenfors nontriviality; also, let H = K′+_{φ∨ψ}; let J = K′+_{φ∨χ}; and let K = K′+_φ. Then by (Def+) and the classicality of ⊢, H = Cn⊢(H) ∩ K⊥, J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥. (Def+) and the classicality of ⊢ also imply that H, J ⊆ K, hence by the monotonicity of s we have that s(H), s(J) ⊆ s(K). H ∩ I ⊄ J holds because φ ∨ ψ ∈ (H ∩ I) − J; J ∩ I ⊄ H holds because φ ∨ χ ∈ (J ∩ I) − H.

LEMMA 6. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over 𝕂 and it is Rott nontrivial.[26]

Proof. Like the proof of Lemma 5, but let H = K′+_{φ∨ψ}; let J = K′+_{φ∨¬ψ}; and let K = K′+_φ, where K′, φ, and ψ are as in the definition of Rott nontriviality. H ∩ I ⊄ J holds because φ ∨ ψ ∈ (H ∩ I) − J; J ∩ I ⊄ H holds because φ ∨ ¬ψ ∈ (J ∩ I) − H.

LEMMA 7. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over 𝕂 and it is Segerberg nontrivial.

Proof. Like the proof of Lemma 5, but let H = Cn⊢({φ}), J = Cn⊢({ψ}), and K = Cn⊢({φ, ψ}), where φ, ψ, and χ are as in the definition of Segerberg nontriviality.

Theorem 4 and Lemmas 5, 6, and 7 immediately imply Theorem 8, the basic triviality result for the Ramsey test:

THEOREM 8. No classical belief change model defined on a language of type L1 or L2 that satisfies (K+1), (K*2), (K*C), (K*P), and (RT) and

[25] See [Gärdenfors, 1986].
[26] See [Rott, 1989].
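The construction in the proof of Lemma 5 can be made concrete in a finite setting. The sketch below uses the same illustrative worlds-based representation as before (it is not the chapter's formalism, and all names are invented): it picks three pairwise inconsistent, individually undecided propositions and checks that the expansions H, J, K it builds fork in the sense of (DefFORK), reading the support function off as s(K) = K.

```python
# Worlds assign one of three mutually exclusive "blood types"; phi, psi, chi
# are pairwise inconsistent and each undecided in the initial belief set K0.
WORLDS = [1, 2, 3]          # world i: "the blood type is i"
phi = lambda w: w == 1
psi = lambda w: w == 2
chi = lambda w: w == 3

def entails(K, f):          # K entails f iff f holds at all K-worlds
    return all(f(w) for w in K)

def expand(K, f):           # expansion: discard worlds where f fails
    return [w for w in K if f(w)]

K0 = WORLDS                                  # a Gardenfors-nontrivial start
H = expand(K0, lambda w: phi(w) or psi(w))   # K0 + (phi or psi): worlds {1,2}
J = expand(K0, lambda w: phi(w) or chi(w))   # K0 + (phi or chi): worlds {1,3}
K = expand(K0, phi)                          # K0 + phi:          worlds {1}

# H and J are subtheories of K: every K-world is an H-world and a J-world.
print(set(K) <= set(H) and set(K) <= set(J))           # True
# Forking: H proves (phi or psi), which J does not, and vice versa.
print(entails(H, lambda w: phi(w) or psi(w)),
      entails(J, lambda w: phi(w) or psi(w)))          # True False
print(entails(J, lambda w: phi(w) or chi(w)),
      entails(H, lambda w: phi(w) or chi(w)))          # True False
# K is consistent: it has at least one world.
print(len(K) > 0)                                      # True
```

With s(K) = K, the clause s(H), s(J) ⊆ s(K) of (DefFORK) corresponds here to H and J being subtheories of K, which is what the first check confirms.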


whose support function is monotonic over 𝕂 is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

The basic result of [Gärdenfors, 1986] can be derived by applying Theorem 8 to Gärdenfors belief change models. Gärdenfors [1987; 1988] notes that (K*P) and (K*2) can be replaced by (K*4) in the triviality result he proves there, and this same replacement can be made in Theorem 8, with a corresponding change in Lemma 3 and its proof.[27] As was mentioned in Section 2.3 above, Segerberg has proved a version of the Gärdenfors result in which the constraints on ⊢ are limited to Reflexivity, Transitivity, and Monotonicity. The counterpart in our framework of Segerberg's result is the following:

THEOREM 9. No Segerberg nontrivial Segerberg belief change model satisfies (K*M), (K*4s), and (K*5w).[28]

If contraction and revision are assumed to be related by the Levi Identity (LI) in a deductively closed belief change model, then triviality results for the Ramsey test can be formulated in terms of contraction rather than in terms of revision. In particular, we have the following as a corollary of Theorem 8:[29]

THEOREM 10. No deductively closed classical belief change model defined on a language of type L1 or L2 that satisfies (K+1), (K−3), (K−4w), (LI), and (RT) and whose support function is monotonic over 𝕂 is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

Proof. It suffices to note that where ⊢ is classical, we have the following: (Def+) and (LI) jointly imply (K*2); (LI) and (K−4w) jointly imply (K*C); (Def+), (K−3), and (LI) jointly imply (K*P).

Makinson [1990] proves a variant of Theorem 10 for type-L1 languages that replaces (K−4w) and weakens both (RT) and (LI) while making stronger assumptions about s than merely that it is monotone over 𝕂:

THEOREM 11. Let ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ be a Makinson belief change model defined on a language L of type L1. Define postulates (RTM), (K−4c), and (MI) as follows:

(RTM) For all φ, ψ ∈ wff_L0, where L0 is the type-L0 fragment of L, φ > ψ ∈ s(K) iff ψ ∈ K*_φ.

[27] Steps (6), (9), and (13) must be differently justified.
[28] See [Segerberg, 1989]. Segerberg's version of the Gärdenfors triviality result makes no assumption about which operators are available in the language, hence χ is used in the definition of Segerberg nontriviality to play the role that ¬(φ ∧ ψ) plays in the proof of Lemma 3. Also, Segerberg's result does not assume that the language contains both ⊤ and ⊥, which we assume here in (DefBCM).
[29] A similar result is proved in [Cross, 1990a].


(K−4c) If φ ∈ s(K−_φ), then φ ∈ s(∅).

(MI) K−_¬φ ⊆ K*_φ ⊆ s(K−_¬φ ∪ {φ}).

Then we have the following:

(1) Limiting Case. If (RTM), (K−4c), and (MI) hold for K = K⊥, then the model is trivial in the sense that s(∅) = K⊥.

(2) Principal Case. If (RTM), (K−3), (K−4c), and (MI) hold for all K ∈ 𝕂 such that K ≠ K⊥, then the model is trivial in the sense that there are no conditional-free formulas φ and ψ of L such that φ ∧ ψ ∉ s(∅) and φ ∉ s({ψ}) and ψ ∉ s({φ}) and s[s({φ}) ∪ s({ψ})] ≠ K⊥.

The Limiting Case generalizes Theorem 12 discussed below. It is the Principal Case that more closely corresponds to Theorem 10. (K−4c) neither entails nor is entailed by (K−4), its AGM counterpart, but (MI), which we will refer to as Makinson's Inequality, is the result of weakening (LI), the Levi Identity, to say that a revision of K to include φ must lie "between" K−_¬φ and s(K−_¬φ ∪ {φ}). Since, as we have seen, (LI), (Def+), and (K−3) entail (K*P), one might expect that replacing (LI) with (MI) would leave (K*P) unsupported, but this is not the case: (MI), (Def+), and (K−3) already entail (K*P). Making up for the fact that (MI) is weaker than (LI) are Makinson's strengthened assumptions about s: that it satisfies Superclassicality, Transitivity, and Reasoning by Cases. Makinson [1990] points out that these conditions are known not to imply that s is monotone over its entire domain (P(wff_L)), but since contraction and revision in a Makinson belief change model are defined only on K such that s(K) = K, s is nevertheless monotone "where it counts", namely over the set 𝕂 of belief sets on which contraction and revision are defined.

Several authors (e.g. Grahne [1991], Hansson [1992], and Morreau [1992]) have concluded from Makinson's result that nonmonotonic consequence does not provide a way out of the Gärdenfors triviality result. In fact, adopting a nonmonotonic consequence operation does provide a way out, provided that this consequence relation plays the role of a support function s that is nonmonotonic over the belief sets to which contraction and revision are applied. Indeed, it is by adopting such support functions that Hansson, Arló-Costa, and Levi are able to make the Ramsey test nontrivial, though these authors do not describe the support function as a consequence operation. (See also Section 2.6 below.)

Theorem 8 and its variants pose a dilemma: which of an inconsistent set of constraints on belief change models should be rejected?
We return to this later in Section 2.6 below.


The problem with (K*5w)

In the version of Theorem 8 that Gärdenfors proves in [Gärdenfors, 1988], postulate (K*C) is replaced by the stronger (K*5w), but Arló-Costa [1990] proves that (K*5w) faces problems that have nothing to do with (K*P). Arló-Costa's result, which is not so much a triviality result as an impossibility result, can be formulated as follows in our framework:

THEOREM 12. There is no Gärdenfors belief change model defined on a language of type L1 or L2 in which ⊬ ⊥, (K*5w), and (RT) hold.

Whereas Gärdenfors' results against the Ramsey test exploit the fact that (RT) entails (K*M), Arló-Costa's result exploits the fact that (RT) entails the following, which Arló-Costa calls "Unsuccess":

(US) If K = K⊥, then K*_φ = K⊥.

In other words, the Ramsey test requires revision into inconsistency if the initial belief state is already inconsistent, regardless of whether the revision input is a consistent proposition. Contrary to this, (K*5w) prohibits revision into inconsistency when the revision input is a consistent proposition, regardless of whether the initial belief state is consistent. The labeling of (US) as a postulate of "unsuccess" is appropriate since (K*5w), which (US) contradicts, follows from (Def+), (LI), and the Postulate of Success for contraction (K−4). Arló-Costa's result can be strengthened to include belief change models in which s(K) = K does not always hold:

THEOREM 13. There is no deductively closed classical belief change model defined on a language of type L1 or L2 whose support function satisfies Reflexivity and Closure and in which ⊬ ⊥, (K*5w), and (RT) hold.

Proof. Let a deductively closed classical belief change model on a language of type L1 or L2 be given and suppose for reductio that ⊬ ⊥, that s satisfies Reflexivity and Closure, and that the model satisfies (K*5w) and (RT). First we prove (US). Let K = K⊥; then we have

K ⊆ s(K)   [Reflexivity of s]
= Cn⊢[s(K)]   [Closure of s]

Since ⊥ ∈ K⊥ = K, we have s(K) = wff_L, by the classicality of ⊢. Next, let ψ ∈ K⊥, and let φ ∈ I. Since s(K) = wff_L, we have φ > ψ ∈ s(K). Hence by (RT) we have ψ ∈ K*_φ. Thus K⊥ ⊆ K*_φ; the converse inclusion holds by (DefBCM), so K*_φ = K⊥, as required to show (US). By (DefBCM), K⊥ ∈ 𝕂 and ¬⊥ ∈ I; by (US) we have (K⊥)*_¬⊥ = K⊥. By hypothesis ⊬ ⊥, hence by the classicality of ⊢ we have not only (K⊥)*_¬⊥ = K⊥ but also ⊬ ¬¬⊥, which contradicts (K*5w).


The Limiting Case of Theorem 11, like Theorem 13, is a strengthening of Theorem 12. The problem posed by Theorems 12 and 13 can be solved by retreating from (K*5w) to something weaker, such as the postulate (K*C) mentioned in Theorem 8, or by restricting the applicability of the Ramsey test, as Arló-Costa and Levi both do by adopting (PRTL) (see Section 2.4 above), which eliminates K⊥ from the domain of belief sets to which the Ramsey test can be applied.

The negative Ramsey test

Levi has argued (see, for example, [Levi, 1988] and [Levi, 1996]) that a negated conditional ¬(φ > ψ) expresses the propositional attitude of an agent for whom ¬ψ is a serious (i.e. epistemic) possibility relative to K*_φ. Abstracting from Levi's requirements on what is allowed to be a revision input, the result is this thesis, the negative Ramsey test:

(NRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ 𝕂 such that K ≠ K⊥, ¬(φ > ψ) ∈ s(K) iff ψ ∉ K*_φ.

Rott [1989] takes the view that adopting both the negative Ramsey test and the Ramsey test amounts to an assumption of autoepistemic omniscience. Given the view of Gärdenfors, Rott, and others that conditionals and negated conditionals belong in belief sets along with other beliefs (so that s satisfies Identity over 𝕂), the conjunction of (RT) and (NRTL) does amount to a kind of epistemic omniscience. That is, if s satisfies Identity over 𝕂, then "closing" each belief set under (RT) and (NRTL) amounts to an idealization that parallels the idealization represented by "closing" each belief set under ⊢. On Levi's view, conditionals do not express propositions and so are not objects of belief; thus on Levi's view the positive and negative Ramsey tests cannot be said to represent an idealization concerning what beliefs an agent holds. For Levi, what the positive and negative Ramsey tests represent is not a pair of closure conditions on the unary propositional attitude of belief but rather a definition of a binary propositional attitude toward the antecedent and consequent of a conditional that an agent is said to 'accept'.

Regardless of how the issue of autoepistemic omniscience is resolved, the adoption of (NRTL) has consequences. Gärdenfors, Lindström, Morreau, and Rabinowicz [1991] prove what they consider to be a triviality result for (NRTL) with assumptions weaker than those needed for Gärdenfors' 1986 triviality result for (RT); in particular, (K*P) is not needed. In our framework their result is equivalent to the following:

THEOREM 14. If ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ is a belief change model defined on a language of type L1 or L2 for which both (NRTL) and (K*T) hold and for which s satisfies Identity over 𝕂, then there are no K, K′ ∈ 𝕂 such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′.


The latter can be derived as a corollary of the following stronger result:

THEOREM 15. If ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ is a belief change model defined on a language of type L1 or L2 for which both (NRTL) and (K*T) hold and for which s is monotone over 𝕂, then there are no K, K′ ∈ 𝕂 such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′.

Proof. Suppose that ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ is a belief change model defined on a language of type L1 or L2 such that s is monotone over 𝕂. First we prove that (NRTL) implies (K*IM): assume (NRTL) and suppose that K, K′ ∈ 𝕂 and φ ∈ I and K ≠ K⊥ ≠ K′ and s(K) ⊆ s(K′), and let ψ ∉ K*_φ. Then by (NRTL), ¬(φ > ψ) ∈ s(K), hence ¬(φ > ψ) ∈ s(K′). By (NRTL) it follows that ψ ∉ K′*_φ, as required to establish (K*IM). Now suppose for reductio that ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ satisfies both (NRTL) and (K*T), that s is monotone over 𝕂, and that there are K, K′ ∈ 𝕂 such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′. Since K ⊆ K′ we have s(K) ⊆ s(K′) by the monotonicity of s. By (DefBCM) we know that ⊤ ∈ I, so we have K′*_⊤ ⊆ K*_⊤ by (K*IM). But K*_⊤ = K and K′*_⊤ = K′ by (K*T), hence K′ ⊆ K, which contradicts K′ ≠ K ⊆ K′, completing the reductio.

Note that neither theorem assumes that all members of 𝕂 must be deductively closed, nor does either result include any assumption about ⊢. In [Gärdenfors et al., 1991] Theorem 14 is presented as a triviality result because the authors maintain that a model of belief change is trivial if it contains no consistent, conditional-laden belief sets K, K′ such that K is a proper subset of K′. As Rott [1989], Morreau [1992], and Hansson [1992] point out, however, it is a substantive (and, they argue, mistaken) assumption to hold that principles of belief revision that are justified in the context of conditional-free belief sets (e.g. the closure of 𝕂 under expansions) can be carried over without modification to conditional-laden belief sets. More on this in Section 2.6 below. One might therefore respond to Theorem 14 by questioning the criterion of triviality: perhaps a model of belief change whose belief sets are conditional-laden should not be classified as trivial simply because it contains no consistent K, K′ such that K is a proper subset of K′, even though a belief change model with conditional-free belief sets would be trivial in that case. But if the criterion of triviality espoused by Gärdenfors et al. [1991] is appropriate for conditional-free belief sets, then what about Theorem 15, which does cover belief change models with conditional-free belief sets? Our discussion in Section 2.6 below may appear to suggest that the problem raised by Theorems 14 and 15 might ultimately


be solved by giving up the monotonicity of s over 𝕂, but even that is not guaranteed to be enough. Consider this result:[30]

THEOREM 16. No classical belief change model defined on a language of type L1 or type L2 that satisfies (K*T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure, is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

Proof. By Lemmas 5, 6, and 7, it suffices to show that no classical belief change model defined on a language of type L1 or type L2 that satisfies (K*T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure, contains forking support sets. Consider a classical belief change model ⟨𝕂, I, ⊢, K⊥, −, *, s⟩ defined on a language of type L1 or type L2 that satisfies (K*T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure. Note first that since s satisfies Conservativeness and Closure, s must also satisfy Consistency. Suppose the model contains forking support sets. Then there exist H, J, K ∈ 𝕂 such that H = Cn⊢(H) ∩ K⊥, and J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥, and H ∩ I ⊄ J, and J ∩ I ⊄ H, and s(H) ⊆ s(K), and s(J) ⊆ s(K). Since J ∩ I ⊄ H, we have χ ∈ J but χ ∉ H for some conditional-free χ. By (K*T) we have H = H*_⊤ and J = J*_⊤ and K = K*_⊤, hence χ ∈ J*_⊤ and χ ∉ H*_⊤. By (PRTL) we have ⊤ > χ ∈ s(J), and by (NRTL) we have ¬(⊤ > χ) ∈ s(H). We also have ⊤ > χ ∈ s(K), since s(J) ⊆ s(K), and ¬(⊤ > χ) ∈ s(K), since s(H) ⊆ s(K); hence s(K) is not ⊢-consistent. This contradicts the ⊢-consistency of K, since s satisfies Consistency.

As Theorem 16 shows, (NRTL), (K*T), and (PRTL) cannot be nontrivially combined, even in a broad category of models where s is not monotone over 𝕂, unless we abandon the Rott, Gärdenfors, and Segerberg criteria of nontriviality.

2.6 Resolving the conflict

On giving up (RT)

Gärdenfors interprets Theorem 8 as forcing a choice between (K*P) and the Ramsey test (RT), and he has argued (see, e.g., [Gärdenfors, 1986], pp. 86-87 and [Gärdenfors, 1988], p. 59 and p. 159) that (K*M) and with it (RT) should be rejected. In this connection he offers the following example:

Let us assume that Miss Julie, in her present state of belief K, believes that her own blood group is O and that Johan is her

[30] The authors thank Horacio Arló-Costa for showing us the proof of this result in correspondence.


father, but she does not know anything about Johan's blood group. Let A be the proposition that Johan's blood group is AB and C the proposition that Johan is Miss Julie's father. If she were to revise her beliefs by adding the proposition A, she would still believe that C, that is, C ∈ K*_A. But in fact she now learns that a person with blood group AB can never have a child with blood group O. This information, which entails C → ¬A, is consistent with her present state of belief K, and thus her new state of belief, call it K′, is an expansion of K. If she then revises K′ by adding the information that Johan's blood group is AB, she will no longer believe that Johan is her father, that is, C ∉ K′*_A. Thus (K*M) is violated. ([Gärdenfors, 1986], pp. 86-87)

The example assumes that s satisfies Identity over 𝕂, so let us assume that as well. In reply to Gärdenfors one might say that if (RT) and Identity over 𝕂 are assumed, then the presence of conditionals in belief sets prevents this example from being a counterexample to (K*M): if (RT) and Identity over 𝕂 are assumed, then since we have C ∈ K*_A and C ∉ K′*_A it follows that A > C ∈ K and A > C ∉ K′, in which case K ⊄ K′, i.e. K′ is not an expansion of K. But to accept this, Gärdenfors argues, would violate certain intuitions:

[I]f we assume (RT) and not only (K*M), then Miss Julie would have believed A > C in K. But then the information that a person with blood group AB can never have a child with blood group O, would contradict her beliefs in K, which violates our intuitions that this information is indeed consistent with her beliefs in K. ([Gärdenfors, 1986], p. 87)

Let B stand for the statement that a person with blood group AB can never have a child with blood group O. Gärdenfors has claimed, in a context where s satisfies Identity over 𝕂, that if (RT) holds, then Miss Julie's beliefs in K contradict B, but this claim requires further justification: how exactly does K contradict B?
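Gärdenfors' Miss Julie example can be replayed in the same illustrative worlds-based sketch (invented names, expansion only; no particular revision operator is assumed). The code makes the modest point that once B is added, expanding by A yields an inconsistent set, so any revision by A that restores consistency must give up something; on Gärdenfors' telling, the belief C.

```python
from itertools import product

# Atoms: A = "Johan's blood group is AB", C = "Johan is Julie's father",
# O = "Julie's blood group is O".  B, the learned law, rules out worlds
# where A, C, and O hold together.
ATOMS = ("A", "C", "O")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=3)]

def believes(K, f):
    return all(f(w) for w in K)

def expand(K, f):
    return [w for w in K if f(w)]

B = lambda w: not (w["A"] and w["C"] and w["O"])   # the incompatibility law
A = lambda w: w["A"]
C = lambda w: w["C"]

# Julie's initial beliefs: O and C, nothing about A, no law yet.
K = [w for w in WORLDS if w["O"] and w["C"]]
print(believes(expand(K, A), C))    # True: C survives expansion by A

K1 = expand(K, B)                   # learning the law: an expansion of K
print(len(expand(K1, A)))           # 0: expanding K1 by A is inconsistent
```

The empty (inconsistent) result of the last expansion is exactly why the general case calls for revision rather than expansion.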
We might suppose that B entails L(C → ¬A), where L is an alethic nomological necessity operator expressing the modal force of B. Assuming (RT) and Identity over 𝕂, the question whether K contradicts B depends on whether the set {C, B, A > C} is consistent, and this can be assumed to depend on whether the set {C, L(C → ¬A), A > C} is consistent. But the latter set is consistent if the semantics for > is not tied to nomological necessity. For example, given a selection function semantics for > and an accessibility relation semantics for L, all of C, A > C, and L(C → ¬A) can be true at possible world w if none of the A-worlds selected relative to w happen to be nomologically accessible at w.[31] So in

[31] For a less abstract version of essentially this point, see [Cross, 1990a], pp. 229-232.


order to sustain Gardenfors' claim in the passage cited above, the claim that K contradicts B if (RT) holds (and if s satis es Identity), we would have to assume the right sort of semantic connection between Ramsey test conditionals and the nomological modality in B , but the case for assuming that connection is not at all obvious: Ramsey test conditionals, after all, are epistemic . And if K indeed does not contradict B , then the same conditional that prevents the example from being a counterexample to (K M) makes the example a counterexample to (K P): since K 0 = KB , and since C 2 KA but C 62 KA0 , it follows that if (RT) holds, then A > C 2 K 6 KB 63 A > C even though :B 62 K . So it might be argued that the presence of Ramsey test conditionals in belief sets will render (K M) intuitively innocuous while providing perfectly reasonable counterexamples to (K P). Still, the arguments in favor of (K P) seem strong. One argument appeals to the Bayesian model of rationality. Suppose that an agent's belief state is represented as a probability function P . According to Bayesian doctrine, upon becoming certain of a rational agent in belief state P revises her belief state by conditionalizing on , assuming P () > 0. If this doctrine is correct and if an agent's belief set consists of those statements to which she assigns unit probability, then (K P) reduces to a theorem of probability theory: if P (:) 6= 1 and P ( ) = 1, then P ( j) = 1. A second argument appeals to the doctrine that revision can be de ned in terms of contraction and expansion via the Levi Identity (LI), which prescribes the following: to revise with , rst contract relative to : and then expand with . 
If ⊢ is classical and if (LI), (Def+), and (K 3) hold, then (K P) follows, the role of (K 3) being to require any contraction of K to be vacuous if the proposition contracted does not belong to K: if one does not believe a given proposition, then no prior belief need be discarded when one contracts one's beliefs to exclude that proposition; it is already excluded. As Theorem 10 shows, we can incorporate the second of these arguments for (K P) directly into the triviality result by recasting Theorem 8 in terms of an inconsistency between (RT), (LI), and postulates (K+ 1), (K 3), and (K 4w). (K 3) deals with what is in some sense the degenerate case of contraction: contraction with respect to an absent proposition. Postulate (K 4w) is similarly weak: it requires only that a contraction should really be a contraction in any case where a logically contingent proposition is contracted from a logically consistent belief set. Both postulates are very weak constraints on contraction, and their weakness makes the case against (RT) seem strong, as long as we assume that s satisfies Identity over K or at least Monotonicity over K. Is there a weakened version of (RT) that is compatible with the other postulates mentioned in Theorem 8? Lindström and Rabinowicz [1992] show that there is. They suggest replacing (RT) with a condition whose counterpart in our framework is the following:


(SRT) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥: φ > ψ ∈ s(K) iff ψ ∈ K′*φ for all K′ ∈ K such that K ⊆ K′.

Like Gärdenfors, Lindström and Rabinowicz do not distinguish between K and s(K), and in a context where this distinction is not made (i.e. where s satisfies Identity over K), replacing (RT) with (SRT) has the effect of excluding from belief sets many of the conditionals that must be present in them if (RT) and Identity over K are assumed. For example, in Gärdenfors' Miss Julie case, (RT) and Identity over K force the conclusion that A > C ∈ K and A > C ∉ K′, since C ∈ K*A and C ∉ K′*A, giving us a counterexample to (K P) since ¬B ∉ K and K′ = K+B. If (SRT) and Identity over K are assumed instead, then the falsity of (K P) no longer follows. The question whether A > C belongs to K depends not simply on K*A but on the revision behaviour of all belief sets that include K as a subset, and similarly for the question whether A > C belongs to K′.

On giving up (K P)

Should the Ramsey test be preserved at the expense of (K P)? The answer is certainly yes if the Ramsey test is applied to the notion of theory change to which Katsuno and Mendelzon in [Katsuno and Mendelzon, 1992] attach the label update. They write:

. . . [U]pdate consists of bringing the knowledge base up to date when the world described by it changes. For example, most database updates are of this variety, e.g. "increase Joe's salary by 5%". Another example is the incorporation into the knowledge base of changes caused in the world by the actions of a robot.³²

Update, according to Katsuno and Mendelzon, contrasts with revision:³³

. . . [R]evision is used when we are obtaining new information about a static world. For example, we may be trying to diagnose a faulty circuit and want to incorporate into the knowledge base the results of successive tests, where newer results may contradict old ones. We claim the AGM postulates describe only revision.³⁴

Katsuno and Mendelzon represent knowledge bases as formulas and introduce a binary modal connective to represent the update operation. Following [Grahne, 1991] we will use the symbol '◦' for this operation; then the formula φ ◦ ψ is the knowledge base that results from updating knowledge base φ with new information ψ. Grahne [1991] provides an interpretation of the Ramsey test in terms of update. Given a type-L2 language that includes the binary connective '◦', Grahne simply adds to Lewis' system VCU the following (validity-preserving) rule of inference:

RR: From φ → (ψ > χ) infer (φ ◦ ψ) → χ, and from (φ ◦ ψ) → χ infer φ → (ψ > χ).

³² [Katsuno and Mendelzon, 1992], p. 183.
³³ See also [Winslett, 1990].
³⁴ Ibid.
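The contrast between revision of a static picture and update under change can be made concrete in a small sketch. Worlds are pairs of truth values, closeness is Hamming distance, and revision falls back on the new information when it contradicts the knowledge base; these encoding choices are illustrative assumptions rather than anything in Katsuno and Mendelzon's or Grahne's formalism.

```python
from itertools import product

# Worlds are truth assignments to two atoms (p, q).
WORLDS = set(product([True, False], repeat=2))

def hamming(w, v):
    """Number of atoms on which two worlds disagree."""
    return sum(a != b for a, b in zip(w, v))

def revise(kb, new):
    """Revision, consistent case: intersect; otherwise adopt the new information."""
    return (kb & new) or set(new)

def update(kb, new):
    """Pointwise update: each kb-world moves to its closest new-information worlds."""
    result = set()
    for w in kb:
        d = min(hamming(w, v) for v in new)
        result |= {v for v in new if hamming(w, v) == d}
    return result

kb = {w for w in WORLDS if w[0] != w[1]}   # exactly one of p, q holds
new = {w for w in WORLDS if w[0]}          # new information: p holds

print(sorted(revise(kb, new)))   # one world survives: p true, q false
print(sorted(update(kb, new)))   # two worlds survive: q's status is left open
```

Revision retains the old belief that p and q do not both hold, and so concludes not-q; update gives that belief up even though the new information is consistent with the knowledge base. This is the behaviour that Grahne's book-and-magazine example below turns on.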

Grahne calls the resulting logical system VCU2 . In Grahne's framework, the formula Æ is true in possible world w i w belongs to the set of closest worlds to w0 in which is true for at least one world w0 in which is true. Grahne proves soundness, completeness, decidability, and nontriviality results for VCU2 , and he notes that VCU2 fails to satisfy the following principle: (U 4s) If 6` :( ^ ), then ` ( Æ ) $ ( ^ ). (U 4s) states that if is consistent with knowledge base , then the result of updating with is a formula logically equivalent to ^ . Grahne cites the following example to illustrate the failure of (U 4s), which is the update counterpart of revision postulate (K 4s): A room has two objects in it, a book and a magazine. Suppose p1 means that the book is on the oor, and p2 means that the magazine is on the oor. Let the knowledge base be (p1 _ p2 ) ^ :(p1 ^ p2 ), i.e. either the book or the magazine is on the oor, but not both. Now we order a robot to put the book on the oor, that is, our new piece of knowledge is p1 . If this change is taken as a revision [so that (K 4s) is assumed], then we nd that since the knowledge base is consistent with p1 , our new knowlege base will be equivalent to p1 ^ :p2 , i.e. the book is on the oor and the magazine is not. But the above change is inadequate. After the robot moves the book to the oor, all we know is that the book is on the oor; why should we conclude that the magazine is not on the oor?35 That is, upon updating to include p1 , we should give up something we believed in our initial epistemic state, namely :(p1 ^ p2 ), even though the new information p1 is consistent with our initial epistemic state. Apparently, then, we have made a belief change using a method that does not satisfy an appropriate counterpart of (K P). Isaac Levi disagrees. The mechanism which underlies update is imaging : the \image" of a set S of possible worlds 35 [Grahne,

1991], pp. 274{275.

72

DONALD NUTE AND CHARLES B. CROSS

under is the set of worlds each of which is one of the closest -worlds to some world belonging to S . Levi [Levi, 1996] argues that while imaging may be useful for describing how changes over time in the state of a system (such as the room in Grahne's example) are regulated, such changes are not an example of belief change. We may, of course, have beliefs about how changes in a system over time are regulated, but an analysis of Grahne's example along the lines recommended by Levi would show it to be a straightforward case in which belief revision took place via expansion : if t is a time before the book was moved and t0 is a time just after the book is moved and if propositional variables p1 and p2 are replaced by formulas containing predicates P1 and P2 , where Pi u means that pi is true at time u, then our initial epistemic state can be represented as (P1 t _ P2 t) ^ :(P1 t ^ P2 t), and upon learning of the change in the position of the book our new epistemic state is (P1 t _ P2 t) ^ :(P1 t ^ P2 t) ^ P1 t0 . On giving up (K+ 1)

Gärdenfors interprets Theorem 8 as forcing a choice between (RT) and (K P), but Rott [1989], Morreau [1992], and Hansson [1992] have argued that (K+ 1) is the real culprit. Postulates (K 3) and (K 4) entail (K 4s): if φ is consistent with K, then a revision to accept φ should be the result of expanding K with φ. Rott [1989] argues that (K 4s), while fine for belief revision in a type-L0 language, is an inappropriate requirement on belief revision in a language with Ramsey test conditionals. Once (K 4s) is rejected in the context of Ramsey test conditionals, Rott argues, (K+ 1) is robbed of any intuitive basis: the only reason for thinking that belief change models should be closed under expansion would be the assumption that expansion is a species of revision. Why think that expansion is a species of revision in the first place? One could justify (K 4s) as the qualitative analog of the Bayesian doctrine that upon becoming certain of φ, a rational agent whose belief state is represented by probability function P revises her belief state by conditionalizing on φ if P(φ) > 0. This doctrine supports (K 4s) because if P(φ) > 0, then the set {ψ : P(ψ | φ) = 1} is precisely the result of expanding the set {ψ : P(ψ) = 1} with φ. But, Morreau [1992] counters, Bayesian doctrine supports (K 4s) in this way only for belief sets over a type-L0 language. Still, one might argue, regardless of whether revision ever leads from a belief set to one of its expansions, should not the expansion of every belief set in a belief change model be available in the model as a possible starting point for revision? Not so, argues Morreau [1992]: a belief change model over which (RT) holds and in which belief sets contain conditionals incorporates the idealizing assumption that the conditionals an agent believes form a complete and correct record of how the agent would revise his or her beliefs.


Not just any collection of theories in a conditional language can be the belief sets of a Ramsey test respecting belief change model, because not just any theory will conform to the idealization. Morreau interprets the Gärdenfors triviality result as showing in particular that the idealization required by the Ramsey test cannot be achieved in a nontrivial belief change model that respects (K+ 1) while incorporating conditionals in belief sets. But are there in fact nontrivial belief change models containing conditional-laden belief sets in which (RT) holds but (K+ 1) does not? Morreau's Example 6 ([Morreau, 1992], p. 41), which we adapt to our framework, confirms that there are. Let L be a type-L1 language and let L0 be its type-L0 fragment. Assume that L0 contains at least two distinct atomic formulas. Let ⊢0 be truth-functional consequence, assume (Def+), and let K0 be the set of all ⊢0-theories in L0. Define a belief revision operation * as follows for all Γ ∈ K0 and all formulas φ of L0: Γ*φ = Γ+φ if ¬φ ∉ Γ, and Γ*φ = Cn⊢0({φ}) otherwise. For each Γ ∈ K0, let KΓ = Γ ∪ {φ > ψ : φ, ψ ∈ wffL0 and ψ ∈ Γ*φ}. Let K = {KΓ : Γ ∈ K0}; let (KΓ)*φ be the belief set associated with Γ*φ; let I = wffL0 (as before); let K⊥ = wffL; let dom(s) = K; and let s(KΓ) = KΓ for all KΓ ∈ K. Letting contraction (−) again be arbitrary, ⟨K, I, ⊢0, K⊥, *, −, s⟩ satisfies (K 2), (K C), and (RT), but this model, unlike the first, does not satisfy (K+ 1) or (K P).³⁶ For example, let A, B be distinct atomic formulas of L0, and let Γ0 = Cn⊢0({B}); thus Γ0 ∈ K0. Since ¬A ∉ Γ0, we have that Γ0*A = Γ0+A = Cn⊢0({A, B}). Accordingly, A > B ∈ KΓ0 ∈ K, but note that (KΓ0)+¬A does not belong to K, for there is no Γ ∈ K0 such that both ¬A and A > B belong to KΓ. The second belief change model constructed above is the result of closing the first model under the Ramsey test (restricted to non-nested conditionals), and both models are Gärdenfors nontrivial. The Gärdenfors nontriviality of the second model is established by the belief set KCn⊢0(∅), which belongs to K, together with A ∧ ¬B, B ∧ ¬A, and A ∧ B, which belong to L.

In addition to this example, Morreau provides a general recipe for constructing nontrivial models of belief revision in a type-L1 language L whose type-L0 fragment is L0 and where (K 1), (K 2), (K C), (RT), and (K PI) hold and I = wffL0.

³⁶ The model does satisfy a weakened version of (K P), however, as Morreau points out: (K PI) For all φ ∈ I, if ¬φ ∉ K then K ∩ I ⊆ K*φ ∩ I.
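The way a Ramsey test conditional can be lost under mere expansion, which is what blocks (K+ 1) in models like Morreau's, can be sketched semantically. Theories are represented by their sets of models over two atoms, and revision is an assumed "drastic" operation that expands when consistent and otherwise adopts the new information alone; this encoding is an illustration, not Morreau's own construction.

```python
from itertools import product

# Worlds assign truth values to the atoms (A, B).
WORLDS = set(product([True, False], repeat=2))

def revise(theory, info):
    """Drastic revision: expand when consistent, else adopt the new info alone."""
    return (theory & info) or set(info)

def supports_conditional(theory, antecedent, consequent):
    """Ramsey test: A > B is supported iff B holds throughout the revision by A."""
    return revise(theory, antecedent) <= consequent

A = {w for w in WORLDS if w[0]}
B = {w for w in WORLDS if w[1]}

K = B             # believe B and its consequences
K_plus = B - A    # expand K with not-A, which is consistent with K

print(supports_conditional(K, A, B))       # the conditional A > B is supported
print(supports_conditional(K_plus, A, B))  # support is lost in the expansion
```

The expanded theory is a superset of the original as a set of beliefs (a subset as a set of worlds), yet A > B drops out: revising the expanded theory by A must discard not-A, and the drastic fallback no longer retains B.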


Where does the triviality proof break down when applied to Morreau's example? Hansson [1992] proves a theorem that provides the answer: Morreau's example is one of a set of belief change models in which forking support sets cannot be constructed. The counterpart of Hansson's theorem in our framework is the following:

THEOREM 17. Suppose that ⟨K, I, ⊢, K⊥, *, −, s⟩ is a belief change model defined on a language L of type L1 or L2 and that ⊢ includes all truth-functional entailments and respects the deduction theorem for the material conditional. Suppose also that dom(s) = K, that s(K) ⊆ wffL for all K ∈ K, and that s satisfies the following for all K ∈ K, all φ ∈ I, and all ψ, χ ∈ K⊥:

1. If ψ ∈ s(K) and ψ ⊢ χ, then χ ∈ s(K).
2. If K is ⊢-consistent and ψ ∈ s(K), then ¬ψ ∉ s(K).
3. If K is ⊢-consistent and ⊬ ¬φ and φ > ψ, φ > χ ∈ s(K), then ⊬ ¬(ψ ∧ χ).
4. If φ > (ψ ∧ χ) ∈ wffL and φ > ψ ∈ s(K) and χ ∉ s(K) and ¬χ ∉ s(K), then φ > (ψ ∧ χ) ∈ s(K).

Suppose that K1, K2, K ∈ K − {K⊥} and that s(K1) and s(K2) are both subsets of s(K). Then either s(K1) ⊆ s(K2) or s(K2) ⊆ s(K1).

Conditions 1 and 2 are equivalent to Closure and Consistency for s, respectively. Note that the Ramsey test itself is not assumed: the point is that simply having conditionals in a belief change model that meets these four conditions ensures that forking support sets cannot be constructed.

On giving up the monotonicity of s over K

Rott [1989; 1991] suggests that nonmonotonic reasoning may provide a solution to the dilemma posed by the Gärdenfors triviality result, and Cross [1990a] argues that Gärdenfors' triviality result should be interpreted as showing not that the Ramsey test should be abandoned but that, given the Ramsey test, s must be nonmonotonic over K, i.e. for some H, K ∈ K, H ⊆ K but s(H) ⊈ s(K).³⁷ Makinson counters in [Makinson, 1990] with a triviality result for models in which s is permitted to be nonmonotonic, but Makinson's result does not bear on the suggestion endorsed by Cross and by Rott. More on this below. Other authors have brought nonmonotonic reasoning into the discussion of the Ramsey test without advertising it as such. For example, in [Hansson, 1992] Hansson writes:

³⁷ Since the monotonicity of s is not assumed in Theorem 13, however, it is clear that the problem for (RT) posed by (K 5) cannot be solved by making s nonmonotonic.


. . . the addition of an indicative sentence that is compatible with all previously supported indicative sentences typically withdraws the support of conditional sentences that were previously supported.³⁸

The type-L1 statements that are in Hansson's sense supported by a given "indicative" (i.e. conditional-free) belief base K represent what Cross (and possibly Rott) would classify as the nonmonotonic consequences of K. Hansson and Cross both think of the sets that individuate belief states as belief bases and define contraction and revision as operations on these sets, but Cross' belief bases differ from Hansson's in two respects: first, whereas Hansson's belief bases are conditional-free, Cross' are not; secondly, whereas Hansson's belief bases are neither closed under ⊢ nor closed under s, in Cross' enriched belief revision models belief bases are closed under ⊢, though not under s. That is, for Hansson, belief states are individuated in terms of sets that function as belief bases with respect to both ⊢ and s, whereas for Cross, belief states are individuated in terms of sets that function as belief bases only with respect to s.

Makinson [1990], like Cross [1990a], supplements the classical ⊢ with a not-necessarily-monotonic s,³⁹ and, like Cross, Makinson explicitly advertises s as a consequence operation. But in Makinson's discussion revision and contraction are defined only on K that are closed under s, and the proof of Makinson's triviality theorem, whose counterpart here is Theorem 11 above, requires a belief change model containing three belief sets closed under s. Makinson's triviality result does not apply to belief change models in which K contains no K such that s(K) = K, and such authors as Arló-Costa and Levi (see [Levi, 1988], [Arló-Costa, 1995], and [Arló-Costa and Levi, 1996]) avoid Makinson's triviality result precisely by requiring s(K) ≠ K for all K ∈ K.

As we noted above, Hansson does not explicitly speak of the support function as a consequence operation, nor does Arló-Costa or Levi. Yet, if one looks at the conditions that Hansson, Arló-Costa, and Levi place on the support function, mirrored here in the definitions of Hansson and Arló-Costa/Levi belief change models as the requirements of Reflexivity, Conservativeness, and Closure, it seems natural to think of s as a nonmonotonic consequence operation. But if we do think of s in a Hansson or Arló-Costa/Levi belief change model as a nonmonotonic consequence operation, what sort of nonmonotonic reasoning does it represent? In [Moore, 1983] Robert Moore distinguishes two types of nonmonotonic reasoning:

By default reasoning, we mean drawing plausible inferences from less than conclusive evidence in the absence of any information to the contrary. The examples about birds being able to fly are of this type.⁴⁰

He continues:

Default reasoning is nonmonotonic because, to use a term from philosophy, it is defeasible. Its conclusions are tentative, so, given better information, they may be withdrawn.⁴¹

Default reasoning, according to Moore, contrasts with autoepistemic reasoning, or reasoning about one's state of belief. Moore writes:

Autoepistemic reasoning is nonmonotonic because the meaning of an autoepistemic statement is context-sensitive; it depends on the theory in which the statement is embedded.⁴²

For example, if ◊φ is defined as being accepted in belief state K just in case ¬φ is not accepted in K, then ◊φ is an autoepistemic statement in Moore's sense. If the support function s in a belief change model is thought of as a nonmonotonic consequence operation, then how should s be classified with respect to Moore's distinction? It depends on the properties s is assumed to have. If a belief change model satisfies some version of the Ramsey test (e.g. (RT), (PRTL), or (NRTL)), then the support function of that model is at least a form of autoepistemic reasoning. This is clear since the acceptability of a Ramsey test conditional for a given agent is in part a function of the agent's current epistemic state, and this holds true regardless of whether conditionals themselves are objects of belief.⁴³ Moreover, the context sensitivity to which Moore refers in the passage just quoted is clearly present in the support function of any belief change model that satisfies (RT), and indeed this context sensitivity was exploited by Morreau [1992] in his construction of a nontrivial Ramsey test, by Lindström and Rabinowicz [1995] and Lindström [1996] in a proposed indexical interpretation of conditionals,⁴⁴ by Hansson [1992] in his accounts of type-L1 conditionals and iterated conditionals, respectively, and by Boutilier and Goldszmidt [1995] in their account of the revision of conditional belief sets.

Given a support function s for a belief change model that satisfies a version of the Ramsey test, can s be not only a mechanism for autoepistemic reasoning but a mechanism for default reasoning, too? This depends on whether s can be used to make ampliative inferences to conclusions that are not epistemically context-sensitive from premises that are not epistemically context-sensitive. Since Hansson, Arló-Costa, and Levi assume that s satisfies Conservativeness, it is clear that for them s is an operation of autoepistemic reasoning but not an operation of default reasoning: s(K) will contain conditionals, i.e. autoepistemic statements, that are not logical consequences of K, but no conditional-free formula gets into s(K) without being a logical consequence of K, which is itself conditional-free for any K ∈ dom(s) according to Hansson, Arló-Costa, and Levi. Cross and Makinson, on the other hand, do not require the support function to satisfy Conservativeness; accordingly, they allow belief change models in which s supports default reasoning. No distinction between s(K) and K exists for Morreau, Gärdenfors, and Segerberg; hence the issue of the status of s does not arise in their respective cases.

³⁸ [Hansson, 1992], p. 526.
³⁹ Makinson uses the symbol C and Cross the symbol cl for s.
⁴⁰ [Moore, 1983], p. 273.
⁴¹ [Moore, 1983], p. 274.
⁴² [Moore, 1983], p. 274.
⁴³ Rott [1989] and Morreau [1992] explicitly adopt the view that Ramsey test conditionals are autoepistemic.
⁴⁴ See also [Döring, 1997].

2.7 Logics for Ramsey test conditionals

Gärdenfors [1978] proves the soundness and completeness of David Lewis' system of conditional logic VC with respect to an epistemic, Ramsey test semantics for the conditional. Several other authors have proposed variants of Gärdenfors' Ramsey test semantics, including variants that generalize Gärdenfors' semantics, but it will be convenient for our purposes to adopt formalisms similar to those of [Arló-Costa, 1995] and [Arló-Costa and Levi, 1996].

Primitive belief revision models

Since the conditional is to be given a semantics in terms of belief revision, the notion of a belief set must be defined in terms that do not assume a logic for the conditional. To this end we define primitive belief sets, primitive expansion, and primitive belief revision models. For a Boolean language L of type L1 or L2, let a primitive belief set defined on this language be any set K of formulas of L meeting three requirements:

(pBS1) K ≠ ∅;
(pBS2) if φ ∈ K and ψ ∈ K, then φ ∧ ψ ∈ K;
(pBS3) if φ ∈ K and φ → ψ is a truth-functional tautology, then ψ ∈ K.

If K ⊆ K′ and K, K′ are both primitive belief sets, then K′ is a primitive expansion of K. The operation of primitive expansion is defined as follows:

(DEF+) K+φ = {ψ : φ → ψ ∈ K}.
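In a finite propositional setting, (DEF+) can be checked against the semantic picture of expansion as intersection with the models of the new formula. The encoding below, with belief sets and propositions represented as sets of worlds, is an illustrative assumption rather than the chapter's official apparatus.

```python
from itertools import combinations

# Worlds over two atoms; a "proposition" is any set of worlds, and a belief
# set K is encoded by the set of worlds the agent regards as possible.
WORLDS = frozenset((p, q) for p in (True, False) for q in (True, False))

def propositions():
    ws = sorted(WORLDS)
    return (frozenset(c) for r in range(len(ws) + 1)
            for c in combinations(ws, r))

def accepts(k_worlds, prop):
    """A proposition belongs to the belief set iff it holds at every K-world."""
    return k_worlds <= prop

def implies(a, b):
    """The material conditional a -> b, as a proposition."""
    return (WORLDS - a) | b

# (DEF+): psi is in K+phi iff the material conditional phi -> psi is in K.
for k_worlds in propositions():       # every candidate belief set
    for phi in propositions():
        expanded = k_worlds & phi     # semantic expansion with phi
        for psi in propositions():
            assert accepts(expanded, psi) == accepts(k_worlds, implies(phi, psi))
print("DEF+ agrees with semantic expansion on this space")
```

The check works because k ∩ [φ] ⊆ [ψ] and k ⊆ [¬φ] ∪ [ψ] say the same thing, which is the deduction theorem in semantic dress.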

It is easy to see that if K is a primitive belief set, then so is K+φ. Finally, let us define the notion of a primitive belief revision model on L:


(DefpBRM) A primitive belief revision model (or pBRM) on a Boolean language L is an ordered quadruple ⟨K, *, K⊥, s⟩ whose components are as follows:
1. K⊥ = wffL0, where L0 is L or a fragment of L;
2. K is a nonempty set of primitive belief sets defined on L0, and if K ∈ K then K contains every primitive expansion of K on L0;
3. * is a function mapping each K ∈ K and each formula φ ∈ K⊥ to a primitive belief set K*φ belonging to K;
4. s is a function mapping each K ∈ K to a primitive belief set s(K) of formulas of L, where s satisfies the following:
(a) if φ ∈ K⊥ and φ ∈ s(K) then φ ∈ K;
(b) K ⊆ s(K) if K⊥ ≠ K ∈ K.

Note that while K*φ and s(K) must both be primitive belief sets, they need not be primitive belief sets of the same language. When referring to the belief revision postulates (K 1), (K 2), etc., in the context of primitive belief revision models, we will assume that φ and ψ range over K⊥. A primitive belief revision model ⟨K, *, K⊥, s⟩ defined on a Boolean language L is a Gärdenfors pBRM iff L is of type L2 and K⊥ = wffL and s is the identity function on K and s satisfies the following unrestricted version of the positive Ramsey test:

(pRTG) For all K ∈ K, if φ, ψ ∈ K⊥, then (φ > ψ) ∈ s(K) iff ψ ∈ K*φ.

A primitive belief revision model ⟨K, *, K⊥, s⟩ defined on a Boolean language L is an Arló-Costa/Levi pBRM iff L is of type L1 and K⊥ is the set of all formulas of the largest conditional-free fragment of L and s satisfies the following versions of both the positive and negative Ramsey tests:

(pPRT) For all K ∈ K such that K ≠ K⊥, if φ, ψ ∈ K⊥, then (φ > ψ) ∈ s(K) iff ψ ∈ K*φ.

(pNRT) For all K ∈ K such that K ≠ K⊥, if φ, ψ ∈ K⊥, then ¬(φ > ψ) ∈ s(K) iff ψ ∉ K*φ.

Positive and negative validity

In [Arló-Costa, 1995] (and in [Arló-Costa and Levi, 1996], with Isaac Levi) Arló-Costa distinguishes between positive and negative concepts of validity. The concepts are distinct in Arló-Costa/Levi pBRMs, though not in Gärdenfors pBRMs. A formula φ is positively valid (PV) relative to ⟨K, *, K⊥, s⟩, where the latter is a primitive belief revision model, iff φ ∈ s(K) for all K such that K⊥ ≠ K ∈ K; and φ is positively valid relative to a set of belief revision models iff φ is positively valid relative to each member of the set. φ is negatively valid (NV) relative to ⟨K, *, K⊥, s⟩ iff ¬φ ∉ s(K) for each K such that K⊥ ≠ K ∈ K; and φ is negatively valid relative to a set of belief revision models iff φ is negatively valid relative to each member of the set. Notions of entailment can be associated with positive and negative validity, respectively. Given a set Γ of formulas of a type-L1 or type-L2 language L and a formula φ of L, Γ positively entails φ (Γ ⊨+ φ) with respect to a primitive belief revision model ⟨K, *, K⊥, s⟩ iff φ ∈ s(K) for every K such that Γ ⊆ s(K) and K⊥ ≠ K ∈ K. By contrast, Γ negatively entails φ (Γ ⊨− φ) with respect to a primitive belief revision model ⟨K, *, K⊥, s⟩ iff there is no K such that K⊥ ≠ K ∈ K and Γ ∪ {¬φ} ⊆ s(K). In a Gärdenfors pBRM, positive and negative validity coincide:

PROPOSITION 18. Relative to any Gärdenfors pBRM defined on a language L of type L2, a formula φ of L is positively valid iff φ is negatively valid.

Proof. Let ⟨K, *, K⊥, s⟩ be a Gärdenfors pBRM defined on a language L of type L2, and let φ be a formula of L. First, suppose that φ is positively valid relative to ⟨K, *, K⊥, s⟩ and choose an arbitrary K such that K⊥ ≠ K ∈ K. Then φ ∈ s(K), but since s(K) = K ≠ K⊥ we have ¬φ ∉ s(K), as required. Conversely, assume that φ is negatively valid relative to ⟨K, *, K⊥, s⟩ and choose an arbitrary K such that K⊥ ≠ K ∈ K. Assume for reductio that φ ∉ s(K). Since s is the identity function, we have that φ ∉ K, in which case K+¬φ ≠ K⊥. Since K is closed under primitive expansions, we have in addition that K+¬φ ∈ K. Thus, ¬φ ∈ s(K+¬φ) and K⊥ ≠ K+¬φ ∈ K, which is contrary to the negative validity of φ.

Positive and negative validity do not coincide in Arló-Costa/Levi pBRMs, however. The negative Ramsey test prevents it. Consider the following pair of lemmas regarding the thesis (CS):

LEMMA 19. For any type-L1 language L, if φ, ψ are conditional-free, then (φ ∧ ψ) → (φ > ψ) is negatively valid in an Arló-Costa/Levi pBRM defined on L iff the model satisfies (K 4w).

The latter is equivalent to Observation 4.7 of [Arló-Costa and Levi, 1996].

LEMMA 20. For any type-L1 language L containing at least one atomic formula other than ⊤ and ⊥, there are conditional-free φ and ψ such that (φ ∧ ψ) → (φ > ψ) is not positively valid in any Arló-Costa/Levi pBRM defined on L that satisfies (K 3) and contains a primitive belief set K where ¬ψ, ψ ∉ K.

Proof. Suppose L is a type-L1 language containing at least one atomic formula different from ⊤ and ⊥, and consider an Arló-Costa/Levi pBRM ⟨K, *, K⊥, s⟩ defined on L that satisfies (K 3) and contains a primitive belief set K such that ¬ψ, ψ ∉ K. Suppose for reductio that (φ ∧ ψ) → (φ > ψ) is positively valid for all conditional-free φ and ψ. Then, in particular, (⊤ ∧ ψ) → (⊤ > ψ) is positively valid relative to ⟨K, *, K⊥, s⟩. By (K 3) we have K*⊤ ⊆ K+⊤ = K. Since, in addition, ¬ψ, ψ ∉ K, we have that ¬ψ, ψ ∉ K*⊤. Since ψ ∉ K*⊤, it follows by the Negative Ramsey test that ¬(⊤ > ψ) ∈ s(K); hence by the positive validity of (⊤ ∧ ψ) → (⊤ > ψ) relative to ⟨K, *, K⊥, s⟩ we have that ¬(⊤ ∧ ψ) ∈ s(K). Since ¬(⊤ ∧ ψ) is conditional-free, it follows that ¬(⊤ ∧ ψ) ∈ K. Since primitive belief sets are deductively closed, we have ¬ψ ∈ K, contrary to assumption.

The proof just given is derived from that given by Arló-Costa for Observation 3.14 in [Arló-Costa, 1995]. Finally, we state the following obvious but necessary lemma:

LEMMA 21. For some type-L1 language L, there is an Arló-Costa/Levi pBRM defined on L that satisfies (K 3) and (K 4w) and also contains a primitive belief set K where ¬ψ, ψ ∉ K for some conditional-free formula ψ of L.

These three lemmas suffice to show the following:

THEOREM 22. There are Arló-Costa/Levi pBRMs relative to which at least some formulas of the form (φ ∧ ψ) → (φ > ψ) are negatively valid but not positively valid.

Interestingly, despite Theorem 22, {φ ∧ ψ} ⊨+ φ > ψ holds relative to every Arló-Costa/Levi pBRM that satisfies (K 4w).⁴⁵

Belief revision models for VC

Gärdenfors provides an epistemic semantics for VC based on negative validity. He begins with a minimal conditional logic CM defined as follows:

Axiom schemata

Taut: All truth-functional tautologies;

CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)];

CN: φ > ⊤.

Rules of inference

Modus Ponens: From φ and φ → ψ to infer ψ;

RCM: From ψ → χ to infer (φ > ψ) → (φ > χ).

⁴⁵ See Observation 3.15 in [Arló-Costa, 1995].


Gärdenfors [1978] proves a soundness/completeness theorem for CM that is equivalent to the following:

THEOREM 23. A formula of any type-L2 language is a theorem of CM iff it is negatively valid in every Gärdenfors pBRM.

Gärdenfors then proves the following:

THEOREM 24. A formula is a theorem of VC iff it is derivable from CM together with (ID), (CSO′), (CS), (MP), (CA), and (CV) as additional axiom schemata:

ID: φ > φ

CS: (φ ∧ ψ) → (φ > ψ)

CSO′: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) → (ψ > χ)]

MP: (φ > ψ) → (φ → ψ)

CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]

CV: [(φ > χ) ∧ ¬(φ > ¬ψ)] → [(φ ∧ ψ) > χ]

An epistemic semantics for VC is obtained by restricting attention to Gärdenfors pBRMs that satisfy constraints corresponding to axioms ID, CSO′, CS, MP, CA, and CV. Gärdenfors [1978] proves lemmas equivalent to the following:

LEMMA 25. Where M is any Gärdenfors pBRM,

1. all instances of ID are negatively valid in M iff M satisfies (K 2);
2. all instances of CSO′ are negatively valid in M iff M satisfies (K 6s);
3. all instances of CS are negatively valid in M iff M satisfies (K 4w);
4. all instances of MP are negatively valid in M iff M satisfies (K 3);
5. if M satisfies (K 2), (K 6s), (K 4w), and (K 3), then all instances of CA are negatively valid in M if M satisfies (K 7);
6. if all instances of ID, CSO′, CS, and MP are negatively valid in M, then M satisfies (K 7) if all instances of CA are negatively valid in M;
7. if M satisfies (K 2), (K 6s), (K 4w), and (K 3), then all instances of CV are negatively valid in M if M satisfies (K L);
8. if all instances of ID, CSO′, CS, and MP are negatively valid in M, then M satisfies (K L) if all instances of CV are negatively valid in M.


Theorem 24 and Lemma 25 allow the soundness and completeness result of Theorem 23 to be extended to yield the following:

THEOREM 26. A formula is a theorem of VC iff it is negatively valid in all Gärdenfors pBRMs that satisfy (K 2), (K 3), (K 4w), (K 6s), (K 7), and (K L).

This theorem shows that if VC is translated into a theory of belief revision on Gärdenfors pBRMs using that version of the Ramsey test which is built into the notion of a Gärdenfors pBRM, then the resulting theory of belief revision is defined by (K 1) (which is built into the definition of a pBRM), (K 2), (K 3), (K 4w), (K 6s), (K 7), and (K L). The absence of (K 5w) should not be surprising, given Theorem 13. Since in a Gärdenfors pBRM (K 3), (K 4w), and (DEF+) imply (K T), and since (K T) together with (DEF+), (K 6s), and (K 8) imply (K P), and given Theorem 8, the absence of (K 8) should not be surprising.

A conditional logic that approximates AGM belief revision

Whereas Gärdenfors [1978] sets out to find epistemic models for Lewis's system VC of conditional logic, Arló-Costa [1995] sets out to find a system of conditional logic whose primitive belief revision models are defined at least approximately by the AGM belief revision postulates for transitively relational partial meet contraction (see [Gärdenfors, 1988], Chapters 3-4). The result is the system EF, which is defined only on languages of type L1 (languages of flat conditionals). EF has the following axioms and rules, where φ, ψ, and χ are conditional-free:

Axiom schemata

Taut: All truth-functional tautologies

ID: φ > φ

MP: (φ > ψ) → (φ → ψ)

CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]

CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]

CV: [(φ > χ) ∧ ¬(φ > ¬ψ)] → [(φ ∧ ψ) > χ]

CN: φ > ⊤

CD: ¬(φ > ⊥), for all non-tautologous ¬φ.


Rules of inference

Modus Ponens: From φ and φ → ψ to infer ψ.

RCM: From ψ → χ to infer (φ > ψ) → (φ > χ).

RCEA: From φ ↔ ψ to infer (φ > χ) ↔ (ψ > χ).

One obvious difference between VC and Arló-Costa's EF is that EF is defined for type-L1 languages only, whereas VC is defined for type-L2 languages. Another difference is that CS, an axiom of VC, is not a theorem of EF. A third difference is that CD, an axiom of EF, is not a theorem of VC. Arló-Costa's epistemic semantics for EF is crucially different from Gärdenfors' epistemic semantics for VC in that the semantics of EF is defined in terms of positive validity over Arló-Costa/Levi pBRMs rather than in terms of negative validity over Gärdenfors pBRMs. Positive and negative validity coincide in Gärdenfors pBRMs (see Proposition 18) but not in Arló-Costa/Levi pBRMs (see Theorem 22). Which notion of validity should then be adopted? Arló-Costa and Levi argue that positive validity should be adopted rather than negative validity, both because positive validity is more intuitive and because in Arló-Costa/Levi models, which satisfy the Negative Ramsey Test favored by Arló-Costa and Levi, the inference rule modus ponens does not preserve negative validity.⁴⁶

Consider a type-L1 language L; relative to L the logical system Flat CM is the smallest set of formulas of L that contains all instances of the axiom schemata of Gärdenfors' CM and is closed under the rules of CM. Note that EF is an extension of Flat CM. Arló-Costa [1995] proves the completeness of EF with respect to an epistemic semantics (based on positive validity) by proving a result equivalent to Theorem 31 below. We begin with a series of results to be used as lemmas for Theorem 31:⁴⁷

THEOREM 27. A formula of any type-L1 language L is a theorem of Flat CM iff it is positively valid in every Arló-Costa/Levi pBRM defined on L.

THEOREM 28. Let CM+ be the result of extending Flat CM by adding the rule (RCEA) (restricted to the conditionals of a type-L1 language). A formula is derivable in CM+ iff it is positively valid in the class of all Arló-Costa/Levi pBRMs that satisfy (K 6).

THEOREM 29. Let CMU+ be the result of extending CM+ by adding ¬(φ > ⊥) for every non-tautologous conditional-free ¬φ. A formula is derivable in CMU+ iff it is positively valid in the class of all Arló-Costa/Levi pBRMs that satisfy (K 6) and (K C).

⁴⁶ See [Arló-Costa and Levi, 1996], pp. 239-240.
⁴⁷ Our formulation of these results reflects the organization found in [Arló-Costa and Levi, 1996].


LEMMA 30. Where M is any Arló-Costa/Levi pBRM,

1. all instances of ID are positively valid in M iff M satisfies (K*2);
2. all instances of MP are positively valid in M iff M satisfies (K*3);
3. all instances of CA are positively valid in M iff M satisfies (K*7′);
4. if all instances of ID are positively valid in M, then all instances of CV are positively valid in M if M satisfies (K*8).

Theorems 27, 28, and 29, together with Lemma 30 and the fact that (K*7) and (K*7′) are equivalent in any pBRM that satisfies (K*2) and (K*6), yield the following completeness theorem for EF:48

THEOREM 31. A formula of any type-L1 language L is a theorem of EF iff it is positively valid in every Arló-Costa/Levi pBRM defined on L satisfying (K*2), (K*3), (K*C), (K*6), (K*7), and (K*8).

Postulates (K*1), (K*2), (K*3), (K*4), (K*5), (K*6), (K*7), and (K*8) jointly capture that notion of revision that is derivable via the Levi Identity (LI) from the AGM notion of transitively relational partial meet contraction (AGM Revision, for short).49 Since (K*1) holds in all pBRMs, EF comes very close to capturing AGM Revision, but (K*1) and the postulates mentioned in Theorem 31 define a notion of revision (EF Revision, for short) that is strictly weaker than AGM Revision in two respects. First, whereas AGM Revision includes (K*4), EF Revision does not. It turns out that (K*4) does not correspond to the positive validity of any type-L1 formula. Still, (K*4) does correspond to a certain positive entailment, as Arló-Costa [1995] shows:

PROPOSITION 32. If M is an Arló-Costa/Levi pBRM defined on a type-L1 language L, then M satisfies (K*4) iff

{φ → ψ, ¬(⊤ > ¬φ)} ⊨⁺ φ > ψ

holds in M for all conditional-free formulas φ and ψ of L.

This result is equivalent to OBSERVATION 3.16 of [Arló-Costa, 1995]. Note that Proposition 32 does not establish conditions for the positive validity of

(φ → ψ) → [¬(⊤ > ¬φ) → (φ > ψ)].

But if nesting of conditionals is allowed, then (K*4) can be associated with the positive validity of nested conditionals of the form

[(φ → ψ) ∧ ¬(⊤ > ¬φ)] > (φ > ψ)

(see THEOREM 8.1 and OBSERVATION 8.3 in [Arló-Costa, 1995]). In general, Γ ∪ {φ} ⊨⁺ ψ is not equivalent to Γ ⊨⁺ φ > ψ in an Arló-Costa/Levi pBRM, but this equivalence does hold for certain φ and ψ when Γ = ∅ (see OBSERVATION 3.17 of [Arló-Costa, 1995]).

A second difference between EF Revision and AGM Revision is this: whereas AGM Revision includes (K*5), which entails (K*5w), EF Revision includes neither (K*5) nor (K*5w) but instead includes (K*C). The only difference between (K*5w) and (K*C) is that (K*5w) places a constraint on the revision of all belief sets that (K*C) places just on the revision of consistent belief sets. In particular, where φ is non-tautologous, (K*5w) requires (K⊥)*φ to be distinct from K⊥ (and therefore, actually, a contraction of K⊥), whereas (K*C) implies no such requirement. Theorem 29 reveals that (K*C) is secured in Arló-Costa/Levi pBRMs via (pNRT) and the positive validity of negated conditionals of the form ¬(φ > ⊥), where φ is non-tautologous. These negated conditionals also belong to K⊥, of course, but allowing K to take K⊥ as a value in (pNRT) is not an option. Allowing K to take K⊥ as a value in (pPRT) also does not help: Theorem 13 shows that (K*5w) and a consistent underlying logic cannot be combined with the positive Ramsey test in that case. Still, leaving aside the revision of K⊥, it is true, as Arló-Costa [1995] has shown, that AGM revision of non-absurd belief sets can be specified in terms of positive validity in a type-L2 language or in terms of positive validity and positive entailment in a type-L1 language.

48 Arló-Costa [1995] notes that Theorems 27 and 29 and Lemma 30 suffice to yield a completeness theorem for the type-L1 fragment of David Lewis' system VW.
49 See, for example, [Gärdenfors, 1988], Chapters 3 and 4.

3 OTHER TOPICS

Our discussion of the major kinds of conditionals is far from exhaustive. We have looked at several different approaches to the problem of providing an adequate formal semantics and logic for various kinds of conditionals without being able to demonstrate that one approach is clearly superior to all the others.
Furthermore, there are many problems involved in the analysis of conditionals which we either have not discussed at all or have only mentioned in passing. In this section we will look at several of these, giving each only the briefest attention. One issue which has received much attention is the relationship between conditionals and probability. Stalnaker [1970] proposed that the probability that a conditional is true should be identical with the standard conditional probability. Lewis demonstrates in [Lewis, 1976], however, that this assumption can be true only if we restrict our probability functions to those which assign only a small finite number of distinct values to propositions. Stalnaker [1976] provides a different proof of a similar result, a proof which does not depend upon certain assumptions which Lewis used and which some investigators have questioned. Van Fraassen [1976] avoids


these Triviality Results for a weakened, non-classical version of Stalnaker's conditional logic C2. Lewis [1976] shows, however, that we can embrace a result which resembles Stalnaker's while avoiding the Triviality Result. Lewis's suggestion depends upon a technique which he calls imaging. This technique, which provides an alternative method for determining conditional probabilities, requires that in conditionalising a probability assignment with respect to φ, i.e. in modifying the assignment in a way which produces a new assignment which assigns probability 1 to φ, all the probability which was originally assigned to each ¬φ-world i is transferred to the φ-world closest to i. Lewis demonstrates that if we accept Stalnaker's semantics and if we assign conditional probabilities in this new, non-standard way, then the probability that a conditional is true turns out to be identical with the conditional probability even when the probabilities of truth for conditionals take on infinitely many different values. Lewis's imaging technique can be adapted to semantics other than Stalnaker's. Nute [1980b] adapts Lewis's imaging technique to class selection function semantics, producing a notion of subjunctive probability which differs from both the standard conditional probability and the probability that the corresponding conditional is true. While promising in some ways, Nute's account is extremely cumbersome. Gärdenfors [1982] presents a generalized form of imaging and shows that conditional probability cannot be described even in terms of generalized imaging. Other papers on conditionals and probability include [Döring, 1994; Fetzer and Nute, 1979; Fetzer and Nute, 1980; Hájek, 1994; Hall, 1994; Lance, 1991; Lewis, 1981b; Lewis, 1986; McGee, 1989; Nute, 1981a; Stalnaker and Jeffrey, 1994]. For a careful and comprehensive survey of results relating the probabilities of conditionals to conditional probabilities see [Hájek and Hall, 1994].
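The imaging operation just described admits a simple computational sketch. The worlds, the proposition A (playing the role of φ), the prior P, and the selection function `closest` below are an invented toy example, not drawn from Lewis's text; a Stalnaker-style selection function is assumed to pick a unique closest A-world for each world.

```python
# Toy sketch of Lewis's "imaging" on a probability distribution over
# worlds, contrasted with standard conditionalization. All data here
# are invented for illustration.

A = {"w1", "w2"}                                   # A is true at w1, w2
P = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}   # prior over 4 worlds

# Stalnaker-style selection: the A-world closest to each world
# (an A-world counts as closest to itself).
closest = {"w1": "w1", "w2": "w2", "w3": "w1", "w4": "w2"}

def image(P, A, closest):
    """Shift each not-A world's probability to its closest A-world."""
    Q = {w: 0.0 for w in P}
    for w, p in P.items():
        Q[w if w in A else closest[w]] += p
    return Q

def conditionalize(P, A):
    """Standard conditionalization on A, for comparison."""
    pa = sum(p for w, p in P.items() if w in A)
    return {w: (p / pa if w in A else 0.0) for w, p in P.items()}

imaged = image(P, A, closest)
rescaled = conditionalize(P, A)

# Both assign probability 1 to A, but they distribute it differently:
# imaging moves w3's mass to w1 and w4's mass to w2, while
# conditionalization merely rescales the prior ratio of w1 to w2.
assert abs(sum(imaged[w] for w in A) - 1.0) < 1e-9
assert abs(sum(rescaled[w] for w in A) - 1.0) < 1e-9
assert imaged != rescaled
```

The final assertion records the point at issue: imaging and conditionalization are distinct ways of raising the probability of A to 1, and it is the imaged distribution that Lewis matches to the probability of the conditional.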
The relationship between causation and conditionals has certainly not been overlooked either. Many authors, like Jackson [1977] and Kvart [1980; 1986], assign a special role to causation in their analyses of counterfactual conditionals. Others, like Lewis [1973a] and Swain [1978], attempt to provide analyses of causation in terms of counterfactual dependence. Still others, like Fetzer and Nute [1979; 1980], have tried to develop a semantics for a special kind of causal conditional. These special causal conditionals have then been employed in the formulation of a single-case propensity interpretation of law statements. Conditional logic also has applications in deontic logic (see, for example, [Hilpinen, 1981]), in decision theory (see, for example, [Gibbard and Harper, 1981; Stalnaker, 1981a]), and in nonmonotonic logic (for a summary of some of this work, see [Nute, 1994]). In addition, there has been significant attention in recent years to the issue of whether so-called future indicative conditionals (e.g. `If Oswald doesn't shoot President Kennedy, then someone else will') should be classified as indicative or as subjunctive (see, for example, [Bennett, 1988; Bennett, 1995; Dudman, 1984; Dudman, 1989;

Dudman, 1994; Jackson, 1990]). For a careful and comprehensive review of this and other recent topics of discussion see [Edgington, 1995]. It is not possible in this essay to discuss or even to list all of the material that can be found in the literature on conditional logic and its applications.

4 LIST OF SOME IMPORTANT RULES, THESES, AND LOGICS

In this section we collect some of the most important rules and theses of conditional logic together with definitions for a few of the better known conditional logics.

Rules

RCEC: from φ ↔ ψ, to infer (χ > φ) ↔ (χ > ψ).
RCK: from (φ₁ ∧ … ∧ φₙ) → ψ, to infer [(χ > φ₁) ∧ … ∧ (χ > φₙ)] → (χ > ψ), n ≥ 0.
RCEA: from φ ↔ ψ, to infer (φ > χ) ↔ (ψ > χ).
RCE: from φ → ψ, to infer φ > ψ.
RCM: from φ → ψ, to infer (χ > φ) → (χ > ψ).
RR: from χ → (φ > ψ), to infer (χ ∘ φ) → ψ; and from (χ ∘ φ) → ψ, to infer χ → (φ > ψ).

Theses

Transitivity: [(φ > ψ) ∧ (ψ > χ)] → (φ > χ)
Contraposition: (φ > ¬ψ) → (ψ > ¬φ)
Strengthening Antecedents: (φ > ψ) → [(φ ∧ χ) > ψ]
ID: φ > φ
MP: (φ > ψ) → (φ → ψ)
MOD: (¬φ > φ) → (ψ > φ)
CSO: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) ↔ (ψ > χ)]
CSO′: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) → (ψ > χ)]
CV: [(φ > ψ) ∧ ¬(φ > ¬χ)] → [(φ ∧ χ) > ψ]
CEM: (φ > ψ) ∨ (φ > ¬ψ)
CS: (φ ∧ ψ) → (φ > ψ)
CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]
CM: [φ > (ψ ∧ χ)] → [(φ > ψ) ∧ (φ > χ)]
CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]
SDA: [(φ ∨ ψ) > χ] → [(φ > χ) ∧ (ψ > χ)]
CN: φ > ⊤
CT: (¬φ > ⊥) → φ
CU: ¬(φ > ⊥) → ((φ > ⊥) > ⊥)
CD: ¬(φ > ⊥) for all non-tautologous φ

Recall that in Section 1 we defined a conditional logic as any collection L of sentences formed in the usual way from the symbols of classical sentential logic together with a conditional operator >, such that L is closed under modus ponens and L contains every tautology. We now modify this definition as follows, adopting the terminology of Section 2, to take different language types for conditional logic into account: let a conditional logic on a Boolean language L of type L1 or type L2 be any collection L of sentences of L such that L is closed under modus ponens and L contains every tautology.

Logics for full conditional languages

For a given Boolean language L of type L2, each of the following is the smallest conditional logic on L closed under all the rules and containing all the theses associated with it below.

CM: RCM, CC, CN
VW: RCEC, RCK; ID, MOD, CSO, MP, CV
SS: RCEC, RCK; ID, MOD, CSO, MP, CA, CS
VC: RCEC, RCK; ID, MOD, CSO, MP, CV, CS
VCU: RCEC, RCK; ID, MOD, CSO, MP, CV, CS, CT, CU
VCU2: RCEC, RCK, RR; ID, MOD, CSO, MP, CV, CS, CT, CU (with ∘ as an additional binary operator)
C2: RCEC, RCK; ID, MOD, CSO, MP, CV, CEM

Neither of VW and SS is an extension of the other, and neither of VCU and C2 is an extension of the other. VCU2 is an extension of VCU, and C2 and VCU are both extensions of VC, which is an extension of both VW and SS. VW and SS are both extensions of CM. For the definitions of several weaker conditional logics, see [Lewis, 1973b; Chellas, 1975; Nute, 1980b].

Logics for languages of "flat" conditionals

If L is a Boolean language of type L1, then each of the following logics is the smallest conditional logic on L closed under all the rules and containing all the theses associated with it below.

Flat CM: RCM, CC, CN
Flat VW: RCEC, RCK; ID, MOD, CSO, MP, CV
Flat VC: RCEC, RCK; ID, MOD, CSO, MP, CV, CS

EF: RCM, RCEA, ID, MP, CC, CA, CV, CN, CD

Flat CM is contained in Flat VW, which is contained in both Flat VC and EF, but neither of Flat VC and EF is contained in the other. For a discussion of the logic of flat conditionals aimed at being as true as possible to Ramsey's ideas, see [Levi, 1996], Chapter 4.
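The extension relations stated above for the full-language systems form a small directed graph. The following sketch is merely our own bookkeeping aid, not part of the original text: it encodes the direct containments as stated and checks the claimed comparabilities by transitive closure.

```python
# Direct "extends" relations among the full-language logics, as stated
# in the text (each logic maps to the logics it directly extends).
extends = {
    "VW": {"CM"},
    "SS": {"CM"},
    "VC": {"VW", "SS"},
    "VCU": {"VC"},
    "C2": {"VC"},
    "VCU2": {"VCU"},
}

def extensions_of(logic):
    """All logics that `logic` extends, computed by transitive closure."""
    seen, stack = set(), [logic]
    while stack:
        for weaker in extends.get(stack.pop(), ()):
            if weaker not in seen:
                seen.add(weaker)
                stack.append(weaker)
    return seen

# VCU2 extends everything below it in the chain.
assert extensions_of("VCU2") == {"VCU", "VC", "VW", "SS", "CM"}
# VW and SS are incomparable, as are VCU and C2.
assert "SS" not in extensions_of("VW") and "VW" not in extensions_of("SS")
assert "C2" not in extensions_of("VCU") and "VCU" not in extensions_of("C2")
```

Note that only derivability containments are recorded here; the graph says nothing about the semantic characterizations of the individual systems.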

Acknowledgements

Sections 1, 3 and 4 are primarily the work of the first author, but revised from the first edition of this Handbook with input from the second author. Section 2 is primarily the work of the second author. We are grateful to Lennart Åqvist, Horacio Arló-Costa, Ermanno Bencivenga, John Burgess, David Butcher, Michael Dunn, Dov Gabbay, Christopher Gauker, Franz Guenthner, Hans Kamp, David Lewis and Christian Rohrer for their helpful comments and suggestions on material contained in this paper. We are also grateful to Richmond Thomason for help in assembling our list of references. Finally, we thank Kluwer and the editor of the Journal of Philosophical Logic for permission to use material from [Nute, 1981b] in this article.

Donald Nute
University of Georgia, USA.

Charles B. Cross
University of Georgia, USA.

BIBLIOGRAPHY

[Adams, 1966] E. Adams. Probability and the logic of conditionals. In J. Hintikka and P. Suppes, editors, Aspects of Inductive Logic. North Holland, Amsterdam, 1966.
[Adams, 1975a] E. Adams. Counterfactual conditionals and prior probabilities. In A. Hooker and W. Harper, editors, Proceedings of International Congress on the Foundations of Statistics. Reidel, Dordrecht, 1975.


[Adams, 1975b] E. Adams. The Logic of Conditionals: An Application of Probability to Deductive Logic. Reidel, Dordrecht, 1975.
[Adams, 1977] E. Adams. A note on comparing probabilistic and modal logics of conditionals. Theoria, 43:186–194, 1977.
[Adams, 1981] E. Adams. Transmissible improbabilities and marginal essentialness of premises in inferences involving indicative conditionals. Journal of Philosophical Logic, 10:149–177, 1981.
[Adams, 1995] E. Adams. Remarks on a theorem of McGee. Journal of Philosophical Logic, 24:343–348, 1995.
[Adams, 1996] E. Adams. Four probability preserving properties of inferences. Journal of Philosophical Logic, 25:1–24, 1996.
[Adams, 1997] E. Adams. A Primer of Probability Logic. Cambridge University Press, Cambridge, England, 1997.
[Alchourrón et al., 1985] C. Alchourrón, P. Gärdenfors, and D. Makinson. On the logic of theory change: partial meet contraction and revision functions. The Journal of Symbolic Logic, 50:510–530, 1985.
[Alchourrón, 1994] C. Alchourrón. Philosophical foundations of deontic logic and the logic of defeasible conditionals. In J. J. Meyer and R. J. Wieringa, editors, Deontic Logic in Computer Science: Normative System Specification, pages 43–84. John Wiley and Sons, New York, 1994.
[Appiah, 1984] A. Appiah. Generalizing the probabilistic semantics of conditionals. Journal of Philosophical Logic, 13:351–372, 1984.
[Appiah, 1985] A. Appiah. Assertion and Conditionals. Cambridge University Press, Cambridge, England, 1985.
[Åqvist, 1973] Lennart Åqvist. Modal logic with subjunctive conditionals and dispositional predicates. Journal of Philosophical Logic, 2:1–76, 1973.
[Arló-Costa and Levi, 1996] H. Arló-Costa and I. Levi. Two notions of epistemic validity: epistemic models for Ramsey's conditionals. Synthese, 109:217–262, 1996.
[Arló-Costa and Segerberg, 1998] H. Arló-Costa and K. Segerberg. Conditionals and hypothetical belief revision (abstract). Theoria, 1998. Forthcoming.
[Arló-Costa, 1990] H. Arló-Costa.
Conditionals and monotonic belief revisions: the success postulate. Studia Logica, 49:557–566, 1990.
[Arló-Costa, 1995] H. Arló-Costa. Epistemic conditionals, snakes, and stars. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Arló-Costa, 1998] H. Arló-Costa. Belief revision conditionals: Basic iterated systems. Annals of Pure and Applied Logic, 1998. Forthcoming.
[Asher and Morreau, 1991] N. Asher and M. Morreau. Commonsense entailment: a modal theory of nonmonotonic reasoning. In J. Mylopoulos and R. Reiter, editors, Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Los Altos, California, 1991. Morgan Kaufmann.
[Asher and Morreau, 1995] N. Asher and M. Morreau. What some generic sentences mean. In Gregory Carlson and Francis Jeffrey Pelletier, editors, The Generic Book. Chicago University Press, Chicago, IL, 1995.
[Balke and Pearl, 1994a] A. Balke and J. Pearl. Counterfactual probabilities: Computational methods, bounds, and applications. In R. Lopez de Mantaras and D. Poole, editors, Uncertainty in Artificial Intelligence 10. Morgan Kaufmann, San Mateo, California, 1994.
[Balke and Pearl, 1994b] A. Balke and J. Pearl. Counterfactuals and policy analysis in structural models. In P. Besnard and S. Hanks, editors, Uncertainty in Artificial Intelligence 11. Morgan Kaufmann, San Francisco, 1994.
[Balke and Pearl, 1994c] A. Balke and J. Pearl. Probabilistic evaluation of counterfactual queries. In B. Hayes-Roth and R. Korf, editors, Proceedings of the Twelfth National Conference on Artificial Intelligence, Menlo Park, California, 1994. American Association for Artificial Intelligence, AAAI Press.
[Barwise, 1986] J. Barwise. Conditionals and conditional information. In E. Traugott, A. ter Meulen, J. Reilly, and C. Ferguson, editors, On Conditionals. Cambridge University Press, Cambridge, England, 1986.


[Bell, 1988] J. Bell. Predictive Conditionals, Nonmonotonicity, and Reasoning About the Future. Ph.D. dissertation, University of Essex, Colchester, 1988.
[Benferat et al., 1997] S. Benferat, D. Dubois, and H. Prade. Nonmonotonic reasoning, conditional objects, and possibility theory. Artificial Intelligence, 92:259–276, 1997.
[Bennett, 1974] J. Bennett. Counterfactuals and possible worlds. Canadian Journal of Philosophy, 4:381–402, 1974.
[Bennett, 1982] J. Bennett. Even if. Linguistics and Philosophy, 5:403–418, 1982.
[Bennett, 1984] J. Bennett. Counterfactuals and temporal direction. Philosophical Review, 43:57–91, 1984.
[Bennett, 1988] J. Bennett. Farewell to the phlogiston theory of conditionals. Mind, 97:509–527, 1988.
[Bennett, 1995] J. Bennett. Classifying conditionals: the traditional way is right. Mind, 104:331–354, 1995.
[Bigelow, 1976] J. C. Bigelow. If-then meets the possible worlds. Philosophia, 6:215–236, 1976.
[Bigelow, 1980] J. Bigelow. Review of [Pollock, 1976]. Linguistics and Philosophy, 4:129–139, 1980.
[Blue, 1981] N. A. Blue. A metalinguistic interpretation of counterfactual conditionals. Journal of Philosophical Logic, 10:179–200, 1981.
[Boutilier and Goldszmidt, 1995] C. Boutilier and M. Goldszmidt. On the revision of conditional belief sets. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Boutilier, 1990] C. Boutilier. Conditional logics of normality as modal systems. In T. Dietterich and W. Swartout, editors, Proceedings of the Eighth National Conference on Artificial Intelligence, Menlo Park, California, 1990. American Association for Artificial Intelligence, AAAI Press.
[Boutilier, 1992] C. Boutilier. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, Computer Science Department, University of Toronto, Toronto, Ontario, 1992.
[Boutilier, 1993a] C. Boutilier.
Belief revision and nested conditionals. In R. Bajcsy, editor, Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, San Mateo, California, 1993. Morgan Kaufmann.
[Boutilier, 1993b] C. Boutilier. A modal characterization of defeasible deontic conditionals and conditional goals. In Working Notes of the AAAI Spring Symposium on Reasoning about Mental States, Menlo Park, California, 1993. American Association for Artificial Intelligence.
[Boutilier, 1993c] C. Boutilier. Revision by conditional beliefs. In R. Fikes and W. Lehnert, editors, Proceedings of the Eleventh National Conference on Artificial Intelligence, Menlo Park, California, 1993. American Association for Artificial Intelligence, AAAI Press.
[Boutilier, 1996] C. Boutilier. Iterated revision and minimal change of conditional beliefs. Journal of Philosophical Logic, 25:263–305, 1996.
[Bowie, 1979] G. Lee Bowie. The similarity approach to counterfactuals: some problems. Noûs, 13:477–497, 1979.
[Burgess, 1979] J. P. Burgess. Quick completeness proofs for some logics of conditionals. Notre Dame Journal of Formal Logic, 22:76–84, 1979.
[Burgess, 1984] J. P. Burgess. Chapter II.2: Basic tense logic. In Handbook of Philosophical Logic. Reidel, Dordrecht, 1984.
[Burks, 1951] A. W. Burks. The logic of causal propositions. Mind, 60:363–382, 1951.
[Butcher, 1978] D. Butcher. Subjunctive conditional modal logic. Ph.D. dissertation, Stanford, 1978.
[Butcher, 1983a] D. Butcher. Consequent-relative subjunctive implication, 1983. Unpublished.
[Butcher, 1983b] D. Butcher. An incompatible pair of subjunctive conditional modal axioms. Philosophical Studies, 44:71–110, 1983.
[Chellas, 1975] B. F. Chellas. Basic conditional logic. Journal of Philosophical Logic, 4:133–153, 1975.


[Chisholm, 1946] R. Chisholm. The contrary-to-fact conditional. Mind, 55:289–307, 1946.
[Clark, 1971] M. Clark. Ifs and hooks. Analysis, 32:33–39, 1971.
[Costello, 1996] T. Costello. Modeling belief change using counterfactuals. In L. Carlucci Aiello, J. Doyle, and S. Shapiro, editors, KR'96: Principles of Knowledge Representation and Reasoning. Morgan Kaufmann, San Francisco, California, 1996.
[Creary and Hill, 1975] L. G. Creary and C. S. Hill. Review of [Lewis, 1973b]. Philosophy of Science, 43:431–344, 1975.
[Crocco and Fariñas del Cerro, 1996] G. Crocco and L. Fariñas del Cerro. Counterfactuals: Foundations for nonmonotonic inferences. In A. Fuhrmann and H. Rott, editors, Logic, Action, and Information: Essays on Logic in Philosophy and Artificial Intelligence. Walter de Gruyter, Berlin, 1996.
[Cross and Thomason, 1987] C. Cross and R. Thomason. Update and conditionals. In Z. Ras and M. Zemankova, editors, Methodologies for Intelligent Systems. North-Holland, Amsterdam, 1987.
[Cross and Thomason, 1992] C. Cross and R. Thomason. Conditionals and knowledge-base update. In P. Gärdenfors, editor, Cambridge Tracts in Theoretical Computer Science: Belief Revision, volume 29. Cambridge University Press, Cambridge, England, 1992.
[Cross, 1985] C. Cross. Jonathan Bennett on `even if'. Linguistics and Philosophy, 8:353–357, 1985.
[Cross, 1990a] C. Cross. Belief revision, nonmonotonic reasoning, and the Ramsey test. In H. Kyburg and R. Loui, editors, Knowledge Representation and Defeasible Reasoning. Kluwer, Boston, 1990.
[Cross, 1990b] C. Cross. Temporal necessity and the conditional. Studia Logica, 49:345–363, 1990.
[Daniels and Freeman, 1980] B. Daniels and J. B. Freeman. An analysis of the subjunctive conditional. Notre Dame Journal of Formal Logic, 21:639–655, 1980.
[Darwiche and Pearl, 1994] A. Darwiche and J. Pearl. On the logic of iterated belief revision.
In Ronald Fagin, editor, Theoretical Aspects of Reasoning About Knowledge: Proceedings of the Fifth Conference, San Francisco, 1994. Morgan Kaufmann.
[Davis, 1979] W. Davis. Indicative and subjunctive conditionals. Philosophical Review, 88:544–564, 1979.
[Decew, 1981] J. Decew. Conditional obligation and counterfactuals. Journal of Philosophical Logic, 10(1):55–72, 1981.
[Delgrande, 1988] J. Delgrande. An approach to default reasoning based on a first-order conditional logic: Revised report. Artificial Intelligence, 36:63–90, 1988.
[Delgrande, 1995] J. Delgrande. Syntactic conditional closures for defeasible reasoning. In C. Mellish, editor, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, San Francisco, 1995. Morgan Kaufmann.
[Döring, 1994] F. Döring. On the probabilities of conditionals. Philosophical Review, 103:689–699, 1994. See Philosophical Review, 105:231, 1996, for corrections.
[Döring, 1997] F. Döring. The Ramsey test and conditional semantics. Journal of Philosophical Logic, 26:359–376, 1997.
[Dudman, 1983] V. Dudman. Tense and time in English verb clusters of the primary pattern. Australasian Journal of Linguistics, 3:25–44, 1983.
[Dudman, 1984] V. Dudman. Parsing `if'-sentences. Analysis, 44:145–153, 1984.
[Dudman, 1984a] V. Dudman. Conditional interpretations of if-sentences. Australian Journal of Linguistics, 4:143–204, 1984.
[Dudman, 1986] V. Dudman. Antecedents and consequents. Theoria, 52:168–199, 1986.
[Dudman, 1989] V. Dudman. Vive la révolution! Mind, 98:591–603, 1989.
[Dudman, 1991] V. Dudman. Jackson classifying conditionals. Analysis, 51:131–136, 1991.
[Dudman, 1994] V. Dudman. On conditionals. Journal of Philosophy, 91:113–128, 1994.
[Dudman, 1994a] V. Dudman. Against the indicative. Australasian Journal of Philosophy, 72:17–26, 1994.
[Dudman, 2000] V. Dudman. Classifying `conditionals': the traditional way is wrong. Analysis, 60:147, 2000.


[Edgington, 1995] D. Edgington. On conditionals. Mind, 104:235–329, 1995.
[Eells and Skyrms, 1994] E. Eells and B. Skyrms, editors. Probability and Conditionals: Belief Revision and Rational Decision. Cambridge University Press, Cambridge, England, 1994.
[Eiter and Gottlob, 1993] T. Eiter and G. Gottlob. The complexity of nested counterfactuals and iterated knowledge base revision. In R. Bajcsy, editor, Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, San Mateo, California, 1993. Morgan Kaufmann.
[Ellis et al., 1977] B. Ellis, F. Jackson, and R. Pargetter. An objection to possible worlds semantics for counterfactual logics. Journal of Philosophical Logic, 6:355–357, 1977.
[Ellis, 1978] Brian Ellis. A unified theory of conditionals. Journal of Philosophical Logic, 7:107–124, 1978.
[Farkas and Sugioka, 1987] D. Farkas and Y. Sugioka. Restrictive if/when clauses. Linguistics and Philosophy, 6:225–258, 1987.
[Fetzer and Nute, 1979] J. H. Fetzer and D. Nute. Syntax, semantics and ontology: a probabilistic causal calculus. Synthese, 40:453–495, 1979.
[Fetzer and Nute, 1980] J. H. Fetzer and D. Nute. A probabilistic causal calculus: conflicting conceptions. Synthese, 44:241–246, 1980.
[Fine, 1975] K. Fine. Review of [Lewis, 1973b]. Mind, 84:451–458, 1975.
[Friedman and Halpern, 1994] N. Friedman and J. Halpern. Conditional logics for belief change. In B. Hayes-Roth and R. Korf, editors, Proceedings of the Twelfth National Conference on Artificial Intelligence, Menlo Park, California, 1994. American Association for Artificial Intelligence, AAAI Press.
[Fuhrmann and Levi, 1994] A. Fuhrmann and I. Levi. Undercutting and the Ramsey test for conditionals. Synthese, 101:157–169, 1994.
[Gabbay, 1972] D. M. Gabbay. A general theory of the conditional in terms of a ternary operator. Theoria, 38:97–104, 1972.
[Galles and Pearl, 1997] D. Galles and J. Pearl. An axiomatic characterization of causal counterfactuals.
Technical report, Computer Science Department, UCLA, Los Angeles, California, 1997.
[Gärdenfors et al., 1991] P. Gärdenfors, S. Lindström, M. Morreau, and R. Rabinowicz. The negative Ramsey test: another triviality result. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change. Cambridge University Press, Cambridge, England, 1991.
[Gärdenfors, 1978] P. Gärdenfors. Conditionals and changes of belief. In I. Niiniluoto and R. Tuomela, editors, The Logic and Epistemology of Scientific Change. North Holland, Amsterdam, 1978.
[Gärdenfors, 1979] P. Gärdenfors. Even if. In F. V. Jensen, B. H. Mayoh, and K. K. Moller, editors, Proceedings from 5th Scandinavian Logic Symposium. Aalborg University Press, Aalborg, 1979.
[Gärdenfors, 1982] P. Gärdenfors. Imaging and conditionalization. Journal of Philosophy, 79:747–760, 1982.
[Gärdenfors, 1986] P. Gärdenfors. Belief revisions and the Ramsey test for conditionals. Philosophical Review, 95:81–93, 1986.
[Gärdenfors, 1987] P. Gärdenfors. Variations on the Ramsey test: more triviality results. Studia Logica, 46:321–327, 1987.
[Gärdenfors, 1988] P. Gärdenfors. Knowledge in Flux. MIT Press, Cambridge, MA, 1988.
[Gibbard and Harper, 1981] A. Gibbard and W. Harper. Counterfactuals and two kinds of expected utility. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Ginsberg, 1985] M. Ginsberg. Counterfactuals. In A. Joshi, editor, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Altos, California, 1985. Morgan Kaufmann.
[Goodman, 1955] N. Goodman. Fact, Fiction and Forecast. Harvard, Cambridge, MA, 1955.
[Grahne and Mendelzon, 1994] G. Grahne and A. Mendelzon. Updates and subjunctive queries. Information and Computation, 116:241–252, 1994.


[Grahne, 1991] G. Grahne. Updates and counterfactuals. In J. Allen, R. Fikes, and E. Sandewall, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference. Morgan Kaufmann, Los Altos, 1991.
[Grice, 1967] P. Grice. Logic and conversation, 1967. The William James Lectures, given at Harvard University.
[Hájek and Hall, 1994] A. Hájek and N. Hall. The hypothesis of the conditional construal of conditional probability. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hájek, 1989] A. Hájek. Probabilities of conditionals – revisited. Journal of Philosophical Logic, 18:423–428, 1989.
[Hájek, 1994] A. Hájek. Triviality on the cheap. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hall, 1994] N. Hall. Back in the CCCP. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hansson, 1992] S. Hansson. In defense of the Ramsey test. Journal of Philosophy, 89:499–521, 1992.
[Harper et al., 1981] W. Harper, R. Stalnaker, and G. Pearce, editors. Ifs: Conditionals, Belief, Decision, Chance, and Time. D. Reidel, Dordrecht, 1981.
[Harper, 1975] W. Harper. Rational belief change, Popper functions and the counterfactuals. Synthese, 30:221–262, 1975.
[Hausman, 1996] D. Hausman. Causation and counterfactual dependence reconsidered. Noûs, 30:55–74, 1996.
[Hawthorne, 1996] J. Hawthorne. On the logic of nonmonotonic conditionals and conditional probabilities. Journal of Philosophical Logic, 25:185–218, 1996.
[Herzberger, 1979] H. Herzberger. Counterfactuals and consistency. Journal of Philosophy, 76:83–88, 1979.
[Hilpinen, 1981] R. Hilpinen. Conditionals and possible worlds. In G. Fløistad, editor, Contemporary Philosophy: A New Survey, volume I: Philosophy of Language/Philosophical Logic. Martinus Nijhoff, The Hague, 1981.
[Hilpinen, 1982] R. Hilpinen. Disjunctive permissions and conditionals with disjunctive antecedents. In I. Niiniluoto and Esa Saarinen, editors, Proceedings of the Second Soviet-Finnish Logic Conference, Moscow, December 1979. Acta Philosophica Fennica, 1982.
[Horgan, 1981] T. Horgan. Counterfactuals and Newcomb's problem. Journal of Philosophy, 78:331–356, 1981.
[Horty and Thomason, 1991] J. Horty and R. Thomason. Conditionals and artificial intelligence. Fundamenta Informaticae, 15:301–324, 1991.
[Humberstone, 1978] I. L. Humberstone. Two merits of the circumstantial operator language for conditional logic. Australasian Journal of Philosophy, 56:21–24, 1978.
[Hunter, 1980] G. Hunter. Conditionals, indicative and subjunctive. In J. Dancey, editor, Papers on Language and Logic. Keele University Library, 1980.
[Hunter, 1982] G. Hunter. Review of [Nute, 1980b]. Mind, 91:136–138, 1982.
[Jackson, 1977] F. Jackson. A causal theory of counterfactuals. Australasian Journal of Philosophy, 55:3–21, 1977.
[Jackson, 1979] F. Jackson. On assertion and indicative conditionals. Philosophical Review, 88:565–589, 1979.
[Jackson, 1987] F. Jackson. Conditionals. Blackwell, Oxford, 1987.
[Jackson, 1990] F. Jackson. Classifying conditionals. Analysis, 50:134–147, 1990.
[Jackson, 1991] F. Jackson, editor. Conditionals. Oxford University Press, Oxford, 1991.
[Jackson, 1991a] F. Jackson. Classifying conditionals II. Analysis, 51:137–143, 1991.
[Katsuno and Mendelzon, 1992] H. Katsuno and A. Mendelzon. Updates and counterfactuals. In P. Gärdenfors, editor, Cambridge Tracts in Theoretical Computer Science: Belief Revision, volume 29. Cambridge University Press, Cambridge, England, 1992.
[Kim, 1973] J. Kim. Causes and counterfactuals. Journal of Philosophy, 70:570–572, 1973.
[Kratzer, 1979] A. Kratzer. Conditional necessity and possibility. In R. Bäuerle, U. Egli, and A. von Stechow, editors, Semantics from Different Points of View. Springer-Verlag, Berlin, 1979.

CONDITIONAL LOGIC


[Kratzer, 1981] A. Kratzer. Partition and revision: the semantics of counterfactuals. Journal of Philosophical Logic, 10:201–216, 1981.
[Kremer, 1987] M. Kremer. `If' is unambiguous. Noûs, 21:199–217, 1987.
[Kvart, 1980] I. Kvart. Formal semantics for temporal logic and counterfactuals. Logique et Analyse, 23:35–62, 1980.
[Kvart, 1986] I. Kvart. A Theory of Counterfactuals. Hackett, Indianapolis, 1986.
[Kvart, 1987] I. Kvart. Putnam's counterexample to `A theory of counterfactuals'. Philosophical Papers, 16:235–239, 1987.
[Kvart, 1991] I. Kvart. Counterfactuals and causal relevance. Pacific Philosophical Quarterly, 72:314–337, 1991.
[Kvart, 1992] I. Kvart. Counterfactuals. Erkenntnis, 36:139–179, 1992.
[Kvart, 1994] I. Kvart. Counterfactual ambiguities, true premises and knowledge. Synthese, 100:133–164, 1994.
[Lance, 1991] M. Lance. Probabilistic dependence among conditionals. Philosophical Review, 100:269–276, 1991.
[Lehmann and Magidor, 1992] D. Lehmann and M. Magidor. What does a conditional knowledge base entail? Artificial Intelligence, 55:1–60, 1992.
[Levi, 1977] I. Levi. Subjunctives, dispositions and chances. Synthese, 34:423–455, 1977.
[Levi, 1988] I. Levi. Iteration of conditionals and the Ramsey test. Synthese, 76:49–81, 1988.
[Levi, 1996] I. Levi. For the Sake of the Argument: Ramsey Test Conditionals, Inductive Inference, and Nonmonotonic Reasoning. Cambridge University Press, Cambridge, England, 1996.
[Lewis, 1971] D. Lewis. Completeness and decidability of three logics of counterfactual conditionals. Theoria, 37:74–85, 1971.
[Lewis, 1973a] D. Lewis. Causation. Journal of Philosophy, 70:556–567, 1973.
[Lewis, 1973b] D. Lewis. Counterfactuals. Harvard University Press, Cambridge, MA, 1973.
[Lewis, 1973c] D. Lewis. Counterfactuals and comparative possibility. Journal of Philosophical Logic, 2:418–446, 1973.
[Lewis, 1976] D. Lewis. Probabilities of conditionals and conditional probabilities. Philosophical Review, 85:297–315, 1976.
[Lewis, 1977] D. Lewis.
Possible-world semantics for counterfactual logics: a rejoinder. Journal of Philosophical Logic, 6:359–363, 1977.
[Lewis, 1979a] D. Lewis. Counterfactual dependence and time's arrow. Noûs, 13:455–476, 1979.
[Lewis, 1979b] D. Lewis. Scorekeeping in a language game. Journal of Philosophical Logic, 8:339–359, 1979.
[Lewis, 1981a] D. Lewis. Ordering semantics and premise semantics for counterfactuals. Journal of Philosophical Logic, 10:217–234, 1981.
[Lewis, 1981b] D. Lewis. A subjectivist's guide to objective chance. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Lewis, 1986] D. Lewis. Probabilities of conditionals and conditional probabilities II. Philosophical Review, 95:581–589, 1986.
[Lindstrom and Rabinowicz, 1992] S. Lindström and W. Rabinowicz. Belief revision, epistemic conditionals, and the Ramsey test. Synthese, 91:195–237, 1992.
[Lindstrom and Rabinowicz, 1995] S. Lindström and W. Rabinowicz. The Ramsey test revisited. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Lindstrom, 1996] S. Lindström. The Ramsey test and the indexicality of conditionals: a proposed resolution of Gärdenfors' paradox. In A. Fuhrmann and H. Rott, editors, Logic, Action, and Information: Essays on Logic in Philosophy and Artificial Intelligence. Walter de Gruyter, Berlin, 1996.
[Loewer, 1976] B. Loewer. Counterfactuals with disjunctive antecedents. Journal of Philosophy, 73:531–536, 1976.
[Loewer, 1978] B. Loewer. Cotenability and counterfactual logics. Journal of Philosophical Logic, 8:99–116, 1978.
[Lowe, 1991] E. J. Lowe. Jackson on classifying conditionals. Analysis, 51:126–130, 1991.


DONALD NUTE AND CHARLES B. CROSS

[Makinson, 1989] D. Makinson. General theory of cumulative inference. In M. Reinfrank, J. de Kleer, and M. Ginsberg, editors, Lecture Notes in Artificial Intelligence: Non-Monotonic Reasoning, volume 346. Springer-Verlag, Berlin, 1989.
[Makinson, 1990] D. Makinson. The Gärdenfors impossibility theorem in nonmonotonic contexts. Studia Logica, 49:1–6, 1990.
[Mayer, 1981] J. C. Mayer. A misplaced thesis of conditional logic. Journal of Philosophical Logic, 10:235–238, 1981.
[McDermott, 1996] M. McDermott. On the truth conditions of certain `if'-sentences. The Philosophical Review, 105:1–37, 1996.
[McGee, 1981] V. McGee. Finite matrices and the logic of conditionals. Journal of Philosophical Logic, 10:349–351, 1981.
[McGee, 1985] V. McGee. A counterexample to modus ponens. Journal of Philosophy, 82:462–471, 1985.
[McGee, 1989] V. McGee. Conditional probabilities and compounds of conditionals. Philosophical Review, 98:485–541, 1989.
[McGee, 2000] V. McGee. To tell the truth about conditionals. Analysis, 60:107–111, 2000.
[McKay and Inwagen, 1977] T. McKay and P. van Inwagen. Counterfactuals with disjunctive antecedents. Philosophical Studies, 31:353–356, 1977.
[Mellor, 1993] D. H. Mellor. How to believe a conditional. Journal of Philosophy, 90:233–248, 1993.
[Moore, 1983] R. Moore. Semantical considerations on nonmonotonic logic. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, volume 1. Morgan Kaufmann, San Mateo, 1983.
[Morreau, 1992] M. Morreau. Epistemic semantics for counterfactuals. Journal of Philosophical Logic, 21:33–62, 1992.
[Morreau, 1997] M. Morreau. Fainthearted conditionals. The Journal of Philosophy, 94:187–211, 1997.
[Nute and Mitcheltree, 1982] D. Nute and W. Mitcheltree. Review of [Adams, 1975b]. Noûs, 15:432–436, 1982.
[Nute, 1975a] D. Nute. Counterfactuals. Notre Dame Journal of Formal Logic, 16:476–482, 1975.
[Nute, 1975b] D. Nute. Counterfactuals and the similarity of worlds.
Journal of Philosophy, 72:773–778, 1975.
[Nute, 1977] D. Nute. Scientific law and nomological conditionals. Technical report, National Science Foundation, 1977.
[Nute, 1978a] D. Nute. An incompleteness theorem for conditional logic. Notre Dame Journal of Formal Logic, 19:634–636, 1978.
[Nute, 1978b] D. Nute. Simplification and substitution of counterfactual antecedents. Philosophia, 7:317–326, 1978.
[Nute, 1979] D. Nute. Algebraic semantics for conditional logics. Reports on Mathematical Logic, 10:79–101, 1979.
[Nute, 1980a] D. Nute. Conversational scorekeeping and conditionals. Journal of Philosophical Logic, 9:153–166, 1980.
[Nute, 1980b] D. Nute. Topics in Conditional Logic. Reidel, Dordrecht, 1980.
[Nute, 1981a] D. Nute. Causes, laws and law statements. Synthese, 48:347–370, 1981.
[Nute, 1981b] D. Nute. Introduction. Journal of Philosophical Logic, 10:127–147, 1981.
[Nute, 1981c] D. Nute. Review of [Pollock, 1976]. Noûs, 15:212–219, 1981.
[Nute, 1982 and 1991] D. Nute. Tense and conditionals. Technical report, Deutsche Forschungsgemeinschaft and the University of Georgia, 1982 and 1991.
[Nute, 1983] D. Nute. Review of [Harper et al., 1981]. Philosophy of Science, 50:518–520, 1983.
[Nute, 1991] D. Nute. Historical necessity and conditionals. Noûs, 25:161–175, 1991.
[Nute, 1994] D. Nute. Defeasible logic. In D. Gabbay and C. Hogger, editors, Handbook of Logic in Artificial Intelligence and Logic Programming, volume III. Oxford University Press, Oxford, 1994.


[Pearl, 1994] J. Pearl. From Adams' conditionals to default expressions, causal conditionals, and counterfactuals. In E. Eells and B. Skyrms, editors, Probability and Conditionals: Belief Revision and Rational Decision. Cambridge University Press, Cambridge, England, 1994.
[Pearl, 1995] J. Pearl. Causation, action, and counterfactuals. In A. Gammerman, editor, Computational Learning and Probabilistic Learning. John Wiley and Sons, New York, 1995.
[Pollock, 1976] J. Pollock. Subjunctive Reasoning. Reidel, Dordrecht, 1976.
[Pollock, 1981] J. Pollock. A refined theory of counterfactuals. Journal of Philosophical Logic, 10:239–266, 1981.
[Pollock, 1984] J. Pollock. Knowledge and Justification. Princeton University Press, Princeton, 1984.
[Posch, 1980] G. Posch. Zur Semantik der kontrafaktischen Konditionale. Narr, Tübingen, 1980.
[Post, 1981] J. Post. Review of [Pollock, 1976]. Philosophia, 9:405–420, 1981.
[Ramsey, 1990] F. Ramsey. Philosophical Papers. Cambridge University Press, Cambridge, England, 1990.
[Rescher, 1964] N. Rescher. Hypothetical Reasoning. Reidel, Dordrecht, 1964.
[Rott, 1986] H. Rott. Ifs, though, and because. Erkenntnis, 25:345–370, 1986.
[Rott, 1989] H. Rott. Conditionals and theory change: revisions, expansions, and additions. Synthese, 81:91–113, 1989.
[Rott, 1991] H. Rott. A nonmonotonic conditional logic for belief revision. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change. Cambridge University Press, Cambridge, England, 1991.
[Sanford, 1992] D. Sanford. If P, then Q. Routledge, London, 1992.
[Schlechta and Makinson, 1994] K. Schlechta and D. Makinson. Local and global metrics for the semantics of counterfactual conditionals. Journal of Applied Non-Classical Logics, 4:129–140, 1994.
[Segerberg, 1968] K. Segerberg. Propositional logics related to Heyting's and Johansson's. Theoria, 34:26–61, 1968.
[Segerberg, 1989] K. Segerberg. A note on an impossibility theorem of Gärdenfors. Noûs, 23:351–354, 1989.
[Sellars, 1958] W. S.
Sellars. Counterfactuals, dispositions and the causal modalities. In Feigl, Scriven, and Maxwell, editors, Minnesota Studies in the Philosophy of Science, volume 2. University of Minnesota, Minneapolis, 1958.
[Slote, 1978] M. A. Slote. Time and counterfactuals. Philosophical Review, 87:3–27, 1978.
[Stalnaker and Jeffrey, 1994] R. Stalnaker and R. Jeffrey. Conditionals as random variables. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Stalnaker and Thomason, 1970] R. Stalnaker and R. Thomason. A semantical analysis of conditional logic. Theoria, 36:23–42, 1970.
[Stalnaker, 1968] R. Stalnaker. A theory of conditionals. In N. Rescher, editor, Studies in Logical Theory. American Philosophical Quarterly Monograph Series, No. 2. Blackwell, Oxford, 1968. Reprinted in [Harper et al., 1981].
[Stalnaker, 1970] R. Stalnaker. Probability and conditionals. Philosophy of Science, 37:64–80, 1970.
[Stalnaker, 1975] R. Stalnaker. Indicative conditionals. Philosophia, 5:269–286, 1975.
[Stalnaker, 1976] R. Stalnaker. Stalnaker to van Fraassen. In C. Hooker and W. Harper, editors, Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science. Reidel, Dordrecht, 1976.
[Stalnaker, 1981a] R. Stalnaker. A defense of conditional excluded middle. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Stalnaker, 1981b] R. Stalnaker. Letter to David Lewis. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Stalnaker, 1984] R. Stalnaker. Inquiry. MIT Press, Cambridge, MA, 1984.
[Swain, 1978] M. Swain. A counterfactual analysis of event causation. Philosophical Studies, 34:1–19, 1978.


[Thomason and Gupta, 1981] R. Thomason and A. Gupta. A theory of conditionals in the context of branching time. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Thomason, 1985] R. Thomason. Note on tense and subjunctive conditionals. Philosophy of Science, pages 151–153, 1985.
[Traugott et al., 1986] E. Traugott, A. ter Meulen, J. Reilly, and C. Ferguson, editors. On Conditionals. Cambridge University Press, Cambridge, England, 1986.
[Turner, 1981] R. Turner. Counterfactuals without possible worlds. Journal of Philosophical Logic, 10:453–493, 1981.
[van Benthem, 1984] J. van Benthem. Foundations of conditional logic. Journal of Philosophical Logic, 13:303–349, 1984.
[Van Fraassen, 1974] B. C. Van Fraassen. Hidden variables in conditional logic. Theoria, 40:176–190, 1974.
[Van Fraassen, 1976] B. C. Van Fraassen. Probabilities of conditionals. In C. Hooker and W. Harper, editors, Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science. Reidel, Dordrecht, 1976.
[Van Fraassen, 1981] B. C. Van Fraassen. A temporal framework for conditionals and chance. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Veltman, 1976] F. Veltman. Prejudices, presuppositions and the theory of conditionals. In J. Groenendijk and M. Stokhof, editors, Amsterdam Papers in Formal Grammar, Vol. 1. Centrale Interfaculteit, Universiteit van Amsterdam, 1976.
[Veltman, 1985] F. Veltman. Logics for Conditionals. Ph.D. dissertation, University of Amsterdam, Amsterdam, 1985.
[Warmbrod, 1981] K. Warmbrod. Counterfactuals and substitution of equivalent antecedents. Journal of Philosophical Logic, 10:267–289, 1981.
[Winslett, 1990] M. Winslett. Updating Logical Databases. Cambridge University Press, Cambridge, England, 1990.
[Woods, 1997] M. Woods. Conditionals. Oxford University Press, Oxford, 1997. Published posthumously. Edited by D. Wiggins, with a commentary by D. Edgington.

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

DYNAMIC LOGIC

PREFACE

Dynamic Logic (DL) is a formal system for reasoning about programs. Traditionally, this has meant formalizing correctness specifications and proving rigorously that those specifications are met by a particular program. Other activities fall into this category as well: determining the equivalence of programs, comparing the expressive power of various programming constructs, synthesizing programs from specifications, etc. Formal systems too numerous to mention have been proposed for these purposes, each with its own peculiarities.

DL can be described as a blend of three complementary classical ingredients: first-order predicate logic, modal logic, and the algebra of regular events. These components merge to form a system of remarkable unity that is theoretically rich as well as practical.

The name Dynamic Logic emphasizes the principal feature distinguishing it from classical predicate logic. In the latter, truth is static: the truth value of a formula φ is determined by a valuation of its free variables over some structure. The valuation and the truth value of φ it induces are regarded as immutable; there is no formalism relating them to any other valuations or truth values. In Dynamic Logic, there are explicit syntactic constructs called programs whose main role is to change the values of variables, thereby changing the truth values of formulas. For example, the program x := x + 1 over the natural numbers changes the truth value of the formula "x is even".

Such changes occur on a metalogical level in classical predicate logic. For example, in Tarski's definition of truth of a formula, if u : {x, y, ...} → N is a valuation of variables over the natural numbers N, then the formula ∃x x² = y is defined to be true under the valuation u iff there exists an a ∈ N such that the formula x² = y is true under the valuation u[x/a], where u[x/a] agrees with u everywhere except x, on which it takes the value a.
This definition involves a metalogical operation that produces u[x/a] from u for all possible values a ∈ N. This operation becomes explicit in DL in the form of the program x := ?, called a nondeterministic or wildcard assignment. This is a rather unconventional program, since it is not effective; however, it is quite useful as a descriptive tool. A more conventional way to obtain a square root of y, if it exists, would be the program

(1)  x := 0 ; while x² < y do x := x + 1.

In DL, such programs are first-class objects on a par with formulas, complete with a collection of operators for forming compound programs inductively


from a basis of primitive programs. To discuss the effect of the execution of a program α on the truth of a formula φ, DL uses a modal construct ⟨α⟩φ, which intuitively states, "It is possible to execute α starting from the current state and halt in a state satisfying φ." There is also the dual construct [α]φ, which intuitively states, "If α halts when started in the current state, then it does so in a state satisfying φ." For example, the first-order formula ∃x x² = y is equivalent to the DL formula ⟨x := ?⟩ x² = y. In order to instantiate the quantifier effectively, we might replace the nondeterministic assignment inside the ⟨ ⟩ with the while program (1); over N, the two formulas would be equivalent.

Apart from the obvious heavy reliance on classical logic, computability theory and programming, the subject has its roots in the work of [Thiele, 1966] and [Engeler, 1967] in the late 1960s, who were the first to advance the idea of formulating and investigating formal systems dealing with properties of programs in an abstract setting. Research in program verification

flourished thereafter with the work of many researchers, notably [Floyd, 1967], [Hoare, 1969], [Manna, 1974], and [Salwicki, 1970]. The first precise development of a DL-like system was carried out by [Salwicki, 1970], following [Engeler, 1967]. This system was called Algorithmic Logic. A similar system, called Monadic Programming Logic, was developed by [Constable, 1977]. Dynamic Logic, which emphasizes the modal nature of the program/assertion interaction, was introduced by [Pratt, 1976].

Background material on mathematical logic, computability, formal languages and automata, and program verification can be found in [Shoenfield, 1967] (logic), [Rogers, 1967] (recursion theory), [Kozen, 1997a] (formal languages, automata, and computability), [Keisler, 1971] (infinitary logic), [Manna, 1974] (program verification), and [Harel, 1992; Lewis and Papadimitriou, 1981; Davis et al., 1994] (computability and complexity). Much of this introductory material as it pertains to DL can be found in the authors' text [Harel et al., 2000]. There are by now a number of books and survey papers treating logics of programs, program verification, and Dynamic Logic [Apt and Olderog, 1991; Backhouse, 1986; Harel, 1979; Harel, 1984; Parikh, 1981; Goldblatt, 1982; Goldblatt, 1987; Knijnenburg, 1988; Cousot, 1990; Emerson, 1990; Kozen and Tiuryn, 1990]. In particular, much of this chapter is an abbreviated summary of material from the authors' text [Harel et al., 2000], to which we refer the reader for a more complete treatment. Full proofs of many of the theorems cited in this chapter can be found there, as well as extensive introductory material on logic and complexity along with numerous examples and exercises.


1 REASONING ABOUT PROGRAMS

1.1 Programs

For us, a program is a recipe written in a formal language for computing desired output data from given input data.

EXAMPLE 1. The following program implements the Euclidean algorithm for calculating the greatest common divisor (gcd) of two integers. It takes as input a pair of integers in variables x and y and outputs their gcd in variable x:

while y ≠ 0 do begin
  z := x mod y;
  x := y;
  y := z
end

The value of the expression x mod y is the (nonnegative) remainder obtained when dividing x by y using ordinary integer division.

Programs normally use variables to hold input and output values and intermediate results. Each variable can assume values from a specific domain of computation, which is a structure consisting of a set of data values along with certain distinguished constants, basic operations, and tests that can be performed on those values, as in classical first-order logic. In the program above, the domain of x, y, and z might be the integers Z along with basic operations including integer division with remainder and tests including ≠. In contrast with the usual use of variables in mathematics, a variable in a program normally assumes different values during the course of the computation. The value of a variable x may change whenever an assignment x := t is performed with x on the left-hand side.

In order to make these notions precise, we will have to specify the programming language and its semantics in a mathematically rigorous way. In this section we give a brief introduction to some of these languages and the role they play in program verification.
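The while program of Example 1 can be transcribed almost verbatim into Python (a sketch: Python's `%` already yields the nonnegative remainder when the divisor is positive, matching the text's reading of x mod y):

```python
def gcd_while(x, y):
    """Euclidean algorithm, a direct transcription of the while program
    in Example 1: the gcd of the inputs ends up in x."""
    while y != 0:
        z = x % y   # z := x mod y
        x = y       # x := y
        y = z       # y := z
    return x

print(gcd_while(15, 27))  # -> 3
```

The three assignments in the loop body correspond one-for-one to the begin ... end block of the while program.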

1.2 States and Executions

As mentioned above, a program can change the values of variables as it runs. However, if we could freeze time at some instant during the execution of the program, we could presumably read the values of the variables at that instant, and that would give us an instantaneous snapshot of all information that we would need to determine how the computation would proceed from that point. This leads to the concept of a state: intuitively, an instantaneous description of reality.


Formally, we will define a state to be a function that assigns a value to each program variable. The value for variable x must belong to the domain associated with x. In logic, such a function is called a valuation. At any given instant in time during its execution, the program is thought to be "in" some state, determined by the instantaneous values of all its variables. If an assignment statement is executed, say x := 2, then the state changes to a new state in which the new value of x is 2 and the values of all other variables are the same as they were before. We assume that this change takes place instantaneously; note that this is a mathematical abstraction, since in reality basic operations take some time to execute.

A typical state for the gcd program above is (15, 27, 0, ...), where (say) the first, second, and third components of the sequence denote the values assigned to x, y, and z, respectively. The ellipsis "..." refers to the values of the other variables, which we do not care about, since they do not occur in the program.

A program can be viewed as a transformation on states. Given an initial (input) state, the program will go through a series of intermediate states, perhaps eventually halting in a final (output) state. A sequence of states that can occur from the execution of a program starting from a particular input state is called a trace. As a typical example of a trace for the program above, consider the initial state (15, 27, 0) (we suppress the ellipsis). The program goes through the following sequence of states:

(15,27,0), (15,27,15), (27,27,15), (27,15,15), (27,15,12), (15,15,12), (15,12,12), (15,12,3), (12,12,3), (12,3,3), (12,3,0), (3,3,0), (3,0,0).

The value of x in the last (output) state is 3, the gcd of 15 and 27.
The binary relation consisting of the set of all pairs of the form (input state, output state) that can occur from the execution of a program α, or in other words, the set of all first and last states of traces of α, is called the input/output relation of α. For example, the pair ((15, 27, 0), (3, 0, 0)) is a member of the input/output relation of the gcd program above, as is the pair ((-6, 4, 303), (2, 0, 0)). The values of other variables besides x, y, and z are not changed by the program. These values are therefore the same in the output state as in the input state. In this example, we may think of the variables x and y as the input variables, x as the output variable, and z as a work variable, although formally there is no distinction between any of the variables, including the ones not occurring in the program.
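The trace of the gcd program can be reproduced mechanically. This Python sketch records a state as the tuple of current values of (x, y, z) after each assignment, so running it on the input state (15, 27, 0) regenerates the thirteen-state sequence listed above:

```python
def gcd_trace(x, y, z=0):
    """Return the trace of the gcd program of Example 1 as a list of
    states, one state per assignment executed (plus the input state)."""
    trace = [(x, y, z)]
    while y != 0:
        z = x % y; trace.append((x, y, z))  # z := x mod y
        x = y;     trace.append((x, y, z))  # x := y
        y = z;     trace.append((x, y, z))  # y := z
    return trace

# The first and last states of a trace form one pair of the program's
# input/output relation.
t = gcd_trace(15, 27)
print(t[0], t[-1])
```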

1.3 Programming Constructs

In subsequent sections we will consider a number of programming constructs. In this section we introduce some of these constructs and define a few general classes of languages built on them.


In general, programs are built inductively from atomic programs and tests using various program operators.

While Programs

A popular choice of programming language in the literature on DL is the family of deterministic while programs. This language is a natural abstraction of familiar imperative programming languages such as Pascal or C. Different versions can be defined depending on the choice of tests allowed and whether or not nondeterminism is permitted.

The language of while programs is defined inductively. There are atomic programs and atomic tests, as well as program constructs for forming compound programs from simpler ones. In the propositional version of Dynamic Logic (PDL), atomic programs are simply letters a, b, ... from some alphabet. Thus PDL abstracts away from the nature of the domain of computation and studies the pure interaction between programs and propositions. For the first-order versions of DL, atomic programs are simple assignments x := t, where x is a variable and t is a term. In addition, a nondeterministic or wildcard assignment x := ? or nondeterministic choice construct may be allowed.

Tests can be atomic tests, which for propositional versions are simply propositional letters p, and for first-order versions are atomic formulas p(t1, ..., tn), where t1, ..., tn are terms and p is an n-ary relation symbol in the vocabulary of the domain of computation. In addition, we include the constant tests 1 and 0. Boolean combinations of atomic tests are often allowed, although this adds no expressive power. These versions of DL are called poor test. More complicated tests can also be included. These versions of DL are sometimes called rich test. In rich test versions, the families of programs and tests are defined by mutual induction.

Compound programs are formed from the atomic programs and tests by induction, using the composition, conditional, and while operators. Formally, if φ is a test and α and β are programs, then the following are programs:

α ; β
if φ then α else β
while φ do α

We can also parenthesize with begin ... end where necessary. The gcd program of Example 1 above is an example of a while program. The semantics of these constructs is defined to correspond to the ordinary operational semantics familiar from common programming languages.


Regular Programs

Regular programs are more general than while programs, but not by much. The advantage of regular programs is that they reduce the relatively more complicated while program operators to much simpler constructs. The deductive system becomes comparatively simpler too. They also incorporate a simple form of nondeterminism. For a given set of atomic programs and tests, the set of regular programs is defined as follows:

(i) any atomic program is a program;
(ii) if φ is a test, then φ? is a program;
(iii) if α and β are programs, then α ; β is a program;
(iv) if α and β are programs, then α ∪ β is a program;
(v) if α is a program, then α* is a program.

These constructs have the following intuitive meaning:

(i) Atomic programs are basic and indivisible; they execute in a single step. They are called atomic because they cannot be decomposed further.
(ii) The program φ? tests whether the property φ holds in the current state. If so, it continues without changing state. If not, it blocks without halting.
(iii) The operator ; is the sequential composition operator. The program α ; β means, "Do α, then do β."
(iv) The operator ∪ is the nondeterministic choice operator. The program α ∪ β means, "Nondeterministically choose one of α or β and execute it."
(v) The operator * is the iteration operator. The program α* means, "Execute α some nondeterministically chosen finite number of times."

Keep in mind that these descriptions are meant only as intuitive aids. A formal semantics will be given in Section 2.2, in which programs will be interpreted as binary input/output relations and the programming constructs above as operators on binary relations. The operators ∪, ;, and * may be familiar from automata and formal language theory (see [Kozen, 1997a]), where they are interpreted as operators on sets of strings over a finite alphabet. The language-theoretic and relation-theoretic semantics share much in common; in fact, they have the same equational theory, as shown in [Kozen, 1994a].


The operators of deterministic while programs can be defined in terms of the regular operators:

(2)  if φ then α else β  =def  (φ? ; α) ∪ (¬φ? ; β)
(3)  while φ do α  =def  (φ? ; α)* ; ¬φ?

The class of while programs is equivalent to the subclass of the regular programs in which the program operators ∪, ?, and * are constrained to appear only in these forms.

Recursion

Recursion can appear in programming languages in several forms. Two such manifestations are recursive calls and stacks. Under certain very general conditions, the two constructs can simulate each other. It can also be shown that recursive programs and while programs are equally expressive over the natural numbers, whereas over arbitrary domains, while programs are strictly weaker. While programs correspond to what is often called tail recursion or iteration.

R.E. Programs

A finite computation sequence of a program α, or seq for short, is a finite-length string of atomic programs and tests representing a possible sequence of atomic steps that can occur in a halting execution of α. Seqs are denoted σ, τ, .... The set of all seqs of a program α is denoted CS(α). We use the word "possible" loosely: CS(α) is determined by the syntax of α alone. Because of tests that evaluate to false, CS(α) may contain seqs that are never executed under any interpretation.

The set CS(α) is a subset of A*, where A is the set of atomic programs and tests occurring in α. For while programs, regular programs, or recursive programs, we can define the set CS(α) formally by induction on syntax. For example, for regular programs,

CS(a)      =def  {a},  a an atomic program or test
CS(skip)   =def  {ε}
CS(fail)   =def  ∅
CS(α ; β)  =def  {σ ; τ | σ ∈ CS(α), τ ∈ CS(β)}
CS(α ∪ β)  =def  CS(α) ∪ CS(β)
CS(α*)     =def  ⋃_{n≥0} CS(αⁿ)

where

α⁰    =def  skip
αⁿ⁺¹  =def  αⁿ ; α.

For example, if a is an atomic program and p an atomic formula, then the program

while p do a  =  (p? ; a)* ; ¬p?

has as seqs all strings of the form

(p? ; a)ⁿ ; ¬p?  =  p? ; a ; p? ; a ; ... ; p? ; a ; ¬p?   (n repetitions of p? ; a)

for all n ≥ 0. Note that each seq σ of a program α is itself a program, and CS(σ) = {σ}.
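The inductive definition of CS(α) can be animated for regular programs. A Python sketch follows; since CS(α*) is infinite in general, the star case is unrolled only up to a bound, and the tuple-based program syntax is our own encoding, not the chapter's:

```python
def seqs(prog, bound=3):
    """Bounded enumeration of the finite computation sequences CS(prog).
    Programs are nested tuples: ('atom', a), ('test', p), ('seq', a, b),
    ('choice', a, b), ('star', a).  A seq is a tuple of symbols."""
    tag = prog[0]
    if tag in ('atom', 'test'):
        return {(prog[1],)}
    if tag == 'seq':
        return {s + t for s in seqs(prog[1], bound)
                      for t in seqs(prog[2], bound)}
    if tag == 'choice':
        return seqs(prog[1], bound) | seqs(prog[2], bound)
    if tag == 'star':
        result = {()}  # alpha^0 = skip contributes the empty seq
        for _ in range(bound):
            result |= {s + t for s in result for t in seqs(prog[1], bound)}
        return result
    raise ValueError(tag)

# while p do a  =  (p? ; a)* ; !p?
loop = ('seq',
        ('star', ('seq', ('test', 'p?'), ('atom', 'a'))),
        ('test', '!p?'))
```

With bound 2 this yields exactly the seqs (p? ; a)ⁿ ; ¬p? for n = 0, 1, 2, illustrating the regular-set structure noted below.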

While programs and regular programs give rise to regular sets of seqs, and recursive programs give rise to context-free sets of seqs. Taking this a step further, we can define an r.e. program to be simply a recursively enumerable set of seqs. This is the most general programming language we will consider in the context of DL; it subsumes all the others in expressive power.

Nondeterminism

We should say a few words about the concept of nondeterminism and its role in the study of logics and languages, since this concept often presents difficulty the first time it is encountered. In some programming languages we will consider, the traces of a program need not be uniquely determined by their start states. When this is possible, we say that the program is nondeterministic. A nondeterministic program can have both divergent and convergent traces starting from the same input state, and for such programs it does not make sense to say that the program halts on a certain input state or that it loops on a certain input state; there may be different computations starting from the same input state that do each.

There are several concrete ways nondeterminism can enter into programs. One construct is the nondeterministic or wildcard assignment x := ?. Intuitively, this operation assigns an arbitrary element of the domain to the variable x, but it is not determined which one.1 Another source of nondeterminism is the unconstrained use of the choice operator ∪ in regular

1 This construct is often called random assignment in the literature. This terminology is misleading, because it has nothing at all to do with probability.


programs. A third source is the iteration operator * in regular programs. A fourth source is r.e. programs, which are just r.e. sets of seqs; initially, the seq to execute is chosen nondeterministically. For example, over N, the r.e. program

{x := n | n ≥ 0}

is equivalent to the regular program

x := 0 ; (x := x + 1)*.

Nondeterministic programs provide no explicit mechanism for resolving the nondeterminism. That is, there is no way to determine which of many possible next steps will be taken from a given state. This is hardly realistic. So why study nondeterminism at all if it does not correspond to anything operational? One good answer is that nondeterminism is a valuable tool that helps us understand the expressiveness of programming language constructs. It is useful in situations in which we cannot necessarily predict the outcome of a particular choice, but we may know the range of possibilities. In reality, computations may depend on information that is out of the programmer's control, such as input from the user or actions of other processes in the system. Nondeterminism is useful in modeling such situations. The importance of nondeterminism is not limited to logics of programs. Indeed, the most important open problem in the field of computational complexity theory, the P = NP problem, is formulated in terms of nondeterminism.
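The contrast between the noneffective wildcard assignment and its effective replacement, program (1) of the Preface, can be made concrete. In this Python sketch (function names ours) the infinite domain N is cut down to a finite range so that the nondeterministic choice can be searched exhaustively:

```python
def diamond_sqrt(y, domain=range(1000)):
    """<x := ?> (x*x = y): true iff some nondeterministic choice of x
    from the (finite stand-in for the) domain passes the test x*x = y."""
    return any(x * x == y for x in domain)

def while_sqrt(y):
    """Program (1): x := 0 ; while x*x < y do x := x + 1, followed by
    the test x*x = y.  This is the effective counterpart, equivalent
    to the wildcard version over N for nonnegative y."""
    x = 0
    while x * x < y:
        x = x + 1
    return x * x == y
```

On any y whose square root lies within the searched range, the two agree, which is the equivalence over N claimed for the formulas ⟨x := ?⟩ x² = y and ⟨(1)⟩ x² = y.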

1.4 Program Verification

Dynamic Logic and other program logics are meant to be useful tools for facilitating the process of producing correct programs. One need only look at the miasma of buggy software to understand the dire need for such tools. But before we can produce correct software, we need to know what it means for it to be correct. It is not good enough to have some vague idea of what is supposed to happen when a program is run or to observe it running on some collection of inputs. In order to apply formal verification tools, we must have a formal specification of correctness for the verification tools to work with. In general, a correctness specification is a formal description of how the program is supposed to behave. A given program is correct with respect to a correctness specification if its behavior fulfills that specification. For the gcd program of Example 1, the correctness might be specified informally by the assertion: if the input values of x and y are positive integers c and d, respectively, then

108

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

(i) the output value of x is the gcd of c and d, and (ii) the program halts. Of course, in order to work with a formal verification system, these properties must be expressed formally in a language such as first-order logic. The assertion (ii) is part of the correctness specification because programs do not necessarily halt, but may produce infinite traces for certain inputs. A finite trace, as for example the one produced by the gcd program above on input state (15, 27, 0), is called halting, terminating, or convergent. Infinite traces are called looping or divergent. For example, the program

  while x > 7 do x := x + 3

loops on input state (8, ...), producing the infinite trace (8, ...), (11, ...), (14, ...), ...

Dynamic Logic can reason about the behavior of a program that is manifested in its input/output relation. It is not well suited to reasoning about program behavior manifested in intermediate states of a computation (although there are close relatives, such as Process Logic and Temporal Logic, that are). This is not to say that all interesting program behavior is captured by the input/output relation, and that other types of behavior are irrelevant or uninteresting. Indeed, the restriction to input/output relations is reasonable only when programs are supposed to halt after a finite time and yield output results. This approach will not be adequate for dealing with programs that normally are not supposed to halt, such as operating systems. For programs that are supposed to halt, correctness criteria are traditionally given in the form of an input/output specification consisting of a formal relation between the input and output states that the program is supposed to maintain, along with a description of the set of input states on which the program is supposed to halt. The input/output relation of a program carries all the information necessary to determine whether the program is correct relative to such a specification. Dynamic Logic is well suited to this type of verification. It is not always obvious what the correctness specification ought to be. Sometimes, producing a formal specification of correctness is as difficult as producing the program itself, since both must be written in a formal language. Moreover, specifications are as prone to bugs as programs. Why bother then? Why not just implement the program with some vague specification in mind? There are several good reasons for taking the effort to produce formal specifications:
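The convergent and divergent behaviors above are easy to observe concretely. The sketch below is ours, not the chapter's: since the text of Example 1 is not reproduced in this excerpt, a remainder-based Euclidean gcd program is assumed, and states are abbreviated to the two variables x and y.

```python
def gcd_trace(x, y):
    # Halting trace of an assumed remainder-based gcd program:
    # while y != 0 do (x, y) := (y, x mod y)
    trace = [(x, y)]
    while y != 0:
        x, y = y, x % y
        trace.append((x, y))
    return trace

def loop_trace(x, max_steps=5):
    # while x > 7 do x := x + 3 -- divergent for x = 8;
    # the trace is infinite, so we truncate it after max_steps
    trace = [x]
    while x > 7 and len(trace) <= max_steps:
        x += 3
        trace.append(x)
    return trace
```

Here gcd_trace(15, 27) is a finite (convergent) trace ending in a state with x = 3, while loop_trace(8) begins 8, 11, 14, ... and would never halt without the artificial truncation.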


1. Often when implementing a large program from scratch, the programmer may have been given only a vague idea of what the finished product is supposed to do. This is especially true when producing software for a less technically inclined employer. There may be a rough informal description available, but the minor details are often left to the programmer. It is very often the case that a large part of the programming process consists of taking a vaguely specified problem and making it precise. The process of formulating the problem precisely can be considered a definition of what the program is supposed to do. And it is just good programming practice to have a very clear idea of what we want to do before we start doing it.

2. In the process of formulating the specification, several unforeseen cases may become apparent, for which it is not clear what the appropriate action of the program should be. This is especially true with error handling and other exceptional situations. Formulating a specification can define the action of the program in such situations and thereby tie up loose ends.

3. The process of formulating a rigorous specification can sometimes suggest ideas for implementation, because it forces us to isolate the issues that drive design decisions. When we know all the ways our data are going to be accessed, we are in a better position to choose the right data structures that optimize the tradeoffs between efficiency and generality.

4. The specification is often expressed in a language quite different from the programming language. The specification is functional (it tells what the program is supposed to do) as opposed to imperative (how to do it). It is often easier to specify the desired functionality independent of the details of how it will be implemented. For example, we can quite easily express what it means for a number x to be the gcd of y and z in first-order logic without even knowing how to compute it.

5. Verifying that a program meets its specification is a kind of sanity check. It allows us to give two solutions to the problem, once as a functional specification and once as an algorithmic implementation, and lets us verify that the two are compatible. Any incompatibilities between the program and the specification are either bugs in the program, bugs in the specification, or both. The cycle of refining the specification, modifying the program to meet the specification, and reverifying until the process converges can lead to software in which we have much more confidence.


Partial and Total Correctness

Typically, a program is designed to implement some functionality. As mentioned above, that functionality can often be expressed formally in the form of an input/output specification. Concretely, such a specification consists of an input condition or precondition φ and an output condition or postcondition ψ. These are properties of the input state and the output state, respectively, expressed in some formal language such as the first-order language of the domain of computation. The program is supposed to halt in a state satisfying the output condition whenever the input state satisfies the input condition. We say that a program is partially correct with respect to a given input/output specification φ, ψ if, whenever the program is started in a state satisfying the input condition φ, then if and when it ever halts, it does so in a state satisfying the output condition ψ. The definition of partial correctness does not stipulate that the program halts; this is what we mean by partial. A program is totally correct with respect to an input/output specification φ, ψ if

- it is partially correct with respect to that specification; and

- it halts whenever it is started in a state satisfying the input condition φ.

The input/output specification imposes no requirements when the input state does not satisfy the input condition φ; the program might as well loop infinitely or erase memory. This is the "garbage in, garbage out" philosophy. If we really do care what the program does on some of those input states, then we had better rewrite the input condition to include them and say formally what we want to happen in those cases. For example, in the gcd program of Example 1, the output condition ψ might be the condition (i) stating that the output value of x is the gcd of the input values of x and y. We can express this completely formally in the language of first-order number theory. We may try to start off with the input specification φ0 = 1 (true); that is, no restrictions on the input state at all. Unfortunately, if the initial value of y is 0 and x is negative, the final value of x will be the same as the initial value, thus negative. If we expect all gcds to be positive, this would be wrong. Another problematic situation arises when the initial values of x and y are both 0; in this case the gcd is not defined. Therefore, the program as written is not partially correct with respect to the specification φ0, ψ. We can remedy the situation by providing an input specification that rules out these troublesome input values. We can limit the input states to those in which x and y are both nonnegative and not both zero by taking


the input specification

  φ1 = (x ≥ 0 ∧ y > 0) ∨ (x > 0 ∧ y ≥ 0).

The gcd program of Example 1 above would be partially correct with respect to the specification φ1, ψ. It is also totally correct, since the program halts on all inputs satisfying φ1. Perhaps we want to allow any input in which not both x and y are zero. In that case, we should use the input specification φ2 = ¬(x = 0 ∧ y = 0). But then the program of Example 1 is not partially correct with respect to φ2, ψ; we must amend the program to produce the correct (positive) gcd on negative inputs.
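On a small finite set of input states, these correctness claims can be checked by brute force. The sketch below is ours, not the chapter's: since the text of Example 1 is not reproduced here, a remainder-based Euclidean program is assumed, and the postcondition ψ is taken to be "the output equals the gcd of the inputs," computed via Python's math.gcd (which returns the nonnegative gcd).

```python
import math

def euclid(x, y):
    # assumed reconstruction of the gcd program of Example 1
    while y != 0:
        x, y = y, x % y
    return x

def correct_on(pre, states):
    # Check the specification (pre, psi) on the given finite set of states,
    # where psi says the output value equals the gcd of the inputs.
    # euclid halts on every state below, so on this sample partial and
    # total correctness coincide.
    return all(euclid(x, y) == math.gcd(x, y)
               for (x, y) in states if pre(x, y))

phi0 = lambda x, y: True                                       # no restriction
phi1 = lambda x, y: (x >= 0 and y > 0) or (x > 0 and y >= 0)   # the spec above

states = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
```

Here correct_on(phi1, states) holds, while correct_on(phi0, states) fails; one witness is the input (-3, 0), on which the program returns -3 rather than the positive gcd 3.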

1.5 Exogenous and Endogenous Logics

There are two main approaches to modal logics of programs: the exogenous approach, exemplified by Dynamic Logic and its precursor Hoare Logic [Hoare, 1969], and the endogenous approach, exemplified by Temporal Logic and its precursor, the invariant assertions method of [Floyd, 1967]. A logic is exogenous if its programs are explicit in the language. Syntactically, a Dynamic Logic program is a well-formed expression built inductively from primitive programs using a small set of program operators. Semantically, a program is interpreted as its input/output relation. The relation denoted by a compound program is determined by the relations denoted by its parts. This aspect of compositionality allows analysis by structural induction. The importance of compositionality is discussed in [van Emde Boas, 1978]. In Temporal Logic, the program is fixed and is considered part of the structure over which the logic is interpreted. The current location in the program during execution is stored in a special variable for that purpose, called the program counter, and is part of the state along with the values of the program variables. Instead of program operators, there are temporal operators that describe how the program variables, including the program counter, change with time. Thus Temporal Logic sacrifices compositionality for a less restricted formalism. We discuss Temporal Logic further in Section 14.2.

2 PROPOSITIONAL DYNAMIC LOGIC (PDL)

Propositional Dynamic Logic (PDL) plays the same role in Dynamic Logic that classical propositional logic plays in classical predicate logic. It describes the properties of the interaction between programs and propositions that are independent of the domain of computation. Since PDL is a subsystem of first-order DL, we can be sure that all properties of PDL that we discuss in this section will also be valid in first-order DL.


Since there is no domain of computation in PDL, there can be no notion of assignment to a variable. Instead, primitive programs are interpreted as arbitrary binary relations on an abstract set of states K. Likewise, primitive assertions are just atomic propositions and are interpreted as arbitrary subsets of K. Other than this, no special structure is imposed. This level of abstraction may at first appear too general to say anything of interest. On the contrary, it is a very natural level of abstraction at which many fundamental relationships between programs and propositions can be observed. For example, consider the PDL formula

(4)  [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ.

The left-hand side asserts that the formula φ ∧ ψ must hold after the execution of program α, and the right-hand side asserts that φ must hold after execution of α and so must ψ. The formula (4) asserts that these two statements are equivalent. This implies that to verify a conjunction of two postconditions, it suffices to verify each of them separately. The assertion (4) holds universally, regardless of the domain of computation and the nature of the particular α, φ, and ψ. As another example, consider

(5)  [α ; β]φ ↔ [α][β]φ.

The left-hand side asserts that after execution of the composite program α ; β, φ must hold. The right-hand side asserts that after execution of the program α, [β]φ must hold, which in turn says that after execution of β, φ must hold. The formula (5) asserts the logical equivalence of these two statements. It holds regardless of the nature of α, β, and φ. Like (4), (5) can be used to simplify the verification of complicated programs. As a final example, consider the assertion

(6)  [α]p ↔ [β]p

where p is a primitive proposition symbol and α and β are programs. If this formula is true under all interpretations, then α and β are equivalent in the sense that they behave identically with respect to any property expressible in PDL or any formal system containing PDL as a subsystem. This is because the assertion will hold for any substitution instance of (6). For example, the two programs

  α = if φ then γ else δ
  β = if ¬φ then δ else γ

are equivalent in the sense of (6).


2.1 Syntax

Syntactically, PDL is a blend of three classical ingredients: propositional logic, modal logic, and the algebra of regular expressions. There are several versions of PDL, depending on the choice of program operators allowed. In this section we will introduce the basic version, called regular PDL. Variations of this basic version will be considered in later sections. The language of regular PDL has expressions of two sorts: propositions or formulas φ, ψ, χ, ... and programs α, β, γ, .... There are countably many atomic symbols of each sort. Atomic programs are denoted a, b, c, ... and the set of all atomic programs is denoted Π0. Atomic propositions are denoted p, q, r, ... and the set of all atomic propositions is denoted Φ0. The set of all programs is denoted Π and the set of all propositions is denoted Φ. Programs and propositions are built inductively from the atomic ones using the following operators:

Propositional operators:
  →    implication
  0    falsity

Program operators:
  ;    composition
  ∪    choice
  ∗    iteration

Mixed operators:
  [ ]  necessity
  ?    test

The definition of programs and propositions is by mutual induction. All atomic programs are programs and all atomic propositions are propositions. If φ, ψ are propositions and α, β are programs, then

  φ → ψ   propositional implication
  0       propositional falsity
  [α]φ    program necessity

are propositions and

  α ; β   sequential composition
  α ∪ β   nondeterministic choice
  α∗      iteration
  φ?      test

are programs. In more formal terms, we define the set Π of all programs and the set Φ of all propositions to be the smallest sets such that


  - Π0 ⊆ Π and Φ0 ⊆ Φ;
  - if φ, ψ ∈ Φ, then φ → ψ ∈ Φ and 0 ∈ Φ;
  - if α, β ∈ Π, then α ; β, α ∪ β, and α∗ ∈ Π;
  - if α ∈ Π and φ ∈ Φ, then [α]φ ∈ Φ;
  - if φ ∈ Φ, then φ? ∈ Π.

Note that the inductive definitions of programs and propositions are intertwined and cannot be separated. The definition of propositions depends on the definition of programs because of the construct [α]φ, and the definition of programs depends on the definition of propositions because of the construct φ?. Note also that we have allowed all formulas as tests. This is the rich test version of PDL. Compound programs and propositions have the following intuitive meanings:

  [α]φ    "It is necessary that after executing α, φ is true."
  α ; β   "Execute α, then execute β."
  α ∪ β   "Choose either α or β nondeterministically and execute it."
  α∗      "Execute α a nondeterministically chosen finite number of times (zero or more)."
  φ?      "Test φ; proceed if true, fail if false."

We avoid parentheses by assigning precedence to the operators: unary operators, including [α], bind tighter than binary ones, and ; binds tighter than ∪. Thus the expression

  [α ; β∗ ∪ γ∗]φ ∨ ψ

should be read

  ([(α ; (β∗)) ∪ (γ∗)]φ) ∨ ψ.

Of course, parentheses can always be used to enforce a particular parse of an expression or to enhance readability. Also, under the semantics to be given in the next section, the operators ; and ∪ will turn out to be associative, so we may write α ; β ; γ and α ∪ β ∪ γ without ambiguity. We often omit the symbol ; and write the composition α ; β as αβ. The propositional operators ∧, ∨, ¬, ↔, and 1 can be defined from → and 0 in the usual way.


The possibility operator ⟨ ⟩ is the modal dual of the necessity operator [ ]. It is defined by

  ⟨α⟩φ  def=  ¬[α]¬φ.

The propositions [α]φ and ⟨α⟩φ are read "box φ" and "diamond φ," respectively. The latter has the intuitive meaning, "There is a computation of α that terminates in a state satisfying φ." One important difference between ⟨ ⟩ and [ ] is that ⟨α⟩φ implies that α terminates, whereas [α]φ does not. Indeed, the formula [α]0 asserts that no computation of α terminates, and the formula [α]1 is always true, regardless of α. In addition, we define

  skip  def=  1?
  fail  def=  0?
  if φ1 → α1 | ... | φn → αn fi  def=  φ1? ; α1 ∪ ... ∪ φn? ; αn
  do φ1 → α1 | ... | φn → αn od  def=  (φ1? ; α1 ∪ ... ∪ φn? ; αn)∗ ; (¬φ1 ∧ ... ∧ ¬φn)?
  if φ then α else β  def=  if φ → α | ¬φ → β fi  =  φ? ; α ∪ ¬φ? ; β
  while φ do α  def=  do φ → α od  =  (φ? ; α)∗ ; ¬φ?
  repeat α until φ  def=  α ; while ¬φ do α  =  α ; (¬φ? ; α)∗ ; φ?
  {φ} α {ψ}  def=  φ → [α]ψ.

The programs skip and fail are the program that does nothing (no-op) and the failing program, respectively. The ternary if-then-else operator and the binary while-do operator are the usual conditional and while loop constructs found in conventional programming languages. The constructs if-|-fi and do-|-od are the alternative guarded command and iterative guarded command constructs, respectively. The construct {φ} α {ψ} is the Hoare partial correctness assertion. We will argue later that the formal definitions of these operators given above correctly model their intuitive behavior.

2.2 Semantics

The semantics of PDL comes from the semantics for modal logic. The structures over which programs and propositions of PDL are interpreted


are called Kripke frames in honor of Saul Kripke, the inventor of the formal semantics of modal logic. A Kripke frame is a pair K = (K, mK), where K is a set of elements u, v, w, ... called states and mK is a meaning function assigning a subset of K to each atomic proposition and a binary relation on K to each atomic program. That is,

  mK(p) ⊆ K,      p ∈ Φ0
  mK(a) ⊆ K × K,  a ∈ Π0.

We will extend the definition of the function mK by induction below to give a meaning to all elements of Φ and Π such that

  mK(φ) ⊆ K,      φ ∈ Φ
  mK(α) ⊆ K × K,  α ∈ Π.

Intuitively, we can think of the set mK(φ) as the set of states satisfying the proposition φ in the model K, and we can think of the binary relation mK(α) as the set of input/output pairs of states of the program α. Formally, the meanings mK(φ) of φ ∈ Φ and mK(α) of α ∈ Π are defined by mutual induction on the structure of φ and α. The basis of the induction, which specifies the meanings of the atomic symbols p ∈ Φ0 and a ∈ Π0, is already given in the specification of K. The meanings of compound propositions and programs are defined as follows.

  mK(φ → ψ)  def=  (K − mK(φ)) ∪ mK(ψ)
  mK(0)      def=  ∅
  mK([α]φ)   def=  K − (mK(α) ∘ (K − mK(φ)))
             =  {u | ∀v ∈ K, if (u, v) ∈ mK(α) then v ∈ mK(φ)}
(7)  mK(α ; β)  def=  mK(α) ∘ mK(β)
             =  {(u, v) | ∃w ∈ K (u, w) ∈ mK(α) and (w, v) ∈ mK(β)}
  mK(α ∪ β)  def=  mK(α) ∪ mK(β)
(8)  mK(α∗)   def=  mK(α)∗  =  ⋃_{n≥0} mK(α)^n
  mK(φ?)     def=  {(u, u) | u ∈ mK(φ)}.

The operator ∘ in (7) is relational composition. In (8), the first occurrence of ∗ is the iteration symbol of PDL, and the second is the reflexive transitive closure operator on binary relations. Thus (8) says that the program α∗ is interpreted as the reflexive transitive closure of mK(α). We write K, u ⊨ φ and u ∈ mK(φ) interchangeably, and say that u satisfies φ in K, or that φ is true at state u in K. We may omit the K and write u ⊨ φ


when K is understood. The notation u ⊭ φ means that u does not satisfy φ, or in other words that u ∉ mK(φ). In this notation, we can restate the definition above equivalently as follows:

  u ⊨ φ → ψ           iff  u ⊨ φ implies u ⊨ ψ
  u ⊭ 0
  u ⊨ [α]φ            iff  ∀v, if (u, v) ∈ mK(α) then v ⊨ φ
  (u, v) ∈ mK(α ; β)  iff  ∃w (u, w) ∈ mK(α) and (w, v) ∈ mK(β)
  (u, v) ∈ mK(α ∪ β)  iff  (u, v) ∈ mK(α) or (u, v) ∈ mK(β)
  (u, v) ∈ mK(α∗)     iff  ∃n ≥ 0 ∃u0, ..., un such that u = u0, v = un, and (ui, ui+1) ∈ mK(α) for 0 ≤ i ≤ n − 1
  (u, v) ∈ mK(φ?)     iff  u = v and u ⊨ φ.

The defined operators inherit their meanings from these definitions:

  mK(φ ∨ ψ)  def=  mK(φ) ∪ mK(ψ)
  mK(φ ∧ ψ)  def=  mK(φ) ∩ mK(ψ)
  mK(¬φ)     def=  K − mK(φ)
  mK(⟨α⟩φ)   def=  {u | ∃v ∈ K (u, v) ∈ mK(α) and v ∈ mK(φ)}
  mK(1)      def=  K
  mK(skip)   def=  mK(1?)  =  the identity relation
  mK(fail)   def=  mK(0?)  =  ∅.

In addition, the if-then-else, while-do, and guarded commands inherit their semantics from the above definitions, and the input/output relations given by the formal semantics capture their intuitive operational meanings. For example, the relation associated with the program while φ do α is the set of pairs (u, v) for which there exist states u0, u1, ..., un, n ≥ 0, such that u = u0, v = un, ui ∈ mK(φ) and (ui, ui+1) ∈ mK(α) for 0 ≤ i < n, and un ∉ mK(φ). This version of PDL is usually called regular PDL and the elements of Π are called regular programs because of the primitive operators ∪, ;, and ∗, which are familiar from regular expressions. Programs can be viewed as regular expressions over the atomic programs and tests. In fact, it can be shown that if p is an atomic proposition symbol, then any two test-free programs α, β are equivalent as regular expressions (that is, they represent the same regular set) if and only if the formula ⟨α⟩p ↔ ⟨β⟩p is valid.


EXAMPLE 2. Let p be an atomic proposition, let a be an atomic program, and let K = (K, mK) be a Kripke frame with

  K      =  {u, v, w}
  mK(p)  =  {u, v}
  mK(a)  =  {(u, v), (u, w), (v, w), (w, v)}.

The following diagram illustrates K.

  (Diagram: a-edges from u to v, from u to w, from v to w, and from w to v; p holds at u and v.)

In this structure, u ⊨ ⟨a⟩p ∧ ⟨a⟩¬p, but v ⊨ [a]¬p and w ⊨ [a]p. Moreover, every state of K satisfies the formula ⟨a∗⟩p ∧ ⟨a∗⟩¬p.
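The relational semantics of Section 2.2 is small enough to prototype directly. The following sketch is ours, not the chapter's: it represents formulas and programs as nested tuples, computes mK by the mutual induction above, and uses the frame of Example 2 as test data.

```python
K = {'u', 'v', 'w'}
PROPS = {'p': {'u', 'v'}}
PROGS = {'a': {('u', 'v'), ('u', 'w'), ('v', 'w'), ('w', 'v')}}

def compose(R, S):
    # relational composition R o S
    return {(u, v) for (u, w) in R for (w2, v) in S if w == w2}

def rtc(R):
    # reflexive transitive closure of R over the state set K
    C = {(u, u) for u in K} | R
    while True:
        D = C | compose(C, C)
        if D == C:
            return C
        C = D

def m_prog(a):
    tag = a[0]
    if tag == 'atom':  return PROGS[a[1]]
    if tag == 'seq':   return compose(m_prog(a[1]), m_prog(a[2]))
    if tag == 'union': return m_prog(a[1]) | m_prog(a[2])
    if tag == 'star':  return rtc(m_prog(a[1]))
    if tag == 'test':  return {(u, u) for u in m_form(a[1])}

def m_form(f):
    tag = f[0]
    if tag == 'atom':  return PROPS[f[1]]
    if tag == 'false': return set()
    if tag == 'imp':   return (K - m_form(f[1])) | m_form(f[2])
    if tag == 'box':
        R, T = m_prog(f[1]), m_form(f[2])
        return {u for u in K if all(v in T for (x, v) in R if x == u)}

def neg(f):        return ('imp', f, ('false',))
def diamond(a, f): return neg(('box', a, neg(f)))

a, p = ('atom', 'a'), ('atom', 'p')
```

With this frame, u lies in both mK(⟨a⟩p) and mK(⟨a⟩¬p), while v ∈ mK([a]¬p) and w ∈ mK([a]p), matching the claims of Example 2.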

2.3 Computation Sequences

Let α be a program. Recall from Section 1.3 that a finite computation sequence of α is a finite-length string of atomic programs and tests representing a possible sequence of atomic steps that can occur in a halting execution of α. These strings are called seqs and are denoted σ, τ, .... The set of all such sequences is denoted CS(α). We use the word "possible" here loosely; CS(α) is determined by the syntax of α alone, and may contain strings that are never executed in any interpretation. The formal definition of CS(α) was given in Section 1.3. Note that each finite computation sequence σ of a program is itself a program, and CS(σ) = {σ}. Moreover, the following proposition is not difficult to prove by induction on the structure of α:

PROPOSITION 3.

  mK(α)  =  ⋃_{σ ∈ CS(α)} mK(σ).
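CS(α) can be enumerated mechanically from the syntax of α. Since CS(α∗) is an infinite union, the sketch below (ours, not the chapter's) truncates the iteration at a given number of unfoldings, so for programs containing ∗ it produces only a finite approximation of CS(α).

```python
def CS(prog, star_bound=2):
    # seqs are represented as tuples of atomic-program names and tests
    tag = prog[0]
    if tag == 'atom':
        return {(prog[1],)}
    if tag == 'test':
        return {(prog[1] + '?',)}
    if tag == 'union':
        return CS(prog[1], star_bound) | CS(prog[2], star_bound)
    if tag == 'seq':
        # every seq of the first part followed by every seq of the second
        return {s + t for s in CS(prog[1], star_bound)
                      for t in CS(prog[2], star_bound)}
    if tag == 'star':
        # zero or more repetitions of the body, truncated at star_bound
        body, result = CS(prog[1], star_bound), {()}
        for _ in range(star_bound):
            result |= {s + t for s in result for t in body}
        return result

# alpha = (a U b) ; c*
alpha = ('seq', ('union', ('atom', 'a'), ('atom', 'b')), ('star', ('atom', 'c')))
```

For this alpha the enumeration yields seqs such as a, b, ac, bcc, and so on, one for each way of resolving the choice and the (truncated) iteration.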

2.4 Satisfiability and Validity

The definitions of satisfiability and validity of propositions come from modal logic. Let K = (K, mK) be a Kripke frame and let φ be a proposition. We have defined in Section 2.2 what it means for K, u ⊨ φ. If K, u ⊨ φ for some


u ∈ K, we say that φ is satisfiable in K. If φ is satisfiable in some K, we say that φ is satisfiable. If K, u ⊨ φ for all u ∈ K, we write K ⊨ φ and say that φ is valid in K. If K ⊨ φ for all Kripke frames K, we write ⊨ φ and say that φ is valid. If Σ is a set of propositions, we write K ⊨ Σ if K ⊨ φ for all φ ∈ Σ. A proposition ψ is said to be a logical consequence of Σ if K ⊨ ψ whenever K ⊨ Σ, in which case we write Σ ⊨ ψ. (Note that this is not the same as saying that K, u ⊨ ψ whenever K, u ⊨ Σ.) We say that an inference rule φ1, ..., φn / φ is sound if φ is a logical consequence of {φ1, ..., φn}. Satisfiability and validity are dual in the same sense that ∃ and ∀ are dual and ⟨ ⟩ and [ ] are dual: a proposition is valid (in K) if and only if its negation is not satisfiable (in K).

EXAMPLE 4. Let p, q be atomic propositions, let a, b be atomic programs, and let K = (K, mK) be a Kripke frame with

  K      =  {s, t, u, v}
  mK(p)  =  {u, v}
  mK(q)  =  {t, v}
  mK(a)  =  {(t, v), (v, t), (s, u), (u, s)}
  mK(b)  =  {(u, v), (v, u), (s, t), (t, s)}.

The following figure illustrates K.

  (Diagram: a-edges s ↔ u and t ↔ v; b-edges s ↔ t and u ↔ v; p holds at u and v; q holds at t and v.)

The following formulas are valid in K:

  p ↔ [(aba)∗]p
  q ↔ [(bab)∗]q.

Also, let γ be the program

  γ  =  (aa ∪ bb ∪ (ab ∪ ba)(aa ∪ bb)∗(ab ∪ ba))∗.

Thinking of γ as a regular expression, it generates all words over the alphabet {a, b} with an even number of occurrences of each of a and b. It can be shown that for any proposition φ, the proposition φ ↔ [γ]φ is valid in K.


EXAMPLE 5. The formula

  p ∧ [a∗]((p → [a]¬p) ∧ (¬p → [a]p))  ↔  [(aa)∗]p ∧ [a(aa)∗]¬p

is valid. Both sides assert in different ways that p is alternately true and false along paths of execution of the atomic program a.

2.5 Basic Properties

THEOREM 6. The following are valid formulas of PDL:

(i) ⟨α⟩(φ ∨ ψ) ↔ ⟨α⟩φ ∨ ⟨α⟩ψ
(ii) [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ
(iii) ⟨α⟩φ ∧ [α]ψ → ⟨α⟩(φ ∧ ψ)
(iv) [α](φ → ψ) → ([α]φ → [α]ψ)
(v) ⟨α⟩(φ ∧ ψ) → ⟨α⟩φ ∧ ⟨α⟩ψ
(vi) [α]φ ∨ [α]ψ → [α](φ ∨ ψ)
(vii) ⟨α⟩0 ↔ 0
(viii) [α]φ ↔ ¬⟨α⟩¬φ
(ix) ⟨α ∪ β⟩φ ↔ ⟨α⟩φ ∨ ⟨β⟩φ
(x) [α ∪ β]φ ↔ [α]φ ∧ [β]φ
(xi) ⟨α ; β⟩φ ↔ ⟨α⟩⟨β⟩φ
(xii) [α ; β]φ ↔ [α][β]φ
(xiii) ⟨φ?⟩ψ ↔ φ ∧ ψ
(xiv) [φ?]ψ ↔ (φ → ψ).

THEOREM 7. The following are sound rules of inference of PDL:

(i) Generalization: from φ, infer [α]φ.
(ii) Monotonicity of ⟨α⟩: from φ → ψ, infer ⟨α⟩φ → ⟨α⟩ψ.
(iii) Monotonicity of [α]: from φ → ψ, infer [α]φ → [α]ψ.

The converse operator ⁻ is a program operator with semantics

  mK(α⁻)  =  mK(α)⁻  =  {(v, u) | (u, v) ∈ mK(α)}.

Intuitively, the converse operator allows us to "run a program backwards"; semantically, the input/output relation of the program α⁻ is the output/input relation of α. Although this is not always possible to realize in practice, it is nevertheless a useful expressive tool. For example, it gives us a convenient way to talk about backtracking, or rolling back a computation to a previous state.

THEOREM 8. For any programs α and β,

(i) mK((α ∪ β)⁻) = mK(α⁻ ∪ β⁻)
(ii) mK((α ; β)⁻) = mK(β⁻ ; α⁻)
(iii) mK(φ?⁻) = mK(φ?)
(iv) mK(α∗⁻) = mK(α⁻∗)
(v) mK(α⁻⁻) = mK(α).

THEOREM 9. The following are valid formulas of PDL:

(i) φ → [α]⟨α⁻⟩φ
(ii) φ → [α⁻]⟨α⟩φ
(iii) ⟨α⟩[α⁻]φ → φ
(iv) ⟨α⁻⟩[α]φ → φ.

The iteration operator ∗ is interpreted as the reflexive transitive closure operator on binary relations. It is the means by which iteration is coded in PDL. This operator differs from the other operators in that it is infinitary in nature, as reflected by its semantics:

  mK(α∗)  =  mK(α)∗  =  ⋃_{n≥0} mK(α)^n

(see Section 2.2). This introduces a level of complexity to PDL beyond the other operators. Because of it, PDL is not compact: the set

(9)  {⟨α∗⟩φ} ∪ {¬φ, ¬⟨α⟩φ, ¬⟨α²⟩φ, ...}


is finitely satisfiable but not satisfiable. Because of this infinitary behavior, it is rather surprising that PDL should be decidable and that there should be a finitary complete axiomatization. The properties of the ∗ operator of PDL come directly from the properties of the reflexive transitive closure operator ∗ on binary relations. In a nutshell, for any binary relation R, R∗ is the ⊆-least reflexive and transitive relation containing R.

THEOREM 10. The following are valid formulas of PDL:

(i) [α∗]φ → φ
(ii) φ → ⟨α∗⟩φ
(iii) [α∗]φ → [α]φ
(iv) ⟨α⟩φ → ⟨α∗⟩φ
(v) [α∗]φ ↔ [α∗α∗]φ
(vi) ⟨α∗⟩φ ↔ ⟨α∗α∗⟩φ
(vii) [α∗]φ ↔ [α∗∗]φ
(viii) ⟨α∗⟩φ ↔ ⟨α∗∗⟩φ
(ix) [α∗]φ ↔ φ ∧ [α][α∗]φ
(x) ⟨α∗⟩φ ↔ φ ∨ ⟨α⟩⟨α∗⟩φ
(xi) [α∗]φ ↔ φ ∧ [α∗](φ → [α]φ)
(xii) ⟨α∗⟩φ ↔ φ ∨ ⟨α∗⟩(¬φ ∧ ⟨α⟩φ).

Semantically, α∗ is a reflexive and transitive relation containing α, and Theorem 10 captures this. That α∗ is reflexive is captured in (ii); that it is transitive is captured in (vi); and that it contains α is captured in (iv). These three properties are captured by the single property (x).

Reflexive Transitive Closure and Induction

To prove properties of iteration, it is not enough to know that α∗ is a reflexive and transitive relation containing α. So is the universal relation K × K, and that is not very interesting. We also need some way of capturing the idea that α∗ is the least reflexive and transitive relation containing α. There are several equivalent ways this can be done:


(RTC) The reflexive transitive closure rule:

  from (φ ∨ ⟨α⟩ψ) → ψ, infer ⟨α∗⟩φ → ψ

(LI) The loop invariance rule:

  from ψ → [α]ψ, infer ψ → [α∗]ψ

(IND) The induction axiom (box form):

  φ ∧ [α∗](φ → [α]φ) → [α∗]φ

(IND) The induction axiom (diamond form):

  ⟨α∗⟩φ → φ ∨ ⟨α∗⟩(¬φ ∧ ⟨α⟩φ)

The rule (RTC) is called the reflexive transitive closure rule. Its importance is best described in terms of its relationship to the valid PDL formula of Theorem 10(x). Observe that the right-to-left implication of this formula is obtained by substituting ⟨α∗⟩φ for R in the expression

(10)  φ ∨ ⟨α⟩R → R.

Theorem 10(x) implies that ⟨α∗⟩φ is a solution of (10); that is, (10) is valid when ⟨α∗⟩φ is substituted for R. The rule (RTC) says that ⟨α∗⟩φ is the least such solution with respect to logical implication. That is, it is the least PDL-definable set of states that when substituted for R in (10) results in a valid formula. The dual propositions labeled (IND) are jointly called the PDL induction axiom. Intuitively, the box form of (IND) says, "If φ is true initially, and if, after any number of iterations of the program α, the truth of φ is preserved by one more iteration of α, then φ will be true after any number of iterations of α." The diamond form of (IND) says, "If it is possible to reach a state satisfying φ in some number of iterations of α, then either φ is true now, or it is possible to reach a state in which φ is false but becomes true after one more iteration of α."
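The minimality property that these rules axiomatize is directly computable on a finite relation. A small sketch (ours, not the chapter's) computes R∗ as a fixpoint:

```python
def rtc(R, S):
    # least reflexive transitive relation over S containing R, computed
    # as a fixpoint: start from the identity plus R and close under
    # composition until nothing new appears
    C = {(x, x) for x in S} | set(R)
    while True:
        D = C | {(x, z) for (x, y) in C for (y2, z) in C if y == y2}
        if D == C:
            return C
        C = D

S = {0, 1, 2, 3}
R = {(0, 1), (1, 2)}
closure = rtc(R, S)
```

On this example the closure adds exactly the pair (0, 2) and the four identity pairs; any reflexive transitive relation over S containing R must include all of these, which is the minimality the fixpoint computation witnesses.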


Note that the box form of (IND) bears a strong resemblance to the induction axiom of Peano arithmetic:

  φ(0) ∧ ∀n (φ(n) → φ(n + 1))  →  ∀n φ(n).

Here φ(0) is the basis of the induction and ∀n (φ(n) → φ(n + 1)) is the induction step, from which the conclusion ∀n φ(n) can be drawn. In the PDL axiom (IND), the basis is φ and the induction step is [α∗](φ → [α]φ), from which the conclusion [α∗]φ can be drawn.

2.6 Encoding Hoare Logic

The Hoare partial correctness assertion {φ} α {ψ} is encoded as φ → [α]ψ in PDL. The following theorem says that under this encoding, Dynamic Logic subsumes Hoare Logic.

THEOREM 11. The following rules of Hoare Logic are derivable in PDL:

(i) Composition rule:

  from {φ} α {σ} and {σ} β {ψ}, infer {φ} α ; β {ψ}

(ii) Conditional rule:

  from {φ ∧ σ} α {ψ} and {¬φ ∧ σ} β {ψ}, infer {σ} if φ then α else β {ψ}

(iii) While rule:

  from {φ ∧ ψ} α {ψ}, infer {ψ} while φ do α {¬φ ∧ ψ}

(iv) Weakening rule:

  from φ′ → φ, {φ} α {ψ}, and ψ → ψ′, infer {φ′} α {ψ′}.
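Under the encoding {φ} α {ψ} = φ → [α]ψ, the Hoare rules can be spot-checked over a finite Kripke frame. The sketch below (ours, not the chapter's) fixes one small frame, computes boxes directly, and verifies one instance of the composition rule: both premises are valid in the frame, and so is the conclusion.

```python
K = {0, 1, 2}

def box(R, T):
    # the states u such that every R-successor of u lies in T
    return {u for u in K if all(v in T for (x, v) in R if x == u)}

def triple(pre, R, post):
    # validity in the frame of {pre} R {post}, i.e. of pre -> [R]post
    return pre <= box(R, post)

def compose(R, S):
    return {(u, v) for (u, w) in R for (w2, v) in S if w == w2}

# one concrete instance of the composition rule
alpha = {(0, 1)}
beta = {(1, 2)}
phi, sigma, psi = {0}, {1}, {2}

premise1 = triple(phi, alpha, sigma)                 # {phi} alpha {sigma}
premise2 = triple(sigma, beta, psi)                  # {sigma} beta {psi}
conclusion = triple(phi, compose(alpha, beta), psi)  # {phi} alpha;beta {psi}
```

Here both premises hold and so does the conclusion; the general soundness of the rule corresponds to the PDL validity [α ; β]ψ ↔ [α][β]ψ together with monotonicity of the box.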


3 FILTRATION AND DECIDABILITY

The small model property for PDL says that if φ is satisfiable, then it is satisfied at a state in a Kripke frame with no more than 2^|φ| states, where |φ| is the number of symbols of φ. This result and the technique used to prove it, called filtration, come directly from modal logic. This immediately gives a naive decision procedure for the satisfiability problem for PDL: to determine whether φ is satisfiable, construct all Kripke frames with at most 2^|φ| states and check whether φ is satisfied at some state in one of them. Considering only interpretations of the primitive formulas and primitive programs appearing in φ, there are roughly 2^(2^|φ|) such models, so this algorithm is too inefficient to be practical. A more efficient algorithm will be described in Section 5.

! 2

by simultaneous induction. The set FL(') is called the Fischer{Ladner closure of '. The ltration construction for PDL uses the Fischer{Ladner closure of a given formula where the corresponding proof for propositional modal logic would use the set of subformulas. The functions FL and FL2 are de ned inductively as follows: (a) FL(p) def =

fpg, p an atomic proposition

(b) FL(' ! ) def = (c) FL(0) def =

f' ! g [ FL(') [ FL(

)

f0g

(d) FL([]') def = FL2 ([]') [ FL(') (e) FL2 ([a]') def =

f[a]'g, a an atomic program

126

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

(f) FL2 ([ [ ]') def =

f[ [ ]'g [ FL2 ([]') [ FL2([ ]')

(g) FL2 ([ ; ]') def = (h)

f[ ; ]'g [ FL2([][ ]') [ FL2 ([ ]') FL2 ([ ]') def = f[ ]'g [ FL2 ([][ ]')

(i) FL2 ([ ?]') def =

f[

?]'g [ FL( ).

This definition is apparently quite a bit more involved than for mere subexpressions. In fact, at first glance it may appear circular because of the rule (h). The auxiliary function FL² is introduced for the express purpose of avoiding any such circularity. It is defined only for formulas of the form [α]φ and intuitively produces those elements of FL([α]φ) obtained by breaking down α and ignoring φ.

LEMMA 12.

(i) If [α]ψ ∈ FL(φ), then ψ ∈ FL(φ).
(ii) If [ψ?]σ ∈ FL(φ), then ψ ∈ FL(φ).
(iii) If [α ∪ β]ψ ∈ FL(φ), then [α]ψ ∈ FL(φ) and [β]ψ ∈ FL(φ).
(iv) If [α;β]ψ ∈ FL(φ), then [α][β]ψ ∈ FL(φ) and [β]ψ ∈ FL(φ).
(v) If [α*]ψ ∈ FL(φ), then [α][α*]ψ ∈ FL(φ).

Even after convincing ourselves that the definition is noncircular, it may not be clear how the size of FL(φ) depends on the length of φ. Indeed, the right-hand side of rule (h) involves a formula that is larger than the formula on the left-hand side. However, it can be shown by induction on subformulas that the relationship is linear:

LEMMA 13.

(i) For any formula φ, #FL(φ) ≤ |φ|.
(ii) For any formula [α]φ, #FL²([α]φ) ≤ |α|.
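The inductive definition of FL and FL² translates directly into code. The following sketch is ours, assuming a tuple representation of formulas (('->', φ, ψ), ('box', α, φ), with strings for atomic propositions and the constant 0) and of programs (('u', α, β), (';', α, β), ('*', α), ('?', ψ)); it lets one spot-check the linear bound of Lemma 13 on a sample formula.

```python
# A direct transcription of the inductive definition of FL and FL2 (ours).

def FL(phi):
    if isinstance(phi, str):            # atomic proposition or the constant 0
        return {phi}
    if phi[0] == '->':
        return {phi} | FL(phi[1]) | FL(phi[2])
    if phi[0] == 'box':                 # rule (d): FL2 breaks down the program
        return FL2(phi) | FL(phi[2])

def FL2(bf):
    prog, phi = bf[1], bf[2]
    if isinstance(prog, str):           # (e): atomic program
        return {bf}
    if prog[0] == 'u':                  # (f)
        return {bf} | FL2(('box', prog[1], phi)) | FL2(('box', prog[2], phi))
    if prog[0] == ';':                  # (g)
        return ({bf} | FL2(('box', prog[1], ('box', prog[2], phi)))
                     | FL2(('box', prog[2], phi)))
    if prog[0] == '*':                  # (h): the seemingly circular rule
        return {bf} | FL2(('box', prog[1], ('box', prog, phi)))
    if prog[0] == '?':                  # (i)
        return {bf} | FL(prog[1])

# [a*](p -> [b]q): the closure has 6 elements, linear in the formula size
# as Lemma 13 promises.
phi = ('box', ('*', 'a'), ('->', 'p', ('box', 'b', 'q')))
assert len(FL(phi)) == 6
```

Note that the recursion in rule (h) terminates because FL² only decomposes the outermost program, never the body φ, so the nested call bottoms out at an atomic program by rule (e).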

3.2 Filtration

Given a PDL proposition φ and a Kripke frame K = (K, m_K), we define a new frame K/FL(φ) = (K/FL(φ), m_{K/FL(φ)}), called the filtration of K by FL(φ), as follows. Define a binary relation ≡ on states of K by:

u ≡ v  def⟺  ∀ψ ∈ FL(φ) (u ∈ m_K(ψ) ⟺ v ∈ m_K(ψ)).


In other words, we collapse states u and v if they are not distinguishable by any formula of FL(φ). Let

[u] def= {v | v ≡ u}
K/FL(φ) def= {[u] | u ∈ K}
m_{K/FL(φ)}(p) def= {[u] | u ∈ m_K(p)}, p an atomic proposition
m_{K/FL(φ)}(a) def= {([u], [v]) | (u, v) ∈ m_K(a)}, a an atomic program.

The map m_{K/FL(φ)} is extended inductively to compound propositions and programs as described in Section 2.2. The following key lemma relates K and K/FL(φ). Most of the difficulty lies in the correct formulation of the induction hypotheses in the statement of the lemma. Once this is done, the proof is a fairly straightforward induction on the well-founded subexpression relation.

LEMMA 14 (Filtration Lemma). Let K be a Kripke frame and let u, v be states of K.

(i) For all ψ ∈ FL(φ), u ∈ m_K(ψ) iff [u] ∈ m_{K/FL(φ)}(ψ).
(ii) For all [α]ψ ∈ FL(φ),
  (a) if (u, v) ∈ m_K(α) then ([u], [v]) ∈ m_{K/FL(φ)}(α);
  (b) if ([u], [v]) ∈ m_{K/FL(φ)}(α) and u ∈ m_K([α]ψ), then v ∈ m_K(ψ).

Using the filtration lemma, we can prove the small model theorem easily.

THEOREM 15 (Small Model Theorem). Let φ be a satisfiable formula of PDL. Then φ is satisfied in a Kripke frame with no more than 2^|φ| states.

Proof. If φ is satisfiable, then there is a Kripke frame K and state u ∈ K with u ∈ m_K(φ). Let FL(φ) be the Fischer-Ladner closure of φ. By the filtration lemma (Lemma 14), [u] ∈ m_{K/FL(φ)}(φ). Moreover, K/FL(φ) has no more states than the number of truth assignments to formulas in FL(φ), which by Lemma 13(i) is at most 2^|φ|.

It follows immediately that the satisfiability problem for PDL is decidable, since there are only finitely many possible Kripke frames of size at most 2^|φ| to check, and there is a polynomial-time algorithm to check whether a given formula is satisfied at a given state in a given Kripke frame. A more efficient algorithm exists (see Section 5).

The completeness proof for PDL also makes use of the filtration lemma (Lemma 14), but in a somewhat stronger form. We need to know that it holds for nonstandard Kripke frames as well as the standard Kripke frames defined in Section 2.2.
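The quotient construction itself is easy to implement once the truth sets m_K(ψ) for ψ ∈ FL(φ) are in hand. In this sketch (entirely our own; the six-state frame is invented) each state is identified with its "FL-profile", the set of closure formulas it satisfies, so states indistinguishable by FL(φ) collapse to a single class:

```python
# Filtration sketch: collapse states not distinguished by the given formulas.
# truth_sets maps each closure formula to its truth set m_K(psi);
# atomic_moves maps each atomic program a to m_K(a).

def filtrate(states, truth_sets, atomic_moves):
    def cls(u):                      # [u], represented by u's FL-profile
        return frozenset(f for f, s in truth_sets.items() if u in s)
    q_states = {cls(u) for u in states}
    q_truth = {f: {cls(u) for u in s} for f, s in truth_sets.items()}
    q_moves = {a: {(cls(u), cls(v)) for (u, v) in r}
               for a, r in atomic_moves.items()}
    return q_states, q_truth, q_moves

# Invented example: six states but only two FL-profiles ('p' true or false),
# so the filtration has exactly 2 states, within the 2^#FL bound.
states = {0, 1, 2, 3, 4, 5}
truth = {'p': {0, 2, 4}}
moves = {'a': {(0, 1), (2, 3), (4, 5)}}
qs, qt, qm = filtrate(states, truth, moves)
assert len(qs) == 2 and len(qm['a']) == 1
```

The definition of q_moves mirrors m_{K/FL(φ)}(a) above: a class pair is included exactly when some pair of representatives is related in the original frame.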


A nonstandard Kripke frame is any structure N = (N, m_N) that is a Kripke frame in the sense of Section 2.2 in every respect, except that m_N(α*) need not be the reflexive transitive closure of m_N(α), but only a reflexive, transitive binary relation containing m_N(α) satisfying the PDL axioms for * (Axioms 17(vii) and (viii) of Section 4.1).

LEMMA 16 (Filtration for Nonstandard Models). Let N be a nonstandard Kripke frame and let u, v be states of N.

(i) For all ψ ∈ FL(φ), u ∈ m_N(ψ) iff [u] ∈ m_{N/FL(φ)}(ψ).
(ii) For all [α]ψ ∈ FL(φ),
  (a) if (u, v) ∈ m_N(α) then ([u], [v]) ∈ m_{N/FL(φ)}(α);
  (b) if ([u], [v]) ∈ m_{N/FL(φ)}(α) and u ∈ m_N([α]ψ), then v ∈ m_N(ψ).

4 DEDUCTIVE COMPLETENESS OF PDL

4.1 A Deductive System

The following list of axioms and rules constitutes a sound and complete Hilbert-style deductive system for PDL.

Axiom System 17.

(i) Axioms for propositional logic
(ii) [α](φ → ψ) → ([α]φ → [α]ψ)
(iii) [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ
(iv) [α ∪ β]φ ↔ [α]φ ∧ [β]φ
(v) [α;β]φ ↔ [α][β]φ
(vi) [ψ?]φ ↔ (ψ → φ)
(vii) φ ∧ [α][α*]φ ↔ [α*]φ
(viii) φ ∧ [α*](φ → [α]φ) → [α*]φ

In PDL with converse, we also include

(ix) φ → [α]⟨α⁻⟩φ
(x) φ → [α⁻]⟨α⟩φ


Rules of Inference

(MP) from φ and φ → ψ, infer ψ
(GEN) from φ, infer [α]φ

The axioms (ii) and (iii) and the two rules of inference are not particular to PDL, but come from modal logic. The rules (MP) and (GEN) are called modus ponens and (modal) generalization, respectively. Axiom (viii) is called the PDL induction axiom. Intuitively, (viii) says: "Suppose φ is true in the current state, and suppose that after any number of iterations of α, if φ is still true, then it will be true after one more iteration of α. Then φ will be true after any number of iterations of α." In other words, if φ is true initially, and if the truth of φ is preserved by the program α, then φ will be true after any number of iterations of α.

We write ⊢ φ if the formula φ is provable in this deductive system. A formula φ is consistent if ⊬ ¬φ, that is, if it is not the case that ⊢ ¬φ; a finite set of formulas is consistent if its conjunction is consistent; and an infinite set of formulas is consistent if every finite subset is consistent. The soundness of these axioms and rules over Kripke frames can be established by elementary arguments in relational algebra using the semantics of Section 2.2.

Axiom System 17 is complete: all valid formulas of PDL are theorems. This fact can be proved by constructing a nonstandard Kripke frame from maximal consistent sets of formulas, then using the filtration lemma for nonstandard models (Lemma 16) to collapse this nonstandard model to a finite standard model.

THEOREM 18 (Completeness of PDL). If ⊨ φ then ⊢ φ.

In classical logics, a completeness theorem of the form of Theorem 18 can be adapted to handle the relation of logical consequence φ ⊨ ψ between formulas because of the deduction theorem, which says

φ ⊢ ψ  ⟺  ⊢ φ → ψ.

Unfortunately, the deduction theorem fails in PDL, as can be seen by taking ψ = [a]p and φ = p. However, the following result allows Theorem


18, as well as the deterministic exponential-time satisfiability algorithm described in the next section, to be extended to handle the logical consequence relation:

THEOREM 19. Let φ and ψ be any PDL formulas. Then

φ ⊨ ψ  ⟺  ⊨ [(a₁ ∪ ⋯ ∪ aₙ)*]φ → ψ,

where a₁, …, aₙ are all atomic programs appearing in φ or ψ. Allowing infinitary conjunctions, if Γ is a set of formulas in which only finitely many atomic programs appear, then

Γ ⊨ ψ  ⟺  ⊨ ⋀{[(a₁ ∪ ⋯ ∪ aₙ)*]φ | φ ∈ Γ} → ψ,

where a₁, …, aₙ are all atomic programs appearing in Γ or ψ.

5 COMPLEXITY OF PDL

The small model theorem (Theorem 15) gives a naive deterministic algorithm for the satisfiability problem: construct all Kripke frames of at most 2^|φ| states and check whether φ is satisfied at any state in any of them. Although checking whether a given formula is satisfied in a given state of a given Kripke frame can be done quite efficiently, the naive satisfiability algorithm is highly inefficient. For one thing, the models constructed are of exponential size in the length of the given formula; for another, there are 2^(2^O(|φ|)) of them. Thus the naive satisfiability algorithm takes double exponential time in the worst case. There is a more efficient algorithm [Pratt, 1979b] that runs in deterministic single-exponential time. One cannot expect to improve this significantly due to a corresponding lower bound.

THEOREM 20. There is an exponential-time algorithm for deciding whether a given formula of PDL is satisfiable.

THEOREM 21. The satisfiability problem for PDL is EXPTIME-complete.

COROLLARY 22. There is a constant c > 1 such that the satisfiability problem for PDL is not solvable in deterministic time c^(n/log n), where n is the size of the input formula.

EXPTIME-hardness can be established by constructing a formula of PDL whose models encode the computation of a given linear-space-bounded one-tape alternating Turing machine M on a given input x of length n over M's input alphabet. Since the membership problem for alternating polynomial-space machines is EXPTIME-hard [Chandra et al., 1981], so is the satisfiability problem for PDL.


It is interesting to compare the complexity of satisfiability in PDL with the complexity of satisfiability in propositional logic. In the latter, satisfiability is NP-complete; but at present it is not known whether the two complexity classes EXPTIME and NP differ. Thus, as far as current knowledge goes, the satisfiability problem is no easier in the worst case for propositional logic than for its far richer superset PDL.

As we have seen, current knowledge does not permit a significant difference to be observed between the complexity of satisfiability in propositional logic and in PDL. However, there is one easily verified and important behavioral difference: propositional logic is compact, whereas PDL is not. Compactness has significant implications regarding the relation of logical consequence. If a propositional formula φ is a consequence of a set Γ of propositional formulas, then it is already a consequence of some finite subset of Γ; but this is not true in PDL.

Recall that we write Γ ⊨ φ and say that φ is a logical consequence of Γ if φ is satisfied in any state of any Kripke frame K all of whose states satisfy all the formulas of Γ. That is, if K ⊨ Γ, then K ⊨ φ. An alternative interpretation of logical consequence, not equivalent to the above, is that in any Kripke frame, the formula φ holds in any state satisfying all formulas in Γ. Allowing infinite conjunctions, we might write this as ⊨ ⋀Γ → φ. This is not the same as Γ ⊨ φ, since ⊨ ⋀Γ → φ implies Γ ⊨ φ, but not necessarily vice versa. A counterexample is provided by Γ = {p} and φ = [a]p. However, if Γ contains only finitely many atomic programs, we can reduce the problem Γ ⊨ φ to the problem ⊨ ⋀Γ′ → φ for a related Γ′, as shown in Theorem 19. Under either interpretation, compactness fails:

THEOREM 23. There is an infinite set Γ of formulas and a formula φ such that ⊨ ⋀Γ → φ (hence Γ ⊨ φ), but for no proper subset Γ′ ⊊ Γ is it the case that Γ′ ⊨ φ (hence neither is it the case that ⊨ ⋀Γ′ → φ).

As shown in Theorem 19, logical consequences Γ ⊨ φ for finite Γ are no more difficult to decide than validity of single formulas. But what if Γ is infinite? Here compactness is the key factor. If Γ is an r.e. set and the logic is compact, then the consequence problem is r.e.: to check whether Γ ⊨ φ, the finite subsets Γ₀ of Γ can be effectively enumerated, and checking Γ₀ ⊨ φ for finite Γ₀ is a decidable problem. Since compactness fails in PDL, this observation does us no good, even when Γ is known to be recursively enumerable. However, the following result shows that the situation is much worse than we might expect: even if Γ is taken to be the set of substitution instances of a single formula of PDL, the consequence problem becomes very highly undecidable. This is a rather striking manifestation of PDL's lack of compactness.


Let φ be a given formula. The set Sφ of substitution instances of φ is the set of all formulas obtained by substituting a formula for each atomic proposition appearing in φ.

THEOREM 24. The problem of deciding whether Sφ ⊨ ψ is Π¹₁-complete. The problem is Π¹₁-hard even for a particular fixed φ.

6 NONREGULAR PDL

In this section we enrich the class of regular programs in PDL by introducing programs whose control structure requires more than a finite automaton. For example, the class of context-free programs requires a pushdown automaton (PDA), and moving up from regular to context-free programs is really going from iterative programs to ones with parameterless recursive procedures. Several questions arise when enriching the class of programs of PDL, such as whether the expressive power of the logic grows, and if so whether the resulting logics are still decidable. It turns out that any nonregular program increases PDL's expressive power and that the validity problem for PDL with context-free programs is undecidable. The bulk of the section is then devoted to the difficult problem of trying to characterize the borderline between decidable and undecidable extensions. On the one hand, validity for PDL with the addition of even a single extremely simple nonregular program is already Π¹₁-complete; but on the other hand, when we add another equally simple program, the problem remains decidable. Besides these results, which pertain to very specific extensions, we discuss some broad decidability results that cover many languages, including some that are not even context-free. Since no similarly general undecidability results are known, we also address the weaker issue of whether nonregular extensions admit the finite model property and present a negative result that covers many cases.

6.1 Nonregular Programs

Consider the following self-explanatory program:

(11) while p do a; now do b the same number of times

This program is meant to represent the following set of computation sequences:

{(p?; a)^i ; ¬p? ; b^i | i ≥ 0}.

Viewed as a language over the alphabet {a, b, p?, ¬p?}, this set is not regular, and thus cannot be programmed in PDL. However, it can be represented by the following parameterless recursive procedure:


proc V {
  if p then {
    a; call V; b
  } else return
}

The set of computation sequences of this program is captured by the context-free grammar

V → ¬p? | p? a V b.
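The computation sequences of this grammar can be enumerated by bounded expansion. The following sketch is ours (strings stand in for the grammar symbols, and a depth cutoff bounds the recursion); it reproduces an initial fragment of the language {(p?; a)^i ¬p? b^i | i ≥ 0}:

```python
# Bounded enumeration of words derivable from V -> ~p? | p? a V b.

def seqs(depth):
    """Words derivable from V using at most `depth` recursive calls."""
    out = {('~p?',)}                               # base production V -> ~p?
    for _ in range(depth):
        # wrap each word per the recursive production V -> p? a V b
        out = out | {('p?', 'a') + w + ('b',) for w in out}
    return out

words = seqs(3)
assert ('p?', 'a', 'p?', 'a', '~p?', 'b', 'b') in words   # the i = 2 word
assert len(words) == 4                                    # i = 0, 1, 2, 3
```

Each pass of the loop applies the recursive production once more, so seqs(d) contains exactly the words with i ≤ d.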

We are thus led to the idea of allowing context-free programs inside the boxes and diamonds of PDL. From a pragmatic point of view, this amounts to extending the logic with the ability to reason about parameterless recursive procedures. The particular representation of the context-free programs is unimportant; we can use pushdown automata, context-free grammars, recursive procedures, or any other formalism that can be effectively translated into these. In the rest of this section, a number of specific programs will be of interest, and we employ special abbreviations for them. For example, we define:

a^Δ b a^Δ def= {a^i b a^i | i ≥ 0}
a^Δ b^Δ def= {a^i b^i | i ≥ 0}
b^Δ a^Δ def= {b^i a^i | i ≥ 0}.

Note that a^Δ b^Δ is really just a nondeterministic version of the program (11) in which there is simply no p to control the iteration. In fact, (11) could have been written in this notation as (p? a)^Δ ¬p? b^Δ.²

In programming terms, we can compare the regular program (ab)* with the nonregular one a^Δ b^Δ by observing that if a is "purchase a loaf of bread" and b is "pay $1.00," then the former program captures the process of paying for each loaf when purchased, while the latter one captures the process of paying for them all at the end of the month.

It turns out that enriching PDL with even a single arbitrary nonregular program increases expressive power. If L is any language over atomic programs and tests, then PDL + L is defined exactly as PDL, but with the additional syntax rule stating that for any formula φ, the expression ⟨L⟩φ is a new formula. The semantics of PDL + L is like that of PDL with the addition of the clause

m_K(L) def= ⋃_{σ ∈ L} m_K(σ).

² It is noteworthy that the results of this section do not depend on nondeterminism. For example, the negative Theorem 28 holds for the deterministic version (11) too. Also, most of the results in this section involve nonregular programs over atomic programs only, but can be generalized to allow tests as well.


Note that PDL + L does not allow L to be used as a formation rule for new programs or to be combined with other programs. It is added to the programming language as a single new stand-alone program only.

If PDL₁ and PDL₂ are two extensions of PDL, we say that PDL₁ is as expressive as PDL₂ if for each formula φ of PDL₂ there is a formula ψ of PDL₁ such that ⊨ φ ↔ ψ. If PDL₁ is as expressive as PDL₂ but PDL₂ is not as expressive as PDL₁, we say that PDL₁ is strictly more expressive than PDL₂. Thus, one version of PDL is strictly more expressive than another if anything the latter can express the former can too, but there is something the former can express that the latter cannot. A language is test-free if it is a subset of Π₀*; that is, if its seqs contain no tests.

THEOREM 25. If L is any nonregular test-free language, then PDL + L is strictly more expressive than PDL.

We can view the decidability of regular PDL as showing that propositional-level reasoning about iterative programs is computable. We now wish to know if the same is true for recursive procedures. We define context-free PDL to be PDL extended with context-free programs, where a context-free program is one whose seqs form a context-free language. The precise syntax is unimportant, but for definiteness we might take as programs the set of context-free grammars G over atomic programs and tests and define

m_K(G) def= ⋃_{σ ∈ CS(G)} m_K(σ),

where CS(G) is the set of computation sequences generated by G as described in Section 1.3.

THEOREM 26. The validity problem for context-free PDL is undecidable.

Theorem 26 leaves several interesting questions unanswered. What is the level of undecidability of context-free PDL? What happens if we want to add only a small number of specific nonregular programs? The first of these questions arises from the fact that the equivalence problem for context-free languages is co-r.e.-complete, or complete for Π⁰₁ in the arithmetic hierarchy. Hence, all Theorem 26 shows is that the validity problem for context-free PDL is Π⁰₁-hard, while it might in fact be worse. The second question is far more general. We might be interested in reasoning only about deterministic or linear context-free programs,³ or we might be interested only in a few special context-free programs such as a^Δ b a^Δ or a^Δ b^Δ. Perhaps PDL

³ A linear program is one whose seqs are generated by a context-free grammar in which there is at most one nonterminal symbol on the right-hand side of each rule. This corresponds to a family of recursive procedures in which there is at most one recursive call in each procedure.


remains decidable when these programs are added. The general question is to determine the borderline between the decidable and the undecidable when it comes to enriching the class of programs allowed in PDL.

Interestingly, if we wish to consider such simple nonregular extensions as PDL + a^Δ b a^Δ or PDL + a^Δ b^Δ, we will not be able to prove undecidability by the technique used for context-free PDL in Theorem 26, since standard problems that are undecidable for context-free languages, such as equivalence and inclusion, are decidable for classes containing the regular languages and the likes of a^Δ b a^Δ and a^Δ b^Δ. Moreover, we cannot prove decidability by the technique used for PDL in Section 3.2, since logics like PDL + a^Δ b a^Δ and PDL + a^Δ b^Δ do not enjoy the finite model property. Thus, if we want to determine the decidability status of such extensions, we will have to work harder.

THEOREM 27. There is a satisfiable formula in PDL + a^Δ b^Δ that is not satisfied in any finite structure.

For PDL + a^Δ b a^Δ, the news is worse than mere undecidability:

THEOREM 28. The validity problem for PDL + a^Δ b a^Δ is Π¹₁-complete.

The Π¹₁ result holds also for PDL extended with the two programs a^Δ b^Δ and b^Δ a^Δ. It is easy to show that the validity problem for context-free PDL in its entirety remains in Π¹₁. Together with the fact that a^Δ b a^Δ is a context-free language, this yields an answer to the first question mentioned earlier: context-free PDL is Π¹₁-complete. As to the second question, Theorem 28 shows that the high undecidability phenomenon starts occurring even with the addition of one very simple nonregular program.

We now turn to nonregular programs over a single letter. Consider the language of powers of 2:

a^(2^Δ) def= {a^(2^i) | i ≥ 0}.

Here we have:

THEOREM 29. The validity problem for PDL + a^(2^Δ) is undecidable.

It is actually possible to prove this result for powers of any fixed k ≥ 2. Thus PDL with the addition of any language of the form {a^(k^i) | i ≥ 0} for fixed k ≥ 2 is undecidable.

Another class of one-letter extensions that has been proven to be undecidable consists of Fibonacci-like sequences:

THEOREM 30. Let f₀, f₁ be arbitrary elements of ℕ with f₀ < f₁, and let F be the sequence f₀, f₁, f₂, … generated by the recurrence fᵢ = fᵢ₋₁ + fᵢ₋₂ for i ≥ 2. Let a^F def= {a^(fᵢ) | i ≥ 0}. Then the validity problem for PDL + a^F is undecidable.

In both these theorems, the fact that the sequences of a's in the programs grow exponentially is crucial to the proofs. Indeed, we know of no


undecidability results for any one-letter extension in which the lengths of the sequences of a's grow subexponentially. Particularly intriguing are the cases of squares and cubes:

a^(Δ²) def= {a^(i²) | i ≥ 0},
a^(Δ³) def= {a^(i³) | i ≥ 0}.

Are PDL + a^(Δ²) and PDL + a^(Δ³) undecidable? There is a decidability result for a slightly restricted version of the squares extension, which seems to indicate that the full unrestricted version PDL + a^(Δ²) is decidable too. However, we conjecture that for cubes the problem is undecidable. Interestingly, several classical open problems in number theory reduce to instances of the validity problem for PDL + a^(Δ³). For example, while no one knows whether every integer greater than 10000 is the sum of five cubes, the following formula is valid if and only if the answer is yes:

[(a^(Δ³))⁵]p → [a^10001 a*]p.

(The 5-fold and 10001-fold iterations have to be written out in full, of course.) If PDL + a^(Δ³) were decidable, then we could compute the answer in a simple manner, at least in principle.

6.2 Decidable Extensions

We now turn to positive results. Theorem 27 states that PDL + a^Δ b^Δ does not have the finite model property. Nevertheless, we have the following:

THEOREM 31. The validity problem for PDL + a^Δ b^Δ is decidable.

When contrasted with Theorem 28, the decidability of PDL + a^Δ b^Δ is very surprising. We have two of the simplest nonregular languages, a^Δ b a^Δ and a^Δ b^Δ, which are extremely similar, yet the addition of one to PDL yields high undecidability while the other leaves the logic decidable.

Theorem 31 was proved originally by showing that, although PDL + a^Δ b^Δ does not always admit finite models, it does admit finite pushdown models, in which transitions are labeled not only with atomic programs but also with push and pop instructions for a particular kind of stack. A close study of the proof (which relies heavily on the idiosyncrasies of the language a^Δ b^Δ) suggests that the decidability or undecidability has to do with the manner in which an automaton accepts the languages involved. For example, in the usual way of accepting a^Δ b a^Δ, a pushdown automaton (PDA) reading an a will carry out a push or a pop, depending upon its location in the input word. However, in the standard way of accepting a^Δ b^Δ, the a's are always pushed and the b's are always popped, regardless of the location; the input symbol alone determines what the automaton does. More recent work, which we now set out to describe, has yielded a general decidability result


that confirms this intuition. It is of special interest due to its generality, since it does not depend on specific programs.

Let M = (Q, Σ, Γ, q₀, z₀, δ) be a PDA that accepts by empty stack. We say that M is simple-minded if, whenever δ(q, σ, γ) = (p, b), then for each q′ and γ′, either δ(q′, σ, γ′) = (p, b) or δ(q′, σ, γ′) is undefined. A context-free language is said to be simple-minded (a simple-minded CFL) if there exists a simple-minded PDA that accepts it. In other words, the action of a simple-minded automaton is determined uniquely by the input symbol; the state and stack symbol are only used to help determine whether the machine halts (rejecting the input) or continues. Note that such an automaton is necessarily deterministic. It is noteworthy that simple-minded PDAs accept a large fragment of the context-free languages, including a^Δ b^Δ and b^Δ a^Δ, as well as all balanced parenthesis languages (Dyck sets) and many of their intersections with regular languages.

THEOREM 32. If L is accepted by a simple-minded PDA, then PDL + L is decidable.

We can obtain another general decidability result involving languages accepted by deterministic stack automata. A stack automaton is a one-way PDA whose head can travel up and down the stack reading its contents, but can make changes only at the top of the stack. Stack automata can accept non-context-free languages such as a^Δ b^Δ c^Δ and its generalizations a₁^Δ a₂^Δ … aₙ^Δ for any n, as well as many variants thereof. It would be nice to be able to prove decidability of PDL when augmented by any language accepted by such a machine, but this is not known. What has been proven, however, is that if each word in such a language is preceded by a new symbol to mark its beginning, then the enriched PDL is decidable:

THEOREM 33. Let e ∉ Π₀, and let L be a language over Π₀ that is accepted by a deterministic stack automaton. If we let eL denote the language {eu | u ∈ L}, then PDL + eL is decidable.
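The simple-minded condition can be animated with a toy interpreter. The sketch below is our own encoding: acceptance "by empty stack" is modeled by keeping a bottom marker z and accepting when only the marker remains, and the transition table for {a^i b^i | i ≥ 0} satisfies the condition because every defined move on a pushes and every defined move on b pops, regardless of state and stack top.

```python
# A toy simple-minded PDA interpreter (our encoding, not from the text).

def run(word, delta, state='q0', stack=('z',)):
    """delta: (state, symbol, stack_top) -> (new_state, replacement string)."""
    stack = list(stack)
    for sym in word:
        if not stack:
            return False                  # stack exhausted: reject
        key = (state, sym, stack[-1])
        if key not in delta:
            return False                  # undefined move: halt and reject
        state, repl = delta[key]
        stack.pop()
        stack.extend(repl)                # replace the top symbol
    return stack == ['z']                 # only the bottom marker left

# Every defined a-move is (q0, push); every defined b-move is (q1, pop):
# the simple-minded condition.  The state only rules out a's after b's.
delta = {
    ('q0', 'a', 'z'): ('q0', 'zz'),       # a always pushes
    ('q0', 'b', 'z'): ('q1', ''),         # b always pops ...
    ('q1', 'b', 'z'): ('q1', ''),         # ... wherever it is defined
}
assert run('aabb', delta) and run('', delta)
assert not run('abab', delta) and not run('aab', delta) and not run('ba', delta)
```

Note how the word abab is rejected: after the first b the machine is in q1, where no move on a is defined, so it halts, exactly the role the definition reserves for the state.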
While Theorems 32 and 33 are general and cover many languages, they do not prove decidability of PDL + a^Δ b^Δ c^Δ, which may be considered the simplest non-context-free extension of PDL. Nevertheless, the constructions used in the proofs of the two general results have been combined to yield:

THEOREM 34. PDL + a^Δ b^Δ c^Δ is decidable.

As explained, we know of no undecidable extension of PDL with a polynomially growing language, although we conjecture that the cubes extension is undecidable. Since the decidability status of such extensions seems hard to determine, we now address a weaker notion: the presence or absence of a finite model property. The technique used in Theorem 27 to show that PDL + a^Δ b^Δ violates the finite model property does not work for one-letter alphabets. Nevertheless, we now state a general result leading to many one-


letter extensions that violate the finite model property. In particular, the theorem will yield the following:

PROPOSITION 35 (squares and cubes). The logics PDL + a^(Δ²) and PDL + a^(Δ³) do not have the finite model property.

PROPOSITION 36 (polynomials). For every polynomial of the form

p(n) = cᵢnⁱ + cᵢ₋₁nⁱ⁻¹ + ⋯ + c₀ ∈ ℤ[n]

with i ≥ 2 and positive leading coefficient cᵢ > 0, let Sₚ = {p(m) | m ∈ ℕ} ∩ ℕ. Then PDL + a^(Sₚ) does not have the finite model property.

PROPOSITION 37 (sums of primes). Let pᵢ be the i-th prime (with p₁ = 2), and define

S_sop def= {∑ᵢ₌₁ⁿ pᵢ | n ≥ 1}.

Then PDL + a^(S_sop) does not have the finite model property.

PROPOSITION 38 (factorials). Let S_fac def= {n! | n ∈ ℕ}. Then PDL + a^(S_fac) does not have the finite model property.

The finite model property fails for any sufficiently fast-growing integer linear recurrence, not just the Fibonacci sequence, although we do not know whether these extensions also render PDL undecidable. A k-th-order integer linear recurrence is an inductively defined sequence

(12) ℓₙ def= c₁ℓₙ₋₁ + ⋯ + c_k ℓₙ₋ₖ + c₀, n ≥ k,

where k ≥ 1, c₀, …, c_k ∈ ℕ, c_k ≠ 0, and ℓ₀, …, ℓₖ₋₁ ∈ ℕ are given.

PROPOSITION 39 (linear recurrences). Let S_lr = {ℓₙ | n ≥ 0} be the set defined inductively by (12). The following conditions are equivalent:

(i) a^(S_lr) is nonregular;
(ii) PDL + a^(S_lr) does not have the finite model property;
(iii) not all ℓ₀, …, ℓₖ₋₁ are zero and ∑ᵢ₌₁ᵏ cᵢ > 1.
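Condition (iii) makes the dichotomy effectively checkable. The following is a small sketch of ours (names and representation assumed, following the convention of (12)):

```python
# Per Proposition 39, PDL + a^{S_lr} loses the finite model property iff
# some initial value is nonzero and c_1 + ... + c_k > 1.  (The constant
# c_0 plays no role in the test, so it is not a parameter here.)

def violates_fmp(initial, coeffs):
    """initial = (l_0, ..., l_{k-1}), coeffs = (c_1, ..., c_k) as in (12)."""
    return any(l != 0 for l in initial) and sum(coeffs) > 1

# Fibonacci-style (l_0 = 1, l_1 = 2, c_1 = c_2 = 1): nonregular, no fmp.
assert violates_fmp((1, 2), (1, 1))
# l_n = l_{n-1} + c_0 is an arithmetic progression: a^{S_lr} stays regular.
assert not violates_fmp((1,), (1,))
```

The second example illustrates why the threshold sits at ∑cᵢ = 1: with a single coefficient equal to 1 the sequence grows at most linearly with constant gaps, and a unary language with eventually periodic gaps is regular.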

7 OTHER VARIANTS OF PDL

7.1 Deterministic Programs

Nondeterminism arises in PDL in two ways:

atomic programs can be interpreted in a structure as (not necessarily single-valued) binary relations on states; and


the programming constructs ∪ and * involve nondeterministic choice.

Many modern programming languages have facilities for concurrency and distributed computation, certain aspects of which can be modeled by nondeterminism. Nevertheless, the majority of programs written in practice are still deterministic. Here we investigate the effect of eliminating either one or both of these sources of nondeterminism from PDL.

A program α is said to be (semantically) deterministic in a Kripke frame K if its traces are uniquely determined by their first states. If α is an atomic program a, this is equivalent to the requirement that m_K(a) be a partial function; that is, if both (s, t) and (s, t′) ∈ m_K(a), then t = t′. A deterministic Kripke frame K = (K, m_K) is one in which all atomic a are semantically deterministic.

The class of deterministic while programs, denoted DWP, is the class of programs in which

the operators ∪, ?, and * may appear only in the context of the conditional test, while loop, skip, or fail;

tests in the conditional test and while loop are purely propositional; that is, there is no occurrence of the ⟨ ⟩ or [ ] operators.

The class of nondeterministic while programs, denoted WP, is the same, except unconstrained use of the nondeterministic choice construct ∪ is allowed. It is easily shown that if α and β are semantically deterministic in K, then so are if φ then α else β and while φ do α. By restricting either the syntax or the semantics or both, we obtain the following logics:

DPDL (deterministic PDL), which is syntactically identical to PDL, but interpreted over deterministic structures only;

SPDL (strict PDL), in which only deterministic while programs are allowed; and

SDPDL (strict deterministic PDL), in which both restrictions are in

force.

Validity and satisfiability in DPDL and SDPDL are defined just as in PDL, but with respect to deterministic structures only. If φ is valid in PDL, then φ is also valid in DPDL, but not conversely: the formula

(13) ⟨a⟩φ → [a]φ

is valid in DPDL but not in PDL. Also, SPDL and SDPDL are strictly less expressive than PDL or DPDL, since the formula

(14) ⟨(a ∪ b)*⟩φ


is not expressible in SPDL, as shown in [Halpern and Reif, 1983].

THEOREM 40. If the axiom scheme

(15) ⟨a⟩φ → [a]φ, a ∈ Π₀

is added to Axiom System 17, then the resulting system is sound and complete for DPDL.

THEOREM 41. Validity in DPDL is deterministic exponential-time complete.

Now we turn to SPDL, in which atomic programs can be nondeterministic but can be composed into larger programs only with deterministic constructs.

THEOREM 42. Validity in SPDL is deterministic exponential-time complete.

The final version of interest is SDPDL, in which both the syntactic restrictions of SPDL and the semantic ones of DPDL are adopted. The exponential-time lower bound fails here, and we have:

THEOREM 43. The validity problem for SDPDL is complete in polynomial space.

The question of relative power of expression is of interest here. Is DPDL < PDL? Is SDPDL < DPDL? The first of these questions is inappropriate, since the syntax of both languages is the same but they are interpreted over different classes of structures. Considering the second, we have:

THEOREM 44. SDPDL < DPDL and SPDL < PDL.

In summary, we have the following diagram describing the relations of expressiveness between these logics. The solid arrows indicate added expressive power and broken ones a difference in semantics. The validity problem is exponential-time complete for all but SDPDL, for which it is PSPACE-complete. Straightforward variants of Axiom System 17 are complete for all versions.

[Diagram: SPDL, PDL, SDPDL, and DPDL at the corners of a square; solid arrows SDPDL → DPDL and SPDL → PDL (added expressive power); broken arrows between SPDL and SDPDL and between PDL and DPDL (difference in semantics).]
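The role of determinism in formula (13) can be checked directly on small finite structures. The following is a minimal sketch (encodings hypothetical: a program is a set of state pairs, a proposition a set of states):

```python
def diamond(rel, phi):
    """<a>phi: states with SOME rel-successor satisfying phi."""
    return {s for (s, t) in rel if t in phi}

def box(rel, phi, states):
    """[a]phi: states whose EVERY rel-successor satisfies phi."""
    return {s for s in states if all(t in phi for (x, t) in rel if x == s)}

def is_deterministic(rel):
    """Each state has at most one rel-successor."""
    succs = {}
    for (s, t) in rel:
        succs.setdefault(s, set()).add(t)
    return all(len(ts) <= 1 for ts in succs.values())

states = {0, 1, 2}
det_a = {(0, 1), (1, 2)}        # deterministic: at most one successor each
nondet_a = {(0, 1), (0, 2)}     # nondeterministic: two successors of 0
phi = {1}

# In a deterministic structure, <a>phi -> [a]phi holds at every state.
assert is_deterministic(det_a)
assert diamond(det_a, phi) <= box(det_a, phi, states)

# In a nondeterministic one it can fail: 0 satisfies <a>phi but not [a]phi.
assert 0 in diamond(nondet_a, phi)
assert 0 not in box(nondet_a, phi, states)
```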

7.2 Representation by Automata

A PDL program represents a regular set of computation sequences. This same regular set could possibly be represented exponentially more succinctly

DYNAMIC LOGIC


by a finite automaton. The difference between these two representations corresponds roughly to the difference between while programs and flowcharts. Since finite automata are exponentially more succinct in general, the upper bound of Section 5 could conceivably fail if finite automata were allowed as programs. Moreover, we must also rework the deductive system of Section 4.1. However, it turns out that the completeness and exponential-time decidability results of PDL are not sensitive to the representation and still go through in the presence of finite automata as programs, provided the deductive system of Section 4.1 and the techniques of Sections 4 and 5 are suitably modified, as shown in [Pratt, 1979b; Pratt, 1981b] and [Harel and Sherman, 1985].

In recent years, the automata-theoretic approach to logics of programs has yielded significant insight into propositional logics more powerful than PDL, as well as substantial reductions in the complexity of their decision procedures. Especially enlightening are the connections with automata on infinite strings and infinite trees. By viewing a formula as an automaton and a treelike model as an input to that automaton, the satisfiability problem for a given formula becomes the emptiness problem for a given automaton. Logical questions are thereby transformed into purely automata-theoretic questions.

We assume that nondeterministic finite automata are given in the form

(16) M = (n, i, j, δ),

where n = {0, ..., n − 1} is the set of states, i, j ∈ n are the start and final states, respectively, and δ assigns a subset of Π₀ ∪ {φ? | φ ∈ Φ} to each pair of states. Intuitively, when visiting state ℓ and seeing symbol a, the automaton may move to state k if a ∈ δ(ℓ, k). The fact that the automata (16) have only one accept state is without loss of generality.
If M is an arbitrary nondeterministic finite automaton with accept states F, then the set accepted by M is the union of the sets accepted by Mₖ for k ∈ F, where Mₖ is identical to M except that it has unique accept state k. A desired formula [M]φ can be written as a conjunction

⋀_{k∈F} [Mₖ]φ

with at most quadratic growth.

We now obtain a new logic APDL (automata PDL) by defining Φ and Π inductively, using the clauses for Φ from Section 2.1 and letting Π = Π₀ ∪ {φ? | φ ∈ Φ} ∪ F, where F is the set of automata of the form (16).
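On a finite Kripke frame, the input/output relation of an automaton program (16) can be computed by closing the set of reachable (automaton state, model state) configurations. The sketch below uses assumed encodings (delta as a dict mapping state pairs to label lists; a label is an atomic-program name or ('?', P) for a test with extension P; the state count n is left implicit):

```python
def automaton_relation(i, j, delta, atomic, states):
    """All pairs (s, t): some run of M from i to j maps model state s to t."""
    def step(label):
        if isinstance(label, tuple) and label[0] == '?':
            return {(s, s) for s in label[1]}   # test: filter, no move
        return atomic[label]                    # atomic program: its relation
    result = set()
    for s in states:
        frontier, seen = [(i, s)], {(i, s)}
        while frontier:
            k, t = frontier.pop()
            if k == j:
                result.add((s, t))              # accepting configuration
            for (k1, k2), labels in delta.items():
                if k1 != k:
                    continue
                for lab in labels:
                    for (x, y) in step(lab):
                        if x == t and (k2, y) not in seen:
                            seen.add((k2, y))
                            frontier.append((k2, y))
    return result

# A one-state automaton looping on 'a' represents the program a*:
states = {0, 1, 2}
atomic = {'a': {(0, 1), (1, 2)}}
rel = automaton_relation(0, 0, {(0, 0): ['a']}, atomic, states)
assert rel == {(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)}
```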


Axioms 17(iv), (v), and (vii) are replaced by:

(17) [n, i, j, δ]φ ↔ ⋀_{k∈n, π∈δ(i,k)} [π][n, k, j, δ]φ, for i ≠ j;

(18) [n, i, i, δ]φ ↔ φ ∧ ⋀_{k∈n, π∈δ(i,k)} [π][n, k, i, δ]φ.

The induction axiom 17(viii) becomes

(19) (⋀_{k∈n} [n, i, k, δ](φₖ → ⋀_{m∈n, π∈δ(k,m)} [π]φₘ)) → (φᵢ → [n, i, j, δ]φⱼ).

These and other similar changes can be used to prove:

THEOREM 45. Validity in APDL is decidable in exponential time.

THEOREM 46. The axiom system described above is complete for APDL.

7.3 Converse

The converse operator ⁻ is a program operator that allows a program to be "run backwards":

m_K(α⁻) def= {(s, t) | (t, s) ∈ m_K(α)}.

PDL with converse is called CPDL.
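Over finite frames the converse semantics is just relation reversal, and the reduction to atomic programs below rests on ordinary relational algebra. A minimal sketch:

```python
def converse(rel):
    """m_K(a-) = {(s, t) | (t, s) in m_K(a)}."""
    return {(t, s) for (s, t) in rel}

def compose(r1, r2):
    """Relational composition: the semantics of sequencing."""
    return {(s, u) for (s, t) in r1 for (t2, u) in r2 if t == t2}

a = {(0, 1), (1, 2)}
b = {(2, 3)}

# (a;b)- = b-;a-  : converse reverses a composition.
assert converse(compose(a, b)) == compose(converse(b), converse(a))
# (a U b)- = a- U b-
assert converse(a | b) == converse(a) | converse(b)
```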

The following identities allow us to assume without loss of generality that the converse operator is applied to atomic programs only:

(α; β)⁻ = β⁻; α⁻
(α ∪ β)⁻ = α⁻ ∪ β⁻
(α*)⁻ = (α⁻)*.

The converse operator strictly increases the expressive power of PDL, since the formula ⟨a⁻⟩1 is not expressible without it.

THEOREM 47. PDL < CPDL.

Proof. Consider the structure described in the following figure:

[Figure: states s, t, u; a single a-transition from t to s; no other transitions.]


In this structure, s ⊨ ⟨a⁻⟩1 but u ⊭ ⟨a⁻⟩1. On the other hand, it can be shown by induction on the structure of formulas that if s and u agree on all atomic formulas, then no formula of PDL can distinguish between the two.

More interestingly, the presence of the converse operator implies that the operator ⟨α⟩ is continuous in the sense that if A is any (possibly infinite) family of formulas possessing a join ⋁A, then the join ⋁{⟨α⟩φ | φ ∈ A} exists and is logically equivalent to ⟨α⟩⋁A. In the absence of the converse operator, one can construct nonstandard models for which this fails.

The completeness and exponential-time decidability results of Sections 4 and 5 can be extended to CPDL provided the following two axioms are added:

φ → [α]⟨α⁻⟩φ
φ → [α⁻]⟨α⟩φ.

The filtration lemma (Lemma 14) still holds in the presence of ⁻, as does the finite model property.

7.4 Well-foundedness

If α is a deterministic program, the formula φ → ⟨α⟩ψ asserts the total correctness of α with respect to pre- and postconditions φ and ψ, respectively. For nondeterministic programs, however, this formula does not express the right notion of total correctness. It asserts that φ implies that there exists a halting computation sequence of α yielding ψ, whereas we would really like to assert that φ implies that all computation sequences of α terminate and yield ψ. Let us denote the latter property by

TC(φ, α, ψ).

Unfortunately, this is not expressible in PDL. The problem is intimately connected with the notion of well-foundedness. A program α is said to be well-founded at a state u₀ if there is no infinite sequence of states u₀, u₁, u₂, ... with (uᵢ, uᵢ₊₁) ∈ m_K(α) for all i ≥ 0. This property is not expressible in PDL either, as we will see.

Several very powerful logics have been proposed to deal with this situation. The most powerful is perhaps the propositional μ-calculus, which is essentially propositional modal logic augmented with a least fixpoint operator μ. Using this operator, one can express any property that can be formulated as the least fixpoint of a monotone transformation on sets of states defined by the PDL operators. For example, the well-foundedness of a program α is expressed


(20) μX.[α]X

in this logic. Two somewhat weaker ways of capturing well-foundedness without resorting to the full μ-calculus have been studied. One is to add to PDL an explicit predicate wf α for well-foundedness:

m_K(wf α) def= {s₀ | ¬∃s₁, s₂, ... ∀i ≥ 0 (sᵢ, sᵢ₊₁) ∈ m_K(α)}.

Another is to add an explicit predicate halt α, which asserts that all computations of its argument terminate. The predicate halt can be defined inductively from wf as follows:

(21) halt a ⟺ 1, a an atomic program or test;
(22) halt (α; β) ⟺ halt α ∧ [α] halt β;
(23) halt (α ∪ β) ⟺ halt α ∧ halt β;
(24) halt α* ⟺ wf α ∧ [α*] halt α.

These constructs have been investigated under the various names loop, repeat, and Δ. The predicates loop and repeat are just the complements of halt and wf, respectively:

loop α ⟺ ¬halt α
repeat α ⟺ ¬wf α.

Clause (24) is equivalent to the assertion

loop α* ⟺ repeat α ∨ ⟨α*⟩ loop α.

It asserts that a nonhalting computation of α* consists of either an infinite sequence of halting computations of α or a finite sequence of halting computations of α followed by a nonhalting computation of α.

Let RPDL and LPDL denote the logics obtained by augmenting PDL with the wf and halt predicates, respectively.⁴ It follows from the preceding discussion that

PDL ≤ LPDL ≤ RPDL ≤ the propositional μ-calculus.

Moreover, all these inclusions are known to be strict. The logic LPDL is powerful enough to express the total correctness of nondeterministic programs. The total correctness of α with respect to precondition φ and postcondition ψ is expressed

TC(φ, α, ψ) ⟺ φ → (halt α ∧ [α]ψ).

⁴The L in LPDL stands for "loop" and the R in RPDL stands for "repeat." We retain these names for historical reasons.


Conversely, halt can be expressed in terms of TC:

halt α ⟺ TC(1, α, 1).
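On a finite frame, the least fixpoint (20) can be computed by iterating [α] starting from the empty set: a state is well-founded exactly when it enters the iteration at some finite stage. A sketch (encodings hypothetical):

```python
def box(rel, X, states):
    """[alpha]X: states all of whose rel-successors lie in X."""
    return {s for s in states if all(t in X for (u, t) in rel if u == s)}

def wf(rel, states):
    """Least fixpoint of X -> [alpha]X, i.e. muX.[alpha]X (formula (20)):
    the states at which alpha is well-founded."""
    X = set()
    while True:
        X1 = box(rel, X, states)
        if X1 == X:
            return X
        X = X1

states = {0, 1, 2, 3}
alpha = {(0, 1), (1, 2), (3, 3)}   # 3 loops forever; paths from 0, 1, 2 are finite

assert wf(alpha, states) == {0, 1, 2}
```

States with no successors enter at the first stage, then states all of whose successors have already entered, and so on; the state 3 on its self-loop never enters.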

THEOREM 48. PDL < LPDL.

THEOREM 49. LPDL < RPDL.

It is possible to extend Theorem 49 to versions CRPDL and CLPDL in which converse is allowed in addition to wf or halt. Also, the proof of Theorem 47 goes through for LPDL and RPDL, so that ⟨a⁻⟩1 is not expressible in either. Theorem 48 goes through for the converse versions too. We obtain the situation illustrated in the following figure, in which the arrows indicate < and the absence of a path between two logics means that each can express properties that the other cannot.

[Diagram: arrows PDL → LPDL, LPDL → RPDL, PDL → CPDL, LPDL → CLPDL, RPDL → CRPDL, CPDL → CLPDL, CLPDL → CRPDL; each arrow indicates <.]

The filtration lemma fails for all halt and wf versions, as in Theorem 48. However, satisfiable formulas of the μ-calculus (hence of RPDL and LPDL) do have finite models. This finite model property is not shared by CLPDL or CRPDL.

THEOREM 50. The CLPDL formula

¬halt a* ∧ [a*] halt (a⁻)*

is satisfiable but has no finite model.

As it turns out, Theorem 50 does not prevent CRPDL from being decidable.


THEOREM 51. The validity problems for CRPDL, CLPDL, RPDL, LPDL, and the propositional μ-calculus are all decidable in deterministic exponential time.

Obviously, the simpler the logic, the simpler the arguments needed to show exponential-time decidability. Over the years all these logics have gradually been shown to be decidable in exponential time by various authors using various techniques. Here we point to the exponential-time decidability of the propositional μ-calculus with forward and backward modalities, proved in [Vardi, 1998b], from which all these results can easily be seen to follow. The proof in [Vardi, 1998b] is carried out by exhibiting an exponential-time decision procedure for two-way alternating automata on infinite trees.

As mentioned above, RPDL possesses the finite (but not necessarily the small and not the collapsed) model property.

THEOREM 52. Every satisfiable formula of RPDL, LPDL, and the propositional μ-calculus has a finite model.

CRPDL and CLPDL are extensions of PDL that, like PDL + ab (Theorems 27 and 31), are decidable despite lacking a finite model property. Complete axiomatizations for RPDL and LPDL can be obtained by embedding them into the μ-calculus (see Section 14.4).

7.5 Concurrency

Another interesting extension of PDL concerns concurrent programs. One can define an intersection operator ∩ such that the binary relation on states corresponding to the program α ∩ β is the intersection of the binary relations corresponding to α and β. This can be viewed as a kind of concurrency operator that admits transitions to those states that both α and β would have admitted.

Here we consider a different and perhaps more natural notion of concurrency. The interpretation of a program will not be a binary relation on states, which relates initial states to possible final states, but rather a relation between states and sets of states. Thus m_K(α) will relate a start state u to a collection of sets of states U. The intuition is that starting in state u, the (concurrent) program α can be run with its concurrent execution threads ending in the set of final states U. The basic concurrency operator will be denoted here by ∧, although in the original work on concurrent Dynamic Logic [Peleg, 1987b; Peleg, 1987c; Peleg, 1987a] the notation ∩ is used.

The syntax of concurrent PDL is the same as PDL, with the addition of the clause:

if α, β ∈ Π, then α ∧ β ∈ Π.

The program α ∧ β means intuitively, "execute α and β in parallel."


The semantics of concurrent PDL is defined on Kripke frames K = (K, m_K) as with PDL, except that for programs α,

m_K(α) ⊆ K × 2^K.

Thus the meaning of α is a collection of reachability pairs of the form (u, U), where u ∈ K and U ⊆ K. In this brief description of concurrent PDL, we require that structures assign to atomic programs a sequential, non-parallel meaning; that is, for each a ∈ Π₀, we require that if (u, U) ∈ m_K(a), then #U = 1. The true parallelism will stem from applying the concurrency operator to build larger sets U in the reachability pairs of compound programs. For details, see [Peleg, 1987b; Peleg, 1987c].

The relevant results for this logic are the following:

THEOREM 53. PDL < concurrent PDL.

THEOREM 54. The validity problem for concurrent PDL is decidable in deterministic exponential time.

Axiom System 17, augmented with the following axiom, can be shown to be complete for concurrent PDL:

⟨α ∧ β⟩φ ↔ ⟨α⟩φ ∧ ⟨β⟩φ.
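A small sketch of the reachability-pair semantics. Two details are assumptions based on the description above, not taken from the text: the composition rule for ∧ (pair u with U ∪ V whenever α pairs u with U and β pairs u with V) and the ⟨ ⟩ clause (some reachability pair whose final states all satisfy φ):

```python
def par(m_alpha, m_beta):
    """Assumed semantics of alpha ^ beta: merge reachability sets per start state."""
    return {(u, U | V) for (u, U) in m_alpha for (v, V) in m_beta if u == v}

def diamond(m, phi):
    """<alpha>phi: some reachability pair all of whose final states satisfy phi."""
    return {u for (u, U) in m if U <= phi}

# Atomic programs are sequential: their reachability sets are singletons (#U = 1).
m_a = {(0, frozenset({1}))}
m_b = {(0, frozenset({2}))}
phi = {1, 2}

m_ab = par(m_a, m_b)
assert m_ab == {(0, frozenset({1, 2}))}
# The completeness axiom, checked on this frame: <a^b>phi <-> <a>phi ∧ <b>phi.
assert diamond(m_ab, phi) == diamond(m_a, phi) & diamond(m_b, phi)
```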

8 FIRST-ORDER DYNAMIC LOGIC (DL)

In this section we begin the study of first-order Dynamic Logic. The main difference between first-order DL and the propositional version PDL discussed in previous sections is the presence of a first-order structure A, called the domain of computation, over which first-order quantification is allowed. States are no longer abstract points, but valuations of a set of variables over A, the carrier of A. Atomic programs in DL are no longer abstract binary relations, but assignment statements of various forms, all based on assigning values to variables during the computation. The most basic example of such an assignment is the simple assignment x := t, where x is a variable and t is a term. The atomic formulas of DL are generally taken to be atomic first-order formulas.

In addition to the constructs of PDL, the basic DL syntax contains individual variables ranging over A, function and predicate symbols for distinguished functions and predicates of A, and quantifiers ranging over A, exactly as in classical first-order logic. More powerful versions of the logic contain array and stack variables and other constructs, as well as primitive operations for manipulating them, and assignments for changing their values. Sometimes the introduction of a new construct increases expressive power and sometimes not; sometimes it has an effect on the complexity of


deciding satisfiability and sometimes not. Indeed, one of the central goals of research has been to classify these constructs in terms of their relative expressive power and complexity. In this section we lay the groundwork for this by defining the various logical and programming constructs we shall need.

8.1 Basic Syntax

The language of first-order Dynamic Logic is built upon classical first-order logic. There is always an underlying first-order vocabulary Σ, which consists of function symbols and predicate (or relation) symbols. On top of this vocabulary, we define a set of programs Π and a set of formulas Φ. These two sets interact by means of the modal construct [ ] exactly as in the propositional case. Programs and formulas are usually defined by mutual induction.

Let Σ = {f, g, ..., p, r, ...} be a finite first-order vocabulary. Here f and g denote typical function symbols of Σ, and p and r denote typical relation symbols. Associated with each function and relation symbol of Σ is a fixed arity (number of arguments), although we do not represent the arity explicitly. We assume that Σ always contains the equality symbol =, whose arity is 2. Functions and relations of arity 0, 1, 2, 3, and n are called nullary, unary, binary, ternary, and n-ary, respectively. Nullary functions are also called constants. We shall be using a countable set of individual variables V = {x₀, x₁, ...}.

We always assume that Σ contains at least one function symbol of positive arity. A vocabulary is polyadic if it contains a function symbol of arity greater than one. Vocabularies whose function symbols are all unary are called monadic. A vocabulary is rich if either it contains at least one predicate symbol besides the equality symbol or the sum of the arities of its function symbols is at least two. Examples of rich vocabularies are: two unary function symbols, or one binary function symbol, or one unary function symbol and one unary predicate symbol. A vocabulary that is not rich is poor. Hence a poor vocabulary has just one unary function symbol and possibly some constants, but no relation symbols other than equality.
The main difference between rich and poor vocabularies is that the former admit exponentially many pairwise non-isomorphic structures of a given finite cardinality, whereas the latter admit only polynomially many. We say that the vocabulary Σ is mono-unary if it contains no function symbols other than a single unary one. It may contain constants and predicate symbols.

The definitions of DL programs and formulas below depend on the vocabulary Σ, but in general we shall not make this dependence explicit unless we have some specific reason for doing so.


Atomic Formulas and Programs

In all versions of DL that we will consider, atomic formulas are atomic formulas of the first-order vocabulary Σ; that is, formulas of the form r(t₁, ..., tₙ), where r is an n-ary relation symbol of Σ and t₁, ..., tₙ are terms of Σ. As in PDL, programs are defined inductively from atomic programs using various programming constructs. The meaning of a compound program is given inductively in terms of the meanings of its constituent parts. Different classes of programs are obtained by choosing different classes of atomic programs and programming constructs.

In the basic version of DL, an atomic program is a simple assignment x := t, where x ∈ V and t is a term of Σ. Intuitively, this program assigns the value of t to the variable x. This is the same form of assignment found in most conventional programming languages. More powerful forms of assignment, such as stack and array assignments and nondeterministic "wildcard" assignments, will be discussed later. The precise choice of atomic programs will be made explicit when needed, but for now, we use the term atomic program to cover all of these possibilities.

Tests

As in PDL, DL contains a test operator ?, which turns a formula into a program. In most versions of DL that we shall discuss, we allow only quantifier-free first-order formulas as tests. We sometimes call these versions poor test. Alternatively, we might allow any first-order formula as a test. Most generally, we might place no restrictions on the form of tests, allowing any DL formula whatsoever, including those that contain other programs, perhaps containing other tests, etc. These versions of DL are labeled rich test as in Section 2.1. Whereas programs can be defined independently from formulas in poor test versions, rich test versions require a mutually inductive definition of programs and formulas. As with atomic programs, the precise logic we consider at any given time depends on the choice of tests we allow.
We will make this explicit when needed, but for now, we use the term test to cover all possibilities.

Regular Programs

For a given set of atomic programs and tests, the set of regular programs is defined as in PDL (see Section 2.1):

any atomic program or test is a program;
if α and β are programs, then α; β is a program;
if α and β are programs, then α ∪ β is a program;
if α is a program, then α* is a program.
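The relational semantics of these constructs (composition for ;, union for ∪, reflexive-transitive closure for *, partial identities for tests) can be sketched for finite frames as follows; the program encoding is hypothetical:

```python
# Programs as tuples: ('atom', name), ('test', P), (';', p, q), ('U', p, q), ('*', p).
def meaning(prog, atomic, states):
    tag = prog[0]
    if tag == 'atom':
        return atomic[prog[1]]
    if tag == 'test':
        return {(s, s) for s in prog[1]}        # partial identity on P
    if tag == ';':
        r1 = meaning(prog[1], atomic, states)
        r2 = meaning(prog[2], atomic, states)
        return {(s, u) for (s, t) in r1 for (t2, u) in r2 if t == t2}
    if tag == 'U':
        return meaning(prog[1], atomic, states) | meaning(prog[2], atomic, states)
    if tag == '*':
        r = meaning(prog[1], atomic, states)
        closure = {(s, s) for s in states}      # reflexive ...
        while True:
            step = {(s, u) for (s, t) in closure for (t2, u) in r if t == t2}
            if step <= closure:
                return closure                  # ... transitive closure
            closure |= step

atomic = {'a': {(0, 1)}, 'b': {(1, 2)}}
states = {0, 1, 2}
assert meaning((';', ('atom', 'a'), ('atom', 'b')), atomic, states) == {(0, 2)}
assert (0, 2) in meaning(('*', (';', ('atom', 'a'), ('atom', 'b'))), atomic, states)
```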


While Programs

Much of the literature on DL is concerned with the class of while programs (see Section 2.1). Formally, deterministic while programs form the subclass of the regular programs in which the program operators ∪, ?, and * are constrained to appear only in the forms

skip def= 1?
fail def= 0?
(25) if φ then α else β def= (φ?; α) ∪ (¬φ?; β)
(26) while φ do α def= (φ?; α)*; ¬φ?

The class of nondeterministic while programs is the same, except that we allow unrestricted use of the nondeterministic choice construct ∪. Of course, unrestricted use of the sequential composition operator is allowed in both languages. Restrictions on the form of atomic programs and tests apply as with regular programs. For example, if we are allowing only poor tests, then the φ occurring in the programs (25) and (26) must be a quantifier-free first-order formula.

The class of deterministic while programs is important because it captures the basic programming constructs common to many real-life imperative programming languages. Over the standard structure of the natural numbers N, deterministic while programs are powerful enough to define all partial recursive functions, and thus over N they are as expressive as regular programs. A similar result holds for a wide class of models similar to N, for a suitable definition of "partial recursive functions" in these models. However, it is not true in general that while programs, even nondeterministic ones, are universally expressive. We discuss these results in Section 12.

Formulas

A formula of DL is defined in a way similar to that of PDL, with the addition of a rule for quantification. Equivalently, we might say that a formula of DL is defined in a way similar to that of first-order logic, with the addition of a rule for modality. The basic version of DL is defined with regular programs:

the false formula 0 is a formula;
any atomic formula is a formula;
if φ and ψ are formulas, then φ → ψ is a formula;
if φ is a formula and x ∈ V, then ∀x φ is a formula;


if φ is a formula and α is a program, then [α]φ is a formula.

The only missing rule in the definition of the syntax of DL is the one for tests. In our basic version we would have:

if φ is a quantifier-free first-order formula, then φ? is a test.

For the rich test version, the definitions of programs and formulas are mutually dependent, and the rule defining tests is simply:

if φ is a formula, then φ? is a test.

We will use the same notation as in propositional logic: ¬φ stands for φ → 0. As in first-order logic, the existential quantifier ∃ is considered a defined construct: ∃x φ abbreviates ¬∀x ¬φ. Similarly, the modal construct ⟨ ⟩ is considered a defined construct as in Section 2.1, since it is the modal dual of [ ]. The other propositional constructs ∧, ∨, ↔ are defined as in Section 2.1. Of course, we use parentheses where necessary to ensure unique readability.

Note that the individual variables in V serve a dual purpose: they are both program variables and logical variables.

8.2 Richer Programs

Seqs and R.E. Programs

Some classes of programs are most conveniently defined as certain sets of seqs. Recall from Section 2.3 that a seq is a program of the form π₁; π₂; ⋯; πₖ, where each πᵢ is an assignment statement or a quantifier-free first-order test. Each regular program α is associated with a unique set of seqs CS(α) (Section 2.3). These definitions were made in the propositional context, but they apply equally well to the first-order case; the only difference is in the form of atomic programs and tests.

Construing the word in the broadest possible sense, we can consider a program to be an arbitrary set of seqs. Although this makes sense semantically (we can assign an input/output relation to such a set in a meaningful way), such programs can hardly be called executable. At the very least we should require that the set of seqs be recursively enumerable, so that there will be some effective procedure that can list all possible executions of a given program. However, there is a subtle issue that arises with this notion. Consider the set of seqs

{xᵢ := fⁱ(c) | i ∈ N}.

This set satisfies the above restriction, yet it can hardly be called a program. It uses infinitely many variables, and as a consequence it might change a


valuation at infinitely many places. Another pathological example is the set of seqs

{xᵢ₊₁ := f(xᵢ) | i ∈ N},

which not only could change a valuation at infinitely many locations, but also depends on infinitely many locations of the input valuation. In order to avoid such pathologies, we will require that each program use only finitely many variables. This gives rise to the following definition of r.e. programs, which is the most general family of programs we will consider. Specifically, an r.e. program α is a Turing machine that enumerates a set of seqs over a finite set of variables. The set of seqs enumerated will be called CS(α). By FV(α) we will denote the finite set of variables that occur in seqs of CS(α).

An important issue connected with r.e. programs is that of bounded memory. The assignment statements or tests in an r.e. program may have infinitely many terms with increasingly deep nesting of function symbols (although, as discussed, these terms use only finitely many variables), and these could require an unbounded amount of memory to compute. We define a set of seqs to be bounded memory if the depth of terms appearing in it is bounded. In fact, without sacrificing computational power, we could require that all terms be of the form f(x₁, ..., xₙ) in a bounded-memory set of seqs.

Arrays and Stacks

Interesting variants of the programming language we use in DL arise from allowing auxiliary data structures. We shall define versions with arrays and stacks, as well as a version with a nondeterministic assignment statement called wildcard assignment. Besides these, one can imagine augmenting while programs with many other kinds of constructs, such as blocks with declarations, recursive procedures with various parameter-passing mechanisms, higher-order procedures, concurrent processes, etc. It is easy to arrive at a family consisting of thousands of programming languages, giving rise to thousands of logics. Obviously, we have had to restrict ourselves.
It is worth mentioning, however, that certain kinds of recursive procedures are captured by our stack operations, as explained below.

Arrays

To handle arrays, we include a countable set of array variables

V_array = {F₀, F₁, ...}.

Each array variable has an associated arity, or number of arguments, which we do not represent explicitly. We assume that there are countably many


variables of each arity n ≥ 0. In the presence of array variables, we equate the set V of individual variables with the set of nullary array variables; thus V ⊆ V_array. The variables in V_array of arity n will range over n-ary functions with arguments and values in the domain of computation.

In our exposition, elements of the domain of computation play two roles: they are used both as indices into an array and as values that can be stored in an array. One might equally well introduce a separate sort for array indices; although conceptually simple, this would complicate the notation and would give no new insight.

We extend the set of first-order terms to allow the unrestricted occurrence of array variables, provided arities are respected. The classes of regular programs with arrays and deterministic and nondeterministic while programs with arrays are defined similarly to the classes without, except that we allow array assignments in addition to simple assignments. Array assignments are similar to simple assignments, but on the left-hand side we allow a term in which the outermost symbol is an array variable:

F(t₁, ..., tₙ) := t.

Here F is an n-ary array variable and t₁, ..., tₙ, t are terms, possibly involving other array variables. Note that when n = 0, this reduces to the ordinary simple assignment.

Recursion via an Algebraic Stack

We now consider DL in which the programs can manipulate a stack. The literature in automata theory and formal languages often distinguishes a stack from a pushdown store. In the former, the automaton is allowed to inspect the contents of the stack but to make changes only at the top. We shall use the term stack to denote the more common pushdown store, where the only inspection allowed is at the top of the stack.

The motivation for this extension is to be able to capture recursion. It is well known that recursive procedures can be modeled using a stack, and for various technical reasons we prefer to extend the data-manipulation capabilities of our programs rather than introduce new control constructs. When it encounters a recursive call, the stack simulation of recursion will push the return location and the values of local variables and parameters onto the stack. It will pop them upon completion of the call. The LIFO (last-in-first-out) nature of stack storage fits the order in which control executes recursive calls.

To handle the stack in our stack version of DL, we add two new atomic programs

push(t) and pop(y),


where t is a term and y ∈ V. Intuitively, push(t) pushes the current value of t onto the top of the stack, and pop(y) pops the top value off the stack and assigns that value to the variable y. If the stack is empty, the pop operation does not change anything. We could have added a test for stack emptiness, but it can be shown to be redundant. Formally, the stack is simply a finite string of elements of the domain of computation.

The classes of regular programs with stack and deterministic and nondeterministic while programs with stack are obtained by augmenting the respective classes of programs with the push and pop operations as atomic programs, in addition to simple assignments. In contrast to the case of arrays, here there is only a single stack. In fact, expressiveness changes dramatically when two or more stacks are allowed.

Also, in order to be able to simulate recursion, the domain must have at least two distinct elements so that return addresses can be properly encoded in the stack. One way of doing this is to store the return address itself in unary using one element of the domain, then store one occurrence of the second element as a delimiter symbol, followed by domain elements constituting the current values of parameters and local variables.

The kind of stack described here is often termed algebraic, since it contains elements from the domain of computation. It should be contrasted with the Boolean stack described next.

Parameterless Recursion via a Boolean Stack

An interesting special case is when the stack can contain only two distinct elements. This version of our programming language can be shown to capture recursive procedures without parameters or local variables. This is because we only need to store return addresses, but no actual data items from the domain of computation. This can be achieved using two values, as described above. We thus arrive at the idea of a Boolean stack.

To handle such a stack in this version of DL, we add three new kinds of atomic programs and one new test. The atomic programs are

push-1, push-0, pop,

and the test is simply top?. Intuitively, push-1 and push-0 push the corresponding distinct Boolean values onto the stack, pop removes the top element, and the test top? evaluates to true iff the top element of the stack is 1, with no side effect. With the test top? only, there is no explicit operator that distinguishes a stack with top element 0 from the empty stack. We might have defined such an operator, and in a more realistic language we would certainly do so. However, it is mathematically redundant, since it can be simulated with the operators we already have.
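The stack operations of both flavours can be sketched as state transformers, taking a state to be a (valuation, stack) pair; the encoding is hypothetical:

```python
# States as (valuation, stack) pairs; the stack is a tuple with its top at the end.
def push(state, value):
    vals, stk = state
    return (vals, stk + (value,))

def pop(state, y):
    vals, stk = state
    if not stk:                       # popping an empty stack changes nothing
        return state
    new_vals = dict(vals)
    new_vals[y] = stk[-1]             # assign the popped value to y
    return (new_vals, stk[:-1])

def top_test(state):
    """Boolean-stack test top?: true iff the top element is 1 (no side effect)."""
    _, stk = state
    return bool(stk) and stk[-1] == 1

s0 = ({'x': 7, 'y': 0}, ())
s1 = push(s0, 7)                      # push(x), with the current value of x being 7
assert s1 == ({'x': 7, 'y': 0}, (7,))
s2 = pop(s1, 'y')
assert s2 == ({'x': 7, 'y': 7}, ())
assert pop(s2, 'y') == s2             # empty stack: pop is a no-op
```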


Wildcard Assignment

The nondeterministic assignment x := ? is a device that arises in the study of fairness; see [Apt and Plotkin, 1986]. It has often been called random assignment in the literature, although it has nothing to do with randomness or probability. We shall call it wildcard assignment . Intuitively, it operates by assigning a nondeterministically chosen element of the domain of computation to the variable x. This construct together with the [ ] modality is similar to the rst-order universal quanti er, since it will follow from the semantics that the two formulas [x := ?]' and 8x ' are equivalent. However, wildcard assignment may appear in programs and can therefore be iterated.
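Over a finite domain, the claimed equivalence of [x := ?]φ and ∀x φ can be checked by direct enumeration; a sketch with a hypothetical three-element domain:

```python
A = {0, 1, 2}                          # hypothetical finite domain of computation

def wildcard_box(u, x, phi):
    """[x := ?]phi at valuation u: phi must hold after every choice for x."""
    return all(phi({**u, x: a}) for a in A)

def forall(u, x, phi):
    """∀x phi at valuation u, classical first-order semantics."""
    return all(phi({**u, x: a}) for a in A)

phi = lambda u: u['x'] <= u['y']       # an atomic-style formula
for y in A:
    u = {'x': 0, 'y': y}
    assert wildcard_box(u, 'x', phi) == forall(u, 'x', phi)
```

The two clauses are literally the same quantification over the domain, which is exactly the point of the equivalence; the difference emerges only once x := ? is iterated inside larger programs.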

8.3 Semantics

In this section we assign meanings to the syntactic constructs described in the previous sections. We interpret programs and formulas over a first-order structure A. Variables range over the carrier of this structure. We take an operational view of program semantics: programs change the values of variables by sequences of simple assignments x := t or other assignments, and flow of control is determined by the truth values of tests performed at various times during the computation.

States as Valuations

An instantaneous snapshot of all relevant information at any moment during the computation is determined by the values of the program variables. Thus our states will be valuations u, v, ... of the variables V over the carrier of the structure A. Our formal definition will associate the pair (u, v) of such valuations with the program α if it is possible to start in valuation u, execute the program α, and halt in valuation v. In this case, we will call (u, v) an input/output pair of α and write (u, v) ∈ m_A(α). This will result in a Kripke frame exactly as in Section 2.

Let A = (A, m_A) be a first-order structure for the vocabulary Σ. We call A the domain of computation. Here A is a set, called the carrier of A, and m_A is a meaning function such that m_A(f) : Aⁿ → A is an n-ary function interpreting the n-ary function symbol f of Σ, and m_A(r) ⊆ Aⁿ is an n-ary relation interpreting the n-ary relation symbol r of Σ. The equality symbol = is always interpreted as the identity relation.

For n ≥ 0, let Aⁿ → A denote the set of all n-ary functions in A. By convention, we take A⁰ → A = A. Let A* denote the set of all finite-length strings over A.

The structure A determines a Kripke frame, which we will also denote by A, as follows. A valuation over A is a function u assigning an n-ary function over A to each n-ary array variable. It also assigns meanings to


DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

the stacks as follows. We shall use the two unique variable names STK and BSTK to denote the algebraic stack and the Boolean stack, respectively. The valuation u assigns a finite-length string of elements of A to STK and a finite-length string of Boolean values 1 and 0 to BSTK. Formally:

u(F) ∈ Aⁿ → A, if F is an n-ary array variable;
u(STK) ∈ A*;
u(BSTK) ∈ {1, 0}*.

By our convention A⁰ → A = A, and assuming that V ⊆ V_array, the individual variables (that is, the nullary array variables) are assigned elements of A under this definition:

u(x) ∈ A if x ∈ V.

The valuation u extends uniquely to terms t by induction. For an n-ary function symbol f and an n-ary array variable F,

u(f(t₁, …, tₙ)) ≝ m_A(f)(u(t₁), …, u(tₙ))
u(F(t₁, …, tₙ)) ≝ u(F)(u(t₁), …, u(tₙ)).

The function-patching operator is defined as follows: if X and D are sets, f : X → D is any function, x ∈ X, and d ∈ D, then f[x/d] : X → D is the function defined by

f[x/d](y) ≝ d, if y = x; f(y), otherwise.

We will be using this notation in several ways, both at the logical and metalogical levels. For example:

If u is a valuation, x is an individual variable, and a ∈ A, then u[x/a] is the new valuation obtained from u by changing the value of x to a and leaving the values of all other variables intact.

If F is an n-ary array variable and f : Aⁿ → A, then u[F/f] is the new valuation that assigns the same value as u to the stack variables and to all array variables other than F, and u[F/f](F) = f.
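As a concrete illustration, the function-patching operator f[x/d] can be sketched in Python (the helper name patch is ours, not from the text):

```python
def patch(f, x, d):
    """Return f[x/d]: the function that agrees with f everywhere
    except at x, where it returns d."""
    def patched(y):
        return d if y == x else f(y)
    return patched

# Example: patch the successor function at 3.
succ = lambda n: n + 1
h = patch(succ, 3, 0)
assert h(3) == 0
assert h(4) == 5
```

The same one-point-update idea underlies u[x/a], u[F/f], and the array update f[ā/a] below.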

If f : Aⁿ → A is an n-ary function, ā = (a₁, …, aₙ) ∈ Aⁿ, and a ∈ A, then the expression f[ā/a] denotes the n-ary function that agrees with f everywhere except for input ā, on which it takes the value a. More precisely,

f[ā/a](b̄) ≝ a, if b̄ = ā; f(b̄), otherwise.

DYNAMIC LOGIC


We call valuations u and v finite variants of each other if u(F)(a₁, …, aₙ) = v(F)(a₁, …, aₙ) for all but finitely many array variables F and n-tuples (a₁, …, aₙ) ∈ Aⁿ. In other words, u and v differ on at most finitely many array variables, and for those F on which they do differ, the functions u(F) and v(F) differ on at most finitely many values. The relation "is a finite variant of" is an equivalence relation on valuations. Since a halting computation can run for only a finite amount of time, it can execute only finitely many assignments. It will therefore not be able to cross equivalence class boundaries; that is, in the binary relation semantics given below, if the pair (u, v) is an input/output pair of the program α, then v is a finite variant of u.

We are now ready to define the states of our Kripke frame. For a ∈ A, let w_a be the valuation in which the stacks are empty and all array and individual variables are interpreted as constant functions taking the value a everywhere. A state of A is any finite variant of a valuation w_a. The set of states of A is denoted S^A. Call a state initial if it differs from some w_a only at the values of individual variables.

It is meaningful, and indeed useful in some contexts, to take as states the set of all valuations. Our purpose in restricting our attention to states as defined above is to prevent arrays from being initialized with highly complex oracles that would compromise the value of the relative expressiveness results of Section 12.

Assignment Statements

As in Section 2.2, with every program α we associate a binary relation m_A(α) ⊆ S^A × S^A (called the input/output relation of α), and with every formula φ we associate a set m_A(φ) ⊆ S^A. The sets m_A(α) and m_A(φ) are defined by mutual induction on the structure of α and φ. For the basis of this inductive definition, we first give the semantics of all the assignment statements discussed earlier.

The array assignment F(t₁, …, tₙ) := t is interpreted as the binary relation

m_A(F(t₁, …, tₙ) := t) ≝ {(u, u[F/u(F)[(u(t₁), …, u(tₙ))/u(t)]]) | u ∈ S^A}.

In other words, starting in state u, the array assignment has the effect of changing the value of F on input (u(t₁), …, u(tₙ)) to u(t), and leaving the value of F on all other inputs and the values of all other variables intact. For n = 0, this definition reduces to the following definition of simple assignment:

m_A(x := t) ≝ {(u, u[x/u(t)]) | u ∈ S^A}.
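In a toy model with valuations as Python dictionaries, the input/output behavior of simple assignment can be sketched as follows (the name assign and the encoding of terms as evaluation functions are our illustrative choices, not from the text):

```python
def assign(x, t):
    """Semantics of x := t: map a valuation u to u[x/u(t)].
    The term t is encoded as a function evaluating it in a valuation."""
    def step(u):
        v = dict(u)   # copy: all other variables keep their values
        v[x] = t(u)   # x gets the value of t evaluated in the *old* state
        return v
    return step

u = {'x': 2, 'y': 5}
step = assign('x', lambda u: u['x'] + u['y'])  # x := x + y
v = step(u)
assert v == {'x': 7, 'y': 5}
assert u == {'x': 2, 'y': 5}   # the input valuation is unchanged
```

Note that the term is evaluated in the input valuation before the update, exactly as in the definition above.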

The push operations, push(t) for the algebraic stack and push-1 and push-0 for the Boolean stack, are interpreted as the binary relations

m_A(push(t)) ≝ {(u, u[STK/(u(t) · u(STK))]) | u ∈ S^A}
m_A(push-1) ≝ {(u, u[BSTK/(1 · u(BSTK))]) | u ∈ S^A}
m_A(push-0) ≝ {(u, u[BSTK/(0 · u(BSTK))]) | u ∈ S^A},

respectively. In other words, push(t) changes the value of the algebraic stack variable STK from u(STK) to the string u(t) · u(STK), the concatenation of the value u(t) with the string u(STK), and everything else is left intact. The effects of push-1 and push-0 are similar, except that the special constants 1 and 0 are concatenated with u(BSTK) instead of u(t).

The pop operations, pop(y) for the algebraic stack and pop for the Boolean stack, are interpreted as the binary relations

m_A(pop(y)) ≝ {(u, u[STK/tail(u(STK))][y/head(u(STK), u(y))]) | u ∈ S^A}
m_A(pop) ≝ {(u, u[BSTK/tail(u(BSTK))]) | u ∈ S^A},

respectively, where

tail(a·σ) ≝ σ
tail(ε) ≝ ε
head(a·σ, b) ≝ a
head(ε, b) ≝ b

and ε is the empty string. In other words, if u(STK) ≠ ε, this operation changes the value of STK from u(STK) to the string obtained by

deleting the first element of u(STK), and assigns that element to the variable y. If u(STK) = ε, then nothing is changed. Everything else is left intact. The Boolean stack operation pop changes the value of BSTK only, with no additional changes. We do not include explicit constructs to test whether the stacks are empty, since these can be simulated. However, we do need to be able to refer to the value of the top element of the Boolean stack, hence we include the top? test.
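The push/pop semantics above can be sketched over valuations-as-dictionaries, with strings modeled as Python tuples whose first component is the top (an illustrative encoding of our own, not from the text):

```python
EMPTY = ()  # the empty string ε

def push(t):
    """push(t): prepend the value of t to the algebraic stack STK."""
    def step(u):
        v = dict(u)
        v['STK'] = (t(u),) + u['STK']
        return v
    return step

def pop(y):
    """pop(y): move head(STK) into y and keep tail(STK); on the empty
    stack nothing changes, since head(ε, b) = b and tail(ε) = ε."""
    def step(u):
        v = dict(u)
        stk = u['STK']
        v['STK'] = stk[1:] if stk else EMPTY
        v[y] = stk[0] if stk else u[y]
        return v
    return step

u = {'STK': EMPTY, 'y': 0}
u = push(lambda u: 7)(u)   # STK = (7,)
u = push(lambda u: 9)(u)   # STK = (9, 7)
u = pop('y')(u)            # STK = (7,), y = 9
assert u == {'STK': (7,), 'y': 9}
```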

The Boolean test program top? is interpreted as the binary relation

m_A(top?) ≝ {(u, u) | u ∈ S^A, head(u(BSTK)) = 1}.

In other words, this test changes nothing at all, but allows control to proceed iff the top of the Boolean stack contains 1.

The wildcard assignment x := ? for x ∈ V is interpreted as the relation

m_A(x := ?) ≝ {(u, u[x/a]) | u ∈ S^A, a ∈ A}.

As a result of executing this statement, x will be assigned some arbitrary value of the carrier set A, and the values of all other variables will remain unchanged.

Programs and Formulas

The meanings of compound programs and formulas are defined by mutual induction on the structure of α and φ exactly as in the propositional case (see Section 2.2).

Seqs and R.E. Programs

Recall that an r.e. program is a Turing machine enumerating a set CS(α) of seqs. If α is an r.e. program, we define

m_A(α) ≝ ⋃_{σ ∈ CS(α)} m_A(σ).

Thus, the meaning of α is defined to be the union of the meanings of the seqs in CS(α). The meaning m_A(σ) of a seq σ is determined by the meanings of atomic programs and tests and the sequential composition operator. There is an interesting point here regarding the translation of programs using other programming constructs into r.e. programs. This can be done for arrays and stacks (for Boolean stacks, even into r.e. programs with bounded memory), but not for wildcard assignment. Since later in the book we shall be referring to the r.e. set of seqs associated with such programs, it

is important to be able to carry out this translation. To see how this is done for the case of arrays, for example, consider an algorithm for simulating the execution of a program by generating only ordinary assignments and tests. It does not generate an array assignment of the form F(t₁, …, tₙ) := t, but rather "remembers" it, and when it reaches an assignment of the form x := F(t₁, …, tₙ) it will aim at generating x := t instead. This requires care, since we must keep track of changes in the variables inside t and t₁, …, tₙ and incorporate them into the generated assignments.

Formulas

Here are the semantic definitions for the constructs of formulas of DL. The semantics of atomic first-order formulas is the standard semantics of classical first-order logic.

(27) m_A(0) ≝ ∅
(28) m_A(φ → ψ) ≝ {u | if u ∈ m_A(φ) then u ∈ m_A(ψ)}
(29) m_A(∀x φ) ≝ {u | for all a ∈ A, u[x/a] ∈ m_A(φ)}
(30) m_A([α]φ) ≝ {u | for all v, if (u, v) ∈ m_A(α) then v ∈ m_A(φ)}.

Equivalently, we could define the first-order quantifiers ∀ and ∃ in terms of the wildcard assignment:

(31) ∀x φ ↔ [x := ?]φ
(32) ∃x φ ↔ ⟨x := ?⟩φ.
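Over a finite carrier, the wildcard relation and the equivalence (31) can be checked directly. The following sketch uses our own names and a toy formula; the text itself establishes the equivalence semantically:

```python
A = {0, 1, 2}                      # a finite carrier

def wildcard(x, u):
    """m_A(x := ?): all outputs u[x/a] for a in A."""
    return [dict(u, **{x: a}) for a in A]

def box_wildcard(x, phi, u):
    """u satisfies [x := ?]phi: every wildcard output satisfies phi."""
    return all(phi(v) for v in wildcard(x, u))

def forall(x, phi, u):
    """u satisfies ∀x phi over the carrier A."""
    return all(phi(dict(u, **{x: a})) for a in A)

phi = lambda u: u['x'] <= u['y']   # an illustrative formula
for y in A:
    u = {'x': 0, 'y': y}
    assert box_wildcard('x', phi, u) == forall('x', phi, u)
```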

Note that for deterministic programs (for example, those obtained by using the while programming language instead of regular programs and disallowing wildcard assignments), m_A(α) is a partial function from states to states; that is, for every state u, there is at most one v such that (u, v) ∈ m_A(α). The partiality of the function arises from the possibility that α may not halt when started in certain states. For example, m_A(while 1 do skip) is the empty relation. In general, the relation m_A(α) need not be single-valued.

If K is a given set of syntactic constructs, we refer to the version of Dynamic Logic with programs built from these constructs as Dynamic Logic with K or simply as DL(K). Thus, we have DL(r.e.), DL(array), DL(stk), DL(bstk), DL(wild), and so on. As a default, these logics are the poor-test versions, in which only quantifier-free first-order formulas may appear as tests. The unadorned DL is used to abbreviate DL(reg), and we use DL(dreg) to denote DL with while programs, which are really deterministic regular programs. Again, while programs use only poor tests. Combinations such as DL(dreg+wild) are also allowed.
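For a concrete finite example in the style of the propositional semantics of Section 2.2, the clause (30) for [α]φ can be computed directly. The states and meanings below are our own toy data:

```python
states = {0, 1, 2, 3}
m_alpha = {(0, 1), (0, 2), (1, 3)}   # a toy input/output relation
m_phi = {1, 2, 3}                    # a toy meaning of phi

def box(m_prog, m_form):
    """m([alpha]phi) = {u | every alpha-successor of u is in m(phi)}."""
    return {u for u in states
            if all(v in m_form for (w, v) in m_prog if w == u)}

assert box(m_alpha, m_phi) == {0, 1, 2, 3}
assert box(m_alpha, {3}) == {1, 2, 3}   # state 0 has successor 1 not in {3}
```

States with no α-successors (here 2 and 3) satisfy [α]φ vacuously, which is why a diverging program satisfies every box formula.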

8.4 Satisfiability and Validity

The concepts of satisfiability, validity, etc. are defined as for PDL in Section 2 or as for first-order logic under the standard semantics. Let A = (A, m_A) be a structure, and let u be a state in S^A. For a formula φ, we write A, u ⊨ φ if u ∈ m_A(φ) and say that u satisfies φ in A. We sometimes write u ⊨ φ when A is understood. We say that φ is A-valid and write A ⊨ φ if A, u ⊨ φ for all u in A. We say that φ is valid and write ⊨ φ if A ⊨ φ for all A. We say that φ is satisfiable if A, u ⊨ φ for some A, u. For a set of formulas Γ, we write A ⊨ Γ if A ⊨ φ for all φ ∈ Γ.

Informally, A, u ⊨ [α]φ iff every terminating computation of α starting in state u terminates in a state satisfying φ, and A, u ⊨ ⟨α⟩φ iff there exists a computation of α starting in state u and terminating in a state satisfying φ. For a pure first-order formula φ, the metastatement A, u ⊨ φ has the same meaning as in first-order logic.

9 RELATIONSHIPS WITH STATIC LOGICS

9.1 Uninterpreted Reasoning

In contrast to the propositional version PDL discussed in Sections 2–7, DL formulas involve variables, functions, predicates, and quantifiers, a state is a mapping from variables to values in some domain, and atomic programs are assignment statements. To give semantic meaning to these constructs requires a first-order structure A over which to interpret the function and predicate symbols. Nevertheless, we are not obliged to assume anything special about A or the nature of the interpretations of the function and predicate symbols, except as dictated by first-order semantics. Any conclusions we draw from this level of reasoning will be valid under all possible interpretations. Uninterpreted reasoning refers to this style of reasoning. For example, the formula

p(f(x), g(y, f(x))) → ∃z p(z, g(y, z))

is true over any domain, irrespective of the interpretations of p, f, and g. Another example of a valid formula is

z = y ∧ ∀x f(g(x)) = x → [while p(y) do y := g(y)]⟨while y ≠ z do y := f(y)⟩1.

Note the use of [ ] applied to ⟨ ⟩. This formula asserts that under the assumption that f "undoes" g, any computation consisting of applying g some number of times to z can be backtracked to the original z by applying f some number of times to the result.
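An interpreted instance over the integers illustrates the assertion; the choice of f, g, and p below is ours, and any pair with f(g(x)) = x would do:

```python
# g is successor, f predecessor, so f(g(x)) = x holds for all x.
f = lambda y: y - 1
g = lambda y: y + 1
p = lambda y: y < 10           # an arbitrary loop guard

z = y = 3
while p(y):                    # the boxed program: y := g(y) while p(y)
    y = g(y)
while y != z:                  # the diamond program: y := f(y) until y = z
    y = f(y)
assert y == 3                  # the original value is restored
```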

We now observe that three basic properties of classical (uninterpreted) first-order logic, the Löwenheim–Skolem theorem, completeness, and compactness, fail for even fairly weak versions of DL. The Löwenheim–Skolem theorem for classical first-order logic states that if a formula φ has an infinite model then it has models of all infinite cardinalities. Because of this theorem, classical first-order logic cannot define the structure of elementary arithmetic

N = (ω, +, ·, 0, 1, =)

up to isomorphism. That is, there is no first-order sentence that is true in a structure A if and only if A is isomorphic to N. However, this can be done in DL.

PROPOSITION 55. There exists a formula of DL(dreg) that defines N up to isomorphism.

The Löwenheim–Skolem theorem does not hold for DL, because this formula has an infinite model (namely N), but all its models are isomorphic to N and are therefore countable. Besides the Löwenheim–Skolem theorem, compactness fails in DL as well. Consider the following countable set of formulas:

{⟨while p(x) do x := f(x)⟩1} ∪ {p(fⁿ(x)) | n ≥ 0}.

It is easy to see that this set is not satisfiable, but it is finitely satisfiable; that is, each finite subset of it is satisfiable.

Worst of all, completeness cannot hold for any deductive system as we normally think of it (a finite effective system of axiom schemes and finitary inference rules). The set of theorems of such a system would be r.e., since they could be enumerated by writing down the axioms and systematically applying the rules of inference in all possible ways. However, the set of valid statements of DL is not recursively enumerable. In fact, we will describe in Section 10 exactly how bad the situation is. This is not to say that we cannot say anything meaningful about proofs and deduction in DL. On the contrary, there is a wealth of interesting and practical results on axiom systems for DL that we will cover in Section 11.

In this section we investigate the power of DL relative to classical static logics on the uninterpreted level. In particular, rich test DL of r.e. programs is equivalent to the infinitary language L_{ω₁^CK ω}. Some consequences of this fact are drawn in later sections. First we introduce a definition that allows us to compare different variants of DL. Let us recall from Section 8.3 that a state is initial if it differs from a constant state w_a only at the values of individual variables. If DL1 and DL2 are two variants of DL over the same vocabulary, we say that DL2 is as expressive as DL1 and write DL1 ≤ DL2 if for each formula φ in DL1 there is a formula ψ in DL2 such that A, u ⊨ φ ↔ ψ for all structures A and initial

states u. If DL2 is as expressive as DL1 but DL1 is not as expressive as DL2, we say that DL2 is strictly more expressive than DL1, and write DL1 < DL2. If DL2 is as expressive as DL1 and DL1 is as expressive as DL2, we say that DL1 and DL2 are of equal expressive power, or are simply equivalent, and write DL1 ≡ DL2. We will also use these notions for comparing versions of DL with static logics such as L_{ωω}.

There is a technical reason for the restriction to initial states in the above definition. If DL1 and DL2 have access to different sets of data types, then they may be trivially incomparable for uninteresting reasons, unless we are careful to limit the states on which they are compared. We shall see examples of this in Section 12. Also, in the definition of DL(K) given in Section 8.4, the programming language K is an explicit parameter. Actually, the particular first-order vocabulary Σ over which DL(K) and K are considered should be treated as a parameter too. It turns out that the relative expressiveness of versions of DL is sensitive not only to K, but also to Σ. This second parameter is often ignored in the literature, creating a source of potential misinterpretation of the results. For now, we assume a fixed first-order vocabulary Σ.

Rich Test Dynamic Logic of R.E. Programs

We are about to introduce the most general version of DL we will ever consider. This logic is called rich test Dynamic Logic of r.e. programs, and it will be denoted DL(rich-test r.e.). Programs of DL(rich-test r.e.) are r.e. sets of seqs as defined in Section 8.2, except that the seqs may contain tests φ? for any previously constructed formula φ. The formal definition is inductive. All atomic programs are programs and all atomic formulas are formulas. If φ, ψ are formulas, α, β are programs, {αₙ | n ∈ ω} is an r.e. set of programs over a finite set of variables (free or bound), and x is a variable, then

0    φ → ψ    [α]φ    ∀x φ

are formulas and

α; β    {αₙ | n ∈ ω}    φ?

are programs. The set CS(α) of computation sequences of a rich test r.e. program α is defined as usual.

The language L_{ω₁ω} is the language with the formation rules of the first-order language L_{ωω}, but in which countably infinite conjunctions and disjunctions ⋀_{i∈I} φᵢ and ⋁_{i∈I} φᵢ are also allowed. In addition, if {φᵢ | i ∈ I} is recursively enumerable, then the resulting language is denoted L_{ω₁^CK ω} and is sometimes called constructive L_{ω₁ω}.

PROPOSITION 56. DL(rich-test r.e.) ≡ L_{ω₁^CK ω}.

Since r.e. programs as defined in Section 8.2 are clearly a special case of general rich-test r.e. programs, it follows that DL(rich-test r.e.) is as expressive as DL(r.e.). In fact they are not of the same expressive power.

THEOREM 57. DL(r.e.) < DL(rich-test r.e.).

Henceforth, we shall assume that the first-order vocabulary contains at least one function symbol of positive arity. Under this assumption, DL can easily be shown to be strictly more expressive than L_{ωω}:

THEOREM 58. L_{ωω} < DL.

COROLLARY 59. L_{ωω} < DL < DL(r.e.) < DL(rich-test r.e.) ≡ L_{ω₁^CK ω}.

The situation with the intermediate versions of DL, e.g. DL(stk), DL(bstk), DL(wild), etc., is of interest. We deal with the relative expressive power of these in Section 12.

9.2 Interpreted Reasoning

Arithmetical Structures

This is the most detailed level we will consider. It is the closest to the actual process of reasoning about concrete, fully specified programs. Syntactically, the programs and formulas are as on the uninterpreted level, but here we assume a fixed structure or class of structures. In this framework, we can study programs whose computational behavior depends on (sometimes deep) properties of the particular structures over which they are interpreted. In fact, almost any task of verifying the correctness of an actual program falls under the heading of interpreted reasoning. One specific structure we will look at carefully is the natural numbers with the usual arithmetic operations:

N = (ω, 0, 1, +, ·, =).

Let − denote the (first-order-definable) operation of subtraction, and let gcd(x, y) denote the first-order-definable operation giving the greatest common divisor of x and y. The following formula of DL is N-valid, i.e., true in

all states of N:

(33) x = x₀ ∧ y = y₀ ∧ x·y ≥ 1 → ⟨α⟩(x = gcd(x₀, y₀))

where α is the while program of Example 1 or the regular program

((x ≠ y?; ((x > y?; x := x − y) ∪ (x < y?; y := y − x)))*; x = y?).
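Run as an ordinary while program, the subtractive loop above computes the greatest common divisor; a quick Python check with test values of our own choosing:

```python
from math import gcd

def euclid(x, y):
    """The loop of the regular program: while x != y, subtract
    the smaller variable from the larger one."""
    assert x >= 1 and y >= 1
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x

assert euclid(12, 18) == gcd(12, 18) == 6
assert euclid(35, 14) == 7
```

Termination for x, y ≥ 1 follows since x + y strictly decreases at each iteration, which is exactly what the diamond in formula (33) asserts.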

Formula (33) states the correctness and termination of an actual program over N computing the greatest common divisor. As another example, consider the following formula over N:

∀x ≥ 1 ⟨while x ≠ 1 do if even(x) then x := x/2 else x := 3x + 1⟩(x = 1).

Here / denotes integer division, and even(·) is the relation that tests if its argument is even. Both of these are first-order definable. This innocent-looking formula asserts that starting with an arbitrary positive integer and repeating the following two operations, we will eventually reach 1:

if the number is even, divide it by 2; if the number is odd, triple it and add 1.
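The two operations are easy to run; whether the loop halts on every positive input is exactly the open problem, so the sketch below only checks a bounded range (the bounds are our choice):

```python
def reaches_one(x, max_steps=1000):
    """Iterate the two operations from x; report whether 1 is
    reached within max_steps steps."""
    while x != 1 and max_steps > 0:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        max_steps -= 1
    return x == 1

assert all(reaches_one(x) for x in range(1, 100))
```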

The truth of this formula is as yet unknown, and it constitutes a problem in number theory (dubbed "the 3x + 1 problem") that has been open for over 60 years. The formula ∀x ≥ 1 ⟨α⟩1, where α is

while x ≠ 1 do if even(x) then x := x/2 else x := 3x + 1,

says this in a slightly different way.

The specific structure N can be generalized, resulting in the class of arithmetical structures. Briefly, a structure A is arithmetical if it contains a first-order-definable copy of N and has first-order definable functions for coding finite sequences of elements of A into single elements and for the corresponding decoding. Arithmetical structures are important because (i) most structures arising naturally in computer science (e.g., discrete structures with recursively defined data types) are arithmetical, and (ii) any structure can be extended to an arithmetical one by adding appropriate encoding and decoding capabilities. While most of the results we present for the interpreted level are given in terms of N alone, many of them hold for any arithmetical structure, so their significance is greater.

Expressive Power over N

The results of Corollary 59 establishing that

L_{ωω} < DL < DL(r.e.) < DL(rich-test r.e.)

were on the uninterpreted level, where all structures are taken into account. Thus first-order logic, regular DL, and DL(rich-test r.e.) form a sequence of increasingly more powerful logics when interpreted uniformly over all structures. What happens if one fixes a structure, say N? Do these differences in expressive power still hold? We now address these questions. First, we introduce notation for comparing expressive power over N. If DL1 and DL2 are variants of DL (or static logics, such as L_{ωω}) and are defined over the vocabulary of N, we write DL1 ≤_N DL2 if for each φ ∈ DL1 there is ψ ∈ DL2 such that N ⊨ φ ↔ ψ. We define φ, where φ is a first-order formula.

Inference Rules

modus ponens:

    φ    φ → ψ
    ----------
        ψ

We denote provability in Axiom System S1 by ⊢_S1.

THEOREM 71. For any DL formula of the form φ → ⟨α⟩ψ, with first-order φ and ψ and program α containing first-order tests only,

⊨ φ → ⟨α⟩ψ  ⟺  ⊢_S1 φ → ⟨α⟩ψ.

Given the high undecidability of validity in DL, we cannot hope for a complete axiom system in the usual sense. Nevertheless, we do want to provide an orderly axiomatization of valid DL formulas, even if this means that we have to give up the finitary nature of standard axiom systems. Below we present a complete infinitary axiomatization S2 of DL that includes an inference rule with infinitely many premises.

Before doing so, however, we must get a certain technical complication out of the way. We would like to be able to consider valid first-order formulas as axiom schemes, but instantiated by general formulas of DL. In order to make formulas amenable to first-order manipulation, we must be able to make sense of such notions as "a free occurrence of x in φ" and the substitution φ[x/t]. For example, we would like to be able to use the axiom scheme of the predicate calculus ∀x φ → φ[x/t], even if φ contains programs. The problem arises because the dynamic nature of the semantics of DL may cause a single occurrence of a variable in a DL formula to act as both a free and bound occurrence. For example, in the formula ⟨while x ≤ 99 do x := x + 1⟩1, the occurrence of x in the expression x + 1 acts as both a free occurrence (for the first assignment) and as a bound occurrence (for subsequent assignments).

There are several reasonable ways to deal with this, and we present one for definiteness. Without loss of generality, we assume that whenever required, all programs appear in the special form

(34) ⟨z := x; α; x := z⟩φ (and similarly with [ ] in place of ⟨ ⟩)

where x = (x₁, …, xₙ) and z = (z₁, …, zₙ) are tuples of variables, z := x stands for z₁ := x₁; ⋯; zₙ := xₙ (and similarly for x := z), the xᵢ do not appear in α, and the zᵢ are new variables appearing nowhere in the relevant context outside of the program. The idea is to make programs act on the "local" variables zᵢ by first copying the values of the xᵢ into the zᵢ, thus freezing the xᵢ, executing the program with the zᵢ, and then restoring the xᵢ. This form can be easily obtained from any DL formula by consistently changing all variables of any program to new ones and adding the appropriate assignments that copy and then restore the values. Clearly, the new formula is equivalent to the old. Given a DL formula in this form, the following are bound occurrences of variables:

all occurrences of x in a subformula of the form ∃x φ;

all occurrences of zᵢ in a subformula of the form (34) (note, though, that zᵢ does not occur in φ at all);

all occurrences of xᵢ in a subformula of the form (34), except for the occurrence in the assignment zᵢ := xᵢ.

Every occurrence of a variable that is not bound is free. Our axiom system will have an axiom that enables free translation into the special form discussed, and in the sequel we assume that the special form is used whenever required (for example, in the assignment axiom scheme below). As an example, consider the formula

∀x (p(x, y)) → ⟨z₂ := f(z₁); z₁ := g(z₂, z₁); x := z₁⟩ p(x, y).

Denoting p(x, y) by φ, the conclusion of the implication is just φ[x/h(z)] according to the convention above; that is, the result of replacing all free occurrences of x in φ by h(z) after φ has been transformed into special form. We want the above formula to be considered a legal instance of the assignment axiom scheme below.

Axiom System S2

Axiom Schemes

all instances of valid first-order formulas;

all instances of valid formulas of PDL;

⟨x := t⟩φ ↔ φ[x/t];

φ ↔ φ̂, where φ̂ is φ in which some occurrence of a program α has been replaced by the program z := x; α′; x := z for z not appearing in φ, and where α′ is α with all occurrences of x replaced by z.

Inference Rules

modus ponens:

    φ    φ → ψ
    ----------
        ψ

generalization:

    φ              φ
    ----    and    ----
    [α]φ           ∀x φ

infinitary convergence:

    φ → [αⁿ]ψ, n ∈ ω
    ----------------
      φ → [α*]ψ

Provability in Axiom System S2, denoted by ⊢_S2, is the usual concept for systems with infinitary rules of inference; that is, deriving a formula using the infinitary rule requires infinitely many premises to have been previously derived. Axiom System S2 consists of an axiom for assignment, facilities for propositional reasoning about programs and first-order reasoning with no programs (but with programs possibly appearing in instantiated first-order formulas), and an infinitary rule for [α*]. The dual construct ⟨α*⟩ is taken care of by the "unfolding" validity of PDL:

⟨α*⟩φ ↔ (φ ∨ ⟨α; α*⟩φ).

THEOREM 72. For any formula φ of DL,

⊨ φ  ⟺  ⊢_S2 φ.

11.2 Interpreted Reasoning

Proving properties of real programs very often involves reasoning on the interpreted level, where one is interested in A-validity for a particular structure A. A typical proof might use induction on the length of the computation to establish an invariant for partial correctness, or to exhibit a decreasing value in some well-founded set for termination. In each case, the problem is reduced to the problem of verifying some domain-dependent facts, sometimes called verification conditions. Mathematically speaking, this kind of activity is really an effective transformation of assertions about programs into ones about the underlying structure. For DL, this transformation can be guided by a direct induction on program structure using an axiom system that is complete relative to any given arithmetical structure A. The essential idea is to exploit the existence, for any given DL formula, of a first-order equivalent in A, as guaranteed by Theorem 60. In the axiom systems we construct, instead of dealing with the Π¹₁-hardness of the validity problem by an infinitary rule, we take all A-valid first-order formulas as additional axioms. Relative to this set of axioms, proofs are finite and effective.

For partial correctness assertions of the form φ → [α]ψ, with φ and ψ first-order and α containing first-order tests, it suffices to show that DL reduces to the first-order logic L_{ωω}, and there is no need for the natural numbers to be present. Thus, Axiom System S3 below works for finite structures too. Axiom System S4 is an arithmetically complete system for full DL that does make explicit use of natural numbers.

It follows from Theorem 66 that for partial correctness formulas we cannot hope to obtain a completeness result similar to the one proved in Theorem 71 for termination formulas. A way around this difficulty is to consider only expressive structures. A structure A for the first-order vocabulary Σ is said to be expressive for a programming language K if for every α ∈ K and for every first-order formula φ, there exists a first-order formula ψ such that A ⊨ ψ ↔ [α]φ. Examples of structures that are expressive for most programming languages are finite structures and arithmetical structures.

Axiom System S3

Axiom Schemes

all instances of valid formulas of PDL;

⟨x := t⟩φ ↔ φ[x/t] for first-order φ.

Inference Rules

modus ponens:

    φ    φ → ψ
    ----------
        ψ

generalization:

    φ
    ----
    [α]φ

Note that Axiom System S3 is really the axiom system for PDL from Section 4 with the addition of the assignment axiom. Given a DL formula φ and a structure A, denote by A ⊢_S3 φ provability of φ in the system obtained from Axiom System S3 by adding the following set of axioms:

all A-valid first-order sentences.

THEOREM 73. For every expressive structure A and for every formula of DL of the form φ → [α]ψ, where φ and ψ are first-order and α involves only first-order tests, we have

A ⊨ φ → [α]ψ  ⟺  A ⊢_S3 φ → [α]ψ.

Now we present an axiom system S4 for full DL. It is similar in spirit to S3 in that it is complete relative to the formulas valid in the structure under consideration. However, this system works for arithmetical structures only. It is not tailored to deal with other expressive structures, notably finite ones, since it requires the use of the natural numbers. The kind of completeness result stated here is thus termed arithmetical. As in Section 9.2, we state the results for the special structure N, omitting the technicalities needed to deal with general arithmetical structures. The main difference is that in N we can use variables n, m, etc., knowing that their values will be natural numbers. We can thus write n + 1, for example, assuming the standard interpretation. When working in an unspecified arithmetical structure, we have to precede such usage with appropriate predicates that guarantee that we are indeed talking about that part of the domain that is isomorphic to the natural numbers. For example, we would often have to use the first-order formula, call it nat(n), which is true precisely for the elements representing natural numbers, and which exists by the definition of an arithmetical structure.

Axiom System S4

Axiom Schemes

all instances of valid first-order formulas;

all instances of valid formulas of PDL;

⟨x := t⟩φ ↔ φ[x/t] for first-order φ.

Inference Rules

modus ponens:

    φ    φ → ψ
    ----------
        ψ

generalization:

    φ              φ
    ----    and    ----
    [α]φ           ∀x φ

convergence:

    φ(n + 1) → ⟨α⟩φ(n)
    -------------------
    φ(n) → ⟨α*⟩φ(0)

for first-order φ and variable n not appearing in α.

REMARK 74. For general arithmetical structures, the +1 and 0 in the rule of convergence denote suitable first-order definitions.

As in Axiom System S3, denote by A ⊢_S4 φ provability of φ in the system obtained from Axiom System S4 by adding all A-valid first-order sentences as axioms.

THEOREM 75. For every formula φ of DL,

N ⊨ φ  ⟺  N ⊢_S4 φ.

The use of the natural numbers as a device for counting down to 0 in the convergence rule of Axiom System S4 can be relaxed. In fact, any well-founded set suitably expressible in any given arithmetical structure suffices. Also, it is not necessary to require that an execution of α causes the parameter n in the parameterized φ(n) in that rule to decrease exactly by 1; it suffices that the decrease is positive at each iteration.

In closing, we note that appropriately restricted versions of all axiom systems of this section are complete for DL(dreg). In particular, as pointed out in Section 2.6, the Hoare while-rule

    φ ∧ ψ → [α]φ
    ----------------------------
    φ → [while ψ do α](φ ∧ ¬ψ)

results from combining the generalization rule with the induction and test axioms of PDL, when α* is restricted to appear only in the context of a while statement; that is, only in the form (ψ?; α)*; (¬ψ)?.

12 EXPRESSIVENESS OF DL

The subject of study in this section is the relative expressive power of languages. We will be primarily interested in comparing, on the uninterpreted level, the expressive power of various versions of DL. That is, for programming languages P1 and P2 we will study whether DL(P1) ≤ DL(P2) holds. Recall from Section 9 that the latter relation means that for each formula φ in DL(P1), there is a formula ψ in DL(P2) such that A, u ⊨ φ ↔ ψ for all structures A and initial states u.

Studying the expressive power of logics rather than the computational power of programs allows us to compare, for example, deterministic and


nondeterministic programming languages. Also, we will see that the answer to the fundamental question "DL(P1) ≤ DL(P2)?" may depend crucially on the vocabulary over which we consider logics and programs. For this reason we always make clear in the theorems of this section our assumptions on the vocabulary.

THEOREM 76. Let the vocabulary be rich. Then

(i) DL(stk) ≤ DL(array).

(ii) DL(stk) ≡ DL(array) iff P = PSPACE.

Moreover, the same holds for deterministic regular programs with an algebraic stack and deterministic regular programs with arrays.

THEOREM 77. Over a monadic vocabulary, nondeterministic regular programs with a Boolean stack have the same computational power as nondeterministic regular programs with an algebraic stack.

Now we investigate the role that nondeterminism plays in the expressive power of logics of programs. As we shall see, the general conclusion is that for a programming language of sufficient computational power, nondeterminism does not increase the expressive power of the logic. We start our discussion of the role of nondeterminism with the basic case of regular programs. Recall that DL and DDL denote the logics of nondeterministic and deterministic regular programs, respectively. We can now state the main result that separates the expressive power of deterministic and nondeterministic while programs.

THEOREM 78. For every vocabulary containing at least two unary function symbols or at least one function symbol of arity greater than one, DDL is strictly less expressive than DL; that is, DDL < DL.

It turns out that Theorem 78 cannot be extended to vocabularies containing just one unary function symbol without solving a well known open problem in complexity theory.

THEOREM 79. For every rich mono-unary vocabulary, the statement "DDL is strictly less expressive than DL" is equivalent to LOGSPACE ≠ NLOGSPACE.

We now turn our attention to the discussion of the role nondeterminism plays in the expressive power of regular programs with a Boolean stack.
For a vocabulary containing at least two unary function symbols, nondeterminism increases the expressive power of DL over regular programs with a Boolean stack. For the rest of this section, we let the vocabulary contain two unary function symbols.

THEOREM 80. For a vocabulary containing at least two unary function symbols or a function symbol of arity greater than two, DL(dbstk) < DL(bstk).


It turns out that for programming languages that use sufficiently strong data types, nondeterminism does not increase the expressive power of Dynamic Logic.

THEOREM 81. For every vocabulary,

(i) DL(dstk) ≡ DL(stk);

(ii) DL(darray) ≡ DL(array).

We will discuss the role of unbounded memory of programs for the expressive power of the corresponding logic. However, this result depends on assumptions about the vocabulary. Recall from Section 8.2 that an r.e. program α has bounded memory if the set CS(α) contains only finitely many distinct variables from V, and if in addition the nesting of function symbols in terms that occur in seqs of CS(α) is bounded. This restriction implies that such a program can be simulated in all interpretations by a device that uses a fixed finite number of registers, say x1, …, xn, and all its elementary steps consist of either performing a test of the form

    r(xi1, …, xim)?,

where r is an m-ary relation symbol of the vocabulary, or executing a simple assignment of either of the following two forms:

    xi := f(xi1, …, xik)
    xi := xj.

In general, however, such a device may need a very powerful control (that of a Turing machine) to decide which elementary step to take next. An example of a programming language with bounded memory is the class of regular programs with a Boolean stack. Indeed, the Boolean stack strengthens the control structure of a regular program without introducing extra registers for storing algebraic elements. It can be shown without much difficulty that regular programs with a Boolean stack have bounded memory. On the other hand, regular programs with an algebraic stack or with arrays are programming languages with unbounded memory. For monadic vocabularies, the class of nondeterministic regular programs with a Boolean stack is computationally equivalent to the class of nondeterministic regular programs with an algebraic stack. For deterministic programs, the situation is slightly different.

THEOREM 82.

(i) For every vocabulary containing a function symbol of arity greater than one, DL(dbstk) < DL(dstk) and DL(bstk) < DL(stk).

(ii) For all monadic vocabularies, DL(bstk) ≡ DL(stk).

(iii) For all mono-unary vocabularies, DL(dbstk) ≡ DL(dstk).

(iv) For all monadic vocabularies containing at least two function symbols, DL(dbstk) < DL(dstk).

Regular programs with a Boolean stack are situated between pure regular programs and regular programs with an algebraic stack. We start our discussion by comparing the expressive power of regular programs with and without a Boolean stack. The only known definite answer to this problem is given in the following result, which covers the case of deterministic programs only.

THEOREM 83.

(i) Let the vocabulary be rich and mono-unary. Then DL(dreg) ≡ DL(dstk) ⟺ LOGSPACE = P.

(ii) If the vocabulary contains at least one function symbol of arity greater than one or at least two unary function symbols, then DL(dreg) < DL(dbstk).

It is not known whether Theorem 83(ii) holds for nondeterministic programs, and neither is its statement known to be equivalent to any of the well known open problems in complexity theory. In contrast, it follows from Theorems 83(i) and 82(iii) that for rich mono-unary vocabularies, DL(dreg) ≡ DL(dbstk) if and only if LOGSPACE = P. Hence, this problem cannot be solved without solving one of the major open problems in complexity theory.

The wildcard assignment statement x := ? discussed in Section 8.2 chooses an element of the domain of computation nondeterministically and assigns it to x. It is a device that represents unbounded nondeterminism as opposed to the binary nondeterminism of the nondeterministic choice construct ∪. The programming language of regular programs augmented with wildcard assignment is not an acceptable programming language, since a wildcard assignment can produce values that are outside the substructure generated by the input. Our first result shows that wildcard assignment increases the expressive power in quite a substantial way; it cannot be simulated even by r.e. programs.

THEOREM 84. Let the vocabulary contain two constants c1, c2, a binary predicate symbol p, the symbol = for equality, and no other function or predicate symbols. There is a formula of DL(wild) that is equivalent to no formula of DL(r.e.); thus DL(wild) ≰ DL(r.e.).


It is not known whether any of the logics with unbounded memory are reducible to DL(wild). When both wildcard and array assignments are allowed, it is possible to define the finiteness of (the domain of) a structure, but not in the logics with either of the additions removed. Thus, having both memory and nondeterminism unbounded provides more power than having either of them bounded.

THEOREM 85. Let the vocabulary contain only the symbol of equality. There is a formula of DL(array+wild) equivalent to no formula of either DL(array) or DL(wild).

13 VARIANTS OF DL

In this section we consider some restrictions and extensions of DL. We are interested mainly in questions of comparative expressive power on the uninterpreted level. In arithmetical structures these questions usually become trivial, since it is difficult to go beyond the power of first-order arithmetic without allowing infinitely many distinct tests in programs (see Theorems 60 and 61). In regular DL this luxury is not present.

13.1 Algorithmic Logic

Algorithmic Logic (AL) is the predecessor of Dynamic Logic. The basic system was defined by [Salwicki, 1970] and generated an extensive amount of subsequent research carried out by a group of mathematicians working in Warsaw. Two surveys of the first few years of their work can be found in [Banachowski et al., 1977] and [Salwicki, 1977]. The original version of AL allowed deterministic while programs and formulas built from the constructs

    αφ        ∪αφ        ∩αφ

corresponding in our terminology to

    <α>φ        <α*>φ        ⋀_{n∈ω} <α^n>φ,

respectively, where α is a deterministic while program and φ is a quantifier-free first-order formula. In [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b], AL was extended to allow nondeterministic while programs and the constructs

    ∇αφ        Δαφ

corresponding in our terminology to

    <α>φ        halt(α) ∧ [α]φ ∧ <α>φ,


respectively. The latter asserts that all traces of α are finite and terminate in a state satisfying φ. A feature present in AL but not in DL is the set of "dynamic terms" in addition to dynamic formulas. For a first-order term t and a deterministic while program α, the meaning of the expression αt is the value of t after executing program α. If α does not halt, the meaning is undefined. Such terms can be systematically eliminated; for example, P(x, αt) is replaced by ∃z (<α>(z = t) ∧ P(x, z)). The emphasis in the early research on AL was in obtaining infinitary completeness results, developing normal forms for programs, investigating recursive procedures with parameters, and axiomatizing certain aspects of programming using formulas of AL. As an example of the latter, the algorithmic formula

    (while s ≠ ε do s := pop(s))1

can be viewed as an axiom connected with the data structure stack. One can then investigate the consequences of such axioms within AL, regarding them as properties of the corresponding data structures. Complete infinitary deductive systems for first-order and propositional versions are given in [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b]. The infinitary completeness results for AL are usually proved by the algebraic methods of [Rasiowa and Sikorski, 1963]. [Constable, 1977], [Constable and O'Donnell, 1978] and [Goldblatt, 1982] present logics similar to AL and DL for reasoning about deterministic while programs.
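The stack axiom above can be read as a termination property of the data structure. A minimal sketch in Python (our own encoding, not AL itself; the helper name is hypothetical) checks it in the standard model of finite stacks:

```python
# A hedged illustration, not AL itself: over concrete finite stacks, the
# algorithmic formula (while s != epsilon do s := pop(s))1 holds because
# pop strictly shrinks the stack, so the loop always halts.

def pops_to_empty(s, bound=10_000):
    """Run 'while s != epsilon do s := pop(s)'; report whether it halts."""
    steps = 0
    while s:                 # s != epsilon
        s = s[:-1]           # s := pop(s)
        steps += 1
        if steps > bound:    # crude divergence cutoff
            return False
    return True

# The axiom holds in this standard model: every finite stack pops to empty.
print(all(pops_to_empty(tuple(range(n))) for n in range(100)))
```

A nonstandard "stack" with an infinite descending pop-chain would falsify the loop's termination, which is why the formula carries real information about the data structure.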

13.2 Well-Foundedness

As in Section 7 for PDL, we consider adding to DL assertions to the effect that programs can enter infinite computations. Here too, we shall be interested both in LDL and in RDL versions; i.e., those in which halt α and wf α, respectively, have been added inductively as new formulas for any program α. As mentioned there, the connection with the more common notation repeat α and loop α (from which the L and R in the names LDL and RDL derive) is by:

    loop α ⟺def ¬halt α
    repeat α ⟺def ¬wf α.

We now state some of the relevant results. The first concerns the addition of halt α:

THEOREM 86. LDL ≡ DL.

In contrast to this, we have:


THEOREM 87. LDL < RDL.

Turning to the validity problem for these extensions, clearly they cannot be any harder to decide than that of DL, which is Π¹₁-complete. However, the following result shows that detecting the absence of infinite computations of even simple uninterpreted programs is extremely hard.

THEOREM 88. The validity problems for formulas of the form φ → wf α and formulas of the form φ → halt α, for first-order φ and regular α, are both Π¹₁-complete. If α is constrained to have only first-order tests then the φ → wf α case remains Π¹₁-complete but the φ → halt α case is r.e.; that is, it is Σ⁰₁-complete.

We just mention here that the additions to Axiom System S4 of Section 11 that are used to obtain an arithmetically complete system for RDL are the axiom

    [α*](φ → <α>φ) → (φ → ¬wf α)

and the inference rule

    φ(n + 1) → [α]φ(n),  ¬φ(0)  /  φ(n) → wf α

for first-order φ and n not occurring in α.

13.3 Probabilistic Programs

There has recently been wide interest in programs that employ probabilistic moves such as coin tossing or random number draws and whose behavior is described probabilistically (for example, a program is "correct" if it does what it is meant to do with probability 1). To give one well known example, taken from [Miller, 1976] and [Rabin, 1980], there are fast probabilistic algorithms for checking primality of numbers but no known fast nonprobabilistic ones. Many synchronization problems, including digital contract signing, guaranteeing mutual exclusion, etc., are often solved by probabilistic means. This interest has prompted research into formal and informal methods for reasoning about probabilistic programs. It should be noted that such methods are also applicable for reasoning probabilistically about ordinary programs, for example, in average-case complexity analysis of a program, where inputs are regarded as coming from some set with a probability distribution. [Kozen, 1981d] provided a formal semantics for probabilistic first-order while programs with a random assignment statement x := ?. Here the term "random" is quite appropriate (contrast with Section 8.2), as the statement essentially picks an element out of some fixed distribution over the domain D. This domain is assumed to be given with an appropriate set of measurable subsets. Programs are then interpreted as measurable functions on a certain measurable product space of copies of D.


In [Feldman and Harel, 1984] a probabilistic version of first-order Dynamic Logic, Pr(DL), was investigated on the interpreted level. Kozen's semantics is extended as described below to a semantics for formulas that are closed under Boolean connectives and quantification over reals and integers and that employ terms of the form Fr(φ) for first-order φ. In addition, if α is a while program with nondeterministic assignments and φ is a formula, then {α}φ is a new formula. The semantics assumes a domain D, say the reals, with a measure space consisting of an appropriate family of measurable subsets of D. The states μ, ν, … are then taken to be the positive measures on this measure space. Terms are interpreted as functions from states to real numbers, with Fr(φ) in μ being the frequency (or simply, the measure) of φ in μ. Frequency is to positive measures as probability is to probability measures. The formula {α}φ is true in μ if φ is true in the state (i.e., measure) that results from applying α to μ in Kozen's semantics. Thus {α}φ means "after α, φ" and is the construct analogous to <α>φ of DL. For example, in Pr(DL) one can write

    Fr(1) = 1 → {α}Fr(1) ≥ p

to mean, "α halts with probability at least p." The formula

    Fr(1) = 1 → {i := 1; x := ?; while x > 1/2 do (x := ?; i := i + 1)}
        ∀n ((n ≥ 1 → Fr(i = n) = 2^(−n)) ∧ (n < 1 → Fr(i = n) = 0))

is valid in all structures in which the distribution of the random variable used in x := ? is a uniform distribution on the real interval [0, 1]. An axiom system for Pr(DL) was proved in [Feldman and Harel, 1984] to be complete relative to an extension of first-order analysis with integer variables, and for discrete probabilities first-order analysis with integer variables was shown to suffice.

14 OTHER APPROACHES

Here we discuss briefly some topics closely related to Dynamic Logic.

14.1 Logic of Effective Definitions

The Logic of Effective Definitions (LED), introduced by [Tiuryn, 1981a], was intended to study notions of computability over abstract models and to provide a universal framework for the study of logics of programs over such models. It consists of first-order logic augmented with new atomic formulas of the form α = β, where α and β are effective definitional schemes (the latter notion is due to [Friedman, 1971]):


if φ1 then t1 else if φ2 then t2 else if φ3 then t3 else if …

where the φi are quantifier-free formulas and the ti are terms over a bounded set of variables, and the function i ↦ (φi, ti) is recursive. The formula α = β is defined to be true in a state if both α and β terminate and yield the same value, or neither terminates. Model theory and infinitary completeness of LED are treated in [Tiuryn, 1981a]. Effective definitional schemes in the definition of LED can be replaced by any programming language K, giving rise to various logical formalisms. The following result, which relates LED to other logics discussed here, is proved in [Meyer and Tiuryn, 1981; Meyer and Tiuryn, 1984].

THEOREM 89. For every vocabulary, LED ≡ DL(r.e.).
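The scheme format can be made concrete with a small sketch (our own encoding, not from the text: a scheme is any computable map i ↦ (φi, ti), and a crude search bound stands in for genuine divergence):

```python
# Sketch (names ours): an effective definitional scheme presented as a
# computable enumeration i |-> (phi_i, t_i).  Evaluation in a state searches
# for the first i whose guard phi_i holds and returns the value of t_i; if
# no guard ever holds, the scheme diverges (approximated here by a bound).

def eds(scheme, state, bound=1000):
    """scheme(i, state) -> (guard_holds: bool, value)."""
    for i in range(bound):
        holds, value = scheme(i, state)
        if holds:
            return value
    return None  # no guard held within the bound: treated as divergence

# Example over the integers: "the least multiple of d that is >= x",
# with phi_i = "x <= i*d" and t_i = i*d.
def multiple_scheme(i, state):
    x, d = state
    return (x <= i * d, i * d)

print(eds(multiple_scheme, (17, 5)))   # first i with 17 <= 5i is 4, giving 20
```

Equality α = β between two such schemes, as in LED, then asserts that the two searches either both succeed with the same value or both run forever.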

14.2 Temporal Logic

Temporal Logic (TL) is an alternative application of modal logic to program specification and verification. It was first proposed as a useful tool in program verification by [Pnueli, 1977] and has since been developed by many authors in various forms. This topic is surveyed in depth in [Emerson, 1990] and [Gabbay et al., 1994]. TL differs from DL chiefly in that it is endogenous; that is, programs are not explicit in the language. Every application has a single program associated with it, and the language may contain program-specific statements such as at L, meaning "execution is currently at location L in the program." There are two competing semantics, giving rise to two different theories called linear-time and branching-time TL. In the former, a model is a linear sequence of program states representing an execution sequence of a deterministic program or a possible execution sequence of a nondeterministic or concurrent program. In the latter, a model is a tree of program states representing the space of all possible traces of a nondeterministic or concurrent program. Depending on the application and the semantics, different syntactic constructs can be chosen. The relative advantages of linear and branching time semantics are discussed in [Lamport, 1980; Emerson and Halpern, 1986; Emerson and Lei, 1987; Vardi, 1998a].


Modal constructs used in TL include

    □φ    "φ holds in all future states"
    ◇φ    "φ holds in some future state"
    ◯φ    "φ holds in the next state"
    φ until ψ    "there exists some strictly future point t at which ψ will be satisfied and all points strictly between the current state and t satisfy φ"

for linear-time logic, as well as constructs for expressing

    "for all traces starting from the present state …"
    "for some trace starting from the present state …"

for branching-time logic. Temporal logic is useful in situations where programs are not normally supposed to halt, such as operating systems, and is particularly well suited to the study of concurrency. Many classical program verification methods such as the intermittent assertions method are treated quite elegantly in this framework. Temporal logic has been most successful in providing tools for proving properties of concurrent finite state protocols, such as solutions to the dining philosophers and mutual exclusion problems, which are popular abstract versions of synchronization and resource management problems in distributed systems. The induction principle of TL takes the form:

(35) φ ∧ □(φ → ◯φ) → □φ.

Note the similarity to the PDL induction axiom (Axiom 17(viii)):

φ ∧ [α*](φ → [α]φ) → [α*]φ.

This is a classical program verification method known as inductive or invariant assertions. The operators ◯, ◇, and □ can all be defined in terms of until:

    ◯φ ⟺ ¬(0 until ¬φ)
    ◇φ ⟺ φ ∨ (1 until φ)
    □φ ⟺ φ ∧ ¬(1 until ¬φ),

but not vice versa. It has been shown in [Kamp, 1968] and [Gabbay et al., 1980] that the until operator is powerful enough to express anything that can be expressed in the first-order theory of (ω, <).
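These definitions can be checked mechanically. The following sketch (our own encoding, not from the text) represents an infinite trace as an ultimately periodic sequence of states, evaluates the strict until at position 0, and derives next, eventually, and always from it exactly as in the definitions above:

```python
# Sketch (ours): linear-time operators on ultimately periodic traces.
# A trace is (prefix, loop); state(i) follows the prefix, then cycles the loop.

def make_trace(prefix, loop):
    def state(i):
        if i < len(prefix):
            return prefix[i]
        return loop[(i - len(prefix)) % len(loop)]
    # horizon: if psi ever holds strictly after 0, it holds by this index
    return state, len(prefix) + 2 * len(loop)

def until(phi, psi, trace):
    state, horizon = trace
    for t in range(1, horizon + 1):          # strictly future point t
        if psi(state(t)):
            return all(phi(state(k)) for k in range(1, t))
    return False

def nxt(phi, trace):                         # next:  not (0 until not phi)
    return not until(lambda s: False, lambda s: not phi(s), trace)

def eventually(phi, trace):                  # phi or (1 until phi)
    return phi(trace[0](0)) or until(lambda s: True, phi, trace)

def always(phi, trace):                      # phi and not (1 until not phi)
    return phi(trace[0](0)) and not until(lambda s: True, lambda s: not phi(s), trace)

p = lambda s: 'p' in s
tr = make_trace([{'q'}, {'p'}], [{'p', 'q'}])   # q; p; then {p,q} forever

print(nxt(p, tr), eventually(p, tr), always(p, tr))
```

Note that with the strict until, 0 until ψ can only be witnessed at the immediately next point (any later witness would need the formula 0 to hold in between), which is why the first definition collapses to the next-state operator.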


14.3 Process Logic

Dynamic Logic and Temporal Logic embody markedly different approaches to reasoning about programs. This dichotomy has prompted researchers to search for an appropriate process logic that combines the best features of both. An appropriate candidate should combine the ability to reason


about programs compositionally with the ability to reason directly about the intermediate states encountered during the course of a computation. [Pratt, 1979c], [Parikh, 1978b], [Nishimura, 1980], and [Harel et al., 1982b] all suggested increasingly more powerful propositional-level formalisms in which the basic idea is to interpret formulas in traces rather than in states. In particular, [Harel et al., 1982b] present a system called Process Logic (PL), which is essentially a union of TL and test-free regular PDL. That paper proves that the satisfiability problem is decidable and gives a complete finitary axiomatization. Syntactically, we have programs α, β, … and propositions φ, ψ, … as in PDL. We have atomic symbols of each type and compound expressions built up from the operators →, 0, ;, ∪, *, ? (applied to Boolean combinations of atomic formulas only), and [ ]. In addition we have the temporal operators first and until. The temporal operators are available for expressing and reasoning about trace properties, but programs are constructed compositionally as in PDL. Other operators are defined as in PDL (see Section 2.1) except for skip, which is handled specially. Semantically, both programs and propositions are interpreted as sets of traces. We start with a Kripke frame K = (K, mK) as in Section 2.2, where K is a set of states s, t, … and the function mK interprets atomic formulas p as subsets of K and atomic programs a as binary relations on K. The temporal operators are defined as in TL. Trace models satisfy (most of) the PDL axioms. As in Section 14.2, define

    halt ⟺def ◯0
    fin ⟺def ◇halt
    inf ⟺def ¬fin,

which say that the trace is of length 0, of finite length, or of infinite length, respectively. Define two new operators [[ ]] and << >>:

    [[α]]φ ⟺def fin → [α]φ
    <<α>>φ ⟺def ¬[[α]]¬φ ⟺ fin ∧ <α>φ.

The operator * is the same as in PDL. It can be shown that the two PDL axioms

    φ ∧ [α][α*]φ ↔ [α*]φ
    φ ∧ [α*](φ → [α]φ) → [α*]φ

hold by establishing that

    ⋃_{n≥0} mK(α^n) = mK(α^0) ∪ (mK(α) ∘ ⋃_{n≥0} mK(α^n))
                    = mK(α^0) ∪ ((⋃_{n≥0} mK(α^n)) ∘ mK(α)).


As mentioned, the version of PL of [Harel et al., 1982b] is decidable (but, it seems, in nonelementary time only) and complete. It has also been shown that if we restrict the semantics to include only finite traces (not a necessary restriction for obtaining the results above), then PL is no more expressive than PDL. Translations of PL structures into PDL structures have also been investigated, making possible an elementary time decision procedure for deterministic PL; see [Halpern, 1982; Halpern, 1983]. An extension of PL in which first and until are replaced by regular operators on formulas has been shown to be decidable but nonelementary in [Harel et al., 1982b]. This logic perhaps comes closer to the desired objective of a powerful decidable logic of traces with natural syntactic operators that is closed under attachment of regular programs to formulas.

14.4 The μ-Calculus

The μ-calculus was suggested as a formalism for reasoning about programs in [Scott and de Bakker, 1969] and was further developed in [Hitchcock and Park, 1972], [Park, 1976], and [de Bakker, 1980]. The heart of the approach is μ, the least fixpoint operator, which captures the notions of iteration and recursion. The calculus was originally defined as a first-order-level formalism, but propositional versions have become popular. The μ operator binds relation variables. If φ(X) is a logical expression with a free relation variable X, then the expression μX.φ(X) represents the least X such that φ(X) = X, if such an X exists. For example, the reflexive transitive closure R* of a binary relation R is the least binary relation containing R and closed under reflexivity and transitivity; this would be expressed in the first-order μ-calculus as

(36) R* =def μX(x, y).(x = y ∨ ∃z (R(x, z) ∧ X(z, y))).

This should be read as, "the least binary relation X(x, y) such that either x = y or x is related by R to some z such that z and y are already related by X." This captures the usual fixpoint formulation of reflexive transitive closure. The formula (36) can be regarded either as a recursive program computing R* or as an inductively defined assertion that is true of a pair (x, y) iff that pair is in the reflexive transitive closure of R. The existence of a least fixpoint is not guaranteed except under certain restrictions. Indeed, the formula ¬X has no fixpoint, therefore μX.¬X does not exist. Typically, one restricts the application of the binding operator μX to formulas that are positive or syntactically monotone in X; that is, those formulas in which every free occurrence of X occurs in the scope of an even number of negations. This implies that the relation operator X ↦ φ(X) is (semantically) monotone, which by the Knaster–Tarski theorem ensures the existence of a least fixpoint.
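The reading of (36) as a recursive program can be made literal by Knaster–Tarski iteration: start from the empty relation and apply the body of the formula until it stabilizes. A small sketch over a finite domain (our own encoding, not from the text):

```python
# Sketch (ours): computing R* as the least fixpoint of the body of (36),
#   X |-> {(x,y) : x = y, or exists z with R(x,z) and X(z,y)},
# by iterating from the empty relation until a fixpoint is reached.

def lfp(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

domain = {1, 2, 3, 4}
R = {(1, 2), (2, 3)}

def body(X):
    return frozenset({(x, y) for x in domain for y in domain
                      if x == y or any((x, z) in R and (z, y) in X
                                       for z in domain)})

rtc = lfp(body)
print(sorted(rtc))   # identity pairs plus (1,2), (2,3), (1,3)
```

Because the body is monotone (X occurs under no negation), the iteration climbs a finite lattice and must stop; here it stabilizes after three applications.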


The first-order μ-calculus can define all sets definable by first-order induction and more. In particular, it can capture the input/output relation of any program built from any of the DL programming constructs we have discussed. Since the first-order μ-calculus also admits first-order quantification, it is easily seen to be as powerful as DL. It was shown by [Park, 1976] that finiteness is not definable in the first-order μ-calculus with the monotonicity restriction, but well-foundedness is. Thus this version of the μ-calculus is independent of L^{CK}_{ω₁ω} (and hence of DL(r.e.)) in expressive power. Well-foundedness of a binary relation R can be written

    ∀x (μX(x).∀y (R(y, x) → X(y))).

A more severe syntactic restriction on the binding operator μX is to allow its application only to formulas that are syntactically continuous in X; that is, those formulas in which X does not occur free in the scope of any negation or any universal quantifier. It can be shown that this syntactic restriction implies semantic continuity, so the least fixpoint is the union of ⊥, φ(⊥), φ(φ(⊥)), … . As shown in [Park, 1976], this version is strictly weaker than L^{CK}_{ω₁ω}. In [Pratt, 1981a] and [Kozen, 1982; Kozen, 1983], propositional versions of the μ-calculus were introduced. The latter version consists of propositional modal logic with a least fixpoint operator. It is the most powerful logic of its type, subsuming all known variants of PDL, game logic of [Parikh, 1983], various forms of temporal logic (see Section 14.2), and other seemingly stronger forms of the μ-calculus ([Vardi and Wolper, 1986b]). In the following presentation we focus on this version, since it has gained fairly widespread acceptance; see [Kozen, 1984; Kozen and Parikh, 1983; Streett, 1985b; Streett and Emerson, 1984; Vardi and Wolper, 1986b; Walukiewicz, 1993; Walukiewicz, 1995; Walukiewicz, 2000; Stirling, 1992; Mader, 1997; Kaivola, 1997]. The language of the propositional μ-calculus, also called the modal μ-calculus, is syntactically simpler than PDL. It consists of the usual propositional constructs → and 0, atomic modalities [a], and the least fixpoint operator μ. A greatest fixpoint operator ν dual to μ can be defined:

    νX.φ(X) ⟺def ¬μX.¬φ(¬X).

Variables are monadic, and the μ operator may be applied only to syntactically monotone formulas. As discussed above, this ensures monotonicity of the corresponding set operator. The language is interpreted over Kripke frames in which atomic propositions are interpreted as sets of states and atomic programs are interpreted as binary relations on states. The propositional μ-calculus subsumes PDL. For example, the PDL formula <a*>φ for atomic a can be written μX.(φ ∨ <a>X). The formula


μX.<a>[b]X, which expresses the existence of a forced win for the first player in a two-player game, and the formula μX.[a]X, which expresses well-foundedness and is equivalent to wf a (see Section 7), are both inexpressible in PDL, as shown in [Streett, 1981; Kozen, 1981c]. [Niwinski, 1984] has shown that even with the addition of the halt construct, PDL is strictly less expressive than the μ-calculus. The propositional μ-calculus satisfies a finite model theorem, as first shown in [Kozen, 1988]. Progressively better decidability results were obtained in [Kozen and Parikh, 1983; Vardi and Stockmeyer, 1985; Vardi, 1985b], culminating in a deterministic exponential-time algorithm of [Emerson and Jutla, 1988] based on an automata-theoretic lemma of [Safra, 1988]. Since the μ-calculus subsumes PDL, it is EXPTIME-complete. In [Kozen, 1982; Kozen, 1983], an axiomatization of the propositional μ-calculus was proposed and conjectured to be complete. The axiomatization consists of the axioms and rules of propositional modal logic, plus the axiom

    φ[X/μX.φ] → μX.φ

and rule

    φ[X/ψ] → ψ  /  μX.φ → ψ

for μ. Completeness of this deductive system for a syntactically restricted subset of formulas was shown in [Kozen, 1982; Kozen, 1983]. Completeness for the full language was proved by [Walukiewicz, 1995; Walukiewicz, 2000]. This was quickly followed by simpler alternative proofs by [Ambler et al., 1995; Bonsangue and Kwiatkowska, 1995; Hartonas, 1998]. [Bradfield, 1996] showed that the alternating μ/ν hierarchy (least/greatest fixpoints) is strict. An interesting open question is the complexity of model checking: does a given formula of the propositional μ-calculus hold in a given state of a given Kripke frame? Although some progress has been made (see [Bhat and Cleaveland, 1996; Cleaveland, 1996; Emerson and Lei, 1986; Sokolsky and Smolka, 1994; Stirling and Walker, 1989]), it is still unknown whether this problem has a polynomial-time algorithm.
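Whatever the exact complexity, naive model checking by fixpoint iteration is straightforward and runs in time polynomial in the frame for any fixed formula. A sketch of such a toy checker (our own code, not a published algorithm) evaluates the two formulas just discussed:

```python
# Sketch (ours): naive fixpoint model checking over a finite Kripke frame.
# mu iterates its body from the empty set; diamond and box are the
# possibility/necessity preimage operators along the transition relation a.

states = {0, 1, 2, 3}
a = {(0, 1), (1, 2), (3, 3)}                    # state 3 has a self-loop

def diamond(S):   # <a>S: some a-successor lies in S
    return {s for s in states if any((s, t) in a and t in S for t in states)}

def box(S):       # [a]S: every a-successor lies in S
    return {s for s in states if all(t in S for t in states if (s, t) in a)}

def mu(body):
    S = set()
    while body(S) != S:
        S = body(S)
    return S

phi = {2}                                       # states satisfying phi

reach = mu(lambda X: phi | diamond(X))          # <a*>phi as muX.(phi or <a>X)
wf = mu(lambda X: box(X))                       # wf_a as muX.[a]X

print(sorted(reach), sorted(wf))
```

On this frame the iteration finds that φ is reachable from 0, 1, 2 and that exactly those states admit no infinite a-path, while state 3, sitting on its self-loop, satisfies neither formula.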
The propositional μ-calculus has become a popular system for the specification and verification of properties of transition systems, where it has had some practical impact ([Steffen et al., 1996]). Several recent papers on model checking work in this context; see [Bhat and Cleaveland, 1996; Cleaveland, 1996; Emerson and Lei, 1986; Sokolsky and Smolka, 1994; Stirling and Walker, 1989]. A comprehensive introduction can be found in [Stirling, 1992].

14.5 Kleene Algebra

Kleene algebra (KA) is the algebra of regular expressions. It is named for the mathematician S. C. Kleene (1909–1994), who among his many other


achievements invented regular expressions and proved their equivalence to finite automata in [Kleene, 1956]. Kleene algebra has appeared in various guises and under many names in relational algebra [Ng, 1984; Ng and Tarski, 1977], semantics and logics of programs [Kozen, 1981b; Pratt, 1988], automata and formal language theory [Kuich, 1987; Kuich and Salomaa, 1986], and the design and analysis of algorithms [Aho et al., 1975; Tarjan, 1981; Mehlhorn, 1984; Iwano and Steiglitz, 1990; Kozen, 1991b]. As discussed in Section 13, Kleene algebra plays a prominent role in dynamic algebra as an algebraic model of program behavior. Beginning with the monograph of [Conway, 1971], many authors have contributed over the years to the development of the algebraic theory; see [Backhouse, 1975; Krob, 1991; Kleene, 1956; Kuich and Salomaa, 1986; Hopkins and Kozen, 1999; Sakarovitch, 1987; Kozen, 1990; Bloom and Esik, 1999]. See also [Kozen, 1996] for further references. A Kleene algebra is an algebraic structure (K, +, ·, *, 0, 1) satisfying the axioms

         α + (β + γ) = (α + β) + γ
         α + β = β + α
         α + 0 = α + α = α
         α(βγ) = (αβ)γ
         1α = α1 = α
         α(β + γ) = αβ + αγ
         (α + β)γ = αγ + βγ
         0α = α0 = 0
    (37) 1 + αα* = α*
    (38) β + αγ ≤ γ → α*β ≤ γ
    (39) β + γα ≤ γ → βα* ≤ γ

where ≤ refers to the natural partial order on K:

    α ≤ β ⟺def α + β = β.

In short, a KA is an idempotent semiring under +, ·, 0, 1 such that α*β is the least solution to β + αx ≤ x and βα* is the least solution to β + xα ≤ x. The axioms (37)–(39) say essentially that * behaves like the asterate operator on sets of strings or reflexive transitive closure on binary relations. This particular axiomatization is from [Kozen, 1991a; Kozen, 1994a], but there are other competing ones. The axioms (38) and (39) correspond to the reflexive transitive closure rule (RTC) of PDL (Section 2.5). Instead, we might postulate the equivalent axioms

    (40) αγ ≤ γ → α*γ ≤ γ
    (41) γα ≤ γ → γα* ≤ γ,

which correspond to the loop invariance rule (LI). The induction axiom (IND) is inexpressible in KA, since there is no negation. A Kleene algebra is *-continuous if it satisfies the infinitary condition

    (42) α* = sup_{n≥0} α^n,

where

    α^0 =def 1        α^{n+1} =def α^n α

and where the supremum is with respect to the natural order ≤. We can think of (42) as a conjunction of the infinitely many axioms α^n ≤ α*, n ≥ 0, and the infinitary Horn formula

    (⋀_{n≥0} α^n ≤ δ) → α* ≤ δ.

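In a finite relational model the supremum in (42) is attained after finitely many powers, so α* can be computed by accumulating powers until the union stabilizes. The following Python sketch illustrates this; the universe U and the relation alpha are invented for the example:

```python
def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def star(r, universe):
    """alpha* = sup_{n>=0} alpha^n: union the powers until they stabilize."""
    result = {(u, u) for u in universe}   # alpha^0 = 1 (the identity relation)
    power = set(result)
    while True:
        power = compose(power, r)          # alpha^(n+1) = alpha^n ; alpha
        if power <= result:                # no new pairs: the sup is reached
            return result
        result |= power

U = {0, 1, 2, 3}
alpha = {(0, 1), (1, 2), (2, 3)}
print(sorted(star(alpha, U)))
```

The loop terminates because the accumulated union grows inside the finite set U × U, and the result coincides with the reflexive transitive closure of alpha.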
In the presence of the other axioms, the *-continuity condition (42) implies (38)–(41) and is strictly stronger in the sense that there exist Kleene algebras that are not *-continuous [Kozen, 1990]. The fundamental motivating example of a Kleene algebra is the family of regular sets of strings over a finite alphabet, but other classes of structures share the same equational theory, notably the binary relations on a set. In fact it is the latter interpretation that makes Kleene algebra a suitable choice for modeling programs in dynamic algebras. Other more unusual interpretations are the (min, +) algebra used in shortest path algorithms (see [Aho et al., 1975; Tarjan, 1981; Mehlhorn, 1984; Kozen, 1991b]) and KAs of convex polyhedra used in computational geometry as described in [Iwano and Steiglitz, 1990].

Axiomatization of the equational theory of the regular sets is a central question going back to the original paper of [Kleene, 1956]. A completeness theorem for relational algebras was given in an extended language by [Ng, 1984; Ng and Tarski, 1977]. Axiomatization is a central focus of the monograph of [Conway, 1971], but the bulk of his treatment is infinitary. [Redko, 1964] proved that there is no finite equational axiomatization. Schematic equational axiomatizations for the algebra of regular sets, necessarily representing infinitely many equations, have been given by [Krob, 1991] and [Bloom and Esik, 1993]. [Salomaa, 1966] gave two finitary complete axiomatizations that are sound for the regular sets but not sound in general over other standard interpretations, including relational interpretations. The axiomatization given above is a finitary universal Horn axiomatization that


DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

is sound and complete for the equational theory of standard relational and language-theoretic models, including the regular sets [Kozen, 1991a; Kozen, 1994a]. Other work on completeness appears in [Krob, 1991; Boffa, 1990; Boffa, 1995; Archangelsky, 1992].

The literature contains a bewildering array of inequivalent definitions of Kleene algebras and related algebraic structures; see [Conway, 1971; Pratt, 1988; Pratt, 1990; Kozen, 1981b; Kozen, 1991a; Aho et al., 1975; Mehlhorn, 1984; Kuich, 1987; Kozen, 1994b]. As demonstrated in [Kozen, 1990], many of these are strongly related. One important property shared by most of them is closure under the formation of n × n matrices. This was proved for the axiomatization above in [Kozen, 1991a; Kozen, 1994a], but the idea essentially goes back to [Kleene, 1956; Conway, 1971; Backhouse, 1975]. This result gives rise to an algebraic treatment of finite automata in which the automata are represented by their transition matrices.

The equational theory of Kleene algebra is PSPACE-complete [Stockmeyer and Meyer, 1973]; thus it is apparently less complex than PDL, which is EXPTIME-complete (Theorem 21), although the strict separation of the two complexity classes is still open.

Kleene Algebra with Tests

From a practical standpoint, many simple program manipulations such as loop unwinding and basic safety analysis do not require the full power of PDL, but can be carried out in a purely equational subsystem using the axioms of Kleene algebra. However, tests are an essential ingredient, since they are needed to model conventional programming constructs such as conditionals and while loops and to handle assertions. This motivates the definition of the following variant of KA introduced in [Kozen, 1996; Kozen, 1997b].

A Kleene algebra with tests (KAT) is a Kleene algebra with an embedded Boolean subalgebra. Formally, it is a two-sorted algebra

(K, B, +, ·, *, ¯, 0, 1)

such that

(K, +, ·, *, 0, 1) is a Kleene algebra;
(B, +, ·, ¯, 0, 1) is a Boolean algebra;
B ⊆ K.

The unary negation operator ¯ is defined only on B. Elements of B are called tests and are written φ, ψ, .... Elements of K (including elements of B) are written α, β, .... In PDL, a test would be written φ?, but in KAT we dispense with the symbol ?.


This deceptively concise definition actually carries a lot of information. The operators +, ·, 0, 1 each play two roles: applied to arbitrary elements of K, they refer to nondeterministic choice, composition, fail, and skip, respectively; and applied to tests, they take on the additional meaning of Boolean disjunction, conjunction, falsity, and truth, respectively. These two usages do not conflict; for example, sequential testing of two tests is the same as testing their conjunction. Their coexistence admits considerable economy of expression.

For applications in program verification, the standard interpretation would be a Kleene algebra of binary relations on a set and the Boolean algebra of subsets of the identity relation. One could also consider trace models, in which the Kleene elements are sets of traces (sequences of states) and the Boolean elements are sets of states (traces of length 0). As with KA, one can form the algebra of n × n matrices over a KAT (K, B); the Boolean elements of this structure are the diagonal matrices over B.

KAT can express conventional imperative programming constructs such as conditionals and while loops as in PDL. It can perform elementary program manipulation such as loop unwinding, constant propagation, and basic safety analysis in a purely equational manner. The applicability of KAT and related equational systems in practical program verification has been explored in [Cohen, 1994a; Cohen, 1994b; Cohen, 1994c; Kozen, 1996; Kozen and Patron, 2000].

There is a language-theoretic model that plays the same role in KAT that the regular sets play in KA, namely the algebra of regular sets of guarded strings, and a corresponding completeness result was obtained by [Kozen and Smith, 1996]. Moreover, KAT is complete for the equational theory of relational models, as shown in [Kozen and Smith, 1996].
Although less expressive than PDL, KAT is also apparently less difficult to decide: it is PSPACE-complete, the same as KA, as shown in [Cohen et al., 1996].

In [Kozen, 1999a], it is shown that KAT subsumes propositional Hoare Logic in the following sense. The partial correctness assertion {φ} α {ψ} is encoded in KAT as the equation φαψ̄ = 0, or equivalently φα = φαψ. If a rule

{φ₁} α₁ {ψ₁}, ..., {φₙ} αₙ {ψₙ}
{φ} α {ψ}

is derivable in propositional Hoare Logic, then its translation, the universal Horn formula

φ₁α₁ψ̄₁ = 0 ∧ ⋯ ∧ φₙαₙψ̄ₙ = 0 → φαψ̄ = 0,

is a theorem of KAT. For example, the while rule of Hoare Logic (see Section 2.6) becomes

φψαψ̄ = 0 → ψ(φα)*φ̄ψ̄ = 0.


More generally, all relationally valid Horn formulas of the form

γ₁ = 0 ∧ ⋯ ∧ γₙ = 0 → α = β

are theorems of KAT [Kozen, 1999a].

Horn formulas are important from a practical standpoint. For example, commutativity conditions are used to model the idea that the execution of certain instructions does not affect the result of certain tests. In light of this, the complexity of the universal Horn theory of KA and KAT are of interest. There are both positive and negative results. It is shown in [Kozen, 1997c] that for a Horn formula Φ → φ over *-continuous Kleene algebras,

if Φ contains only commutativity conditions αβ = βα, the universal Horn theory is Π⁰₁-complete;

if Φ contains only monoid equations, the problem is Π⁰₂-complete;

for arbitrary finite sets of equations Φ, the problem is Π¹₁-complete.

On the other hand, commutativity assumptions of the form φα = αφ, where φ is a test, and assumptions of the form γ = 0 can be eliminated without loss of efficiency, as shown in [Cohen, 1994a; Kozen and Smith, 1996]. Note that assumptions of this form are all we need to encode Hoare Logic as described above.

In typed Kleene algebra introduced in [Kozen, 1998; Kozen, 1999b], elements have types s → t. This allows Kleene algebras of nonsquare matrices, among other applications. It is shown in [Kozen, 1999b] that Hoare Logic is subsumed by the type calculus of typed KA augmented with a typecast or coercion rule for tests. Thus Hoare-style reasoning with partial correctness assertions reduces to typechecking in a relatively simple type system.

14.6 Dynamic Algebra

Dynamic algebra provides an abstract algebraic framework that relates to PDL as Boolean algebra relates to propositional logic. A dynamic algebra is defined to be any two-sorted algebraic structure (K, B, ·), where B = (B, →, 0) is a Boolean algebra, K = (K, +, ·, *, 0, 1) is a Kleene algebra (see Section 14.5), and · : K × B → B is a scalar multiplication (written by juxtaposition) satisfying algebraic constraints corresponding to the dual forms of the PDL axioms (Axioms 17). For example, all dynamic algebras satisfy the equations

(αβ)φ = α(βφ)
α0 = 0
0φ = 0
α(φ ∨ ψ) = αφ ∨ αψ,


which correspond to the PDL validities

<α;β>φ ↔ <α><β>φ
<α>0 ↔ 0
<0?>φ ↔ 0
<α>(φ ∨ ψ) ↔ <α>φ ∨ <α>ψ,

respectively. The Boolean algebra B is an abstraction of the formulas of PDL and the Kleene algebra K is an abstraction of the programs. The interaction of scalar multiplication with iteration can be axiomatized in a finitary or infinitary way. One can postulate

(43) α*φ ≤ φ ∨ α*(¬φ ∧ αφ)

corresponding to the diamond form of the PDL induction axiom (Axiom 17(viii)). Here φ ≤ ψ in B iff φ ∨ ψ = ψ. Alternatively, one can postulate the stronger axiom of *-continuity:

(44) α*φ = sup_n (α^n φ).

We can think of (44) as a conjunction of the infinitely many axioms α^n φ ≤ α*φ, n ≥ 0, and the infinitary Horn formula

(⋀_{n≥0} α^n φ ≤ ψ) → α*φ ≤ ψ.

In the presence of the other axioms, (44) implies (43) [Kozen, 1980b], and is strictly stronger in the sense that there are dynamic algebras that are not *-continuous [Pratt, 1979a].

A standard Kripke frame K = (U, m_K) of PDL gives rise to a *-continuous dynamic algebra consisting of a Boolean algebra of subsets of U and a Kleene algebra of binary relations on U. Operators are interpreted as in PDL, including 0 as 0? (the empty program), 1 as 1? (the identity program), and αφ as <α>φ. Nonstandard Kripke frames (see Section 3.2) also give rise to dynamic algebras, but not necessarily *-continuous ones.

A dynamic algebra is separable if any pair of distinct Kleene elements can be distinguished by some Boolean element; that is, if α ≠ β, then there exists φ ∈ B with αφ ≠ βφ. Research directions in this area include the following.

Representation theory. It is known that any separable dynamic algebra is isomorphic to some possibly nonstandard Kripke frame. Under certain conditions, "possibly nonstandard" can be replaced by "standard," but not in general, even for *-continuous algebras [Kozen, 1980b; Kozen, 1979c; Kozen, 1980a].


Algebraic methods in PDL. The small model property (Theorem 15) and completeness (Theorem 18) for PDL can be established by purely algebraic considerations [Pratt, 1980a].

Comparative study of alternative axiomatizations of *. For example, it is known that separable dynamic algebras can be distinguished from standard Kripke frames by a first-order formula, but even Lω₁ω cannot distinguish the latter from *-continuous separable dynamic algebras [Kozen, 1981b].

Equational theory of dynamic algebras. Many seemingly unrelated models of computation share the same equational theory, namely that of dynamic algebras [Pratt, 1979b; Pratt, 1979a].
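A standard Kripke frame yields a concrete *-continuous dynamic algebra in which scalar multiplication is the preimage operator αφ = <α>φ. The Python sketch below, with invented states and relations, checks the dual-form equations that all dynamic algebras satisfy:

```python
U = {0, 1, 2, 3}

def compose(r, s):
    """Relational composition, interpreting program sequencing."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def diamond(r, p):
    """Scalar multiplication alpha phi = <alpha>phi: states with an r-successor in p."""
    return {u for (u, v) in r if v in p}

alpha = {(0, 1), (1, 2), (2, 2)}
beta = {(1, 3), (2, 0)}
phi = {0, 3}
psi = {2}

assert diamond(compose(alpha, beta), phi) == diamond(alpha, diamond(beta, phi))  # (ab)f = a(bf)
assert diamond(alpha, set()) == set()                                            # a0 = 0
assert diamond(set(), phi) == set()                                              # 0f = 0
assert diamond(alpha, phi | psi) == diamond(alpha, phi) | diamond(alpha, psi)    # a(f v g) = af v ag
```

Since U is finite, this model is automatically *-continuous: the supremum in (44) is a finite union of preimages.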

In addition, many interesting questions arise from the algebraic viewpoint, and interesting connections with topology, classical algebra, and model theory have been made [Kozen, 1979b; Nemeti, 1980].

15 BIBLIOGRAPHICAL NOTES

Systematic program verification originated with the work of [Floyd, 1967] and [Hoare, 1969]. Hoare Logic was introduced in [Hoare, 1969]; see [Cousot, 1990; Apt, 1981; Apt and Olderog, 1991] for surveys. The digital abstraction, the view of computers as state transformers that operate by performing a sequence of discrete and instantaneous primitive steps, can be attributed to [Turing, 1936]. Finite-state transition systems were defined formally by [McCulloch and Pitts, 1943]. State-transition semantics is based on this idea and is quite prevalent in early work on program semantics and verification; see [Hennessy and Plotkin, 1979]. The relational-algebraic approach taken here, in which programs are interpreted as binary input/output relations, was introduced in the context of DL by [Pratt, 1976]. The notions of partial and total correctness were present in the early work of [Hoare, 1969]. Regular programs were introduced by [Fischer and Ladner, 1979] in the context of PDL. The concept of nondeterminism was introduced in the original paper of [Turing, 1936], although he did not develop the idea. Nondeterminism was further developed by [Rabin and Scott, 1959] in the context of finite automata. [Burstall, 1974] suggested using modal logic for reasoning about programs, but it was not until the work of [Pratt, 1976], prompted by a suggestion of R. Moore, that it was actually shown how to extend modal logic in a useful way by considering a separate modality for every program. The first research devoted to propositional reasoning about programs seems to be that of [Fischer and Ladner, 1977; Fischer and Ladner, 1979] on PDL. As


mentioned in the Preface, the general use of logical systems for reasoning about programs was suggested by [Engeler, 1967]. Other semantics besides Kripke semantics have been studied; see [Berman, 1979; Nishimura, 1979; Kozen, 1979b; Trnkova and Reiterman, 1980; Kozen, 1980b; Pratt, 1979b]. Modal logic has many applications and a vast literature; good introductions can be found in [Hughes and Cresswell, 1968; Chellas, 1980]. Alternative and iterative guarded commands were studied in [Gries, 1981]. Partial correctness assertions and the Hoare rules given in Section 2.6 were first formulated by [Hoare, 1969]. Regular expressions, on which the regular program operators are based, were introduced by [Kleene, 1956]. Their algebraic theory was further investigated by [Conway, 1971]. They were first applied in the context of DL by [Fischer and Ladner, 1977; Fischer and Ladner, 1979]. The axiomatization of PDL given in Axioms 17 was formulated by [Segerberg, 1977]. Tests and converse were investigated by various authors; see [Peterson, 1978; Berman, 1978; Berman and Paterson, 1981; Streett, 1981; Streett, 1982; Vardi, 1985b]. The continuity of the diamond operator in the presence of reverse is due to [Trnkova and Reiterman, 1980]. The filtration argument and the small model property for PDL are due to [Fischer and Ladner, 1977; Fischer and Ladner, 1979]. Nonstandard Kripke frames for PDL were studied by [Berman, 1979; Berman, 1982], [Parikh, 1978a], [Pratt, 1979a; Pratt, 1980a], and [Kozen, 1979c; Kozen, 1979b; Kozen, 1980a; Kozen, 1980b; Kozen, 1981b]. The axiomatization of PDL used here (Axiom System 17) was introduced by [Segerberg, 1977]. Completeness was shown independently by [Gabbay, 1977] and [Parikh, 1978a]. A short and easy-to-follow proof is given in [Kozen and Parikh, 1981]. Completeness is also treated in [Pratt, 1978; Pratt, 1980a; Berman, 1979; Nishimura, 1979; Kozen, 1981a].
The exponential-time lower bound for PDL was established by [Fischer and Ladner, 1977; Fischer and Ladner, 1979] by showing how PDL formulas can encode computations of linear-space-bounded alternating Turing machines. Deterministic exponential-time algorithms were first given in [Pratt, 1978; Pratt, 1979b; Pratt, 1980b]. Theorem 24, showing that the problem of deciding whether Φ ⊨ φ, where Φ is a fixed r.e. set of PDL formulas, is Π¹₁-complete, is due to [Meyer et al., 1981]. The computational difficulty of the validity problem for nonregular PDL and the borderline between the decidable and undecidable were discussed in [Harel et al., 1983]. The fact that any nonregular program adds expressive power to PDL, Theorem 25, first appeared explicitly in [Harel and Singerman, 1996]. Theorem 26 on the undecidability of context-free PDL was observed by [Ladner, 1977].


Theorems 27 and 28 are from [Harel et al., 1983]. An alternative proof of Theorem 28 using tiling is supplied in [Harel, 1985]; see [Harel et al., 2000]. The existence of a primitive recursive one-letter extension of PDL that is undecidable was shown already in [Harel et al., 1983], but undecidability for the particular case of a^{2^n}, Theorem 29, is from [Harel and Paterson, 1984]. Theorem 30 is from [Harel and Singerman, 1996]. As to decidable extensions, Theorem 31 was proved in [Koren and Pnueli, 1983]. The more general results of Section 6.2, namely Theorems 32, 33, and 34, are from [Harel and Raz, 1993], as is the notion of a simple-minded PDA. The decidability of emptiness for pushdown and stack automata on trees that is needed for the proofs of these is from [Harel and Raz, 1994]. A better bound on the complexity of the emptiness results can be found in [Peng and Iyer, 1995]. A sufficient condition for PDL with the addition of a program over a single letter alphabet not to have the finite model property is given in [Harel and Singerman, 1996]. Completeness and exponential time decidability for DPDL, Theorem 40 and the upper bound of Theorem 41, are proved in [Ben-Ari et al., 1982] and [Valiev, 1980]. The lower bound of Theorem 41 is from [Parikh, 1981]. Theorems 43 and 44 on SDPDL are from [Halpern and Reif, 1981; Halpern and Reif, 1983]. That tests add to the power of PDL is proved in [Berman and Paterson, 1981]. It is also known that the test-depth hierarchy is strict [Berman, 1978; Peterson, 1978] and that rich-test PDL is strictly more expressive than poor-test PDL [Peterson, 1978; Berman, 1978; Berman and Paterson, 1981]. These results also hold for SDPDL. The results on programs as automata (Theorems 45 and 46) appear in [Pratt, 1981b]. Alternative proofs are given in [Harel and Sherman, 1985]; see [Harel et al., 2000].
In recent years, the development of the automata-theoretic approach to logics of programs has prompted renewed inquiry into the complexity of automata on infinite objects, with considerable success. See [Courcoubetis and Yannakakis, 1988; Emerson, 1985; Emerson and Jutla, 1988; Emerson and Sistla, 1984; Manna and Pnueli, 1987; Muller et al., 1988; Pecuchet, 1986; Safra, 1988; Sistla et al., 1987; Streett, 1982; Vardi, 1985a; Vardi, 1985b; Vardi, 1987; Vardi and Stockmeyer, 1985; Vardi and Wolper, 1986b; Vardi and Wolper, 1986a; Arnold, 1997a; Arnold, 1997b]; and [Thomas, 1997]. Especially noteworthy in this area is the result of [Safra, 1988] involving the complexity of converting a nondeterministic automaton on infinite strings into an equivalent deterministic one. This result has already had a significant impact on the complexity of decision procedures for several logics of programs; see [Courcoubetis and Yannakakis, 1988; Emerson and Jutla, 1988; Emerson and Jutla, 1989]; and [Safra, 1988].


Intersection of programs was studied in [Harel et al., 1982a]. That the axioms for converse yield completeness for CPDL is proved in [Parikh, 1978a]. The complexity of PDL with converse and various forms of well-foundedness constructs is studied in [Vardi, 1985b]. Many authors have studied logics with a least-fixpoint operator, both on the propositional and first-order levels ([Scott and de Bakker, 1969; Hitchcock and Park, 1972; Park, 1976; Pratt, 1981a; Kozen, 1982; Kozen, 1983; Kozen, 1988; Kozen and Parikh, 1983; Niwinski, 1984; Streett, 1985a; Vardi and Stockmeyer, 1985]). The version of the propositional μ-calculus presented here was introduced in [Kozen, 1982; Kozen, 1983]. That the propositional μ-calculus is strictly more expressive than PDL with wf was shown in [Niwinski, 1984] and [Streett, 1985a]. That this logic is strictly more expressive than PDL with halt was shown in [Harel and Sherman, 1982]. That this logic is strictly more expressive than PDL was shown in [Streett, 1981]. The wf construct (actually its complement, repeat) is investigated in [Streett, 1981; Streett, 1982], in which Theorems 48 (which is actually due to Pratt) and 50–52 are proved. The halt construct (actually its complement, loop) was introduced in [Harel and Pratt, 1978] and Theorem 49 is from [Harel and Sherman, 1982]. Finite model properties for the logics LPDL, RPDL, CLPDL, CRPDL, and the propositional μ-calculus were established in [Streett, 1981; Streett, 1982] and [Kozen, 1988]. Decidability results were obtained in [Streett, 1981; Streett, 1982; Kozen and Parikh, 1983; Vardi and Stockmeyer, 1985]; and [Vardi, 1985b]. Deterministic exponential-time completeness was established in [Emerson and Jutla, 1988] and [Safra, 1988]. For the strongest variant, CRPDL, exponential-time decidability follows from [Vardi, 1998b]. Concurrent PDL is defined and studied in [Peleg, 1987b].
Additional versions of this logic, which employ various mechanisms for communication among the concurrent parts of a program, are considered in [Peleg, 1987c; Peleg, 1987a]. These papers contain many results concerning expressive power, decidability and undecidability for concurrent PDL with communication. Other work on PDL not described here includes work on nonstandard models, studied in [Berman, 1979; Berman, 1982] and [Parikh, 1981]; PDL with Boolean assignments, studied in [Abrahamson, 1980]; and restricted forms of the consequence problem, studied in [Parikh, 1981].

First-order DL was defined in [Harel et al., 1977], where it was also first named Dynamic Logic. That paper was carried out as a direct continuation of the original work of [Pratt, 1976]. Many variants of DL were defined in [Harel, 1979]. In particular, DL(bstk) is very close to the context-free Dynamic Logic investigated there. Uninterpreted reasoning in the form of program schematology has been a common activity ever since the work of [Ianov, 1960]. It was given considerable impetus by the work of [Luckham et al., 1970] and [Paterson and Hewitt, 1970]; see also [Greibach, 1975]. The study of the correctness of interpreted programs goes back to the work of Turing and von Neumann, but seems to have become a well-defined area of research following [Floyd, 1967], [Hoare, 1969] and [Manna, 1974]. Embedding logics of programs in Lω₁ω is based on observations of [Engeler, 1967]. Theorem 57 is from [Meyer and Parikh, 1981]. Theorem 60 is from [Harel, 1979] (see also [Harel, 1984] and [Harel and Kozen, 1984]); it is similar to the expressiveness result of [Cook, 1978]. Theorem 61 and Corollary 62 are from [Harel and Kozen, 1984]. Arithmetical structures were first defined by [Moschovakis, 1974] under the name acceptable structures. In the context of logics of programs, they were reintroduced and studied in [Harel, 1979]. The Π¹₁-completeness of DL was first proved by Meyer, and Theorem 63 appears in [Harel et al., 1977]. An alternative proof is given in [Harel, 1985]; see [Harel et al., 2000]. Theorem 65 is from [Meyer and Halpern, 1982]. That the fragment of DL considered in Theorem 66 is not r.e., was proved by [Pratt, 1976]. Theorem 67 follows from [Harel and Kozen, 1984]. The name "spectral complexity" was proposed by [Tiuryn, 1986], although the main ideas and many results concerning this notion were already present in [Tiuryn and Urzyczyn, 1983] (see [Tiuryn and Urzyczyn, 1988] for the full version). This notion is an instance of the so-called second-order spectrum of a formula. First-order spectra were investigated by [Sholz, 1952], from which originates the well-known spectral problem. The reader can find more about this problem and related results in the survey paper by [Borger, 1984]. The notion of a natural chain is from [Urzyczyn, 1983]. The results presented here are from [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1988].
A result similar to Theorem 69 in the area of finite model theory was obtained by [Sazonov, 1980] and independently by [Gurevich, 1983]. Higher-order stacks were introduced in [Engelfriet, 1983] to study complexity classes. Higher-order arrays and stacks in DL were considered by [Tiuryn, 1986], where a strict hierarchy within the class of elementary recursive sets was established. The main tool used in the proof of the strictness of this hierarchy is a generalization of Cook's auxiliary pushdown automata theorem for higher-order stacks, which is due to [Kowalczyk et al., 1987]. [Meyer and Halpern, 1982] showed completeness for termination assertions (Theorem 71). Infinitary completeness for DL (Theorem 72) is based upon a similar result for Algorithmic Logic (see Section 13.1) by [Mirkowska, 1971]. The proof sketch presented in [Harel et al., 2000] is an adaptation of Henkin's proof for Lω₁ω appearing in [Keisler, 1971]. The notion of relative completeness and Theorem 73 are due to [Cook, 1978]. The notion of arithmetical completeness and Theorem 75 is from [Harel, 1979].


The use of invariants to prove partial correctness and of well-founded sets to prove termination are due to [Floyd, 1967]. An excellent survey of such methods and the corresponding completeness results appears in [Apt, 1981]. Some contrasting negative results are contained in [Clarke, 1979], [Lipton, 1977], and [Wand, 1978]. Many of the results on relative expressiveness presented herein answer questions posed in [Harel, 1979]. Similar uninterpreted research, comparing the expressive power of classes of programs (but detached from any surrounding logic) has taken place under the name comparative schematology quite extensively ever since [Ianov, 1960]; see [Greibach, 1975] and [Manna, 1974]. Theorems 76, 79 and 83(i) result as an application of the so-called spectral theorem, which connects expressive power of logics with complexity classes. This theorem was obtained by [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1984; Tiuryn and Urzyczyn, 1988]. A simplified framework for this approach and a statement of this theorem together with a proof is given in [Harel et al., 2000]. Theorem 78 appears in [Berman et al., 1982] and was proved independently in [Stolboushkin and Taitslin, 1983]. An alternative proof is given in [Tiuryn, 1989]. These results extend in a substantial way an earlier and much simpler result for the case of regular programs without equality in the vocabulary, which appears in [Halpern, 1981]. A simpler proof of the special case of the quantifier-free fragment of the logic of regular programs appears in [Meyer and Winklmann, 1982]. Theorem 79 is from [Tiuryn and Urzyczyn, 1984]. Theorem 80 is from [Stolboushkin, 1983]. The proof, as in the case of regular programs (see [Stolboushkin and Taitslin, 1983]), uses Adian's result from group theory ([Adian, 1979]). Results on the expressive power of DL with deterministic while programs and a Boolean stack can be found in [Stolboushkin, 1983; Kfoury, 1985].
Theorem 81 is from [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1988]. [Erimbetov, 1981; Tiuryn, 1981b; Tiuryn, 1984; Kfoury, 1983; Kfoury and Stolboushkin, 1997] contain results on the expressive power of DL over programming languages with bounded memory. [Erimbetov, 1981] shows that DL(dreg) < DL(dstk). The main proof technique is pebble games on finite trees. Theorem 83 is from [Urzyczyn, 1987]. There is a different proof of this result, using Adian structures, which appears in [Stolboushkin, 1989]. Theorem 77 is from [Urzyczyn, 1988], which also studies programs with Boolean arrays. Wildcard assignments were considered in [Harel et al., 1977] under the name nondeterministic assignments. Theorem 84 is from [Meyer and Winklmann, 1982]. Theorem 85 is from [Meyer and Parikh, 1981].


In our exposition of the comparison of the expressive power of logics, we have made the assumption that programs use only quantifier-free first-order tests. It follows from the results of [Urzyczyn, 1986] that allowing full first-order tests in many cases results in increased expressive power. [Urzyczyn, 1986] also proves that adding array assignments to nondeterministic r.e. programs increases the expressive power of the logic. This should be contrasted with the result of [Meyer and Tiuryn, 1981; Meyer and Tiuryn, 1984] to the effect that for deterministic r.e. programs, array assignments do not increase expressive power. [Makowsky, 1980] considers a weaker notion of equivalence between logics common in investigations in abstract model theory, whereby models are extended with interpretations for additional predicate symbols. With this notion it is shown in [Makowsky, 1980] that most of the versions of logics of programs treated here become equivalent. Algorithmic logic was introduced by [Salwicki, 1970]. [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b] extended AL to allow nondeterministic while programs and studied the operators ∇ and ◊. Complete infinitary deductive systems for propositional and first-order versions were given by [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b] using the algebraic methods of [Rasiowa and Sikorski, 1963]. Surveys of early work in AL can be found in [Banachowski et al., 1977; Salwicki, 1977]. [Constable, 1977; Constable and O'Donnell, 1978; Goldblatt, 1982] presented logics similar to AL and DL for reasoning about deterministic while programs. Nonstandard Dynamic Logic was introduced by [Nemeti, 1981] and [Andreka et al., 1982a; Andreka et al., 1982b] and studied in [Csirmaz, 1985]. See [Makowsky and Sain, 1986] for more information and further references.
The halt construct (actually its complement, loop) was introduced in [Harel and Pratt, 1978], and the wf construct (actually its complement, repeat) was investigated for PDL in [Streett, 1981; Streett, 1982]. Theorem 86 is from [Meyer and Winklmann, 1982], Theorem 87 is from [Harel and Peleg, 1985], Theorem 88 is from [Harel, 1984], and the axiomatizations of LDL and PDL are discussed in [Harel, 1979; Harel, 1984]. Dynamic algebra was introduced in [Kozen, 1980b] and [Pratt, 1979b] and studied by numerous authors; see [Kozen, 1979c; Kozen, 1979b; Kozen, 1980a; Kozen, 1981b; Pratt, 1979a; Pratt, 1980a; Pratt, 1988; Nemeti, 1980; Trnkova and Reiterman, 1980]. A survey of the main results appears in [Kozen, 1979a]. The PhD thesis [Ramshaw, 1981] contains an engaging introduction to the subject of probabilistic semantics and verification. [Kozen, 1981d] provided a formal semantics for probabilistic programs. The logic Pr(DL) was presented in [Feldman and Harel, 1984], along with a deductive system that is complete for Kozen's semantics relative to an extension of first-order analysis. Various propositional versions of probabilistic DL have been proposed


in [Reif, 1980; Makowsky and Tiomkin, 1980; Feldman, 1984; Parikh and Mahoney, 1983; Kozen, 1985]. The temporal approach to probabilistic verification has been studied in [Lehmann and Shelah, 1982; Hart et al., 1982; Courcoubetis and Yannakakis, 1988; Vardi, 1985a]. Interest in the subject of probabilistic verification has undergone a recent revival; see [Morgan et al., 1999; Segala and Lynch, 1994; Hansson and Jonsson, 1994; Jou and Smolka, 1990; Baier and Kwiatkowska, 1998; Huth and Kwiatkowska, 1997; Blute et al., 1997]. Concurrent DL is defined and studied in [Peleg, 1987b]. Additional versions of this logic, which employ various mechanisms for communication among the concurrent parts of a program, are also considered in [Peleg, 1987c; Peleg, 1987a].

David Harel
The Weizmann Institute of Science, Rehovot, Israel

Dexter Kozen
Cornell University, Ithaca, New York

Jerzy Tiuryn
The University of Warsaw, Warsaw, Poland

BIBLIOGRAPHY

[Abrahamson, 1980] K. Abrahamson. Decidability and expressiveness of logics of processes. PhD thesis, Univ. of Washington, 1980.
[Adian, 1979] S. I. Adian. The Burnside Problem and Identities in Groups. Springer-Verlag, 1979.
[Aho et al., 1975] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1975.
[Ambler et al., 1995] S. Ambler, M. Kwiatkowska, and N. Measor. Duality and the completeness of the modal μ-calculus. Theor. Comput. Sci., 151(1):3–27, November 1995.
[Andreka et al., 1982a] H. Andreka, I. Nemeti, and I. Sain. A complete logic for reasoning about programs via nonstandard model theory, part I. Theor. Comput. Sci., 17:193–212, 1982.
[Andreka et al., 1982b] H. Andreka, I. Nemeti, and I. Sain. A complete logic for reasoning about programs via nonstandard model theory, part II. Theor. Comput. Sci., 17:259–278, 1982.
[Apt and Olderog, 1991] K. R. Apt and E.-R. Olderog. Verification of Sequential and Concurrent Programs. Springer-Verlag, 1991.
[Apt and Plotkin, 1986] K. R. Apt and G. Plotkin. Countable nondeterminism and random assignment. J. Assoc. Comput. Mach., 33:724–767, 1986.
[Apt, 1981] K. R. Apt. Ten years of Hoare's logic: a survey—part I. ACM Trans. Programming Languages and Systems, 3:431–483, 1981.
[Archangelsky, 1992] K. V. Archangelsky. A new finite complete solvable quasiequational calculus for algebra of regular languages. Manuscript, Kiev State University, 1992.
[Arnold, 1997a] A. Arnold. An initial semantics for the μ-calculus on trees and Rabin's complementation lemma. Technical report, University of Bordeaux, 1997.
[Arnold, 1997b] A. Arnold. The μ-calculus on trees and Rabin's complementation theorem. Technical report, University of Bordeaux, 1997.
[Backhouse, 1975] R. C. Backhouse. Closure Algorithms and the Star-Height Problem of Regular Languages. PhD thesis, Imperial College, London, U.K., 1975.
[Backhouse, 1986] R. C. Backhouse. Program Construction and Verification. Prentice-Hall, 1986.
[Baier and Kwiatkowska, 1998] C. Baier and M. Kwiatkowska. On the verification of qualitative properties of probabilistic processes under fairness constraints. Information Processing Letters, 66(2):71–79, April 1998.
[Banachowski et al., 1977] L. Banachowski, A. Kreczmar, G. Mirkowska, H. Rasiowa, and A. Salwicki. An introduction to algorithmic logic: metamathematical investigations in the theory of programs. In Mazurkiewicz and Pawlak, editors, Math. Found. Comput. Sci., pages 7–99. Banach Center, Warsaw, 1977.
[Ben-Ari et al., 1982] M. Ben-Ari, J. Y. Halpern, and A. Pnueli. Deterministic propositional dynamic logic: finite models, complexity and completeness. J. Comput. Syst. Sci., 25:402–417, 1982.
[Berman and Paterson, 1981] F. Berman and M. Paterson. Propositional dynamic logic is weaker without tests. Theor. Comput. Sci., 16:321–328, 1981.
[Berman et al., 1982] P. Berman, J. Y. Halpern, and J. Tiuryn. On the power of nondeterminism in dynamic logic. In Nielsen and Schmidt, editors, Proc. 9th Colloq. Automata Lang. Prog., volume 140 of Lect. Notes in Comput. Sci., pages 48–60. Springer-Verlag, 1982.
[Berman, 1978] F. Berman. Expressiveness hierarchy for PDL with rich tests. Technical Report 78-11-01, Comput. Sci. Dept., Univ. of Washington, 1978.
[Berman, 1979] F. Berman. A completeness technique for D-axiomatizable semantics. In Proc. 11th Symp. Theory of Comput., pages 160–166. ACM, 1979.
[Berman, 1982] F. Berman. Semantics of looping programs in propositional dynamic logic. Math. Syst. Theory, 15:285–294, 1982.
[Bhat and Cleaveland, 1996] G. Bhat and R. Cleaveland. Efficient local model checking for fragments of the modal μ-calculus. In T. Margaria and B. Steffen, editors, Proc. Second Int. Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lect. Notes in Comput. Sci., pages 107–112. Springer-Verlag, March 1996.
[Bloom and Ésik, 1992] S. L. Bloom and Z. Ésik. Program correctness and matricial iteration theories. In Proc. 7th Int. Conf. Mathematical Foundations of Programming Semantics, volume 598 of Lecture Notes in Computer Science, pages 457–476. Springer-Verlag, 1992.
[Bloom and Ésik, 1993] S. L. Bloom and Z. Ésik. Equational axioms for regular sets. Math. Struct. Comput. Sci., 3:1–24, 1993.
[Blute et al., 1997] R. Blute, J. Desharnais, A. Edalat, and P. Panangaden. Bisimulation for labeled Markov processes. In Proc. 12th Symp. Logic in Comput. Sci., pages 149–158. IEEE, 1997.
[Boffa, 1990] M. Boffa. Une remarque sur les systèmes complets d'identités rationnelles. Informatique Théorique et Applications/Theoretical Informatics and Applications, 24(4):419–423, 1990.
[Boffa, 1995] Maurice Boffa. Une condition impliquant toutes les identités rationnelles. Informatique Théorique et Applications/Theoretical Informatics and Applications, 29(6):515–518, 1995.
[Bonsangue and Kwiatkowska, 1995] M. Bonsangue and M. Kwiatkowska. Reinterpreting the modal μ-calculus. In A. Ponse, M. de Rijke, and Y. Venema, editors, Modal Logic and Process Algebra, pages 65–83. CSLI Lecture Notes, August 1995.
[Börger, 1984] E. Börger. Spectralproblem and completeness of logical decision problems. In E. Börger, G. Hasenjaeger, and D. Rödding, editors, Logic and Machines: Decision Problems and Complexity, Proceedings, volume 171 of Lect. Notes in Comput. Sci., pages 333–356. Springer-Verlag, 1984.
[Bradfield, 1996] J. C. Bradfield. The modal μ-calculus alternation hierarchy is strict. In U. Montanari and V. Sassone, editors, Proc. CONCUR'96, volume 1119 of Lect. Notes in Comput. Sci., pages 233–246. Springer, 1996.
[Burstall, 1974] R. M. Burstall. Program proving as hand simulation with a little induction. Information Processing, pages 308–312, 1974.
[Chandra et al., 1981] A. Chandra, D. Kozen, and L. Stockmeyer. Alternation. J. Assoc. Comput. Mach., 28(1):114–133, 1981.
[Chellas, 1980] B. F. Chellas. Modal Logic: An Introduction. Cambridge University Press, 1980.
[Clarke, 1979] E. M. Clarke. Programming language constructs for which it is impossible to obtain good Hoare axiom systems. J. Assoc. Comput. Mach., 26:129–147, 1979.
[Cleaveland, 1996] R. Cleaveland. Efficient model checking via the equational μ-calculus. In Proc. 11th Symp. Logic in Comput. Sci., pages 304–312. IEEE, July 1996.
[Cohen et al., 1996] Ernie Cohen, Dexter Kozen, and Frederick Smith. The complexity of Kleene algebra with tests. Technical Report 96-1598, Computer Science Department, Cornell University, July 1996.
[Cohen, 1994a] E. Cohen. Hypotheses in Kleene algebra. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, April 1994.
[Cohen, 1994b] E. Cohen. Lazy caching. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, 1994.
[Cohen, 1994c] E. Cohen. Using Kleene algebra to reason about concurrency control. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, 1994.
[Constable and O'Donnell, 1978] R. L. Constable and M. O'Donnell. A Programming Logic. Winthrop, 1978.
[Constable, 1977] R. L. Constable. On the theory of programming logics. In Proc. 9th Symp. Theory of Comput., pages 269–285. ACM, May 1977.
[Conway, 1971] J. H. Conway. Regular Algebra and Finite Machines. Chapman and Hall, London, 1971.
[Cook, 1978] S. A. Cook. Soundness and completeness of an axiom system for program verification. SIAM J. Comput., 7:70–80, 1978.
[Courcoubetis and Yannakakis, 1988] C. Courcoubetis and M. Yannakakis. Verifying temporal properties of finite-state probabilistic programs. In Proc. 29th Symp. Foundations of Comput. Sci., pages 338–345. IEEE, October 1988.
[Cousot, 1990] P. Cousot. Methods and logics for proving programs. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, pages 841–993. Elsevier, Amsterdam, 1990.
[Csirmaz, 1985] L. Csirmaz. A completeness theorem for dynamic logic. Notre Dame J. Formal Logic, 26:51–60, 1985.
[Davis et al., 1994] M. D. Davis, R. Sigal, and E. J. Weyuker. Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. Academic Press, 1994.
[de Bakker, 1980] J. de Bakker. Mathematical Theory of Program Correctness. Prentice-Hall, 1980.
[Emerson and Halpern, 1985] E. A. Emerson and J. Y. Halpern. Decision procedures and expressiveness in the temporal logic of branching time. J. Comput. Syst. Sci., 30(1):1–24, 1985.
[Emerson and Halpern, 1986] E. A. Emerson and J. Y. Halpern. "Sometimes" and "not never" revisited: on branching vs. linear time temporal logic. J. ACM, 33(1):151–178, 1986.
[Emerson and Jutla, 1988] E. A. Emerson and C. Jutla. The complexity of tree automata and logics of programs. In Proc. 29th Symp. Foundations of Comput. Sci., pages 328–337. IEEE, October 1988.
[Emerson and Jutla, 1989] E. A. Emerson and C. Jutla. On simultaneously determinizing and complementing ω-automata. In Proc. 4th Symp. Logic in Comput. Sci. IEEE, June 1989.
[Emerson and Lei, 1986] E. A. Emerson and C.-L. Lei. Efficient model checking in fragments of the propositional μ-calculus. In Proc. 1st Symp. Logic in Comput. Sci., pages 267–278. IEEE, June 1986.
[Emerson and Lei, 1987] E. A. Emerson and C.-L. Lei. Modalities for model checking: branching time strikes back. Sci. Comput. Programming, 8:275–306, 1987.
[Emerson and Sistla, 1984] E. A. Emerson and P. A. Sistla. Deciding full branching-time logic. Infor. and Control, 61:175–201, 1984.
[Emerson, 1985] E. A. Emerson. Automata, tableaux, and temporal logics. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 79–88. Springer-Verlag, 1985.
[Emerson, 1990] E. A. Emerson. Temporal and modal logic. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B: formal models and semantics, pages 995–1072. Elsevier, 1990.
[Engeler, 1967] E. Engeler. Algorithmic properties of structures. Math. Syst. Theory, 1:183–195, 1967.
[Engelfriet, 1983] J. Engelfriet. Iterated pushdown automata and complexity classes. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, pages 365–373, Boston, Massachusetts, 1983.
[Erimbetov, 1981] M. M. Erimbetov. On the expressive power of programming logics. In Proc. Alma-Ata Conf. Research in Theoretical Programming, pages 49–68, 1981. In Russian.
[Feldman and Harel, 1984] Y. A. Feldman and D. Harel. A probabilistic dynamic logic. J. Comput. Syst. Sci., 28:193–215, 1984.
[Feldman, 1984] Y. A. Feldman. A decidable propositional dynamic logic with explicit probabilities. Infor. and Control, 63:11–38, 1984.
[Fischer and Ladner, 1977] M. J. Fischer and R. E. Ladner. Propositional modal logic of programs. In Proc. 9th Symp. Theory of Comput., pages 286–294. ACM, 1977.
[Fischer and Ladner, 1979] M. J. Fischer and R. E. Ladner. Propositional dynamic logic of regular programs. J. Comput. Syst. Sci., 18(2):194–211, 1979.
[Floyd, 1967] R. W. Floyd. Assigning meanings to programs. In Proc. Symp. Appl. Math., volume 19, pages 19–31. AMS, 1967.
[Friedman, 1971] H. Friedman. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. In Gandy and Yates, editors, Logic Colloq. 1969, pages 361–390. North-Holland, 1971.
[Gabbay et al., 1980] D. Gabbay, A. Pnueli, S. Shelah, and J. Stavi. On the temporal analysis of fairness. In Proc. 7th Symp. Princip. Prog. Lang., pages 163–173. ACM, 1980.
[Gabbay et al., 1994] D. Gabbay, I. Hodkinson, and M. Reynolds. Temporal Logic: Mathematical Foundations and Computational Aspects. Oxford University Press, 1994.
[Gabbay, 1977] D. Gabbay. Axiomatizations of logics of programs. Unpublished, 1977.
[Goldblatt, 1982] R. Goldblatt. Axiomatising the Logic of Computer Programming, volume 130 of Lect. Notes in Comput. Sci. Springer-Verlag, 1982.
[Goldblatt, 1987] R. Goldblatt. Logics of time and computation. Technical Report Lect. Notes 7, Center for the Study of Language and Information, Stanford Univ., 1987.
[Greibach, 1975] S. Greibach. Theory of Program Structures: Schemes, Semantics, Verification, volume 36 of Lecture Notes in Computer Science. Springer-Verlag, 1975.
[Gries, 1981] D. Gries. The Science of Programming. Springer-Verlag, 1981.
[Gurevich, 1983] Yu. Gurevich. Algebras of feasible functions. In 24th IEEE Annual Symposium on Foundations of Computer Science, pages 210–214, 1983.
[Halpern and Reif, 1981] J. Y. Halpern and J. H. Reif. The propositional dynamic logic of deterministic, well-structured programs. In Proc. 22nd Symp. Found. Comput. Sci., pages 322–334. IEEE, 1981.
[Halpern and Reif, 1983] J. Y. Halpern and J. H. Reif. The propositional dynamic logic of deterministic, well-structured programs. Theor. Comput. Sci., 27:127–165, 1983.
[Halpern, 1981] J. Y. Halpern. On the expressive power of dynamic logic II. Technical Report TM-204, MIT/LCS, 1981.
[Halpern, 1982] J. Y. Halpern. Deterministic process logic is elementary. In Proc. 23rd Symp. Found. Comput. Sci., pages 204–216. IEEE, 1982.
[Halpern, 1983] J. Y. Halpern. Deterministic process logic is elementary. Infor. and Control, 57(1):56–89, 1983.
[Hansson and Jonsson, 1994] H. Hansson and B. Jonsson. A logic for reasoning about time and probability. Formal Aspects of Computing, 6:512–535, 1994.
[Harel and Kozen, 1984] D. Harel and D. Kozen. A programming language for the inductive sets, and applications. Information and Control, 63(1–2):118–139, 1984.
[Harel and Paterson, 1984] D. Harel and M. S. Paterson. Undecidability of PDL with L = {a^{2^i} | i ≥ 0}. J. Comput. Syst. Sci., 29:359–365, 1984.
[Harel and Peleg, 1985] D. Harel and D. Peleg. More on looping vs. repeating in dynamic logic. Information Processing Letters, 20:87–90, 1985.
[Harel and Pratt, 1978] D. Harel and V. R. Pratt. Nondeterminism in logics of programs. In Proc. 5th Symp. Princip. Prog. Lang., pages 203–213. ACM, 1978.
[Harel and Raz, 1993] D. Harel and D. Raz. Deciding properties of nonregular programs. SIAM J. Comput., 22:857–874, 1993.
[Harel and Raz, 1994] D. Harel and D. Raz. Deciding emptiness for stack automata on infinite trees. Information and Computation, 113:278–299, 1994.
[Harel and Sherman, 1982] D. Harel and R. Sherman. Looping vs. repeating in dynamic logic. Infor. and Control, 55:175–192, 1982.
[Harel and Sherman, 1985] D. Harel and R. Sherman. Propositional dynamic logic of flowcharts. Infor. and Control, 64:119–135, 1985.
[Harel and Singerman, 1996] D. Harel and E. Singerman. More on nonregular PDL: Finite models and Fibonacci-like programs. Information and Computation, 128:109–118, 1996.
[Harel et al., 1977] D. Harel, A. R. Meyer, and V. R. Pratt. Computability and completeness in logics of programs. In Proc. 9th Symp. Theory of Comput., pages 261–268. ACM, 1977.
[Harel et al., 1982a] D. Harel, A. Pnueli, and M. Vardi. Two dimensional temporal logic and PDL with intersection. Unpublished, 1982.
[Harel et al., 1982b] D. Harel, D. Kozen, and R. Parikh. Process logic: Expressiveness, decidability, completeness. J. Comput. Syst. Sci., 25(2):144–170, 1982.
[Harel et al., 1983] D. Harel, A. Pnueli, and J. Stavi. Propositional dynamic logic of nonregular programs. J. Comput. Syst. Sci., 26:222–243, 1983.
[Harel et al., 2000] D. Harel, D. Kozen, and J. Tiuryn. Dynamic Logic. MIT Press, Cambridge, MA, 2000.
[Harel, 1979] D. Harel. First-Order Dynamic Logic, volume 68 of Lect. Notes in Comput. Sci. Springer-Verlag, 1979.
[Harel, 1984] D. Harel. Dynamic logic. In Gabbay and Guenthner, editors, Handbook of Philosophical Logic, volume II: Extensions of Classical Logic, pages 497–604. Reidel, 1984.
[Harel, 1985] D. Harel. Recurring dominoes: Making the highly undecidable highly understandable. Annals of Discrete Mathematics, 24:51–72, 1985.
[Harel, 1992] D. Harel. Algorithmics: The Spirit of Computing. Addison-Wesley, second edition, 1992.
[Hart et al., 1982] S. Hart, M. Sharir, and A. Pnueli. Termination of probabilistic concurrent programs. In Proc. 9th Symp. Princip. Prog. Lang., pages 1–6. ACM, 1982.
[Hartonas, 1998] C. Hartonas. Duality for modal μ-logics. Theor. Comput. Sci., 202(1–2):193–222, 1998.
[Hennessy and Plotkin, 1979] M. C. B. Hennessy and G. D. Plotkin. Full abstraction for a simple programming language. In Proc. Symp. Semantics of Algorithmic Languages, volume 74 of Lecture Notes in Computer Science, pages 108–120. Springer-Verlag, 1979.
[Hitchcock and Park, 1972] P. Hitchcock and D. Park. Induction rules and termination proofs. In M. Nivat, editor, Int. Colloq. Automata Lang. Prog., pages 225–251. North-Holland, 1972.
[Hoare, 1969] C. A. R. Hoare. An axiomatic basis for computer programming. Comm. Assoc. Comput. Mach., 12:576–580, 583, 1969.
[Hopkins and Kozen, 1999] M. Hopkins and D. Kozen. Parikh's theorem in commutative Kleene algebra. In Proc. Conf. Logic in Computer Science (LICS'99), pages 394–401. IEEE, July 1999.
[Hughes and Cresswell, 1968] G. E. Hughes and M. J. Cresswell. An Introduction to Modal Logic. Methuen, 1968.
[Huth and Kwiatkowska, 1997] M. Huth and M. Kwiatkowska. Quantitative analysis and model checking. In Proc. 12th Symp. Logic in Comput. Sci., pages 111–122. IEEE, 1997.
[Ianov, 1960] Y. I. Ianov. The logical schemes of algorithms. In Problems of Cybernetics, volume 1, pages 82–140. Pergamon Press, 1960.
[Iwano and Steiglitz, 1990] K. Iwano and K. Steiglitz. A semiring on convex polygons and zero-sum cycle problems. SIAM J. Comput., 19(5):883–901, 1990.
[Jou and Smolka, 1990] C. Jou and S. Smolka. Equivalences, congruences and complete axiomatizations for probabilistic processes. In Proc. CONCUR'90, volume 458 of Lecture Notes in Comput. Sci., pages 367–383. Springer-Verlag, 1990.
[Kaivola, 1997] R. Kaivola. Using Automata to Characterise Fixed Point Temporal Logics. PhD thesis, University of Edinburgh, April 1997. Report CST-135-97.
[Kamp, 1968] H. W. Kamp. Tense logics and the theory of linear order. PhD thesis, UCLA, 1968.
[Keisler, 1971] J. Keisler. Model Theory for Infinitary Logic. North-Holland, 1971.
[Kfoury and Stolboushkin, 1997] A. J. Kfoury and A. P. Stolboushkin. An infinite pebble game and applications. Information and Computation, 136:53–66, 1997.
[Kfoury, 1983] A. J. Kfoury. Definability by programs in first-order structures. Theoretical Computer Science, 25:1–66, 1983.
[Kfoury, 1985] A. J. Kfoury. Definability by deterministic and nondeterministic programs with applications to first-order dynamic logic. Infor. and Control, 65(2–3):98–121, 1985.
[Kleene, 1956] S. C. Kleene. Representation of events in nerve nets and finite automata. In C. E. Shannon and J. McCarthy, editors, Automata Studies, pages 3–41. Princeton University Press, Princeton, N.J., 1956.
[Knijnenburg, 1988] P. M. W. Knijnenburg. On axiomatizations for propositional logics of programs. Technical Report RUU-CS-88-34, Rijksuniversiteit Utrecht, November 1988.
[Koren and Pnueli, 1983] T. Koren and A. Pnueli. There exist decidable context-free propositional dynamic logics. In Proc. Symp. on Logics of Programs, volume 164 of Lecture Notes in Computer Science, pages 290–312. Springer-Verlag, 1983.
[Kowalczyk et al., 1987] W. Kowalczyk, D. Niwiński, and J. Tiuryn. A generalization of Cook's auxiliary-pushdown-automata theorem. Fundamenta Informaticae, XII:497–506, 1987.
[Kozen and Parikh, 1981] D. Kozen and R. Parikh. An elementary proof of the completeness of PDL. Theor. Comput. Sci., 14(1):113–118, 1981.
[Kozen and Parikh, 1983] D. Kozen and R. Parikh. A decision procedure for the propositional μ-calculus. In Clarke and Kozen, editors, Proc. Workshop on Logics of Programs, volume 164 of Lecture Notes in Computer Science, pages 313–325. Springer-Verlag, 1983.
[Kozen and Patron, 2000] D. Kozen and M.-C. Patron. Certification of compiler optimizations using Kleene algebra with tests. In J. Lloyd, V. Dahl, U. Furbach, M. Kerber, K.-K. Lau, C. Palamidessi, L. M. Pereira, Y. Sagiv, and P. J. Stuckey, editors, Proc. 1st Int. Conf. Computational Logic (CL2000), volume 1861 of Lecture Notes in Artificial Intelligence, pages 568–582, London, July 2000. Springer-Verlag.
[Kozen and Smith, 1996] D. Kozen and F. Smith. Kleene algebra with tests: Completeness and decidability. In D. van Dalen and M. Bezem, editors, Proc. 10th Int. Workshop Computer Science Logic (CSL'96), volume 1258 of Lecture Notes in Computer Science, pages 244–259, Utrecht, The Netherlands, September 1996. Springer-Verlag.
[Kozen and Tiuryn, 1990] D. Kozen and J. Tiuryn. Logics of programs. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, pages 789–840. North-Holland, Amsterdam, 1990.
[Kozen, 1979a] D. Kozen. Dynamic algebra. In E. Engeler, editor, Proc. Workshop on Logic of Programs, volume 125 of Lecture Notes in Computer Science, pages 102–144. Springer-Verlag, 1979. Chapter of Propositional dynamic logics of programs: A survey by Rohit Parikh.
[Kozen, 1979b] D. Kozen. On the duality of dynamic algebras and Kripke models. In E. Engeler, editor, Proc. Workshop on Logic of Programs, volume 125 of Lecture Notes in Computer Science, pages 1–11. Springer-Verlag, 1979.
[Kozen, 1979c] D. Kozen. On the representation of dynamic algebras. Technical Report RC7898, IBM Thomas J. Watson Research Center, October 1979.
[Kozen, 1980a] D. Kozen. On the representation of dynamic algebras II. Technical Report RC8290, IBM Thomas J. Watson Research Center, May 1980.
[Kozen, 1980b] D. Kozen. A representation theorem for models of *-free PDL. In Proc. 7th Colloq. Automata, Languages, and Programming, pages 351–362. EATCS, July 1980.
[Kozen, 1981a] D. Kozen. Logics of programs. Lecture notes, Aarhus University, Denmark, 1981.
[Kozen, 1981b] D. Kozen. On induction vs. *-continuity. In Kozen, editor, Proc. Workshop on Logic of Programs, volume 131 of Lecture Notes in Computer Science, pages 167–176, New York, 1981. Springer-Verlag.
[Kozen, 1981c] D. Kozen. On the expressiveness of μ. Manuscript, 1981.
[Kozen, 1981d] D. Kozen. Semantics of probabilistic programs. J. Comput. Syst. Sci., 22:328–350, 1981.
[Kozen, 1982] D. Kozen. Results on the propositional μ-calculus. In Proc. 9th Int. Colloq. Automata, Languages, and Programming, pages 348–359, Aarhus, Denmark, July 1982. EATCS.
[Kozen, 1983] D. Kozen. Results on the propositional μ-calculus. Theor. Comput. Sci., 27:333–354, 1983.
[Kozen, 1984] D. Kozen. A Ramsey theorem with infinitely many colors. In Lenstra, Lenstra, and van Emde Boas, editors, Dopo Le Parole, pages 71–72. University of Amsterdam, Amsterdam, May 1984.
[Kozen, 1985] D. Kozen. A probabilistic PDL. J. Comput. Syst. Sci., 30(2):162–178, April 1985.
[Kozen, 1988] D. Kozen. A finite model theorem for the propositional μ-calculus. Studia Logica, 47(3):233–241, 1988.
[Kozen, 1990] D. Kozen. On Kleene algebras and closed semirings. In Rovan, editor, Proc. Math. Found. Comput. Sci., volume 452 of Lecture Notes in Computer Science, pages 26–47, Banska-Bystrica, Slovakia, 1990. Springer-Verlag.
[Kozen, 1991a] D. Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. In Proc. 6th Symp. Logic in Comput. Sci., pages 214–225, Amsterdam, July 1991. IEEE.
[Kozen, 1991b] D. Kozen. The Design and Analysis of Algorithms. Springer-Verlag, New York, 1991.
[Kozen, 1994a] D. Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. Infor. and Comput., 110(2):366–390, May 1994.
[Kozen, 1994b] D. Kozen. On action algebras. In J. van Eijck and A. Visser, editors, Logic and Information Flow, pages 78–88. MIT Press, 1994.
[Kozen, 1996] D. Kozen. Kleene algebra with tests and commutativity conditions. In T. Margaria and B. Steffen, editors, Proc. Second Int. Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lecture Notes in Computer Science, pages 14–33, Passau, Germany, March 1996. Springer-Verlag.
[Kozen, 1997a] D. Kozen. Automata and Computability. Springer-Verlag, New York, 1997.
[Kozen, 1997b] D. Kozen. Kleene algebra with tests. Transactions on Programming Languages and Systems, 19(3):427–443, May 1997.
[Kozen, 1997c] D. Kozen. On the complexity of reasoning in Kleene algebra. In Proc. 12th Symp. Logic in Comput. Sci., pages 195–202, Los Alamitos, Ca., June 1997. IEEE.
[Kozen, 1998] D. Kozen. Typed Kleene algebra. Technical Report 98-1669, Computer Science Department, Cornell University, March 1998.
[Kozen, 1999a] D. Kozen. On Hoare logic and Kleene algebra with tests. In Proc. Conf. Logic in Computer Science (LICS'99), pages 167–172. IEEE, July 1999.
[Kozen, 1999b] D. Kozen. On Hoare logic, Kleene algebra, and types. Technical Report 99-1760, Computer Science Department, Cornell University, July 1999. Abstract in: Abstracts of 11th Int. Congress Logic, Methodology and Philosophy of Science, ed. J. Cachro and K. Kijania-Placek, Krakow, Poland, August 1999, p. 15. To appear
in: Proc. 11th Int. Congress Logic, Methodology and Philosophy of Science, ed. P. Gärdenfors, K. Kijania-Placek and J. Wolenski, Kluwer.
[Krob, 1991] Daniel Krob. A complete system of B-rational identities. Theoretical Computer Science, 89(2):207–343, October 1991.
[Kuich and Salomaa, 1986] W. Kuich and A. Salomaa. Semirings, Automata, and Languages. Springer-Verlag, Berlin, 1986.
[Kuich, 1987] W. Kuich. The Kleene and Parikh theorem in complete semirings. In T. Ottmann, editor, Proc. 14th Colloq. Automata, Languages, and Programming, volume 267 of Lecture Notes in Computer Science, pages 212–225, New York, 1987. EATCS, Springer-Verlag.
[Ladner, 1977] R. E. Ladner. Unpublished, 1977.
[Lamport, 1980] L. Lamport. "Sometime" is sometimes "not never". Proc. 7th Symp. Princip. Prog. Lang., pages 174–185, 1980.
[Lehmann and Shelah, 1982] D. Lehmann and S. Shelah. Reasoning with time and chance. Infor. and Control, 53(3):165–198, 1982.
[Lewis and Papadimitriou, 1981] H. R. Lewis and C. H. Papadimitriou. Elements of the Theory of Computation. Prentice-Hall, 1981.
[Lipton, 1977] R. J. Lipton. A necessary and sufficient condition for the existence of Hoare logics. In Proc. 18th Symp. Found. Comput. Sci., pages 1–6. IEEE, 1977.
[Luckham et al., 1970] D. C. Luckham, D. Park, and M. Paterson. On formalized computer programs. J. Comput. Syst. Sci., 4:220–249, 1970.
[Mader, 1997] A. Mader. Verification of Modal Properties Using Boolean Equation Systems. PhD thesis, Fakultät für Informatik, Technische Universität München, September 1997.
[Makowsky and Sain, 1986] J. A. Makowsky and I. Sain. On the equivalence of weak second-order and nonstandard time semantics for various program verification systems. In Proc. 1st Symp. Logic in Comput. Sci., pages 293–300. IEEE, 1986.
[Makowsky, 1980] J. A. Makowsky. Measuring the expressive power of dynamic logics: an application of abstract model theory. In Proc. 7th Int. Colloq. Automata Lang. Prog., volume 80 of Lect. Notes in Comput. Sci., pages 409–421. Springer-Verlag, 1980.
[Makowsky and Tiomkin, 1980] J. A. Makowsky and M. L. Tiomkin. Probabilistic propositional dynamic logic. Manuscript, 1980.
[Manna and Pnueli, 1981] Z. Manna and A. Pnueli. Verification of concurrent programs: temporal proof principles. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 200–252. Springer-Verlag, 1981.
[Manna and Pnueli, 1987] Z. Manna and A. Pnueli. Specification and verification of concurrent programs by ∀-automata. In Proc. 14th Symp. Principles of Programming Languages, pages 1–12. ACM, January 1987.
[Manna, 1974] Z. Manna. Mathematical Theory of Computation. McGraw-Hill, 1974.
[McCulloch and Pitts, 1943] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophysics, 5:115–143, 1943.
[Mehlhorn, 1984] K. Mehlhorn. Graph Algorithms and NP-Completeness, volume II of Data Structures and Algorithms. Springer-Verlag, 1984.
[Meyer and Halpern, 1982] A. R. Meyer and J. Y. Halpern. Axiomatic definitions of programming languages: a theoretical assessment. J. Assoc. Comput. Mach., 29:555–576, 1982.
[Meyer and Parikh, 1981] A. R. Meyer and R. Parikh. Definability in dynamic logic. J. Comput. Syst. Sci., 23:279–298, 1981.
[Meyer and Tiuryn, 1981] A. R. Meyer and J. Tiuryn. A note on equivalences among logics of programs. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 282–299. Springer-Verlag, 1981.
[Meyer and Tiuryn, 1984] A. R. Meyer and J. Tiuryn. Equivalences among logics of programs. Journal of Computer and Systems Science, 29:160–170, 1984.
[Meyer and Winklmann, 1982] A. R. Meyer and K. Winklmann. Expressing program looping in regular dynamic logic. Theor. Comput. Sci., 18:301–323, 1982.
[Meyer et al., 1981] A. R. Meyer, R. S. Streett, and G. Mirkowska. The deducibility problem in propositional dynamic logic. In E. Engeler, editor, Proc. Workshop Logic of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 12–22. Springer-Verlag, 1981.
[Miller, 1976] G. L. Miller. Riemann's hypothesis and tests for primality. J. Comput. Syst. Sci., 13:300–317, 1976.
[Mirkowska, 1971] G. Mirkowska. On formalized systems of algorithmic logic. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astron. Phys., 19:421–428, 1971.
[Mirkowska, 1980] G. Mirkowska. Algorithmic logic with nondeterministic programs. Fund. Informaticae, III:45–64, 1980.
[Mirkowska, 1981a] G. Mirkowska. PAL—propositional algorithmic logic. In E. Engeler, editor, Proc. Workshop Logic of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 23–101. Springer-Verlag, 1981.
[Mirkowska, 1981b] G. Mirkowska. PAL—propositional algorithmic logic. Fund. Informaticae, IV:675–760, 1981.
[Morgan et al., 1999] C. Morgan, A. McIver, and K. Seidel. Probabilistic predicate transformers. ACM Trans. Programming Languages and Systems, 8(1):1–30, 1999.
[Moschovakis, 1974] Y. N. Moschovakis. Elementary Induction on Abstract Structures. North-Holland, 1974.
[Muller et al., 1988] D. E. Muller, A. Saoudi, and P. E. Schupp. Weak alternating automata give a simple explanation of why most temporal and dynamic logics are decidable in exponential time. In Proc. 3rd Symp. Logic in Computer Science, pages 422–427. IEEE, July 1988.
[Németi, 1980] I. Németi. Every free algebra in the variety generated by the representable dynamic algebras is separable and representable. Manuscript, 1980.
[Németi, 1981] I. Németi. Nonstandard dynamic logic. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 311–348. Springer-Verlag, 1981.
[Ng and Tarski, 1977] K. C. Ng and A. Tarski. Relation algebras with transitive closure, abstract 742-02-09. Notices Amer. Math. Soc., 24:A29–A30, 1977.
[Ng, 1984] K. C. Ng. Relation Algebras with Transitive Closure. PhD thesis, University of California, Berkeley, 1984.
[Nishimura, 1979] H. Nishimura. Sequential method in propositional dynamic logic. Acta Informatica, 12:377–400, 1979.
[Nishimura, 1980] H. Nishimura. Descriptively complete process logic. Acta Informatica, 14:359–369, 1980.
[Niwiński, 1984] D. Niwiński. The propositional μ-calculus is more expressive than the propositional dynamic logic of looping. University of Warsaw, 1984.
[Parikh and Mahoney, 1983] R. Parikh and A. Mahoney. A theory of probabilistic programs. In E. Clarke and D. Kozen, editors, Proc. Workshop on Logics of Programs, volume 164 of Lect. Notes in Comput. Sci., pages 396–402. Springer-Verlag, 1983.
[Parikh, 1978a] R. Parikh. The completeness of propositional dynamic logic. In Proc. 7th Symp. on Math. Found. of Comput. Sci., volume 64 of Lect. Notes in Comput. Sci., pages 403–415. Springer-Verlag, 1978.
[Parikh, 1978b] R. Parikh. A decidability result for second order process logic. In Proc. 19th Symp. Found. Comput. Sci., pages 177–183. IEEE, 1978.
[Parikh, 1981] R. Parikh. Propositional dynamic logics of programs: a survey. In E. Engeler, editor, Proc. Workshop on Logics of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 102–144. Springer-Verlag, 1981.
[Parikh, 1983] R. Parikh. Propositional game logic. In Proc. 23rd IEEE Symp. Foundations of Computer Science, 1983.
[Park, 1976] D. Park. Finiteness is μ-ineffable. Theor. Comput. Sci., 3:173–181, 1976.
[Paterson and Hewitt, 1970] M. S. Paterson and C. E. Hewitt. Comparative schematology. In Record Project MAC Conf. on Concurrent Systems and Parallel Computation, pages 119–128. ACM, 1970.
[Pécuchet, 1986] J. P. Pécuchet. On the complementation of Büchi automata. Theor. Comput. Sci., 47:95–98, 1986.
[Peleg, 1987a] D. Peleg. Communication in concurrent dynamic logic. J. Comput. Sys. Sci., 35:23–58, 1987.
[Peleg, 1987b] D. Peleg. Concurrent dynamic logic. J. Assoc. Comput. Mach., 34(2):450–479, 1987.
[Peleg, 1987c] D. Peleg. Concurrent program schemes and their logics. Theor. Comput. Sci., 55:1–45, 1987.
[Peng and Iyer, 1995] W. Peng and S. Purushothaman Iyer. A new type of pushdown tree automata on infinite trees. Int. J. of Found. of Comput. Sci., 6(2):169–186, 1995.
[Peterson, 1978] G. L. Peterson. The power of tests in propositional dynamic logic. Technical Report 47, Comput. Sci. Dept., Univ. of Rochester, 1978.
[Pnueli, 1977] A. Pnueli. The temporal logic of programs. In Proc. 18th Symp. Found. Comput. Sci., pages 46–57. IEEE, 1977.
[Pratt, 1976] V. R. Pratt. Semantical considerations on Floyd-Hoare logic. In Proc. 17th Symp. Found. Comput. Sci., pages 109–121. IEEE, 1976.
[Pratt, 1978] V. R. Pratt. A practical decision method for propositional dynamic logic. In Proc. 10th Symp. Theory of Comput., pages 326–337. ACM, 1978.
[Pratt, 1979a] V. R. Pratt. Dynamic algebras: examples, constructions, applications. Technical Report TM-138, MIT/LCS, July 1979.
[Pratt, 1979b] V. R. Pratt. Models of program logics. In Proc. 20th Symp. Found. Comput. Sci., pages 115–122. IEEE, 1979.
[Pratt, 1979c] V. R. Pratt. Process logic. In Proc. 6th Symp. Princip. Prog. Lang., pages 93–100. ACM, 1979.
[Pratt, 1980a] V. R. Pratt. Dynamic algebras and the nature of induction. In Proc. 12th Symp. Theory of Comput., pages 22–28. ACM, 1980.
[Pratt, 1980b] V. R. Pratt. A near-optimal method for reasoning about actions. J. Comput. Syst. Sci., 20(2):231–254, 1980.
[Pratt, 1981a] V. R. Pratt. A decidable μ-calculus: preliminary report. In Proc. 22nd Symp. Found. Comput. Sci., pages 421–427. IEEE, 1981.
[Pratt, 1981b] V. R. Pratt. Using graphs to understand PDL. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 387–396. Springer-Verlag, 1981.
[Pratt, 1988] V. Pratt. Dynamic algebras as a well-behaved fragment of relation algebras. In D. Pigozzi, editor, Proc. Conf. on Algebra and Computer Science, volume 425 of Lecture Notes in Computer Science, pages 77–110, Ames, Iowa, June 1988. Springer-Verlag.
[Pratt, 1990] V. Pratt. Action logic and pure induction. In J. van Eijck, editor, Proc. Logics in AI: European Workshop JELIA '90, volume 478 of Lecture Notes in Computer Science, pages 97–120, New York, September 1990. Springer-Verlag.
[Rabin and Scott, 1959] M. O. Rabin and D. S. Scott. Finite automata and their decision problems. IBM J. Res. Develop., 3(2):115–125, 1959.
[Rabin, 1980] M. O. Rabin. Probabilistic algorithms for testing primality. J. Number Theory, 12:128–138, 1980.
[Ramshaw, 1981] L. H. Ramshaw. Formalizing the analysis of algorithms. PhD thesis, Stanford Univ., 1981.
[Rasiowa and Sikorski, 1963] H. Rasiowa and R. Sikorski. Mathematics of Metamathematics. Polish Scientific Publishers, PWN, 1963.
[Redko, 1964] V. N. Redko. On defining relations for the algebra of regular events. Ukrain. Mat. Z., 16:120–126, 1964. In Russian.
[Reif, 1980] J. Reif. Logics for probabilistic programming. In Proc. 12th Symp. Theory of Comput., pages 8–13. ACM, 1980.
[Rogers, 1967] H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.
[Safra, 1988] S. Safra. On the complexity of ω-automata. In Proc. 29th Symp. Foundations of Comput. Sci., pages 319–327. IEEE, October 1988.

DYNAMIC LOGIC

215

[Sakarovitch, 1987] J. Sakarovitch. Kleene's theorem revisited: A formal path from Kleene to Chomsky. In A. Kelemenova and J. Keleman, editors, Trends, Techniques, and Problems in Theoretical Computer Science, volume 281 of Lecture Notes in Computer Science, pages 39{50, New York, 1987. Springer-Verlag. [Salomaa, 1966] A. Salomaa. Two complete axiom systems for the algebra of regular events. J. Assoc. Comput. Mach., 13(1):158{169, January 1966. [Salwicki, 1970] A. Salwicki. Formalized algorithmic languages. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astron. Phys., 18:227{232, 1970. [Salwicki, 1977] A. Salwicki. Algorithmic logic: a tool for investigations of programs. In Butts and Hintikka, editors, Logic Foundations of Mathematics and Computability Theory, pages 281{295. Reidel, 1977. [Sazonov, 1980] V.Y. Sazonov. Polynomial computability and recursivity in nite domains. Elektronische Informationsverarbeitung und Kibernetik, 16:319{323, 1980. [Scott and de Bakker, 1969] D. S. Scott and J. W. de Bakker. A theory of programs. IBM Vienna, 1969. [Segala and Lynch, 1994] R. Segala and N. Lynch. Probabilistic simulations for probabilistic processes. In Proc. CONCUR'94, volume 836 of Lecture Notes in Comput. Sci., pages 481{496. Springer-Verlag, 1994. [Segerberg, 1977] K. Segerberg. A completeness theorem in the modal logic of programs (preliminary report). Not. Amer. Math. Soc., 24(6):A{552, 1977. [Shoen eld, 1967] J. R. Shoen eld. Mathematical Logic. Addison-Wesley, 1967. [Sholz, 1952] H. Sholz. Ein ungelostes Problem in der symbolischen Logik. The Journal of Symbolic Logic, 17:160, 1952. [Sistla and Clarke, 1982] A. P. Sistla and E. M. Clarke. The complexity of propositional linear temporal logics. In Proc. 14th Symp. Theory of Comput., pages 159{168. ACM, 1982. [Sistla et al., 1987] A. P. Sistla, M. Y. Vardi, and P. Wolper. The complementation problem for Buchi automata with application to temporal logic. Theor. Comput. Sci., 49:217{237, 1987. 
[Sokolsky and Smolka, 1994] O. Sokolsky and S. Smolka. Incremental model checking in the modal -calculus. In D. Dill, editor, Proc. Conf. Computer Aided Veri cation, volume 818 of Lect. Notes in Comput. Sci., pages 352{363. Springer, June 1994. [Steen et al., 1996] B. Steen, T. Margaria, A. Classen, V. Braun, R. Nisius, and M. Reitenspiess. A constraint oriented service environment. In T. Margaria and B. Steen, editors, Proc. Second Int. Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lect. Notes in Comput. Sci., pages 418{421. Springer, March 1996. [Stirling and Walker, 1989] C. Stirling and D. Walker. Local model checking in the modal -calculus. In Proc. Int. Joint Conf. Theory and Practice of Software Develop. (TAPSOFT89), volume 352 of Lect. Notes in Comput. Sci., pages 369{383. Springer, March 1989. [Stirling, 1992] C. Stirling. Modal and temporal logics. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, pages 477{563. Clarendon Press, 1992. [Stockmeyer and Meyer, 1973] L. J. Stockmeyer and A. R. Meyer. Word problems requiring exponential time. In Proc. 5th Symp. Theory of Computing, pages 1{9, New York, 1973. ACM. [Stolboushkin and Taitslin, 1983] A. P. Stolboushkin and M. A. Taitslin. Deterministic dynamic logic is strictly weaker than dynamic logic. Infor. and Control, 57:48{55, 1983. [Stolboushkin, 1983] A.P. Stolboushkin. Regular dynamic logic is not interpretable in deterministic context-free dynamic logic. Information and Computation, 59:94{107, 1983. [Stolboushkin, 1989] A.P. Stolboushkin. Some complexity bounds for dynamic logic. In Proc. 4th Symp. Logic in Comput. Sci., pages 324{332. IEEE, June 1989. [Streett and Emerson, 1984] R. Streett and E. A. Emerson. The propositional -calculus is elementary. In Proc. 11th Int. Colloq. on Automata Languages and Programming, pages 465{472. Springer, 1984. Lect. Notes in Comput. Sci. 172.

216

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

[Streett, 1981] R. S. Streett. Propositional dynamic logic of looping and converse. In Proc. 13th Symp. Theory of Comput., pages 375{381. ACM, 1981. [Streett, 1982] R. S. Streett. Propositional dynamic logic of looping and converse is elementarily decidable. Infor. and Control, 54:121{141, 1982. [Streett, 1985a] R. S. Streett. Fixpoints and program looping: reductions from the propositional -calculus into propositional dynamic logics of looping. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 359{372. Springer-Verlag, 1985. [Streett, 1985b] R. Streett. Fixpoints and program looping: reductions from the propositional -calculus into propositional dynamic logics of looping. In Parikh, editor, Proc. Workshop on Logics of Programs 1985, pages 359{372. Springer, 1985. Lect. Notes in Comput. Sci. 193. [Tarjan, 1981] R. E. Tarjan. A uni ed approach to path problems. J. Assoc. Comput. Mach., pages 577{593, 1981. [Thiele, 1966] H. Thiele. Wissenschaftstheoretische untersuchungen in algorithmischen sprachen. In Theorie der Graphschemata-Kalkale Veb Deutscher Verlag der Wissenschaften. Berlin, 1966. [Thomas, 1997] W. Thomas. Languages, automata, and logic. Technical Report 9607, Christian-Albrechts-Universitat Kiel, May 1997. [Tiuryn and Urzyczyn, 1983] J. Tiuryn and P. Urzyczyn. Some relationships between logics of programs and complexity theory. In Proc. 24th Symp. Found. Comput. Sci., pages 180{184. IEEE, 1983. [Tiuryn and Urzyczyn, 1984] J. Tiuryn and P. Urzyczyn. Remarks on comparing expressive power of logics of programs. In Chytil and Koubek, editors, Proc. Math. Found. Comput. Sci., volume 176 of Lect. Notes in Comput. Sci., pages 535{543. Springer-Verlag, 1984. [Tiuryn and Urzyczyn, 1988] J. Tiuryn and P. Urzyczyn. Some relationships between logics of programs and complexity theory. Theor. Comput. Sci., 60:83{108, 1988. [Tiuryn, 1981a] J. Tiuryn. A survey of the logic of eective de nitions. In E. 
Engeler, editor, Proc. Workshop on Logics of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 198{245. Springer-Verlag, 1981. [Tiuryn, 1981b] J. Tiuryn. Unbounded program memory adds to the expressive power of rst-order programming logics. In Proc. 22nd Symp. Found. Comput. Sci., pages 335{339. IEEE, 1981. [Tiuryn, 1984] J. Tiuryn. Unbounded program memory adds to the expressive power of rst-order programming logics. Infor. and Control, 60:12{35, 1984. [Tiuryn, 1986] J. Tiuryn. Higher-order arrays and stacks in programming: an application of complexity theory to logics of programs. In Gruska and Rovan, editors, Proc. Math. Found. Comput. Sci., volume 233 of Lect. Notes in Comput. Sci., pages 177{198. Springer-Verlag, 1986. [Tiuryn, 1989] J. Tiuryn. A simpli ed proof of DDL < DL. Information and Computation, 81:1{12, 1989. [Trnkova and Reiterman, 1980] V. Trnkova and J. Reiterman. Dynamic algebras which are not Kripke structures. In Proc. 9th Symp. on Math. Found. Comput. Sci., pages 528{538, 1980. [Turing, 1936] A. M. Turing. On computable numbers with an application to the Entscheidungsproblem. Proc. London Math. Soc., 42:230{265, 1936. Erratum: Ibid., 43 (1937), pp. 544{546. [Urzyczyn, 1983] P. Urzyczyn. A necessary and suÆcient condition in order that a Herbrand interpretation be expressive relative to recursive programs. Information and Control, 56:212{219, 1983. [Urzyczyn, 1986] P. Urzyczyn. \During" cannot be expressed by \after". Journal of Computer and System Sciences, 32:97{104, 1986. [Urzyczyn, 1987] P. Urzyczyn. Deterministic context-free dynamic logic is more expressive than deterministic dynamic logic of regular programs. Fundamenta Informaticae, 10:123{142, 1987. [Urzyczyn, 1988] P. Urzyczyn. Logics of programs with Boolean memory. Fundamenta Informaticae, XI:21{40, 1988.

DYNAMIC LOGIC

217

[Valiev, 1980] M. K. Valiev. Decision complexity of variants of propositional dynamic logic. In Proc. 9th Symp. Math. Found. Comput. Sci., volume 88 of Lect. Notes in Comput. Sci., pages 656{664. Springer-Verlag, 1980. [van Emde Boas, 1978] P. van Emde Boas. The connection between modal logic and algorithmic logics. In Symp. on Math. Found. of Comp. Sci., pages 1{15, 1978. [Vardi and Stockmeyer, 1985] M. Y. Vardi and L. Stockmeyer. Improved upper and lower bounds for modal logics of programs: preliminary report. In Proc. 17th Symp. Theory of Comput., pages 240{251. ACM, May 1985. [Vardi and Wolper, 1986a] M. Y. Vardi and P. Wolper. An automata-theoretic approach to automatic program veri cation. In Proc. 1st Symp. Logic in Computer Science, pages 332{344. IEEE, June 1986. [Vardi and Wolper, 1986b] M. Y. Vardi and P. Wolper. Automata-theoretic techniques for modal logics of programs. J. Comput. Syst. Sci., 32:183{221, 1986. [Vardi, 1985a] M. Y. Vardi. Automatic veri cation of probabilistic concurrent nitestate programs. In Proc. 26th Symp. Found. Comput. Sci., pages 327{338. IEEE, October 1985. [Vardi, 1985b] M. Y. Vardi. The taming of the converse: reasoning about two-way computations. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 413{424. Springer-Verlag, 1985. [Vardi, 1987] M. Y. Vardi. Veri cation of concurrent programs: the automata-theoretic framework. In Proc. 2nd Symp. Logic in Comput. Sci., pages 167{176. IEEE, June 1987. [Vardi, 1998a] M. Vardi. Linear vs. branching time: a complexity-theoretic perspective. In Proc. 13th Symp. Logic in Comput. Sci., pages 394{405. IEEE, 1998. [Vardi, 1998b] M. Y. Vardi. Reasoning about the past with two-way automata. In Proc. 25th Int. Colloq. Automata Lang. Prog., volume 1443 of Lect. Notes in Comput. Sci., pages 628{641. Springer-Verlag, July 1998. [Walukiewicz, 1993] I. Walukiewicz. Completeness result for the propositional calculus. In Proc. 8th IEEE Symp. 
Logic in Comput. Sci., June 1993. [Walukiewicz, 1995] I. Walukiewicz. Completeness of Kozen's axiomatisation of the propositional -calculus. In Proc. 10th Symp. Logic in Comput. Sci., pages 14{24. IEEE, June 1995. [Walukiewicz, 2000] I. Walukiewicz. Completeness of Kozen's axiomatisation of the propositional -calculus. Infor. and Comput., 157(1{2):142{182, February{March 2000. [Wand, 1978] M. Wand. A new incompleteness result for Hoare's system. J. Assoc. Comput. Mach., 25:168{175, 1978. [Wolper, 1981] P. Wolper. Temporal logic can be more expressive. In Proc. 22nd Symp. Foundations of Computer Science, pages 340{348. IEEE, 1981. [Wolper, 1983] P. Wolper. Temporal logic can be more expressive. Infor. and Control, 56:72{99, 1983.

HENRY PRAKKEN & GERARD VREESWIJK

LOGICS FOR DEFEASIBLE ARGUMENTATION

1 INTRODUCTION

Logic is the science that deals with the formal principles and criteria of validity of patterns of inference. This chapter surveys logics for a particular group of patterns of inference, namely those where arguments for and against a certain claim are produced and evaluated, to test the tenability of the claim. Such reasoning processes are usually analysed under the common term `defeasible argumentation'. We shall illustrate this form of reasoning with a dispute between two persons, A and B. They disagree on whether it is morally acceptable for a newspaper to publish a certain piece of information concerning a politician's private life.[1] Let us assume that the two parties have reached agreement on the following points.

(1) The piece of information I concerns the health of person P;
(2) P does not agree with publication of I;
(3) Information concerning a person's health is information concerning that person's private life.

A now states the moral principle that

(4) Information concerning a person's private life may not be published if that person does not agree with publication.

and A says "So the newspapers may not publish I" (Fig. 1). Although B accepts principle (4) and is therefore now committed to (1-4), B still refuses to accept the conclusion that the newspapers may not publish I. B motivates his refusal by replying that:

(5) P is a cabinet minister;
(6) I is about a disease that might affect P's political functioning;
(7) Information about things that might affect a cabinet minister's political functioning has public significance.

Furthermore, B maintains that there is also the moral principle that

(8) Newspapers may publish any information that has public significance.

[1] Adapted from [Sartor, 1994].


Figure 1. A's argument. [Diagram: premises (1) and (3) yield "I concerns the private life of P"; together with (2) and principle (4), the argument concludes "The newspapers may not publish I".]

B concludes by saying that therefore the newspapers may write about P's disease (Fig. 2). A agrees with (5-7) and even accepts (8) as a moral principle, but A does not give up his initial claim. (It is assumed that A and B are both male.) Instead he tries to defend it by arguing that he has the stronger argument: he does so by arguing that in this case

(9) The likelihood that the disease mentioned in I affects P's functioning is small.
(10) If the likelihood that the disease mentioned in I affects P's functioning is small, then principle (4) has priority over principle (8).

Thus it can be derived that the principle used in A's first argument is stronger than the principle used by B (Fig. 3), which makes A's first argument stronger than B's, so that it follows after all that the newspapers should be silent about P's disease.

Let us examine the various stages of this dispute in some detail. Intuitively, it seems obvious that the accepted basis for discussion after A has stated (4) and B has accepted it, viz. (1,2,3,4), warrants the conclusion that the piece of information I may not be published. However, after B's counterargument and A's acceptance of its premises (5-8) things have changed.


Figure 2. B's argument. [Diagram: premises (5) and (6) yield "I is about a disease that might affect a cabinet minister's political functioning"; together with (7), this gives "I has public significance", and with principle (8) the argument concludes "The newspapers may publish I".]

At this stage the joint basis for discussion is (1-8), which gives rise to two conflicting arguments. Moreover, (1-8) does not yield reasons to prefer one argument over the other: so at this point A's conclusion has ceased to be warranted. But then A's second argument, which states a preference between the two conflicting moral principles, tips the balance in favour of his first argument: so after the basis for discussion has been extended to (1-10), we must again accept A's moral claim as warranted.

This chapter is about logical systems that formalise this kind of reasoning. We shall call them `logics for defeasible argumentation', or `argumentation systems'. As the example shows, these systems lack one feature of `standard', deductive logic (say, first-order predicate logic, FOL). The notion of `warrant' that we used in explaining the example is clearly not the same as first-order logical consequence, which has the property of monotonicity: in FOL any conclusion that can be drawn from a given set of premises remains valid if we add new premises to this set. So according to FOL, if A's claim is implied by (1-4), it is surely also implied by (1-8). From the point of view of FOL it is pointless for B to accept (1-4) and yet state a counterargument; B should also have refused to accept one of the premises, for instance, (4).


Figure 3. A's priority argument. [Diagram: premises (9) and (10) yield the conclusion "Principle (4) has priority over principle (8)".]

Does this mean that our informal account of the example is misleading, that it conceals a subtle change in the interpretation of, say, (4) as the dispute progresses? This is not so easy to answer in general. Although in some cases it might indeed be best to analyse an argument move like B's as a reinterpretation of a premise, in other cases this is different. In actual reasoning, rules are not always neatly labelled with an exhaustive list of possible exceptions; rather, people are often forced to apply `rules of thumb' or `default rules' in the absence of evidence to the contrary, and it seems natural to analyse an argument like B's as an attempt to provide such evidence to the contrary. When the example is thus analysed, the force of the conclusions drawn in it can only be captured by a consequence notion that is nonmonotonic: although A's claim is warranted on the basis of (1-4), it is not warranted on the basis of (1-8). Such nonmonotonic consequence notions have been studied over the last twenty years in an area of artificial intelligence called `nonmonotonic reasoning' (recently the term `defeasible reasoning' has also become popular), and logics for defeasible argumentation are largely a result of this development.

Some might say that the lack of the property of monotonicity disqualifies these notions from being notions of logical consequence: isn't the very idea of calling an inference `logical' that it is (given the premises) beyond any doubt? We are not so sure. Our view on logic is that it studies criteria of warrant, that is, criteria that determine the degree to which it is reasonable to accept logical conclusions, even though some of these conclusions are established non-deductively: sometimes it is reasonable to accept a conclusion of an argument even though this argument is not strong enough to establish its conclusion with absolute certainty.
Several ways to formalise nonmonotonic, or defeasible, reasoning have been studied. This chapter is not meant to survey all of them but only discusses the argument-based approach, which defines notions like argument, counterargument, attack and defeat, and defines consequence notions in terms of the interaction of arguments for and against certain conclusions. This approach was initiated by the philosopher John Pollock [1987], based


on his earlier work in epistemology, e.g. [1974], and the computer scientist Ronald Loui [1987]. As we shall see, argumentation systems are able to incorporate the traditional, monotonic notions of logical consequence as a special case, for instance, in their definition of what an argument is. The field of defeasible argumentation is relatively young, and researchers disagree on many issues, while the formal meta-theory is still in its early stages. Yet we think that the field has sufficiently matured to devote a handbook survey to it.[2] We aim to show that there are also many similarities and connections between the various systems, and that many differences are variations on a few basic notions, or are caused by a different focus or different levels of abstraction. Moreover, we shall show that some recent developments pave the way for a more elaborate meta-theory of defeasible argumentation. Although when discussing individual systems we aim to be as formal as possible, when comparing them we shall mostly use conceptual or quasi-formal terms. We shall also report on some formal results on this comparison, but it is not our aim to present new technical results; this we regard as a task for further research in the field.

The structure of this chapter is as follows. In Section 2 we give an overview of the main approaches in nonmonotonic reasoning, and argue why the study of this kind of reasoning is relevant not only for artificial intelligence but also for philosophy. In Section 3 we give a brief conceptual sketch of logics for defeasible argumentation, and we argue that it is not obvious that they need a model-theoretic semantics. In Section 4 we become formal, studying how semantic consequence notions for argumentation systems can be defined given a set of arguments ordered by a defeat relation. This discussion is still abstract, leaving the structure of arguments and the origin of the defeat relation largely unspecified.
In Section 5 we become more concrete, discussing particular logics for defeasible argumentation. Then in Section 6 we discuss one way in which argumentation systems can be formulated, viz. in the form of rules for dispute. We end this chapter in Section 7 with some concluding remarks, and with a list of the main open issues in the field.

2 NONMONOTONIC LOGICS: OVERVIEW AND PHILOSOPHICAL RELEVANCE

Before discussing argumentation systems, we place them in the context of the study of nonmonotonic reasoning, and discuss why this study deserves a place in philosophical logic.

[2] For a survey of this topic from a computer science perspective, see [Chesñevar et al., 1999].


2.1 Research in nonmonotonic reasoning

Although this chapter is not about nonmonotonic logics in general, it is still useful to give a brief impression of this field, to put systems for defeasible argumentation in context. Several styles of nonmonotonic logics exist. Most of them take as the basic `nonstandard' unit the notion of a default, or defeasible conditional or rule: this is a conditional that can be qualified with phrases like `typically', `normally' or `unless shown otherwise' (the two principles in our example may be regarded as defaults). Defaults do not guarantee that their consequent holds whenever their antecedent holds; instead they allow us in such cases to defeasibly derive their consequent, i.e., if nothing is known about exceptional circumstances. Most nonmonotonic logics aim to formalise this phenomenon of `default reasoning', but they do so in different ways. Firstly, they differ in whether the above qualifications are regarded as extra conditions in the antecedent of a default, as aspects of the use of a default, or as inherent in the meaning of a defeasible conditional operator. In addition, within each of these views on defaults, nonmonotonic logics differ in the technical means by which they formalise it. Let us briefly review the main approaches. (More detailed overviews can be found in e.g. [Brewka, 1991] and [Gabbay et al., 1994].)

Preferential entailment

Preferential entailment, e.g. [Shoham, 1988], is a model-theoretic approach based on standard first-order logic, which weakens the standard notion of entailment. The idea is that instead of checking all models of the premises to see if the conclusion holds, only some of the models are checked, viz. those in which as few exceptions to the defaults hold as possible. This technique is usually combined with the `extra condition' view on defaults, by adding a special `normality condition' to their antecedent, as in (1)

∀x(Bird(x) ∧ ¬ab1(x) → Canfly(x))

Informally, this reads as `Birds can fly, unless they are abnormal with respect to flying'. Let us now also assume that Tweety is a bird:

(2) Bird(Tweety)

We want to infer from (1) and (2) that Canfly(Tweety), since there is no reason to believe that ab1(Tweety). This inference is formalised by only looking at those models of (1,2) where the extensions of the ab_i predicates are minimal (with respect to set inclusion). Thus, since on the basis of (1) and (2) nothing is known about whether Tweety is an abnormal bird, there are both FOL-models of these premises where ab1(Tweety) is satisfied and FOL-models where this is not satisfied. The idea is then that we can


disregard the models satisfying ab1(Tweety), and only look at the models satisfying ¬ab1(Tweety); clearly in all those models Canfly(Tweety) holds. The defeasibility of this inference can be shown by adding ab1(Tweety) to the premises. Then all models of the premises satisfy ab1(Tweety), and the preferred models are now those in which the extension of ab1 is {Tweety}. Some of those models satisfy Canfly(Tweety) but others satisfy ¬Canfly(Tweety), so we can no longer draw the conclusion Canfly(Tweety).

A variant of this approach is Poole's [1988] `abductive framework for default reasoning'. Poole also represents defaults with normality conditions, but he does not define a new semantics. Instead, he recommends a new way of using first-order logic, viz. for constructing `extensions' of a theory. Essentially, extensions can be formed by adding as many normality statements to a theory as is consistently possible. The standard first-order models of a theory extension correspond to the preferred models of the original theory.

Intensional semantics for defaults

There are also intensional approaches to the semantics of defaults, e.g. [Delgrande, 1988; Asher & Morreau, 1990]. The idea is to interpret defaults in a possible-worlds semantics, and to evaluate their truth in a model by focusing on a subset of the set of possible worlds within a model. This is similar to the focusing on certain models of a theory in preferential entailment. On the other hand, intensional semantics capture the defeasibility of defaults not with extra normality conditions, but in the meaning of the conditional operator. This development draws its inspiration from the similarity semantics for counterfactuals in conditional logics, e.g. [Lewis, 1973]. In these logics a counterfactual conditional is interpreted as follows: φ ⇒ ψ is true just in case ψ is true in a subset of the possible worlds in which φ is true, viz. in the possible worlds which resemble the actual world as much as possible, given that in them φ holds. Now with respect to defeasible conditionals the idea is to define in a similar way a possible-worlds semantics for defeasible conditionals. A defeasible conditional φ ⇒ ψ is roughly interpreted as `in all most normal worlds in which φ holds, ψ holds as well'. Obviously, if read in this way, then modus ponens is not valid for such conditionals, since even if φ holds in the actual world, the actual world need not be a normal world. This is different for counterfactual conditionals, where the actual world is always among the worlds most similar to itself. This difference means that intensional defeasible logics need a component that is absent in counterfactual logics, and which is similar to the selection of the `most normal' models in preferential entailment: in order to derive default conclusions from defeasible conditionals, the actual world is assumed to be as normal as possible given the premises. It is this assumption that makes the resulting conclusions defeasible: it validates modus ponens for those


defaults for which there is no evidence of exceptions.

Consistency and non-provability statements

Yet another approach is to somehow make possible the expression of consistency or non-provability statements. This is, for instance, the idea behind Reiter's [1980] default logic, which extends first-order logic with constructs that technically play the role of inference rules, but that express domain-specific generalisations instead of logical inference principles. In default logic, the Tweety default can be written as follows:

Bird(x) : Canfly(x) / Canfly(x)

The middle part of this `default' can be used to express consistency statements. Informally the default reads as `If it is provable that Tweety is a bird, and it is not provable that Tweety cannot fly, then we may infer that Tweety can fly'. To see how this works, assume that in addition to this default we have a first-order theory

W = {Bird(Tweety), ∀x(Penguin(x) → ¬Canfly(x))}

Then (informally) since Canfly(Tweety) is consistent with what is known, we can apply the default to Tweety and defeasibly derive Canfly(Tweety) from W. That this inference is indeed defeasible becomes apparent if Penguin(Tweety) is also added to W: then ¬Canfly(Tweety) is classically entailed by what is known and the consistency check for applying the default fails, for which reason Canfly(Tweety) cannot be derived from W ∪ {Penguin(Tweety)}.

This example seems straightforward but the formal definition of default-logical consequence is tricky: in this approach, what is provable is determined by what is not provable, so the problem is how to avoid a circular definition. In default logic (as in related logics) this is solved by giving the definition a fixed-point appearance; see below in Section 5.4. Similar equilibrium-like definitions for argumentation systems will be discussed throughout this chapter.

Inconsistency handling

It has also been proposed to formalise defeasible reasoning as strategies for dealing with inconsistent information, e.g. by Brewka [1989]. In this approach defaults are formalised with ordinary material implications and without normality conditions, and their defeasible nature is captured in how they are used by the consistency handling strategies. In particular, in case of inconsistency, alternative consistent subsets (subtheories) of the premises give rise to alternative default conclusions, after which a choice can be made for the subtheory containing the exceptional rule.


In our birds example this works out as follows:

(1) bird → canfly
(2) penguin → ¬canfly
(3) bird
(4) penguin

The set {(1), (3)} is a subtheory supporting the conclusion canfly, while {(2), (4)} is a subtheory supporting the opposite. The exceptional nature of (2) over (1) can be captured by preferring the latter subtheory.

Systems for defeasible argumentation

Argumentation systems are yet another way to formalise nonmonotonic reasoning, viz. as the construction and comparison of arguments for and against certain conclusions. In these systems the basic notion is not that of a defeasible conditional but that of a defeasible argument. The idea is that the construction of arguments is monotonic, i.e., an argument stays an argument if more premises are added. Nonmonotonicity, or defeasibility, is not explained in terms of the interpretation of a defeasible conditional, but in terms of the interactions between conflicting arguments: in argumentation systems nonmonotonicity arises from the fact that new premises may give rise to stronger counterarguments, which defeat the original argument. So in the case of Tweety we may construct one argument that Tweety flies because it is a bird, and another argument that Tweety does not fly because it is a penguin, and then we may prefer the latter argument because it is about a specific class of birds, and is therefore an exception to the general rule.

Argumentation systems can be combined with each of the above-discussed views on defaults. The `normality condition' view can be formalised by regarding an argument as a standard derivation from a set of premises augmented with normality statements. Thus a counterargument is an attack on such a normality statement. A variant of this method can be applied to the use of consistency and non-provability expressions. The `pragmatic' view on defaults (as in inconsistency handling) can be formalised by regarding an argument as a standard derivation from a consistent subset of the premises. Here a counterargument attacks a premise of an argument. Finally, the `semantic' view on defaults could be formalised by allowing the construction of arguments with inference rules (such as modus ponens) that are invalid in the semantics. In that case a counterargument attacks the use of such an inference rule.
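The interaction between conflicting arguments described above can be illustrated with a small program. This is a minimal sketch of our own, not any particular system discussed in this chapter; the function name, the argument names and the acceptance rule are assumptions for illustration. The rule is a fixpoint condition of the equilibrium-like kind mentioned above: an argument is justified when every argument that defeats it is in turn defeated by some justified argument.

```python
# Minimal illustrative sketch (hypothetical names; not a system from this
# chapter). An argument is justified iff each of its defeaters is itself
# defeated by some justified argument; we iterate this rule to a fixpoint.

def justified_arguments(arguments, defeats):
    """`arguments` is a set of names; `defeats` is a set of
    (attacker, target) pairs meaning the attacker defeats the target."""
    justified = set()
    while True:
        new = {a for a in arguments
               if all(any((c, b) in defeats for c in justified)
                      for b in arguments if (b, a) in defeats)}
        if new == justified:
            return justified
        justified = new

# The Tweety case: the penguin argument defeats the bird argument (it is
# based on a more specific class), so only the penguin argument survives.
args = {"flies-because-bird", "not-flies-because-penguin"}
defeats = {("not-flies-because-penguin", "flies-because-bird")}
print(justified_arguments(args, defeats))  # {'not-flies-because-penguin'}
```

Note how this captures the nonmonotonicity described above: before the penguin premise is available, the defeat relation is empty, the bird argument is unattacked and hence justified; adding the premise adds a defeater and retracts the conclusion, even though the original argument itself remains an argument.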
HENRY PRAKKEN & GERARD VREESWIJK

It is important to note, however, that argumentation systems have a wider scope than reasoning with defaults alone. Firstly, argumentation systems can be applied to any form of reasoning with contradictory information, whether or not the contradictions have to do with rules and exceptions. For instance, the contradictions may arise from reasoning with several sources of information, or they may be caused by disagreement about beliefs or about moral, ethical or political claims. Moreover, it is important that several argumentation systems allow the construction and attack of arguments that are traditionally called `ampliative', such as inductive, analogical and abductive arguments; these reasoning forms fall outside the scope of most other nonmonotonic logics. Most argumentation systems have been developed in artificial intelligence research on nonmonotonic reasoning, although Pollock's work, which was the first logical formalisation of defeasible argumentation, was initially applied to the philosophy of knowledge and justification (epistemology). The first artificial intelligence paper on argumentation systems was [Loui, 1987]. One domain in which argumentation systems have become popular is legal reasoning [Loui et al., 1993; Prakken, 1993; Sartor, 1994; Gordon, 1995; Loui & Norman, 1995; Prakken & Sartor, 1996; Freeman & Farley, 1996; Prakken & Sartor, 1997a; Prakken, 1997; Gordon & Karacapilidis, 1997]. This is not surprising, since legal reasoning often takes place in an adversarial context, where notions like argument, counterargument, rebuttal and defeat are very common. However, argumentation systems have also been applied to such domains as medical reasoning [Das et al., 1996], negotiation [Parsons et al., 1998] and risk assessment in oil exploration [Clark, 1990].

LOGICS FOR DEFEASIBLE ARGUMENTATION

2.2 Nonmonotonic reasoning: artificial intelligence or logic?

Usually, nonmonotonic logics are studied as a branch of artificial intelligence. However, it is more than justified to regard these logics as also part of philosophical logic. In fact, several issues in nonmonotonic logic have come up earlier in philosophy. For instance, in the context of moral reasoning, Ross [1930] studied the notion of prima facie obligations. According to Ross an act is prima facie obligatory if it has a characteristic that makes the act (by virtue of an underlying moral principle) tend to be a `duty proper'. Fulfilling a promise is a prima facie duty because it is the fulfillment of a promise, i.e., because of the moral principle that one should do what one has promised to do. But the act may also have other characteristics which make the act tend to be forbidden. For instance, if John has promised a friend to visit him for a cup of tea, and then John's mother suddenly falls ill, then he also has a prima facie duty to do his mother's shopping, based, say, on the principle that we ought to help our parents when they need it. To find out what one's duty proper is, one should `consider all things', i.e., compare all prima facie duties that can be based on any aspect of the factual circumstances and find which one is `more incumbent' than any conflicting one. If we qualify the all-things-considered clause as `consider all things that you know', then the reasoning involved is clearly nonmonotonic: if we are first only told that John has promised his friend to visit him, then we conclude that John's duty proper is to visit his friend. But if we next also hear that John's mother has become ill, we conclude instead that John's duty proper is to help his mother.

The term `defeasibility' was first introduced not in logic but in legal philosophy, viz. by Hart [1949] (see the historical discussion in [Loui, 1995]). Hart observed that legal concepts are defeasible in the sense that the conditions for when a fact situation classifies as an instance of a legal concept (such as `contract') are only ordinarily, or presumptively, sufficient. If a party in a lawsuit succeeds in proving these conditions, this does not have the effect that the case is settled; instead, legal procedure is such that the burden of proof shifts to the opponent, whose turn it then is to prove additional facts which, despite the facts proven by the proponent, nevertheless prevent the claim from being granted (for instance, insanity of one of the contracting parties). Hart's discussion of this phenomenon stays within legal-procedural terms, but it is obvious that it poses a challenge for standard logic: an explanation is needed of how proving new facts, without rejecting what was proven by the other party, can reverse the outcome of a case. Toulmin [1958], who criticised the logicians of his day for neglecting many features of ordinary reasoning, was aware of the implications of this phenomenon for logic. In his well-known pictorial scheme for arguments he leaves room for rebuttals of an argument. He also urges logicians to take the procedural aspect (in the legal sense) of argumentation seriously. In particular, Toulmin argues that (outside mathematics) an argument is valid if it can stand against criticism in a properly conducted dispute, and the task of logicians is to find criteria for when a dispute has been conducted properly. The notion of burden of proof, and its role in dialectical inquiry, has also been studied by Rescher [1977], in the context of epistemology.
Among other things, Rescher claims that a dialectical model of scientific reasoning can explain the rational force of inductive arguments: they must be accepted if they cannot be successfully challenged in a properly conducted scientific dispute. Rescher thereby assumes that the standards for constructing inductive arguments are somehow given by generally accepted practices of scientific reasoning; he only focuses on the dialectical interaction between conflicting inductive arguments. Another philosopher who has studied defeasible reasoning is John Pollock. Although his work, to be presented below, is also well-known in the field of artificial intelligence, it was initially a contribution to epistemology, with, like Rescher's, much attention to induction as a form of defeasible reasoning. As this overview shows, a logical study of nonmonotonic, or defeasible, reasoning fully deserves a place in philosophical logic. Let us now turn to the discussion of logics for defeasible argumentation.

3 SYSTEMS FOR DEFEASIBLE ARGUMENTATION: A CONCEPTUAL SKETCH

In this section we give a conceptual sketch of the general ideas behind logics for defeasible argumentation. These systems contain the following five elements (although sometimes implicitly): an underlying logical language, definitions of an argument, of conflicts between arguments and of defeat among arguments and, finally, a definition of the status of arguments, which can be used to define a notion of defeasible logical consequence. Argumentation systems are built around an underlying logical language and an associated notion of logical consequence, defining the notion of an argument. As noted above, the idea is that this consequence notion is monotonic: new premises cannot invalidate arguments as arguments but only give rise to counterarguments. Some argumentation systems assume a particular logic, while other systems leave the underlying logic partly or wholly unspecified; thus these systems can be instantiated with various alternative logics, which makes them frameworks rather than systems. The notion of an argument corresponds to a proof (or the existence of a proof) in the underlying logic. As for the layout of arguments, three basic formats can be distinguished in the literature on argumentation systems, all familiar from the logic literature. Sometimes arguments are defined as a tree of inferences grounded in the premises, and sometimes as a sequence of such inferences, i.e., as a deduction. Finally, some systems simply define an argument as a premises-conclusion pair, leaving implicit that the underlying logic validates a proof of the conclusion from the premises. One argumentation system, viz. that of Dung [1995], leaves the internal structure of an argument completely unspecified. Dung treats the notion of an argument as a primitive, and focuses exclusively on the ways arguments interact. Thus Dung's framework is of the most abstract kind.
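Dung's abstract view lends itself to a direct computational reading. The following Python sketch is our own illustration, not part of the chapter (the names `ArgumentationFramework` and `defeaters` are ours): a framework is nothing but a set of primitive arguments together with a binary defeat relation.

```python
# Our own minimal sketch of a Dung-style abstract argumentation framework:
# arguments are primitive labels; only the defeat relation between them
# is represented, exactly as in Dung's most abstract format.

class ArgumentationFramework:
    def __init__(self, arguments, defeats):
        # defeats: set of pairs (x, y), read as "argument x defeats argument y"
        self.arguments = set(arguments)
        self.defeats = set(defeats)

    def defeaters(self, a):
        """All arguments that defeat argument a."""
        return {x for (x, y) in self.defeats if y == a}

# The Tweety example used throughout the chapter: B defeats A, C defeats B.
af = ArgumentationFramework({"A", "B", "C"}, {("B", "A"), ("C", "B")})
print(af.defeaters("A"))  # {'B'}
```

All the later notions of the chapter (acceptability, the operator F, status assignments) can be phrased over this bare structure, which is what makes the abstract framework instantiable by many concrete logics.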

[Figure 4 shows two argument trees with premises p, q, r, s and conclusion t: on the left the argument is rebutted by an argument for ¬t; on the right it is undercut by an argument denying the inference rule, ¬⌈p, q, r, s/t⌉.]

Figure 4. Rebutting attack (left) vs. undercutting attack (right).

The notions of an underlying logic and an argument still fit with the standard picture of what a logical system is. The remaining three elements are what makes an argumentation system a framework for defeasible argumentation.


[Figure 5 shows two configurations: on the left an argument for p directly attacked by an argument for ¬p; on the right an argument deriving q from p, indirectly attacked via its subconclusion p by an argument for ¬p.]

Figure 5. Direct attack (left) vs. indirect attack (right).

The first is the notion of a conflict between arguments (the terms `attack' and `counterargument' are also used). In the literature, three types of conflict are discussed. The first type is when arguments have contradictory conclusions, as in `Tweety flies, because it is a bird' and `Tweety does not fly because it is a penguin' (cf. the left part of Figure 4). Clearly, this form of attack, which is often called rebutting an argument, is symmetric. The other two types of conflict are not symmetric. One is where one argument makes a non-provability assumption (as in default logic) and another argument proves what was assumed unprovable by the first. For example, an argument `Tweety flies because it is a bird, and it is not provable that Tweety is a penguin' is attacked by any argument with conclusion `Tweety is a penguin'. We shall call this assumption attack. The final type of conflict (first discussed by Pollock [1970]) is when one argument challenges, not a proposition, but a rule of inference of another argument (cf. the right part of Figure 4). After Pollock, this is usually called undercutting an inference. Obviously, a rule of inference can only be undercut if it is not deductive. Non-deductive rules of inference occur in argumentation systems that allow inductive, abductive or analogical arguments. To consider an example, the inductive argument `Raven 101 is black since the observed ravens raven1 ... raven100 were black' is undercut by an argument `I saw raven102, which was white'. In order to formalise this type of conflict, the rule of inference that is to be undercut (in Figure 4: the rule that is enclosed in the dotted box, in flat text written as p, q, r, s/t) must be expressed in the object language, as ⌈p, q, r, s/t⌉, and denied: ¬⌈p, q, r, s/t⌉.[3] Note that all these senses of attack have a direct and an indirect version; indirect attack is directed against a subconclusion or a substep of an argument, as illustrated by Figure 5 for indirect rebutting.
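The direct/indirect distinction can be made concrete with a small Python sketch of our own (the tree-structured `Arg` class and the `negate` helper are our assumptions, not the chapter's formalism), restricted here to rebutting attacks:

```python
# Our own sketch of direct vs. indirect rebutting. An argument is a tree:
# a conclusion supported by subarguments. Atoms are strings; "~" is negation.

class Arg:
    def __init__(self, conclusion, subargs=()):
        self.conclusion = conclusion
        self.subargs = list(subargs)

    def subconclusions(self):
        """The conclusion of the argument and of all its subarguments."""
        yield self.conclusion
        for s in self.subargs:
            yield from s.subconclusions()

def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def directly_rebuts(a, b):
    # a contradicts b's final conclusion
    return a.conclusion == negate(b.conclusion)

def indirectly_rebuts(a, b):
    # a contradicts some (sub)conclusion of b, including the final one
    return any(a.conclusion == negate(c) for c in b.subconclusions())

# Right part of Figure 5: ~p attacks an argument for q via its subconclusion p.
b = Arg("q", [Arg("p")])
attacker = Arg("~p")
print(directly_rebuts(attacker, b), indirectly_rebuts(attacker, b))  # False True
```

Assumption attack and undercutting would need richer argument structure (recorded non-provability assumptions and named inference rules, respectively), but the contrast is the same: an indirect attack targets a subconclusion or substep rather than the final conclusion.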
[Footnote 3: Ceiling brackets around a meta-level formula denote a conversion of that formula to the object language, provided that the object language is expressive enough to enable such a conversion.]

The notion of conflicting, or attacking, arguments does not embody any form of evaluation; evaluating conflicting pairs of arguments, or in other words, determining whether an attack is successful, is another element of argumentation systems. It has the form of a binary relation between arguments, standing for `attacking and not weaker' (in a weak form) or `attacking and stronger' (in a strong form). The terminology varies: some terms that have been used are `defeat' [Prakken & Sartor, 1997b], `attack' [Dung, 1995; Bondarenko et al., 1997] and `interference' [Loui, 1998]. Other systems do not explicitly name this notion but leave it implicit in the definitions. In this chapter we shall use `defeat' for the weak notion and `strict defeat' for the strong, asymmetric notion. Note that the several forms of attack, rebutting vs. assumption vs. undercutting and direct vs. indirect, have their counterparts for defeat. Argumentation systems vary in their grounds for the evaluation of arguments. In artificial intelligence the specificity principle, which prefers arguments based on the most specific defaults, is regarded by many as very important, but several researchers, e.g. Vreeswijk [1989], Pollock [1995] and Prakken & Sartor [1996], have argued that specificity is not a general principle of common-sense reasoning but just one of the many standards that might or might not be used. Moreover, some have claimed that general, domain-independent principles of defeat do not exist or are very weak, and that information from the semantics of the domain will be the most important way of deciding among competing arguments [Konolige, 1988; Vreeswijk, 1989]. For these reasons several argumentation systems are parametrised by user-provided criteria. Some, e.g. Prakken & Sartor, even argue that the evaluation criteria are debatable, just as the rest of the domain theory is, and that argumentation systems should therefore allow for defeasible arguments about these criteria. (Our example in the introduction contains such an argument, viz. A's use of a priority rule (10) based on the expected consequences of certain events.
This argument might, for instance, be attacked by an argument that in the case of important officials even a small likelihood that the disease affects the official's functioning justifies publication, or by an argument that the negative consequences of publication for the official are small.)

The notion of defeat is a binary relation on the set of arguments. It is important to note that this relation does not yet tell us with what arguments a dispute can be won; it only tells us something about the relative strength of two individual conflicting arguments. The ultimate status of an argument depends on the interaction between all available arguments: it may very well be that argument B defeats argument A, but that B is in turn defeated by a third argument C; in that case C `reinstates' A (see Figure 6).[4]

[Footnote 4: While in Figures 4 and 5 the arrows stood for attack relations, from now on they will depict defeat relations.]

[Figure 6 shows three arguments with defeat arrows: C defeats B, and B defeats A.]

Figure 6. Argument C reinstates argument A.

Suppose, for instance, that the argument A that Tweety flies because it is a bird is regarded as being defeated by the argument B that Tweety does not fly because it is a penguin (for instance, because conflicting arguments are compared with respect to specificity). And suppose that B is in turn defeated by an argument C, attacking B's intermediate conclusion that Tweety is a penguin. C might, for instance, say that the penguin observation was done with faulty instruments. In that case C reinstates argument A. Therefore, what is also needed is a definition of the status of arguments on the basis of all the ways in which they interact. Besides reinstatement, this definition must also capture the principle that an argument cannot be justified unless all its subarguments are justified (called the `compositionality principle' by Vreeswijk [1997]). There is a close relation between these two notions, since reinstatement often proceeds by indirect attack, i.e., attacking a subargument of the attacking argument (cf. Figure 5). It is this definition of the status of arguments that produces the output of an argumentation system: it typically divides arguments into at least two classes: arguments with which a dispute can be `won' and arguments with which a dispute should be `lost'. Sometimes a third, intermediate category is also distinguished, of arguments that leave the dispute undecided. The terminology varies here also: terms that have been used are justified vs. defensible vs. defeated (or overruled), defeated vs. undefeated, in force vs. not in force, preferred vs. not preferred, etcetera. Unless indicated otherwise, this chapter shall use the terms `justified', `defensible' and `overruled'. These notions can be defined both in a `declarative' and in a `procedural' form.
The declarative form, usually with fixed-point definitions, just declares certain sets of arguments as acceptable (given a set of premises and evaluation criteria), without defining a procedure for testing whether an argument is a member of this set; the procedural form amounts to defining just such a procedure. Thus the declarative form of an argumentation system can be regarded as its (argumentation-theoretic) semantics, and the procedural form as its proof theory. Note that it is quite possible that, while an argumentation system has an argumentation-theoretic semantics, at the same time its underlying logic for constructing arguments has a model-theoretic semantics in the usual sense, for instance, the semantics of standard first-order logic, or a possible-worlds semantics of some modal logic. In fact, this point is not universally accepted, and therefore we devote a separate subsection to it.


Semantics: model-theoretic or not?

A much-discussed issue is whether logics for nonmonotonic reasoning should have a model-theoretic semantics or not. In the early days of this field it was usual to criticise several systems (such as default logic) for the lack of a model-theoretic semantics. However, when such semantics were provided, this was not always felt to be a major step forward, unlike when, for instance, possible-worlds semantics for modal logic was introduced. In addition, several researchers argued that nonmonotonic reasoning needs a different kind of semantics than model theory, viz. an argumentation-theoretic semantics. This is not the place to settle the discussion. Instead we confine ourselves to presenting some main arguments for this view that have been put forward. Traditionally, model theory has been used in logic to define the meaning of logical languages. Formulas of such languages were regarded as telling us something about reality (however defined). Model-theoretic semantics defines the meaning of logical symbols by defining what the world looks like if an expression with these symbols is true, and it defines logical consequence, entailment, by looking at what else must be true if the premises are true. For defaults this means that their semantics should be in terms of what the world normally, or typically, looks like when defaults are true; logical consequence should, in this approach, be determined by looking at the most normal worlds, models or situations that satisfy the premises. However, others, e.g. Pollock [1991, p. 40], Vreeswijk [1993a, pp. 88-9] and Loui [1998], have argued that the meaning of defaults should not be found in a correspondence with reality, but in their role in dialectical inquiry. That a relation between premises and conclusion is defeasible means that a certain burden of proof is induced.
In this approach, the central notions of defeasible reasoning are notions like attack, rebuttal and defeat among arguments, and these notions are not `propositional', for which reason their meaning is not naturally captured in terms of a correspondence between a proposition and the world. This approach instead defines `argumentation-theoretic' semantics for such notions. The basic idea of such a semantics is to capture sets of arguments that are as large as possible and adequately defend themselves against attacks on their members. It should be noted that this approach does not deny the usefulness of model theory but only wants to define its proper place. Model theory should not be applied to things for which it is not suitable, but should be reserved for the initial components of an argumentation system, the notions of a logical language and a consequence relation defining what an argument is. It should also be noted, however, that some have proposed argumentation systems as proof theories for model-theoretic semantics of preferential entailment (in particular Geffner & Pearl [1992]). In our opinion, one criterion for success of such model-theoretic semantics of argumentation systems is whether natural criteria for model preference can be defined. For certain restricted cases this seems possible, but whether this approach is extendable to more general argumentation systems, for instance, those allowing inductive, analogical or abductive arguments, remains to be investigated.

4 GENERAL FEATURES OF ARGUMENT-BASED SEMANTICS

Let us now, before looking at some systems in detail, become more formal about some of the notions that these systems have in common. We shall focus in particular on the semantics of argumentation systems, i.e., on the conditions that sets of justified arguments should satisfy. In line with the discussion at the end of Section 3, we can say that argumentation systems are not concerned with the truth of propositions, but with the justification of accepting a proposition as true. In particular, one is justified in accepting a proposition as true if there is an argument for the proposition that one is justified in accepting. Let us concentrate on the task of defining the notion of a justified argument. Which properties should such a definition have? Let us assume as background a set of arguments, with a binary relation of `defeat' defined over it. Recall that we read `A defeats B' in the weak sense of `A conflicts with B and is not weaker than B'; so in some cases it may happen that A defeats B and B defeats A. For the moment we leave the internal structure of an argument unspecified, as well as the precise definition of defeat.[5] Then a simple definition of the status of an argument is the following.

DEFINITION 1. Arguments are either justified or not justified.

1. An argument is justified if all arguments defeating it (if any) are not justified.
2. An argument is not justified if it is defeated by an argument that is justified.

This definition works well in simple cases, in which it is clear which arguments should emerge victorious, as in the following example.

EXAMPLE 2. Consider three arguments A, B and C such that B defeats A and C defeats B:

[The diagram shows C defeating B, and B defeating A.]

[Footnote 5: This style of discussion is inspired by Dung [1995]; see further Subsection 5.1 below.]


A concrete version of this example is:

A = `Tweety flies because it is a bird'
B = `Tweety does not fly because it is a penguin'
C = `The observation that Tweety is a penguin is unreliable'

C is justified since it is not defeated by any other argument. This makes B not justified, since B is defeated by C. This in turn makes A justified: although A is defeated by B, A is reinstated by C, since C makes B not justified. In other cases, however, Definition 1 is circular or ambiguous. Especially when arguments of equal strength interfere with each other, it is not clear which argument should remain undefeated.

EXAMPLE 3. (Even cycle.) Consider the arguments A and B such that A defeats B and B defeats A.

[The diagram shows A and B defeating each other.]

A concrete example is:

A = `Nixon was a pacifist because he was a quaker'
B = `Nixon was not a pacifist because he was a republican'

Can we regard A as justified? Yes, we can, if B is not justified. Can we regard B as not justified? Yes, we can, if A is justified. So, if we regard A as justified and B as not justified, Definition 1 is satisfied. However, it is obvious that by a completely symmetrical line of reasoning we can also regard B as justified and A as not justified. So there are two possible `status assignments' to A and B that satisfy Definition 1: one in which A is justified at the expense of B, and one in which B is justified at the expense of A. Yet intuitively, we are not justified in accepting either of them. In the literature, two approaches to the solution of this problem can be found. The first approach consists of changing Definition 1 in such a way that there is always precisely one possible way to assign a status to arguments, and which is such that with `undecided conflicts' as in our example both of the conflicting arguments receive the status `not justified'. The second approach instead regards the existence of multiple status assignments not as a problem but as a feature: it allows for multiple assignments and defines an argument as `genuinely' justified if and only if it receives this status in all possible assignments. The following two subsections discuss the details of both approaches. First, however, another problem with Definition 1 must be explained, having to do with self-defeating arguments.

EXAMPLE 4. (Self-defeat.) Consider an argument L, such that L defeats L.

[The diagram shows argument L defeating itself.]

Figure 7. A self-defeating argument.

Suppose L is not justified. Then all arguments defeating L are not justified, so by clause 1 of Definition 1 L is justified. Contradiction. Suppose now L is justified. Then L is defeated by a justified argument, so by clause 2 of Definition 1 L is not justified. Contradiction. Thus, Definition 1 implies that there are no self-defeating arguments. Yet the notion of a self-defeating argument seems intuitively plausible, as is illustrated by the following example.

EXAMPLE 5. (The Liar.) An elementary self-defeating argument can be fabricated on the basis of the so-called paradox of the Liar. There are many versions of this paradox. The one we use here runs as follows. Dutch people can be divided into two classes: people who always tell the truth, and people who always lie. Hendrik is a Dutch monk, and of Dutch monks we know that they tend to be consistent truth-tellers. Therefore, it is reasonable to assume that Hendrik is a consistent truth-teller. However, Hendrik says he is a liar. Is Hendrik a truth-teller or a liar? The Liar paradox is a paradox because either answer leads to a contradiction.

1. Suppose that Hendrik tells the truth. Then what Hendrik says must be true. So, Hendrik is a liar. Contradiction.
2. Suppose that Hendrik lies. Then what Hendrik says must be false. So, Hendrik is not a liar. Because Dutch people are either consistent truth-tellers or consistent liars, it follows that Hendrik always tells the truth. Contradiction.

From this paradox, a self-defeating argument L can be made out of (1):


[The argument L is a chain of inferences:]

Dutch monks tend to be consistent truth-tellers; Hendrik is a Dutch monk
=> Hendrik is a consistent truth-teller
(together with: Hendrik says "I lie")
=> Hendrik lies
=> Hendrik is not a consistent truth-teller

If the argument for "Hendrik is not a consistent truth-teller" is as strong as its subargument for "Hendrik is a consistent truth-teller," then L defeats one of its own sub-arguments, and thus is a self-defeating argument. In conclusion, it seems that Definition 1 needs another revision, to leave room for the existence of self-defeating arguments. Below we shall not discuss this in general terms since, perhaps surprisingly, in the literature it is hard to find generally applicable solutions to this problem. Instead we shall discuss for each particular system how it deals with self-defeat.
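Definition 1's circularity can be observed mechanically. The following brute-force Python sketch (our own illustration, not from the chapter) enumerates all status assignments that satisfy the definition read as a biconditional: an argument is justified exactly when none of its defeaters is justified. Example 2 then yields a unique assignment, the even cycle of Example 3 yields two, and a self-defeating argument yields none.

```python
# Our own brute-force check of Definition 1: enumerate every way of labelling
# arguments justified/not justified and keep those satisfying both clauses,
# i.e. justified(a) <=> no defeater of a is justified.

from itertools import product

def assignments(arguments, defeats):
    """All status assignments satisfying Definition 1 (as a biconditional)."""
    arguments = sorted(arguments)
    defeaters = {a: [x for (x, y) in defeats if y == a] for a in arguments}
    result = []
    for bits in product([True, False], repeat=len(arguments)):
        justified = dict(zip(arguments, bits))
        if all(justified[a] == (not any(justified[d] for d in defeaters[a]))
               for a in arguments):
            result.append({a for a in arguments if justified[a]})
    return result

# Example 2 (reinstatement): a unique assignment, with A and C justified.
print([sorted(s) for s in assignments({"A", "B", "C"}, {("B", "A"), ("C", "B")})])  # [['A', 'C']]
# Example 3 (even cycle): two symmetric assignments.
print([sorted(s) for s in assignments({"A", "B"}, {("A", "B"), ("B", "A")})])  # [['A'], ['B']]
# Example 4 (self-defeat): no assignment satisfies the definition at all.
print([sorted(s) for s in assignments({"L"}, {("L", "L")})])  # []
```

The three outcomes correspond exactly to the three problems discussed above: well-behaved reinstatement, ambiguity, and the impossibility of self-defeating arguments under Definition 1.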

4.1 The unique-status-assignment approach

The idea to enforce unique status assignments basically comes in two variants. The first defines status assignments in terms of some fixed-point operator, and the second involves a recursive definition of a justified argument, by introducing the notion of a subargument of an argument. We first discuss the fixed-point approach.

Fixed-point definitions

This approach, followed by e.g. Pollock [1987; 1992], Simari & Loui [1992] and Prakken & Sartor [1997b], can best be explained with the notion of `reinstatement' (see above, Section 3). The key observation is that an argument that is defeated by another argument can only be justified if it is reinstated by a third argument, viz. by a justified argument that defeats its defeater. This idea is captured by Dung's [1995] notion of acceptability.

DEFINITION 6. An argument A is acceptable with respect to a set S of arguments iff each argument defeating A is defeated by an argument in S.

The arguments in S can be seen as the arguments capable of reinstating A in case A is defeated.


However, the notion of acceptability is not sufficient. Consider in Example 3 the set S = {A}. It is easy to see that A is acceptable with respect to S, since all arguments defeating A (viz. B) are defeated by an argument in S, viz. A itself. Clearly, we do not want an argument to be able to reinstate itself, and this is the reason why a fixed-point operator must be used. Consider the following operator from [Dung, 1995], which for each set of arguments returns the set of all arguments that are acceptable with respect to it.

DEFINITION 7. (Dung's [1995] grounded semantics.) Let Args be a set of arguments ordered by a binary relation of defeat,[6] and let S ⊆ Args. Then the operator F is defined as follows:

F(S) = {A ∈ Args | A is acceptable with respect to S}

[Footnote 6: As remarked above, Dung uses the term `attack' instead of `defeat'.]

Dung proves that the operator F has a least fixed point. (The basic idea is that if an argument is acceptable with respect to S, it is also acceptable with respect to any superset of S, so that F is monotonic.) Self-reinstatement can then be avoided by defining the set of justified arguments as that least fixed point. Note that in Example 3 the sets {A} and {B} are fixed points of F but not its least fixed point, which is the empty set. In general we have that if no argument is undefeated, then F(∅) = ∅. These observations allow the following definition of a justified argument.

DEFINITION 8. An argument is justified iff it is a member of the least fixed point of F.

It is possible to reformulate Definition 7 in various ways, which are either equivalent to, or approximations of, the least fixed point of F. To start with, Dung shows that it can be approximated from below, and when each argument has at most finitely many defeaters even be obtained, by iterative application of F to the empty set.

PROPOSITION 9. Consider the following sequence of arguments:

F^0 = ∅
F^{i+1} = {A ∈ Args | A is acceptable with respect to F^i}

The following observations hold [Dung, 1995]:

1. All arguments in ∪_{i=0}^∞ F^i are justified.
2. If each argument is defeated by at most a finite number of arguments, then an argument is justified iff it is in ∪_{i=0}^∞ F^i.


In the iterative construction, first all arguments that are not defeated by any argument are added, and at each further application of F all arguments that are reinstated by arguments already in the set are added. This is achieved through the notion of acceptability. To see this, suppose we apply F for the i-th time: then for any argument A, if all arguments that defeat A are themselves defeated by an argument in F^{i-1}, then A is in F^i. It is instructive to see how this works in Example 2. We have that

F^1 = F(∅) = {C}
F^2 = F(F^1) = {A, C}
F^3 = F(F^2) = F^2

Dung [1995] also shows that F is equivalent to double application of a simpler operator G, i.e. F = G ∘ G. The operator G returns for each set of arguments all arguments that are not defeated by any argument in that set.

DEFINITION 10. Let Args be a set of arguments ordered by a binary relation of defeat. Then the operator G is defined as follows:

G(S) = {A ∈ Args | A is not defeated by any argument in S}

The G operator is in turn very similar to the one used by Pollock [1987; 1992]. To see this, we reformulate G in Pollock's style, by considering the sequence obtained by iterative application of G to the empty set, and defining an argument A to be justified if and only if at some point (or "level") m in the sequence A remains in G^n for all n ≥ m.

DEFINITION 11. (Levels in justification.) All arguments are in at level 0. An argument is in at level n+1 iff it is not defeated by any argument in at level n. An argument is justified iff there is an m such that for every n ≥ m, the argument is in at level n.

As shown by Dung [1995], this definition stands to Definition 10 as the construction of Proposition 9 stands to Definition 7. Dung also remarks that Definition 11 is equivalent to Pollock's [1987; 1992] definition, but as we shall see below, this is not completely accurate. In Example 2, Definition 11 works out as follows:

level   in
0       A, B, C
1       C
2       A, C
3       A, C
...
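The fixed-point construction can be run directly. The following Python sketch (our own illustration; finite frameworks only) iterates F from the empty set as in Proposition 9 until the least fixed point, i.e. the set of justified arguments, is reached:

```python
# Our own sketch of Dung's grounded semantics: iterate the operator
# F(S) = {A | every defeater of A is defeated by some member of S}
# from the empty set. Since F is monotonic, on a finite framework the
# iteration stabilises at the least fixed point.

def grounded(arguments, defeats):
    """Least fixed point of F, computed by iteration (finite frameworks)."""
    defeaters = {a: {x for (x, y) in defeats if y == a} for a in arguments}

    def acceptable(a, s):
        # Definition 6: each defeater of a is itself defeated by a member of s
        return all(any((x, d) in defeats for x in s) for d in defeaters[a])

    s = set()
    while True:
        nxt = {a for a in arguments if acceptable(a, s)}  # one application of F
        if nxt == s:
            return s  # least fixed point reached
        s = nxt

# Example 2: F^1 = {C}, F^2 = {A, C} = F^3, so A and C are justified.
print(sorted(grounded({"A", "B", "C"}, {("B", "A"), ("C", "B")})))  # ['A', 'C']
# Example 3 (even cycle): the least fixed point is empty.
print(sorted(grounded({"A", "B"}, {("A", "B"), ("B", "A")})))  # []
```

On Example 3 the iteration stops immediately at the empty set, matching the observation that neither A nor B is justified under the unique-status-assignment approach.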

LOGICS FOR DEFEASIBLE ARGUMENTATION


C is in at all levels, while A becomes in at level 2 and stays in at all subsequent levels. And in Example 3 both A and B are in at all even levels and out at all odd levels.

level  in
0      A, B
1      -
2      A, B
3      -
4      A, B
...

The following example, with an infinite chain of defeat relations, gives another illustration of Definitions 7 and 11.

EXAMPLE 12. (Infinite defeat chain.) Consider an infinite chain of arguments A1, ..., An, ... such that A1 is defeated by A2, A2 is defeated by A3, and so on.

A1 <-- A2 <-- A3 <-- A4 <-- A5 <-- ...

The least fixed point of this chain is empty, since no argument is undefeated. Consequently, F(∅) = ∅. Note that this example has two other fixed points, which also satisfy Definition 1, viz. the set of all Ai where i is odd, and the set of all Ai where i is even.

Defensible arguments

A final peculiarity of the definitions is that they allow a distinction between two types of arguments that are not justified. Consider first again Example 2 and observe that, although B defeats A, A is still justified since it is reinstated by C. Consider next the following extension of Example 3.

EXAMPLE 13. (Zombie arguments.) Consider three arguments A, B and C such that A defeats B, B defeats A, and B defeats C.

A <--> B --> C

A concrete example is


A = `Dixon is no pacifist because he is a republican'
B = `Dixon is a pacifist because he is a quaker, and he has no gun because he is a pacifist'
C = `Dixon has a gun because he lives in Chicago'

According to Definitions 8 and 11, none of the three arguments is justified. For A and B this is because their relation is the same as in Example 3, and for C because it is defeated by B. Here a crucial distinction between the two examples becomes apparent: unlike in Example 2, B, although not justified, is not defeated by any justified argument, and therefore B retains the potential to prevent C from becoming justified: there is no justified argument that reinstates C by defeating B. Makinson & Schlechta [1991] call arguments like B `zombie arguments':7 B is not `alive' (i.e., not justified), but it is not fully dead either; it has an intermediate status, in which it can still influence the status of other arguments. Following Prakken & Sartor [1997b], we shall call this intermediate status `defensible'. In the unique-status-assignment approach it can be defined as follows.

DEFINITION 14. (Overruled and defensible arguments.)

An argument is overruled iff it is not justified, and defeated by a justified argument.

An argument is defensible iff it is not justified and not overruled.
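As an illustration of Definition 14 (our own sketch, with the same assumed `defeaters` encoding of defeat as a mapping from an argument to its defeaters), one can first compute the justified arguments as the least fixed point of F and then label every remaining argument:

```python
def F(args, defeaters, S):
    # A is acceptable w.r.t. S iff every defeater of A is itself
    # defeated by some member of S
    return {A for A in args
            if all(any(B in defeaters[D] for B in S)
                   for D in defeaters[A])}

def statuses(args, defeaters):
    justified = set()
    while F(args, defeaters, justified) != justified:
        justified = F(args, defeaters, justified)
    result = {}
    for A in args:
        if A in justified:
            result[A] = "justified"
        elif defeaters[A] & justified:   # defeated by a justified argument
            result[A] = "overruled"
        else:
            result[A] = "defensible"
    return result

# Example 13 (zombie arguments): A <--> B, B --> C
defeaters = {"A": {"B"}, "B": {"A"}, "C": {"B"}}
print(statuses({"A", "B", "C"}, defeaters))
```

On Example 13 the fixed point is empty, so A, B and C all come out defensible; no argument is overruled, because no argument is defeated by a justified one.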

Self-defeating arguments

Finally, we must come back to the problem of self-defeating arguments. How does Definition 7 deal with them? Consider the following extension of Example 4.

EXAMPLE 15. Consider two arguments A and B such that A defeats A and A defeats B.

A --> A,  A --> B

Intuitively, we want B to be justified, since the only argument defeating it is self-defeating. However, we have that F(∅) = ∅, so neither A nor B is justified. Moreover, both are defensible, since neither is defeated by any justified argument. How can Definitions 7 and 11 be modified to obtain the intuitive result that A is overruled and B is justified? Here is where Pollock's deviation from the latter definition becomes relevant. His version is as follows.

7 Actually, they talk about `zombie paths', since their article is about inheritance systems.


DEFINITION 16. (Pollock, [1992])

An argument is in at level 0 iff it is not self-defeating.

An argument is in at level n + 1 iff it is in at level 0 and it is not defeated by any argument in at level n.

An argument is justified iff there is an m such that for every n ≥ m, the argument is in at level n.
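A minimal sketch of this definition (our own code, not Pollock's; self-defeat is encoded as an argument defeating itself):

```python
def pollock_level(args, defeaters, n):
    # level 0 contains exactly the non-self-defeating arguments
    level0 = {A for A in args if A not in defeaters[A]}
    if n == 0:
        return level0
    prev = pollock_level(args, defeaters, n - 1)
    # in at level n+1 iff in at level 0 and not defeated from level n
    return {A for A in level0 if not (defeaters[A] & prev)}

# Example 15: A defeats itself and defeats B
defeaters = {"A": {"A"}, "B": {"A"}}
for n in range(4):
    print(n, sorted(pollock_level({"A", "B"}, defeaters, n)))  # always ['B']
```

On Example 15 this yields the intended outcome: A is out at every level and cannot keep B out, so B is in at every level and hence justified.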

The additions `iff it is not self-defeating' in the first condition and `iff it is in at level 0' in the second make the difference: they render all self-defeating arguments out at every level, and incapable of preventing other arguments from being in. Another solution is provided by Prakken & Sartor [1997b] and Vreeswijk [1997], who distinguish a special `empty' argument, which is not defeated by any other argument and which by definition defeats any self-defeating argument. Other solutions are possible, but we shall not pursue them here.

Recursive definitions

Sometimes a second approach to the enforcement of unique status assignments is employed, e.g. by Prakken [1993] and Nute [1994]. The idea is to make explicit that arguments are usually constructed step-by-step, proceeding from intermediate to final conclusions (as in Example 13, where B has an intermediate conclusion `Dixon is a pacifist' and a final conclusion `Dixon has no gun'). This approach results in an explicitly recursive definition of justified arguments, reflecting the basic intuition that an argument cannot be justified unless all its subarguments are justified. At first sight, this recursive style is very natural, particularly for implementing the definition in a computer program. However, the approach is not as straightforward as it seems, as the following discussion aims to show. To formalise the recursive approach, we must make a first assumption on the structure of arguments, viz. that they have subarguments (which are `proper' iff they are not identical to the entire argument). Justified arguments are then defined as follows. (We already add how self-defeating arguments can be dealt with, so that our discussion can be confined to the issue of avoiding multiple status assignments. Note that the explicit notion of a subargument makes it possible to regard an argument as self-defeating if it defeats one of its subarguments, as in Example 5.)

DEFINITION 17. (Recursively justified arguments.) An argument A is justified iff

1. A is not self-defeating; and

2. All proper subarguments of A are justified; and


3. All arguments defeating A are self-defeating, or have at least one proper subargument that is not justified.

How does this definition avoid multiple status assignments in Example 3? The `trick' is that for an argument to be justified, clause (3) requires that it have no (non-self-defeating) defeaters of which all proper subarguments are justified. This is different in Definition 1, which leaves room for such defeaters, and instead requires that these themselves are not justified; thus that definition implies in Example 3 that A is justified if and only if B is not justified, inducing two status assignments. With Definition 17, on the other hand, A is prevented from being justified by the existence of a (non-self-defeating) defeater with justified subarguments, viz. B (and likewise for B). The reader might wonder whether this solution is not too drastic, since it would seem to give up the property of reinstatement. For instance, when applied to Example 2, Definition 17 says that argument A is not justified, since it is defeated by B, which is not self-defeating. That B is in turn defeated by C is irrelevant, even though C is justified. However, here it is important that Definition 17 allows us to distinguish between two kinds of reinstatement. Intuitively, the reason why C defeats B in Example 2 is that it defeats B's proper subargument that Tweety is a penguin. And if the subarguments in the example are made explicit as follows, Definition 17 yields the intuitive result. (As for notation, for any pair of arguments X and X⁻, the latter is a proper subargument of the first.)

EXAMPLE 18. Consider four arguments A, B⁻, B and C such that B defeats A and C defeats B⁻.

B --> A,  C --> B⁻

According to Definition 17, A and C are justified, and B⁻ and B are not justified. Note that B is not justified by Clause 2. So C reinstates A not by directly defeating B but by defeating B's subargument B⁻. The crucial difference between Examples 2 and 3 is that in the latter example the defeat relation is of a different kind, in that A and B are in conflict on their final conclusions (respectively, that Nixon is, or is not, a pacifist). The only way to reinstate, say, the argument A that Nixon was a


pacifist is by finding a defeater of B's proper subargument that Nixon was a republican (while making the subargument relations explicit). So the only case in which Definition 17 does not capture reinstatement is when all relevant defeat relations concern the final conclusions of the arguments involved. This might even be regarded as a virtue of the definition, as is illustrated by the following modification of Example 2 (taken from [Nute, 1994]).

EXAMPLE 19. Consider three arguments A, B and C such that B defeats A and C defeats B. Read the arguments as follows.

A = `Tweety flies because it is a bird'
B = `Tweety does not fly because it is a penguin'
C = `Tweety might fly because it is a genetically altered penguin'

Note that, unlike in Example 2, these three arguments are in conflict on the same issue, viz. whether Tweety can fly. According to Definitions 7 and 11 both A and C are justified; in particular, A is justified since it is reinstated by C. However, according to Definition 17 only C is justified, since A has a non-self-defeating defeater, viz. B. The latter outcome might be regarded as the intuitively correct one, since we still accept that Tweety is a penguin, which blocks the `birds fly' default, and C allows us at most to conclude that Tweety might fly. So does this example show that Definitions 7 and 11 must be modified? We think not, since it is possible to represent the arguments in such a way that these definitions give the intuitive outcome. However, this solution requires a particular logical language, for which reason its discussion must be postponed (see Section 5.2, p. 269). Nevertheless, we can at least conclude that while the indirect form of reinstatement (by defeating a subargument) clearly seems a basic principle of argumentation, Example 19 shows that with direct reinstatement this is not so clear. Unfortunately, Definition 17 is not yet fully adequate, as can be shown with the following extension of Example 3.
It is a version of Example 13 with the subarguments made explicit.

EXAMPLE 20. (Zombie arguments 2.) Consider the arguments A⁻, A, B and C such that A⁻ and B defeat each other and A defeats C.

B <--> A⁻,  A --> C


A concrete example is

A⁻ = `Dixon is a pacifist because he is a quaker'
B = `Dixon is no pacifist because he is a republican'
A = `Dixon has no gun because he is a pacifist'
C = `Dixon has a gun because he lives in Chicago'

According to Definition 17, C is justified since its only defeater, A, has a proper subargument that is not justified, viz. A⁻. Yet, as we explained above with Example 13, intuitively A should retain its capacity to prevent C from being justified, since the defeater of its subargument is not justified. There is an obvious way to repair Definition 17: it must be made explicitly `three-valued' by changing the phrase `not justified' in Clause 3 into `overruled',8 where the latter term is defined as follows.

DEFINITION 21. (Defensible and overruled arguments 2.)

An argument is overruled iff it is not justified and either it is self-defeating, or it or one of its proper subarguments is defeated by a justified argument.

An argument is defensible iff it is not justified and not overruled.

This results in the following definition of justified arguments.

DEFINITION 22. (Recursively justified arguments, revised.) An argument A is justified iff

1. A is not self-defeating; and

2. All proper subarguments of A are justified; and

3. All arguments defeating A are self-defeating, or have at least one proper subargument that is overruled.

In Example 20 this has the following result. Note first that none of the arguments is self-defeating. Then to determine whether C is justified, we must determine the status of A. A defeats C, so C is only justified if A is overruled. Since A is not defeated, A can only be overruled if its proper subargument A⁻ is overruled. No proper subargument of A⁻ is defeated, but A⁻ is defeated by B. So if B is justified, A is overruled. Is B justified? No, since it is defeated by A⁻, and A⁻ is not self-defeating and has no overruled proper subarguments. But then A is not overruled, which means that C is not justified. In fact, all arguments in the example are defensible, as can easily be verified.

8 Makinson & Schlechta [1991] criticise this possibility and recommend the approach with multiple status assignments.
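For illustration, here is our own toy encoding of Definitions 21 and 22 on Example 20 (all names are ours; the string "A-" stands for A⁻, and only one level of proper subarguments is modelled):

```python
# proper subarguments and direct defeaters of Example 20
subargs = {"A": {"A-"}, "A-": set(), "B": set(), "C": set()}
defeaters = {"A-": {"B"}, "B": {"A-"}, "A": set(), "C": {"A"}}

def self_defeating(X):
    return X in defeaters[X]

def justified(X):       # Definition 22
    return (not self_defeating(X)
            and all(justified(S) for S in subargs[X])
            and all(self_defeating(D) or any(overruled(S) for S in subargs[D])
                    for D in defeaters[X]))

def overruled(X):       # Definition 21, one level of subarguments
    targets = defeaters[X] | {D for S in subargs[X] for D in defeaters[S]}
    return (not justified(X)
            and (self_defeating(X) or any(justified(D) for D in targets)))

for X in ["A-", "A", "B", "C"]:
    print(X, "justified" if justified(X)
          else "overruled" if overruled(X) else "defensible")
# all four arguments come out defensible
```

This reproduces the outcome worked out above: no argument is justified or overruled, so all four are defensible.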


Comparing fixed-point and recursive definitions

Comparing the fixed-point and recursive definitions, we have seen that in the main example where their outcomes differ (Example 19), the intuitions seem to favour the outcome of the recursive definitions (but see below, p. 269). We have also seen that the recursive definition, if made `three-valued', can deal with zombie arguments just as well as the fixed-point definitions. So must we favour the recursive form? The answer is negative, since it also has a problem: Definitions 17 and 22 do not always enforce a unique status assignment. Consider the following example.

EXAMPLE 23. (Crossover defeat.)9 Consider four arguments A⁻, A, B⁻, B such that A defeats B⁻ while B defeats A⁻.

A --> B⁻,  B --> A⁻

Definition 17 allows for two status assignments, viz. one in which only A⁻ and A are justified, and one in which only B⁻ and B are justified. In addition, Definition 22 also allows for the status assignment which makes all arguments defensible. Clearly, the latter status assignment is the intuitively intended one. However, without fixed-point constructions it seems hard to enforce it as the unique one. Note, finally, that in our discussion of the non-recursive approach we implicitly assumed that when a proper subargument of an argument is defeated, the argument itself is thereby also defeated (see e.g. Example 2). In fact, any particular argumentation system that has no explicitly recursive definition of justified arguments should satisfy this assumption. By contrast, systems that have a recursive definition can leave defeat of an argument independent from defeat of its proper subarguments. Furthermore, if a system has no recursive definition of justified arguments, but still distinguishes arguments and subarguments for other reasons (as e.g. [Simari & Loui, 1992] and [Prakken & Sartor, 1997b]), then a proof is required that Clause 2 of Definition 17 holds. Further illustration of this point must be postponed to the discussion of concrete systems in Section 5.

9 The name `crossover' is taken from Hunter [1993].


Unique status assignments: evaluation

Evaluating the unique-status-assignment approach, we have seen that it can be formalised in an elegant way if fixed-point definitions are used, while the perhaps more natural attempt with a recursive definition has some problems. However, regardless of its precise formalisation, this approach has inherent problems with certain types of examples, such as the following.

EXAMPLE 24. (Floating arguments.) Consider the arguments A, B, C and D such that A defeats B, B defeats A, A defeats C, B defeats C and C defeats D.

A <--> B,  A --> C,  B --> C,  C --> D

Since no argument is undefeated, Definition 8 tells us that all of them are defensible. However, it might be argued that for C and D this should be otherwise: since C is defeated by both A and B, C should be overruled. The reason is that, as far as the status of C is concerned, there is no need to resolve the conflict between A and B: the status of C `floats' on that of A and B. And if C should be overruled, then D should be justified, since C is its only defeater. A variant of this example is the following piece of default reasoning. To analyse this example, we must again make an assumption on the structure of arguments, viz. that they have a conclusion.

EXAMPLE 25. (Floating conclusions.)10 Consider the arguments A⁻, A, B⁻ and B such that A⁻ and B⁻ defeat each other and A and B have the same conclusion.

A⁻ <--> B⁻

An intuitive reading is

10 The term `floating conclusions' was coined by Makinson & Schlechta [1991].


A⁻ = Brygt Rykkje is Dutch because he was born in Holland
B⁻ = Brygt Rykkje is Norwegian because he has a Norwegian name
A = Brygt Rykkje likes ice skating because he is Dutch
B = Brygt Rykkje likes ice skating because he is Norwegian

The point is that whichever way the conflict between A⁻ and B⁻ is decided, we always end up with an argument for the conclusion that Brygt Rykkje likes ice skating, so it seems justified to accept this conclusion as true, even though it is not supported by a justified argument. In other words, the status of this conclusion floats on the status of the arguments A and B. While the unique-assignment approach is inherently unable to capture

floating arguments and conclusions, there is a way to capture them, viz. by working with multiple status assignments. To this approach we now turn.

4.2 The multiple-status-assignments approach

A second way to deal with competing arguments of equal strength is to let them induce two alternative status assignments, in both of which one is justified at the expense of the other. Note that both these assignments will satisfy Definition 1. In this approach, an argument is `genuinely' justified iff it receives this status in all status assignments. To prevent terminological confusion, we now slightly reformulate the notion of a status assignment.

DEFINITION 26. A status assignment to a set X of arguments ordered by a binary defeat relation is an assignment to each argument of either the status `in' or the status `out' (but not both), satisfying the following conditions:

1. An argument is in if all arguments defeating it (if any) are out.

2. An argument is out if it is defeated by an argument that is in.

Note that conditions (1) and (2) are just the conditions of Definition 1. In Example 3 there are precisely two possible status assignments:

A in, B out        and        B in, A out

Recall that an argumentation system is supposed to define when it is justified to accept an argument. What can we say in the case of A and B? Since both of them are `in' in one status assignment but `out' in the other, we must conclude that neither of them is justified. This is captured by redefining the notion of a justified argument as follows:


DEFINITION 27. Given a set X of arguments and a relation of defeat on X, an argument is justified iff it is `in' in all status assignments to X.

However, this is not all; just as in the unique-status-assignment approach, it is possible to distinguish between two different categories of arguments that are not justified. Some of those arguments are in no extension, but others are at least in some extensions. The first category can be called the overruled arguments, and the latter category the defensible arguments.

DEFINITION 28. Given a set X of arguments and a relation of defeat on X:

An argument is overruled iff it is `out' in all status assignments to X;

An argument is defensible iff it is `in' in some and `out' in some status assignments to X.

It is easy to see that the unique-assignment and multiple-assignments approaches are not equivalent. Consider again Example 24. Arguments A and B form an even loop; thus, according to the multiple-assignments approach, either A or B can be assigned `in', but not both. So the above defeat relation induces two status assignments:

A, D in; B, C out        and        B, D in; A, C out

While in the unique-assignment approach all arguments are defensible, we now have that D is justified and C is overruled. Multiple status assignments also make it possible to capture floating conclusions. This can be done by defining the status of formulas as follows.

DEFINITION 29. (The status of conclusions.)

φ is a justified conclusion iff every status assignment assigns `in' to an argument with conclusion φ;

φ is a defensible conclusion iff φ is not justified, and a conclusion of a defensible argument.

φ is an overruled conclusion iff φ is not justified or defensible, and a conclusion of an overruled argument.


Changing the first clause into `φ is a justified conclusion iff φ is the conclusion of a justified argument' would express a stronger notion, not recognising

floating conclusions as justified. There is reason to distinguish several variants of the multiple-status-assignments approach. Consider the following example, with an `odd loop' of defeat relations.

EXAMPLE 30. (Odd loop.) Let A, B and C be three arguments, represented in a triangle, such that A defeats C, B defeats A, and C defeats B.

A --> C,  B --> A,  C --> B

In this situation, Definition 27 has some problems, since this example has no status assignments.

1. Assume that A is `in'. Then, since A defeats C, C is `out'. Since C is `out', B is `in'; but then, since B defeats A, A is `out'. Contradiction.

2. Assume next that A is `out'. Then, since A is the only defeater of C, C is `in'. Then, since C defeats B, B is `out'. But then, since B is the only defeater of A, A is `in'. Contradiction.

Note that a self-defeating argument is a special case of Example 30, viz. the case where B and C are identical to A. This means that sets of arguments containing a self-defeating argument might have no status assignment. To deal with the problem of odd defeat cycles, several alternatives to Definition 26 have been studied in the literature. They will be discussed in Section 5, in particular in Sections 5.1 and 5.2.
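For illustration (our own brute-force code, not the chapter's): since conditions (1) and (2) of Definition 26 jointly say that an argument is `in' exactly when none of its defeaters is, the status assignments of a finite framework can be found by enumerating candidate `in' sets.

```python
from itertools import combinations

def status_assignments(args, defeaters):
    # Definition 26, conditions (1) and (2) combined:
    # A is 'in' iff no defeater of A is 'in'
    args = sorted(args)
    found = []
    for r in range(len(args) + 1):
        for combo in combinations(args, r):
            inset = set(combo)
            if all((A in inset) == inset.isdisjoint(defeaters[A])
                   for A in args):
                found.append(inset)
    return found

# Example 24 (floating arguments): A <--> B, both defeat C, C defeats D
defs24 = {"A": {"B"}, "B": {"A"}, "C": {"A", "B"}, "D": {"C"}}
print([sorted(s) for s in status_assignments({"A", "B", "C", "D"}, defs24)])
# [['A', 'D'], ['B', 'D']]: D is 'in' in both (justified), C in neither (overruled)

# Example 30 (odd loop): A --> C, B --> A, C --> B
defs30 = {"C": {"A"}, "A": {"B"}, "B": {"C"}}
print(status_assignments({"A", "B", "C"}, defs30))  # []: no assignment exists
```

The empty result for the odd loop mechanically confirms the contradiction derived in steps 1 and 2 above.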

4.3 Comparing the two approaches

How do the unique- and multiple-assignment approaches compare to each other? It is sometimes said that their difference reflects a difference between a `sceptical' and a `credulous' attitude towards drawing defeasible conclusions: when faced with an unresolvable conflict between two arguments, a sceptic would refrain from drawing any conclusion, while a credulous reasoner would choose one conclusion at random (or both alternately) and further explore its consequences. The sceptical approach is often defended by saying that since in an unresolvable conflict no argument is stronger than the other, neither of them can be accepted as justified, while the credulous approach


has sometimes been defended by saying that practical circumstances often require a person to act, whether or not s/he has conclusive reasons to decide which act to perform. In our opinion this interpretation of the two approaches is incorrect. When deciding what to accept as a justified belief, what is important is not whether one or more possible status assignments are considered, but how the arguments are evaluated given these assignments. And this evaluation is captured by the qualifications `justified' and `defensible', which thus capture the distinction between `sceptical' and `credulous' reasoning. And since, as we have seen, the distinction between justified and defensible arguments can be made in both the unique-assignment and the multiple-assignments approach, these approaches are independent of the distinction between `sceptical' and `credulous' reasoning. Although both approaches can capture the notion of a defensible argument, they do so with one important difference. The multiple-assignments approach is more convenient for identifying sets of arguments that are compatible with each other. The reason is that while with unique assignments the defensible arguments are defensible on an individual basis, with multiple assignments they are defensible because they belong to a set of arguments that are `in' and thus can be defended simultaneously. Even if two defensible arguments do not defeat each other, they might be incompatible in the sense that no status assignment makes them both `in', as in the following example.

EXAMPLE 31. A and B defeat each other, B defeats C, C defeats D.

A <--> B --> C --> D

This example has two status assignments, viz. {A, C} and {B, D}. Accordingly, all four arguments are defensible. Note that, although A and D do not defeat each other, A is in iff D is out. So A and D are in some sense incompatible. In the unique-assignment approach this notion of incompatibility seems harder to capture. As we have seen, the unique-assignment approach has no inherent difficulty in recognising `zombie arguments'; this problem only occurs if this approach uses a recursive two-valued definition of the status of arguments. As for their outcomes, the approaches mainly differ in their treatment of

floating arguments and conclusions. With respect to these examples, the question easily arises whether one approach is the right one. However, we prefer a different attitude: instead of speaking about the `right' or `wrong' definition, we prefer to speak of `senses' in which an argument or conclusion


can be justified. For instance, the sense in which the conclusion that Brygt Rykkje likes ice skating in Example 25 is justified is different from the sense in which, for instance, the conclusion that Tweety flies in Example 2 is justified: only in the second case is the conclusion supported by a justified argument. And the status of D in Example 24 is not quite the same as the status of, for instance, A in Example 2. Although both arguments need the help of other arguments to be justified, the argument helping A is itself justified, while the arguments helping D are merely defensible. In the concluding section we come back to this point, and generalise it to other differences between the various systems.

4.4 General properties of consequence notions

We conclude this section with a much-discussed issue, viz. whether any nonmonotonic consequence notion, although lacking the property of monotonicity, should still satisfy other criteria. Many argue that this is the case, and much research has been devoted to formulating such criteria and designing systems that satisfy them; see e.g. [Gabbay, 1985; Makinson, 1989; Kraus et al., 1990]. We, however, do not follow this approach, since we think that it is hard to find any criterion that should really hold for any argumentation system, or nonmonotonic consequence notion, for that matter. We shall illustrate this with the condition that is perhaps most often defended, called cumulativity. In terms of argumentation systems this principle says that if a formula φ is justified on the basis of a set of premises T, then any formula ψ is justified on the basis of T if and only if ψ is also justified on the basis of T ∪ {φ}. We shall in particular give counterexamples to the `if' part of the biconditional, which is often called cautious monotony. This condition in fact says that adding justified conclusions to the premises cannot make other justified conclusions unjustified. At first sight, this principle would seem uncontroversial. However, we shall now (quasi-formally) discuss reasonably behaving argumentation systems, with plausible criteria for defeat, and show by example that they do not satisfy cautious monotony and are therefore not cumulative. These examples illustrate two points. First, they illustrate Makinson & Schlechta's [1991] remark that systems that do not satisfy cumulativity assign facts a special status. Second, since the examples are quite natural, they illustrate that argumentation systems should assign facts a special status and therefore should not be cumulative. Below, the → symbols stand for unspecified reasoning steps in an argument, and the formulas stand for the conclusions drawn in such steps.

EXAMPLE 32.
Consider two (schematic) arguments

A: p → q → r → ¬q → s
B: → ¬s


Suppose we have a system in which self-defeating arguments have no capacity to prevent other arguments from being justified. Assume also that A is self-defeating, since a subconclusion, ¬q, is based on a subargument for the conclusion q. Assume, finally, that the system makes A's subargument for r justified (since it has no non-self-defeating counterarguments). Then B is justified. However, if r is now added to the `facts', the following argument can be constructed:

A': r → ¬q → s

This argument is not self-defeating, and therefore it might have the capacity to prevent B from being justified.

EXAMPLE 33. Consider next the following arguments.

A is a two-step argument p → q → r
B is a three-step argument s → t → u → ¬r

And assume that conflicting arguments are compared on their length (the shorter, the better). Then A strictly defeats B, so A is justified. Assume, however, also that B's subargument s → t → u is justified, since it has no counterarguments, and assume that u is added to the facts. Then we have a new argument for ¬r, viz.

B': u → ¬r

which is shorter than A and therefore strictly defeats A. Yet another type of example uses numerical assessments of arguments.

EXAMPLE 34. Consider the arguments

A: p → q → r
B: s → ¬r

Assume that in A the strength of the derivation of q from p is 0.7 and that of the derivation of r from q is 0.85, while in B the strength of the derivation of ¬r from s is 0.8. Consider now an argumentation system where arguments are compared with respect to their weakest links. Then B strictly defeats A, since B's weakest link is 0.8 while A's weakest link is 0.7. However, assume once more that A's subargument for q is justified because it has no counterargument, and then assume that q is added as a fact. Then a new argument A': q → r can be constructed, with weakest link 0.85, so that it strictly defeats B.
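The arithmetic of Example 34 can be checked directly; this tiny fragment is our own illustration of weakest-link comparison, with arguments encoded simply as lists of link strengths.

```python
# arguments as lists of link strengths (the numbers of Example 34)
A = [0.7, 0.85]     # p -> q (0.7), then q -> r (0.85)
B = [0.8]           # s -> not-r (0.8)

assert min(B) > min(A)        # B strictly defeats A: 0.8 > 0.7

# once q is accepted as a fact, the argument for r can start from q
A_prime = [0.85]              # q -> r
assert min(A_prime) > min(B)  # now A' strictly defeats B: 0.85 > 0.8
```

Adding the justified conclusion q to the premises thus flips the defeat relation, which is exactly the failure of cautious monotony discussed here.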
The point of these examples is that reasonable argumentation systems with plausible criteria for defeat are conceivable which do not satisfy cumulativity, so that cumulativity cannot be required as a minimum requirement for justified belief. Vreeswijk [1993a, pp. 82-8] has shown that other


properties of nonmonotonic consequence relations also turn out to be counterintuitive in a number of realistic logical scenarios.

5 SOME ARGUMENTATION SYSTEMS

Let us, after our general discussions, now turn to individual argumentation systems and frameworks. We shall present them according to the conceptual sketch of Section 3, and also evaluate them in the light of Section 4.

5.1 The abstract approach of Bondarenko, Dung, Kowalski and Toni

Introductory remarks

We first discuss an abstract approach to nonmonotonic logic developed in several articles by Bondarenko, Dung, Toni and Kowalski (below called the `BDKT approach'). Historically, this work came after the development by others of a number of argumentation systems (to be discussed below). The major innovation of the BDKT approach is that it provides a framework and vocabulary for investigating the general features of these other systems, and also of nonmonotonic logics that are not argument-based. The latest and most comprehensive account of the BDKT approach is Bondarenko et al. [1997]. In this account, the basic notion is that of a set of "assumptions". In their approach the premises come in two kinds: `ordinary' premises, comprising a theory, and assumptions, which are formulas (of whatever form) that are designated (on whatever ground) as having default status. Inspired by Poole [1988], Bondarenko et al. [1997] regard nonmonotonic reasoning as adding sets of assumptions to theories formulated in an underlying monotonic logic, provided that the contrary of the assumptions cannot be shown. What in their view makes the theory argumentation-theoretic is that this provision is formalised in terms of sets of assumptions attacking each other. In other words, according to Bondarenko et al. [1997] an argument is a set of assumptions. This approach has proven especially successful in capturing existing nonmonotonic logics. Another version of the BDKT approach, presented by Dung [1995], completely abstracts from both the internal structure of an argument and the origin of the set of arguments; all that is assumed is the existence of a set of arguments, ordered by a binary relation of `defeat'.11 This more abstract point of view seems more in line with the aims of this chapter, and therefore we shall below mainly discuss Dung's version of the BDKT approach. As remarked above, it inspired much of our discussion in Section 4. The

11 BDKT use the term `attack', but to maintain uniformity we shall use `defeat'.

256

HENRY PRAKKEN & GERARD VREESWIJK

assumption-based version of Bondarenko et al. [1997] will be brie y outlined at the end of this subsection. Basic notions

As just remarked, Dung's [1995] primitive notion is a set of arguments ordered by a binary relation of defeat. Dung then defines various notions of so-called argument extensions, which are intended to capture various types of defeasible consequence. These notions are declarative, just declaring sets of arguments as having a certain status. Finally, Dung shows that many existing nonmonotonic logics can be reformulated as instances of the abstract framework. Dung's basic formal notions are as follows.

DEFINITION 35. An argumentation framework (AF) is a pair (Args, defeat), where Args is a set of arguments, and defeat a binary relation on Args.

An AF is finitary iff each argument in Args is defeated by at most a finite number of arguments in Args.

A set of arguments is conflict-free iff no argument in the set is defeated by an argument in the set.

One might think of the set Args as all arguments that can be constructed in a given logic from a given set of premises (although this is not always the case; see the discussions below of `partial computation'). Unless stated otherwise, we shall below implicitly assume an arbitrary but fixed AF. Dung interprets defeat, like us, in the weak sense of `conflicting and not being weaker'. Thus in Dung's approach two arguments can defeat each other. Dung does not explicitly use the stronger (and asymmetric) notion of strict defeat, but we shall sometimes use it below. A central notion of Dung's framework is acceptability, already defined above in Definition 6. We repeat it here. It captures how an argument that cannot defend itself can be protected from attacks by a set of arguments.

DEFINITION 36. An argument A is acceptable with respect to a set S of arguments iff each argument defeating A is defeated by an argument in S.

As remarked above, the arguments in S can be seen as the arguments capable of reinstating A in case A is defeated. To illustrate acceptability, consider again Example 2, which in terms of Dung has an AF (called `TT' for `Tweety Triangle') with Args = {A, B, C} and defeat = {(B, A), (C, B)} (B strictly defeats A and C strictly defeats B). A is acceptable with respect to {C}, {A, C}, {B, C} and {A, B, C}, but not with respect to ∅ and {B}. Another central notion is that of an admissible set.
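These definitions have a direct computational reading. The following sketch (in Python; the string labels and helper names are ours, not part of Dung's formalism) checks conflict-freeness and acceptability on the Tweety Triangle just described:

```python
# Dung-style argumentation framework: arguments are opaque labels, and the
# defeat relation is a set of (attacker, attacked) pairs.

def is_conflict_free(s, defeat):
    # No argument in s is defeated by an argument in s.
    return not any((a, b) in defeat for a in s for b in s)

def is_acceptable(arg, s, args, defeat):
    # Every defeater of arg is itself defeated by some member of s.
    defeaters = [b for b in args if (b, arg) in defeat]
    return all(any((d, b) in defeat for d in s) for b in defeaters)

# Tweety Triangle TT: B strictly defeats A, C strictly defeats B.
args = {"A", "B", "C"}
defeat = {("B", "A"), ("C", "B")}

print(is_acceptable("A", {"C"}, args, defeat))   # True: A is defended by C
print(is_acceptable("A", {"B"}, args, defeat))   # False: {B} does not defend A
```

As in the text, A is acceptable with respect to {C} (and any set containing C) but not with respect to ∅ or {B}.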

LOGICS FOR DEFEASIBLE ARGUMENTATION

257

DEFINITION 37. A conflict-free set of arguments S is admissible iff each argument in S is acceptable with respect to S.

Intuitively, an admissible set represents an admissible, or defendable, point of view. In Example 2 the sets ∅, {C} and {A, C} are admissible, but all other subsets of {A, B, C} are not admissible.

Argument extensions

In terms of the notions of acceptability and admissibility several notions of `argument extensions' can be defined, which are what we above called `status assignments'. The following notion of a stable extension is equivalent to Definition 26 above.

DEFINITION 38. A conflict-free set S is a stable extension iff every argument that is not in S is defeated by some argument in S.

In Example 2, TT has only one stable extension, viz. {A, C}. Consider next an AF called ND (the Nixon Diamond), corresponding to Example 3, with Args = {A, B} and defeat = {(A, B), (B, A)}. ND has two stable extensions, {A} and {B}. Since a stable extension is conflict-free, it reflects in some sense a coherent point of view. It is also a maximal point of view, in the sense that every possible argument is either accepted or rejected. In fact, stable semantics is the most `aggressive' type of semantics, since a stable extension defeats every argument not belonging to it, whether or not that argument is hostile to the extension. This feature is the reason why not all AFs have stable extensions, as Example 30 has shown. To give such examples a multiple-assignment semantics as well, Dung defines the notion of a preferred extension.

DEFINITION 39. A conflict-free set is a preferred extension iff it is a maximal (with respect to set inclusion) admissible set.

Let us go back to Definition 26 of a status assignment and define a partial status assignment in the same way as a status assignment, but without the condition that it assigns a status to all arguments. Then it is easy to verify that preferred extensions correspond to maximal partial status assignments. Dung shows that every AF has a preferred extension. Moreover, he shows that stable extensions are preferred extensions, so in the Nixon Diamond and the Tweety Triangle the two semantics coincide. However, not all preferred extensions are stable: in Example 30 the empty set is a (unique) preferred extension, which is not stable.
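Because these definitions quantify over subsets of Args, the extensions of small frameworks can be found by brute-force enumeration. The sketch below (our own illustrative code) computes stable and preferred extensions for the Tweety Triangle, the Nixon Diamond, and an odd defeat cycle like that of Example 30:

```python
from itertools import combinations

def subsets(args):
    return [set(c) for r in range(len(args) + 1)
            for c in combinations(sorted(args), r)]

def conflict_free(s, defeat):
    return not any((a, b) in defeat for a in s for b in s)

def admissible(s, args, defeat):
    # Conflict-free, and every member is acceptable w.r.t. s (Definition 37).
    return conflict_free(s, defeat) and all(
        all(any((d, b) in defeat for d in s) for b in args if (b, a) in defeat)
        for a in s)

def stable_extensions(args, defeat):
    # Definition 38: conflict-free and defeats every outside argument.
    return [s for s in subsets(args) if conflict_free(s, defeat)
            and all(any((a, b) in defeat for a in s) for b in args - s)]

def preferred_extensions(args, defeat):
    # Definition 39: maximal (w.r.t. set inclusion) admissible sets.
    adm = [s for s in subsets(args) if admissible(s, args, defeat)]
    return [s for s in adm if not any(s < t for t in adm)]

tt = ({"A", "B", "C"}, {("B", "A"), ("C", "B")})               # Tweety Triangle
nd = ({"A", "B"}, {("A", "B"), ("B", "A")})                    # Nixon Diamond
odd = ({"A", "B", "C"}, {("A", "B"), ("B", "C"), ("C", "A")})  # odd cycle

print(sorted(map(sorted, stable_extensions(*tt))))   # [['A', 'C']]
print(sorted(map(sorted, stable_extensions(*nd))))   # [['A'], ['B']]
print(stable_extensions(*odd))                       # []
print(preferred_extensions(*odd))                    # [set()]
```

The odd cycle has no stable extension at all, but its (unique) preferred extension is the empty set, exactly as discussed above.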
Preferred semantics leaves all arguments in an odd defeat cycle out of the extension, so none of them is defeated by an argument in the extension. Preferred and stable semantics are an instance of the multiple-status-assignments approach of Section 4.2: in cases of an irresolvable conflict as


in the Nixon diamond, two incompatible extensions are obtained. Dung also explores the unique-status-assignment approach, with his notion of a grounded extension, already presented above as Definition 7. To build a bridge between the various semantics, Dung also defines `complete semantics'.

DEFINITION 40. An admissible set of arguments S is a complete extension iff each argument that is acceptable with respect to S belongs to S.

This definition implies that a set of arguments is a complete extension iff it is a fixed point of the operator F defined in Definition 7. According to Dung, a complete extension captures the beliefs of a rational person who believes everything s/he can defend.

Self-defeating arguments

How do Dung's various semantics deal with self-defeating arguments? It turns out that all semantics have some problems. For stable semantics they are the most serious, since an AF with a self-defeating argument might have no stable extensions. For preferred semantics this problem does not arise, since preferred extensions are guaranteed to exist. However, this semantics still has a problem, since self-defeating arguments can prevent other arguments from being justified. This can be illustrated with Example 15 (an AF with two arguments A and B such that A defeats A and A defeats B). The set {B} is not admissible, so the only preferred extension is the empty set. Yet intuitively it seems that instead {B} should be the only preferred extension, since B's only defeater is self-defeating. It is easy to see that the same holds for complete semantics. In Section 4.1 we already saw that this example causes the same problems for grounded semantics, but that for finitary AFs Pollock [1987] provides a solution. Both Dung [1995] and Bondarenko et al. [1997] recognise the problem of self-defeating arguments, and suggest that solutions in the context of logic programming of Kakas et al. [1994] could be generalised to deal with it.
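Example 15 can be verified mechanically. In the following sketch (our own code), A defeats both itself and B; enumeration confirms that the empty set is the only admissible set, and the fixed-point iteration of Definition 7 shows that B is not justified under grounded semantics either:

```python
from itertools import combinations

args = {"A", "B"}
defeat = {("A", "A"), ("A", "B")}  # Example 15: A defeats itself and B

def admissible(s):
    if any((a, b) in defeat for a in s for b in s):
        return False
    return all(all(any((d, b) in defeat for d in s)
                   for b in args if (b, a) in defeat) for a in s)

def grounded():
    # Least fixed point of F (Definition 7): iterate acceptability from ∅.
    s = set()
    while True:
        nxt = {a for a in args
               if all(any((d, b) in defeat for d in s)
                      for b in args if (b, a) in defeat)}
        if nxt == s:
            return s
        s = nxt

adm = [set(c) for r in range(3) for c in combinations(sorted(args), r)
       if admissible(set(c))]
print(adm)         # [set()]: {B} is not admissible, since B does not defeat A
print(grounded())  # set(): B is not justified under grounded semantics either
```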
Dung also acknowledges Pollock's [1995] approach, to be discussed in Subsection 5.2.

Formal results

Both Dung [1995] and Bondarenko et al. [1997] establish a number of results on the existence of extensions and the relation between the various semantics. We now summarise some of them.

1. Every stable extension is preferred, but not vice versa.

2. Every preferred extension is a complete extension, but not vice versa.

3. The grounded extension is the least (with respect to set inclusion) complete extension.


4. The grounded extension is contained in the intersection of all preferred extensions (Example 24 is a counterexample against `equal to'.)

5. If an AF contains no infinite chains A1, ..., An, ... such that each Ai+1 defeats Ai, then the AF has exactly one complete extension, which is grounded, preferred and stable. (Note that the even loop of Example 3 and the odd loop of Example 30 form such an infinite chain.)

6. Every AF has at least one preferred extension.

7. Every AF has exactly one grounded extension.

Finally, Dung [1995] and Bondarenko et al. [1997] identify several conditions under which preferred and stable semantics coincide.

Assumption-based formulation of the framework

As mentioned above, Bondarenko et al. [1997] have developed a different version of the BDKT approach. This version is less abstract than the one of Dung [1995], in that it embodies a particular view on the structure of arguments. Arguments are seen as sets of assumptions that can be added to a theory in order to (monotonically) derive conclusions that cannot be derived from the theory alone. Accordingly, Bondarenko et al. [1997] define a more concrete version of Dung's [1995] argumentation frameworks as follows:

DEFINITION 41. Let L be a formal language and ⊢ a monotonic logic defined over L. An assumption-based framework with respect to (L, ⊢) is a tuple ⟨T, Ab, ¯⟩, where

- T, Ab ⊆ L;

- ¯ is a mapping from Ab into L, where ᾱ denotes the contrary of α.

The notion of defeat is now defined for sets of assumptions (below we leave the assumption-based framework implicit).

DEFINITION 42. A set of assumptions A defeats an assumption α iff T ∪ A ⊢ ᾱ; and A defeats a set of assumptions Δ iff A defeats some assumption α ∈ Δ.

The notions of argument extensions are then defined in terms of sets of assumptions. For instance,

DEFINITION 43. A set of assumptions Δ is stable iff

- Δ is closed, i.e., Δ = {α ∈ Ab | T ∪ Δ ⊢ α};

- Δ does not defeat itself;


- Δ defeats each assumption α ∉ Δ.

A stable extension is a set Th(T ∪ Δ) for some stable set of assumptions Δ. As remarked above, Bondarenko et al.'s [1997] main aim is to reformulate existing nonmonotonic logics in their general framework. Accordingly, what an assumption is, and what its contrary is, is determined by the choice of nonmonotonic logic to be reformulated. For instance, in applications of preferential entailment where abnormality predicates abi are to be minimised (see Section 2.1), the assumptions will include expressions of the form ¬abi(c), where the contrary of ¬abi(c) is abi(c). And in default logic (see also Section 2.1), an assumption is of the form Mφ for any `middle part' φ of a default, where the contrary of Mφ is ¬φ; moreover, all defaults φ : ψ/χ are added to the rules defining ⊢ as monotonic inference rules φ, Mψ / χ.

Procedure
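Before the procedure itself, Definitions 41-42 can be made concrete with a toy instance in which the monotonic logic ⊢ is forward chaining over propositional Horn rules; all atom and rule names below are our own illustrative choices, not from Bondarenko et al. [1997]:

```python
# Toy assumption-based framework: L is a set of atoms, ⊢ is forward chaining
# over Horn rules, and contraries are given by an explicit table.

def closure(facts, rules):
    """Deductive closure of `facts` under Horn `rules` ((body_set, head) pairs)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Theory T: Tweety is a penguin, and penguins don't fly.
rules = [({"penguin"}, "not_flies")]
T = {"penguin"}
contrary = {"asm_flies": "not_flies"}  # contrary of the assumption "asm_flies"

def defeats(assumptions, alpha):
    # Definition 42: a set A of assumptions defeats α iff T ∪ A ⊢ contrary(α).
    return contrary[alpha] in closure(T | assumptions, rules)

print(defeats(set(), "asm_flies"))  # True: the theory alone derives not_flies
```

Here the flying assumption is defeated even by the empty set of assumptions, so no stable set of assumptions can contain it.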

The developers of the BDKT approach have also studied procedural forms for the various semantics. Dung et al. [1996; 1997] propose two abstract proof procedures for computing admissibility (Definition 37), where the second proof procedure is a computationally more efficient refinement of the first. Both procedures are based upon a proof procedure originally intended for computing stable semantics in logic programming. And they are both formulated as logic programs that are derived from a formal specification. The derivation guarantees the correctness of the proof procedures. Further, Dung et al. [1997] show that both proof procedures are complete. Here, the first procedure is discussed. It is defined in the form of a meta-level logic program, of which the top-level clause defines admissibility. This concept is captured in a predicate adm:

(1) adm(Δ0, Δ) ↔ [Δ0 ⊆ Δ and Δ is admissible]

Δ and Δ0 are sets of assumptions, and `Δ is admissible' is a low-level concept that is defined with the help of auxiliary clauses. In this manner, (1) provides a specification for the proof procedure. Similarly, a top-level predicate defends is defined:

defends(D, Δ) ↔ [D defeats Δ′, for every Δ′ that defeats Δ]

The proof procedure that Dung et al. propose can be understood in procedural terms as repeatedly adding defences to the initially given set of assumptions Δ0 until no further defences need to be added. More precisely, given a current set of assumptions Δ, initialised as Δ0, the proof procedure repeatedly


1. finds a set of assumptions D such that defends(D, Δ);

2. replaces Δ by Δ ∪ D

until D = Δ, in which case it returns Δ. Step (1) is non-deterministic, since there might be more than one set of assumptions D defending the current Δ. The proof procedure potentially needs to explore a search tree of alternatives to find a branch which terminates with a self-defending set. The logic-programming formulation of the proof procedure is:

adm(Δ, Δ) ← defends(Δ, Δ)
adm(Δ, Δ′) ← defends(D, Δ), adm(Δ ∪ D, Δ′)
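The same computation can be sketched in a conventional language. The version below is our own deliberate simplification of the procedure: it works on abstract arguments rather than assumption sets, and it deterministically adds all available defenders instead of exploring the nondeterministic search tree, which suffices for small examples such as the Tweety Triangle:

```python
def defending_set(delta, args, defeat):
    # All arguments that defeat some defeater of the current set delta.
    attackers = {b for a in delta for b in args if (b, a) in defeat}
    return {c for b in attackers for c in args if (c, b) in defeat}

def extend_to_admissible(delta0, args, defeat):
    # Repeatedly add defences until no new ones are needed (cf. the adm clauses).
    delta = set(delta0)
    while True:
        d = defending_set(delta, args, defeat)
        if d <= delta:
            return delta
        delta |= d

# Tweety Triangle: starting from {A}, the procedure adds the defender C.
args = {"A", "B", "C"}
defeat = {("B", "A"), ("C", "B")}
print(sorted(extend_to_admissible({"A"}, args, defeat)))  # ['A', 'C']
```

Note that this greedy variant can add conflicting defenders in unfavourable frameworks; the original procedure instead backtracks over alternative defending sets D.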

The procedural characterisation of the proof procedure is obtained by applying SLD resolution to the above clauses with a left-to-right selection rule, with an initial query of the form adm(Δ0, Δ), with Δ0 as input and Δ as output. The procedure is proved correct with respect to the admissibility semantics, but it is shown to be incorrect for stable semantics in general. According to Dung et al., this is due to the above-mentioned `epistemic aggressiveness' of stable semantics, viz. the fact that a stable extension defeats every argument not belonging to it. Dung et al. remark that, besides being counterintuitive, this property is also computationally very expensive, because it necessitates a search through the entire space of arguments to determine, for every argument, whether or not it is defeated. Subsequent evaluation by Dung et al. of the proof procedure has suggested that it is the semantics, rather than the proof procedure, that was at fault, and that preferred semantics provides an improvement. This insight is also formulated by Dung [1995]. Finally, it should be noted that Kakas & Toni [1999] have recently developed proof procedures in dialectical style (see Section 6 below) for the various semantics of Bondarenko et al. [1997] and for the acceptability semantics of Kakas et al. [1994].

Evaluation

As remarked above, the abstract BDKT approach was a major innovation in the study of defeasible argumentation, in that it provided an elegant general framework for investigating the various argumentation systems. Moreover, the framework also applies to other nonmonotonic logics, since Dung and Bondarenko et al. extensively show how many of these logics can be translated into argumentation systems. Thus it becomes very easy to formulate alternative semantics for nonmonotonic logics. For instance, default logic, which was shown by Dung [1995] to have a stable semantics, can very easily


be given an alternative semantics in which extensions are guaranteed to exist, like preferred or grounded semantics. Moreover, the proof theories that have been or will be developed for the various argument-based semantics immediately apply to the systems that are an instance of these semantics. Because of these features, the BDKT framework is also very useful as guidance in the development of new systems; for instance, Prakken & Sartor have used it in developing the system of Subsection 5.7 below. On the other hand, the level of abstraction of the BDKT approach (especially in Dung's version) also leaves much to the developers of particular systems. In particular, they have to define the internal structure of an argument, the ways in which arguments can conflict, and the origin of the defeat relation. Moreover, it seems that at some points the BDKT approach needs to be refined or extended. We already mentioned the treatment of self-defeating arguments, and Prakken & Sartor [1997b] have extended the BDKT framework to let it cope with reasoning about priorities (see Subsection 5.7 below).

5.2 Pollock

John Pollock was one of the initiators of the argument-based approach to the formalisation of defeasible reasoning. Originally he developed his theory as a contribution to philosophy, in particular epistemology. Later he turned to artificial intelligence, developing a computer program called OSCAR, which implements his theory. Since the program falls outside the scope of this handbook, we shall only discuss the logical aspects of Pollock's system; for the architecture of the computer program the reader is referred to e.g. Pollock [1995]. The latter also discusses other topics, such as practical reasoning, planning and reasoning about action.

Reasons, arguments, conflict and defeat

In Pollock's system, the underlying logical language is standard first-order logic, but the notion of an argument has some nonstandard features. What still conforms to accounts of deductive logic is that arguments are sequences of propositions linked by inference rules (or better, by instantiated inference schemes). However, Pollock's formalism begins to deviate when we look at the kinds of inference schemes that can be used to build arguments. Let us first concentrate on linear arguments; these are formed by combining so-called reasons. Technically, reasons connect a set of propositions with a proposition. Reasons come in two kinds: conclusive and prima facie reasons. Conclusive reasons still adhere to the common standard, since they are reasons that logically entail their conclusions. In other words, a conclusive reason is any valid first-order inference scheme (which means that Pollock's system includes first-order logic). Thus, examples of conclusive reasons are


{p, q} is a conclusive reason for p ∧ q
{∀x Px} is a conclusive reason for Pa

Prima facie reasons, by contrast, have no counterpart in deductive logic; they only create a presumption in favour of their conclusion, which can be defeated by other reasons, depending on the strengths of the conflicting reasons. Based on his work in epistemology, Pollock distinguishes several kinds of prima facie reasons: for instance, principles of perception, such as12

⌈x appears to me as Y⌉ is a prima facie reason for believing ⌈x is Y⌉.

(For the objectification operator ⌈ ⌉ see page 231 and page 265.) Another source of prima facie reasons is the statistical syllogism, which says that:

If (r > 0.5) then ⌈x is an F and prob(G/F) = r⌉ is a prima facie reason of strength r for believing ⌈x is a G⌉.

Here prob(G/F) stands for the conditional probability of G given F. Prima facie reasons can also be based on principles of induction, for example,

⌈X is a set of m F's and n members of X have the property G (n/m > 0.5)⌉ is a prima facie reason of strength n/m for believing ⌈all F's have the property G⌉.

Actually, Pollock adds to these definitions the condition that G is projectible with respect to F. This condition, introduced by Goodman [1954], is meant to prevent certain `unfounded' probabilistic or inductive inferences. For instance, the first observed person from Lanikai, who is a genius, does not permit the prediction that the next observed Lanikaian will be a genius. That is, the predicate `intelligence' is not projectible with respect to `birthplace'. Projectibility is of major concern in probabilistic reasoning.

To give a simple example of a linear argument, assume the following set of `input' facts INPUT = {A(a), prob(B/A) = 0.8, prob(C/B) = 0.7}. The following argument uses reasons based on the statistical syllogism, and the first of the above-displayed conclusive reasons.

12 When a reason for a proposition is a singleton set, we drop the brackets.


1. ⟨A(a), 1⟩ (A(a) is in INPUT)
2. ⟨⌈prob(B/A) = 0.8⌉, 1⟩ (⌈prob(B/A) = 0.8⌉ is in INPUT)
3. ⟨A(a) ∧ ⌈prob(B/A) = 0.8⌉, 1⟩ (1, 2 and {p, q} is a conclusive reason for p ∧ q)
4. ⟨B(a), 0.8⟩ (3 and the statistical syllogism)
5. ⟨⌈prob(C/B) = 0.7⌉, 1⟩ (⌈prob(C/B) = 0.7⌉ is in INPUT)
6. ⟨B(a) ∧ ⌈prob(C/B) = 0.7⌉, 0.8⟩ (4, 5 and {p, q} is a conclusive reason for p ∧ q)
7. ⟨C(a), 0.7⟩ (6 and the statistical syllogism)

So each line of a linear argument is a pair, consisting of a proposition and a numerical value that indicates the strength, or degree of justification, of the proposition. The strength 1 at lines 1, 2 and 5 indicates that the conclusions of these lines are put forward as absolute facts, originating from the epistemic base `INPUT'. At line 4, the weakest link principle is applied, with the result that the strength of the argument line is the minimum of the strength of the reason for B(a) (0.8) and that of the argument line 3 from which B(a) is derived with this reason (1). At lines 6 and 7 the weakest link principle is applied again.

Besides linear arguments, Pollock also studies suppositional arguments. In suppositional reasoning, we `suppose' something that we have not inferred from the input, draw conclusions from the supposition, and then `discharge' the supposition to obtain a related conclusion that no longer depends on the supposition. In Pollock's system, suppositional arguments can be constructed with inference rules familiar from natural deduction. Accordingly, the propositions in an argument have sets of propositions attached to them, which are the suppositions under which the proposition can be derived from earlier elements in the sequence. The following definition (based on [Pollock, 1995]) summarises this informal account of argument formation.

DEFINITION 44. In OSCAR, an argument based on INPUT is a finite sequence σ1, ..., σn, where each σi is a line of argument.
A line of argument σi is a triple ⟨Xi, pi, δi⟩, where Xi, a set of propositions, is the set of suppositions at line i, pi is a proposition, and δi is the degree of justification of pi at line i. A line of argument is obtained from earlier lines of argument according to one of the following rules of argument formation.

Input. If p is in INPUT and σ is an argument, then for any X it holds that σ; ⟨X, p, 1⟩ is an argument.

Reason. If σ is an argument, ⟨X1, p1, δ1⟩, ..., ⟨Xn, pn, δn⟩ are members of σ, and {p1, ..., pn} is a reason of strength ρ for q, and for each i, Xi ⊆ X, then σ; ⟨X, q, min{δ1, ..., δn, ρ}⟩ is an argument.

Supposition. If σ is an argument, X a set of propositions and p ∈ X, then σ; ⟨X, p, 1⟩ is also an argument.


Conditionalisation. If σ is an argument and some line of σ is ⟨X ∪ {p}, q, δ⟩, then σ; ⟨X, (p ⊃ q), δ⟩ is also an argument.

Dilemma. If σ is an argument, some line of σ is ⟨X, p ∨ q, δ⟩, some line of σ is ⟨X ∪ {p}, r, ε⟩, and some line of σ is ⟨X ∪ {q}, r, ζ⟩, then σ; ⟨X, r, min{δ, ε, ζ}⟩ is also an argument.

Pollock [1995] notes that other inference rules could be added as well. It is the use of prima facie reasons that makes arguments defeasible, since these reasons can be defeated by other reasons. This can take place in two ways: by rebutting defeaters, which are at least as strong reasons with the opposite conclusion, and by undercutting defeaters, which are at least as strong reasons whose conclusion denies the connection that the undercut reason states between its premises and its conclusion. A typical example of rebutting defeat is when an argument using the reason `Birds fly' is defeated by an argument using the reason `Penguins don't fly'. Pollock's favourite example of an undercutting defeater is when an object looks red because it is illuminated by a red light: knowing this undercuts the reason for believing that this object is red, but it does not give a reason for believing that the object is not red.

Before we can explain how Pollock formally defines the relation of defeat among arguments, some extra notation must be introduced. In the definition of defeat among arguments, Pollock uses what may be called an objectification operator, ⌈ ⌉. (This operator was also used in Fig. 4 on page 230 and in the prima facie reasons on page 263.) With this operator, expressions in the meta-language are transformed into expressions in the object language. For example, the meta-level rule

{p, q} is a conclusive reason for p

may be transformed into the object-level expression

⌈{p, q} is a conclusive reason for p⌉.

If the object language is rich enough, then the latter expression is present in the object language, in the form (p ∧ q) ⊃ p.
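The min-based strength bookkeeping of the Reason rule (and likewise of Dilemma) can be replayed directly in code. The sketch below is our own rendering, not Pollock's OSCAR implementation; it reproduces the degrees of justification of the statistical-syllogism argument given on the previous page, whose final conclusion C(a) has degree 0.7:

```python
# A line of argument is a triple (suppositions, proposition, degree); the
# Reason rule assigns min of the premise degrees and the reason strength.

def reason_line(supps, concl, premise_lines, strength):
    degrees = [deg for (_, _, deg) in premise_lines]
    return (supps, concl, min(degrees + [strength]))

l1 = (frozenset(), "A(a)", 1.0)                  # Input
l2 = (frozenset(), "prob(B/A) = 0.8", 1.0)       # Input
l3 = reason_line(frozenset(), "A(a) & prob(B/A) = 0.8", [l1, l2], 1.0)  # conclusive
l4 = reason_line(frozenset(), "B(a)", [l3], 0.8)  # statistical syllogism
l5 = (frozenset(), "prob(C/B) = 0.7", 1.0)       # Input
l6 = reason_line(frozenset(), "B(a) & prob(C/B) = 0.7", [l4, l5], 1.0)  # conclusive
l7 = reason_line(frozenset(), "C(a)", [l6], 0.7)  # statistical syllogism

print(l4[2], l7[2])  # 0.8 0.7
```

The degree 0.8 of line 4 propagates through line 6, so the weakest link in the argument for C(a) is the final statistical step with strength 0.7.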
Evidently, a large fraction of the meta-expressions cannot be conveyed to the object language, because the object language lacks sufficient expressive power. This is the case, for example, if corresponding connectives are missing in the object language.

Pollock formally defines the relation of defeat among arguments as follows.

Defeat among arguments. An argument σ defeats an argument η if and only if:

1. η's last line is ⟨X, q, δ⟩ and is obtained by the argument formation rule Reason from some earlier lines ⟨X1, p1, δ1⟩, ..., ⟨Xn, pn, δn⟩, where {p1, ..., pn} is a prima facie reason for q; and

2. σ's last line is ⟨Y, r, ε⟩, where Y ⊆ X and either:

(a) r is ¬q and ε ≥ δ; or

(b) r is ¬⌈{p1, ..., pn} ≫ q⌉ and ε ≥ δ.

Condition (1) determines the weak spot of η, while condition (2) determines whether that weak spot is (2a) a conclusion (in this case q), or (2b) a reason (in this case {p1, ..., pn} ≫ q). For Pollock, (2a) is a case of rebutting defeat, and (2b) is a case of undercutting defeat: if σ undercuts the last reason of η, it blocks the derivation of q, without supporting ¬q as an alternative conclusion. The formula ⌈{p1, ..., pn} ≫ q⌉ stands for the translation of `{p1, ..., pn} is a prima facie reason for q' into the object language. Pollock leaves the notion of conflicting arguments implicit in this definition of defeat. Note also that a defeater of an argument always defeats the last step of that argument; Pollock treats `subargument defeat' by a recursive definition of a justified argument, i.e., in the manner explained above in Section 4.1.

Suppositional reasoning

As noted above, the argument formation rules supposition, conditionalisation and dilemma can be used to form suppositional arguments. OSCAR is one of the very few nonmonotonic logics that allow for suppositional reasoning. Pollock finds it necessary to introduce suppositional reasoning because, in his opinion, this type of reasoning is ubiquitous not only in deductive, but also in defeasible reasoning. Pollock mentions, among other things, the reasoning form `reasoning by cases', which is notoriously hard for many nonmonotonic logics. An example is `presumably, birds fly; presumably, bats fly; Tweety is a bird or a bat; so, presumably, Tweety flies'. In Pollock's system, this argument can be formalised as follows.

EXAMPLE 45. Consider the following reasons.

(1) Bird(x) is a prima facie reason of strength α for Flies(x)
(2) Bat(x) is a prima facie reason of strength β for Flies(x)

And consider INPUT = {Bird(t) ∨ Bat(t)}. The conclusion Flies(t) can be defeasibly derived as follows.

1. ⟨∅, Bird(t) ∨ Bat(t), 1⟩ (Bird(t) ∨ Bat(t) is in INPUT)
2. ⟨{Bird(t)}, Bird(t), 1⟩ (Supposition)
3. ⟨{Bird(t)}, Flies(t), α⟩ (2 and prima facie reason (1))
4. ⟨{Bat(t)}, Bat(t), 1⟩ (Supposition)
5. ⟨{Bat(t)}, Flies(t), β⟩ (4 and prima facie reason (2))
6. ⟨∅, Flies(t), min{α, β}⟩ (3, 5 and Dilemma)


At line 1, the proposition Bird(t) ∨ Bat(t) is put forward as an absolute fact. At line (2), the proposition Bird(t) is temporarily supposed to be true. From this assumption, at the following line the conclusion Flies(t) is defeasibly derived with the first prima facie reason. Line (4) is an alternative continuation of line 1. At line (4), Bat(t) is supposed to be true, and at line (5) it is used to again defeasibly derive Flies(t), this time from the second prima facie reason. Finally, at line (6) the Dilemma rule is applied to (3) and (5), discharging the assumptions in the alternative suppositional arguments, and concluding Flies(t) under no assumption.

According to Pollock, another virtue of his system is that it validates the defeasible derivation of a material implication from a prima facie reason. Consider again the `birds fly' reason (1), and assume that INPUT is empty.

1. ⟨{Bird(t)}, Bird(t), 1⟩ (Supposition)
2. ⟨{Bird(t)}, Flies(t), α⟩ (1 and prima facie reason (1))
3. ⟨∅, Bird(t) ⊃ Flies(t), α⟩ (2 and Conditionalisation)

Pollock regards the validity of these inferences as desirable. On the other hand, Vreeswijk has argued that suppositional defeasible reasoning, in the way Pollock proposes it, sometimes enables incorrect inferences. Vreeswijk's argument is based on the idea that the strength of a conclusion obtained by means of conditionalisation is incomparable to the reason strength of the implication occurring in that conclusion. For a discussion of this problem the reader is further referred to Vreeswijk [1993a, pp. 184-7].

Having seen how Pollock defines the notions of arguments, conflicting arguments, and defeat among arguments, we now turn to what was the main topic of Section 4 and the main concern of Dung [1995]: defining the status of arguments.

The status of arguments

Over the years, Pollock has more than once changed his definition of the status of arguments. One change is that while earlier versions (e.g. Pollock, 1987) dealt with (successful) attack on a subargument in an implicit way via the definition of defeat, the latest version makes this part of the status definition, by explicitly requiring that all subarguments of an `undefeated' argument are also undefeated (cf. Section 4.1). Another change is in the form of the status definition. Earlier Pollock took the unique-status-assignment approach, in particular the fixed-point variant of Definition 16, which, as shown by Dung [1995], (almost) corresponds to the grounded semantics of Definition 7. However, his most recent work is in terms of multiple status assignments, and very similar to the preferred semantics of Definition 39. Pollock thus combines the recursive style of Definition 17 with the multiple-status-assignments approach. We present the most recent definition, of [Pollock, 1995]. To maintain uniformity in our terminology, we


state it in terms of arguments instead of, as Pollock does, in terms of an `inference graph'. To maintain the link with inference graphs, we make the definition relative to a closed set of arguments, i.e., a set of arguments containing all subarguments of all its elements. Since we deviate from Pollock's inference graphs, we must be careful in defining the notion of subarguments. Sometimes a later line of an argument depends on only some of its earlier lines. For instance, in Example 45 line (5) only depends on (4). In fact, the entire argument (1-6) has three independent, or parallel, subarguments, viz. a linear subargument (1), and two suppositional subarguments (2,3) and (4,5). Pollock's inference graphs nicely capture such dependencies, since their nodes are argument lines and their links are inferences. However, with our sequential format of an argument this is different, for which reason we cannot define a subargument as being just any subsequence of an argument. Instead, subarguments are only those subsequences of A that can be transformed into an inference tree.

DEFINITION 46 (subarguments). An argument A is a subargument of an argument B iff A is a subsequence of B and there exists a tree T of argument lines such that

1. T contains all and only lines from A; and

2. T's root is A's last element; and

3. l is a child of l′ iff l was inferred from a set of lines one of which was l′.

A proper subargument of A is any subargument of A unequal to A. Now we can give Pollock's [1995] definition of a status assignment.

DEFINITION 47. An assignment of `defeated' and `undefeated' to a closed set S of arguments is a partial defeat status assignment iff it satisfies the following conditions.

1. All arguments in S with only lines obtained by the Input argument formation rule are assigned `undefeated';

2. A ∈ S is assigned `undefeated' iff:

(a) All proper subarguments of A are assigned `undefeated'; and

(b) All arguments in S defeating A are assigned `defeated'.

3. A ∈ S is assigned `defeated' iff:

(a) One of A's proper subarguments is assigned `defeated'; or

(b) A is defeated by an argument in S that is assigned `undefeated'.


A defeat status assignment is a maximal (with respect to set inclusion) partial defeat status assignment.

Observe that the conditions (2a) and (3a) on the subarguments of A make the weakest link principle hold by definition. The similarity of defeat status assignments to Dung's preferred extensions of Definition 39 shows itself as follows: the conditions (2b) and (3b) on the defeaters of A are the analogues of Dung's notion of acceptability, which make a defeat status assignment an admissible set; and the fact that a defeat status assignment is a maximal partial assignment induces the similarity with preferred extensions. It is easy to verify that when two arguments defeat each other (Example 3), an input has more than one status assignment. Since Pollock wants to define a sceptical consequence notion, he therefore has to consider the intersection of all assignments. Pollock does so in a variant of Definitions 27 and 28.

DEFINITION 48. (The status of arguments.) Let S be a closed set of arguments based on INPUT. Then, relative to S, an argument is undefeated iff every status assignment to S assigns `undefeated' to it; it is defeated outright iff no status assignment to S assigns `undefeated' to it; otherwise it is provisionally defeated.

In our terms, `undefeated' is `justified', `defeated outright' is `overruled', and `provisionally defeated' is `defensible'.

Direct vs. indirect reinstatement
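Given the similarity to preferred semantics just noted, Definition 48's three-way classification can be emulated at Dung's abstract level by intersecting all preferred extensions. The code below is our own illustration and works with abstract arguments rather than Pollock's inference graphs:

```python
from itertools import combinations

def admissible(s, args, defeat):
    if any((a, b) in defeat for a in s for b in s):
        return False
    return all(all(any((d, b) in defeat for d in s)
                   for b in args if (b, a) in defeat) for a in s)

def preferred(args, defeat):
    adm = [set(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)
           if admissible(set(c), args, defeat)]
    return [s for s in adm if not any(s < t for t in adm)]

def status(a, args, defeat):
    exts = preferred(args, defeat)
    if all(a in e for e in exts):
        return "undefeated"          # justified
    if all(a not in e for e in exts):
        return "defeated outright"   # overruled
    return "provisionally defeated"  # defensible

# Nixon Diamond: two status assignments, so both arguments are only defensible.
nd_args, nd_defeat = {"A", "B"}, {("A", "B"), ("B", "A")}
print(status("A", nd_args, nd_defeat))  # provisionally defeated
```

On the Tweety Triangle the same function classifies A and C as undefeated and B as defeated outright, matching the sceptical reading of Definition 48.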

It is now time to come back to the discussion in Section 4.1 on reinstatement. Example 19 showed that there is reason to invalidate the direct version of this principle, viz. when the conflicts are about the same issue. We remarked that the explicitly recursive Definition 17 of justified arguments indeed invalidates direct reinstatement while preserving its indirect version. However, we also promised to explain that both versions of reinstatement can be retained if Example 19 is represented in a particular way. In fact, Pollock (personal communication) would represent the example as follows:
(1) Being a bird is a prima facie reason for being able to fly
(2a) Being a penguin is an undercutting reason for (1)
(2b) Being a penguin is a defeasible reason for not being able to fly
(3) Being a genetically altered penguin is an undercutting reason for (2b)
(4) Tweety is a genetically altered penguin
It is easy to verify that Definitions 47 and 48, which validate both the direct and the indirect version of reinstatement, yield the intuitive outcome, viz. that it is neither justified that Tweety can fly, nor that it cannot fly. A similar representation


HENRY PRAKKEN & GERARD VREESWIJK

is possible in systems that allow for abnormality or exception clauses, e.g. in [Geffner & Pearl, 1992; Bondarenko et al., 1997; Prakken & Sartor, 1997b].

Self-defeating arguments

Pollock has paid much attention to the problem of self-defeating arguments. In Pollock's system, an argument defeats itself iff one of its lines defeats another of its lines. Above in Section 4.1 we already discussed Pollock's treatment of self-defeating arguments within the unique-status-assignment approach. However, he later came to regard this treatment as incorrect, and he now thinks that the problem can only be solved in the multiple-assignment approach (personal communication). Let us now see how Pollock's Definitions 47 and 48 deal with the problem. Two cases must be distinguished. Consider first two defeasible arguments A and B rebutting each other. Then A and B are `parallel' subarguments of a deductive argument A + B for any proposition. Then (if no other arguments interfere with A or B) there are two status assignments, one in which A is assigned `undefeated' and B assigned `defeated', and one the other way around. Now A + B is in both of these assignments assigned `defeated', since in both assignments one of its proper subarguments is assigned `defeated'. Thus the self-defeating argument A + B turns out to be defeated outright, which seems intuitively plausible. A different case is the following, with the following reasons

(1) p is a prima facie reason of strength 0.8 for q
(2) q is a prima facie reason of strength 0.8 for r
(3) r is a conclusive reason for ¬(p >> q)

and with INPUT = {p}. The following (linear) argument can be constructed.

1. ⟨p, 1⟩            (p is in INPUT)
2. ⟨q, 0.8⟩          (1 and prima facie reason (1))
3. ⟨r, 0.8⟩          (2 and prima facie reason (2))
4. ⟨¬(p >> q), 0.8⟩  (3 and conclusive reason (3))

Let us call this argument A, with proper subarguments A1, A2, A3 and A4, respectively. Observe first that, according to Pollock's definition of self-defeat, A4 is self-defeating. Further, according to Pollock's earlier approach with Definition 16, A4 is, being self-defeating, overruled, or `defeated', while A1, A2 and A3 are justified, or `undefeated'. Pollock now regards this outcome as incorrect: since A4 is a deductive consequence of A3, A3 should also be `defeated'. This result is obtained with Definitions 47 and 48. Firstly, A1 is clearly undefeated. Consider next A2. This argument is undercut by A4, so if A4 is assigned `undefeated', then A2 must be assigned `defeated'. But then A4


must also be assigned `defeated', since one of its proper subarguments is assigned `defeated'. Contradiction. If, on the other hand, A4 is assigned `defeated', then A2 and so A3 must be assigned `undefeated'. But then A4 must be assigned `undefeated'. Contradiction. In conclusion, no partial status assignment will assign a status to A4 and, consequently, no status assignment will assign a status to A2 or A3 either. And since this implies that no status assignment assigns the status `undefeated' to any of these arguments, they are by Definition 48 all defeated outright. Two remarks about this outcome can be made. Firstly, it might be doubted whether A2 should indeed be defeated outright, i.e., overruled. It is not self-defeating, its only defeater is self-defeating, and this defeater is not a deductive consequence of A2's conclusion. Other systems, e.g. those of Vreeswijk (Section 5.5) and Prakken & Sartor (Section 5.7), regard A2 as justified. In these systems Pollock's intuition about A3 is formalised by regarding A3 as self-defeating, because its conclusion deductively, not just defeasibly, implies a conclusion incompatible with itself. This makes it possible to regard A3 as overruled but A2 as justified. Furthermore, even if Pollock's outcome is accepted, the situation is not quite the same as with the previous example. Consider another defeasible argument B which rebuts and is rebutted by A3. Then no assignment assigns a status to B either, for which reason B is also defeated outright. Yet this shows that the `defeated outright' status of A2 is not the same as the `defeated outright' status of an argument that has an undefeated defeater: apparently, A2 is still capable of preventing other arguments from being undefeated. In fact, the same holds for arguments involved in an odd defeat cycle (as in Example 30). In conclusion, Pollock's definitions leave room for a fourth status of arguments, which might be called `seemingly defeated'.
This status holds for arguments that according to Definition 48 are defeated outright but still have the power to prevent other arguments from being ultimately undefeated. The four statuses can be partially ordered as follows: `undefeated' is better than `provisionally defeated' and than `seemingly defeated', which both in turn are better than `defeated outright'. This observation applies not only to Pollock's definition, but to all approaches based on partial status assignments, like Dung's [1995] preferred semantics. However, this is not yet all: even if the notion of seeming defeat is made explicit, there still is an issue concerning floating arguments (cf. Example 24). To see this, consider the following extension of Example 30 (formulated in terms of [Dung, 1995]).

EXAMPLE 49. Let A, B and C be three arguments, represented in a triangle, such that A defeats C, B defeats A, and C defeats B. Furthermore, let D and E be arguments such that all of A, B and C defeat D, and D defeats E.
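Since the arguments of Example 49 can be treated as atomic (so clauses (2a) and (3a) of Definition 47 hold trivially), status assignments over an abstract defeat graph can be prototyped by brute-force enumeration. The following sketch is our own illustrative encoding, not Pollock's implementation; it checks both the mutual-defeat case discussed above and Example 49.

```python
from itertools import product

def status_assignments(args, defeaters):
    """Maximal partial defeat status assignments (Definition 47, arguments
    taken as atomic): a labelling is a partial assignment iff each argument
    is `undefeated' exactly when all its defeaters are `defeated', is
    `defeated' exactly when some defeater is `undefeated', and is
    unlabelled (None) otherwise."""
    partials = []
    for labels in product(['undefeated', 'defeated', None], repeat=len(args)):
        lab = dict(zip(args, labels))
        def wanted(a):
            if all(lab[d] == 'defeated' for d in defeaters[a]):
                return 'undefeated'
            if any(lab[d] == 'undefeated' for d in defeaters[a]):
                return 'defeated'
            return None
        if all(lab[a] == wanted(a) for a in args):
            partials.append(lab)
    pairs = lambda l: {(a, s) for a, s in l.items() if s is not None}
    return [l for l in partials if not any(pairs(l) < pairs(m) for m in partials)]

def status(a, assignments):
    """Definition 48: `undefeated' in every assignment, `defeated outright'
    in none, `provisionally defeated' otherwise."""
    labels = [m[a] for m in assignments]
    if all(l == 'undefeated' for l in labels):
        return 'undefeated'
    if 'undefeated' not in labels:
        return 'defeated outright'
    return 'provisionally defeated'

# Two arguments defeating each other: two maximal assignments exist,
# so both arguments are provisionally defeated.
mutual = status_assignments(['A', 'B'], {'A': {'B'}, 'B': {'A'}})

# Example 49: the odd cycle among A, B and C admits only the empty
# partial assignment, which propagates to D and E, so all five arguments
# come out defeated outright.
ex49 = status_assignments(
    ['A', 'B', 'C', 'D', 'E'],
    {'A': {'B'}, 'B': {'C'}, 'C': {'A'}, 'D': {'A', 'B', 'C'}, 'E': {'D'}})
```

The enumeration is exponential in the number of arguments, which is harmless for such benchmark examples but obviously not a serious implementation strategy.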


[Figure 8. Partial ordering of defeat statuses: `ultimately undefeated' is better than `provisionally defeated' and than `seemingly defeated', both of which are better than `defeated outright'.]


The difference between Example 24 and this example is that the even defeat loop between two arguments is replaced by an odd defeat loop between three arguments. One view on the new example is that this difference is inessential and that, for the same reasons that the argument D is justified in Example 24, here the argument E is ultimately undefeated: although E is strictly defeated by D, it is reinstated by all of A, B and C, since all these arguments strictly defeat D. On this account Definitions 47 and 48 are flawed, since they render all five arguments defeated outright (and in our terms seemingly defeated). However, an alternative view is that odd defeat loops are of an essentially different kind than even defeat loops, so that our analysis of Example 24 does not apply here and the outcome in Pollock's system reflects a flaw in the available input information rather than in the system.

Ideal and resource-bounded reasoning

We shall now see that Definition 48 is not yet all that Pollock has to say on the status of arguments. In the previous section we saw that the BDKT approach leaves the origin of the set of `input' arguments unspecified. At


this point Pollock develops some interesting ideas. At first sight it might be thought that the set S of the just-given definitions is just the set of all arguments that can be constructed with the argument formation rules of Definition 44. However, this is only one of the possibilities that Pollock considers, in which Definition 48 captures so-called ideal warrant.

DEFINITION 50. (Ideal warrant.) Let S be the set of all arguments based on INPUT. Then an argument A is ideally warranted relative to INPUT iff A is undefeated relative to S.

Pollock wants to respect that in actual reasoning the construction of arguments takes time, and that reasoners do not have an infinite amount of time available. Therefore, he also considers two other definitions, both of which have a computational flavour. To capture an actual reasoning process, Pollock makes them relative to a sequence S of closed finite sets S0, ..., Si, ... of arguments. Let us call this an argumentation sequence. Such a sequence contains all arguments constructed by a reasoner, in the order in which they are produced. It (and any of its elements) is based on INPUT if all its arguments are based on INPUT.[13] Now the first `computational' status definition determines what a reasoner must believe at any given time.

DEFINITION 51. (Justification.) Let S be an argumentation sequence based on INPUT, and Si an element of S. Then an argument A is justified relative to INPUT at stage i iff A is undefeated relative to Si.

In this definition the set Si contains just those arguments that have actually been constructed by a reasoner. Thus this definition captures the current status of a belief; it may be that further reasoning (without adding new premises) changes the status of a conclusion. This cannot happen for the other `computational' consequence notion defined by Pollock, called warrant. Intuitively, an argument A is warranted iff eventually in an argumentation sequence a stage is reached where A remains justified at every subsequent stage.
To define this, the notion of a `maximal' argumentation sequence is needed, i.e., a sequence that cannot be extended. Thus it contains all arguments that a reasoner with unlimited resources would construct (in a particular order).

DEFINITION 52. (Warrant.) Let S be a maximal argumentation sequence S0, ..., Si, ... based on INPUT. Then an argument A is warranted (relative to INPUT) iff there is an i such that for all j > i, A is undefeated relative to Sj.

The difference between warrant and ideal warrant is subtle: it has to do with the fact that, while in determining warrant every set Sj ⊇ Si that

[13] Note that we again translate Pollock's inference graphs into (structured) sets of arguments.


is considered is finite, in determining ideal warrant the set of all possible arguments has to be considered, and this set can be infinite.

EXAMPLE 53. (Warrant does not entail ideal warrant.) Suppose A1, A2, A3, ... are arguments such that every Ai is defeated by its successor Ai+1. Further, suppose that the arguments are produced in the order A2, A1, A4, A3, A6, A5, A8, ... Then

Stage  Produced                      Justified
1      A2                            A2
2      A2, A1                        A2
3      A2, A1, A4                    A2, A4
4      A2, A1, A4, A3                A2, A4
5      A2, A1, A4, A3, A6            A2, A4, A6
6      A2, A1, A4, A3, A6, A5        A2, A4, A6
7      A2, A1, A4, A3, A6, A5, A8    A2, A4, A6, A8
...

From stage 1 onwards, A2 is justified and stays justified. Thus, A2 is warranted. At the same time, however, A2 is not ideally warranted, because there exist two status assignments for all the Ai's: one assignment in which all and only the odd arguments are `in', and one assignment in which all and only the odd arguments are `out'. Hence, according to ideal warrant, every argument is only provisionally defeated. In particular, A2 is provisionally defeated. A remarkable aspect of this example is that, eventually, every argument will be produced, but without reaching the right result for A2.

EXAMPLE 54. (Ideal warrant does not imply warrant.) Suppose that A, B1, B2, B3, ... and C1, C2, C3, ... are arguments such that A is defeated by every Bi, and every Bi is defeated by Ci. Further, suppose that the arguments are produced in the order A, B1, C1, B2, C2, B3, C3, ... Then

Stage  Produced                      Justified
1      A                             A
2      A, B1                         B1
3      A, B1, C1                     A, C1
4      A, B1, C1, B2                 C1, B2
5      A, B1, C1, B2, C2             A, C1, C2
6      A, B1, C1, B2, C2, B3         C1, C2, B3
...

Thus, in this sequence, A is provisionally defeated, and hence not warranted. However, according to the definition of ideal warrant, every Bi is defeated by Ci, so that A remains undefeated.
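The stage table of Example 53 can be reproduced mechanically. In the sketch below (our own encoding; `ORDER` and `justified` are names we introduce), the defeat graph restricted to any finite stage is a union of chains, so each stage has a unique status assignment, computable from the highest index downwards:

```python
ORDER = [2, 1, 4, 3, 6, 5, 8]  # the production order A2, A1, A4, A3, ...

def justified(produced):
    """Indices of the justified (undefeated) arguments in the finite set
    `produced`, where A_{i+1} defeats A_i: an argument is undefeated iff
    its defeater is absent from `produced` or is itself defeated."""
    status = {}
    for i in sorted(produced, reverse=True):
        defeater_active = (i + 1) in produced and status[i + 1]
        status[i] = not defeater_active
    return {i for i, undefeated in status.items() if undefeated}

# Reproduces the Justified column of the table: A2 is justified at every
# stage, so A2 is warranted even though it is not ideally warranted.
stages = [justified(set(ORDER[:n])) for n in range(1, 8)]
```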
Although the notion of warrant is computationally inspired, as Pollock observes there is no automated procedure that can determine of any warranted argument that it is warranted: even if in fact a warranted argument stays undefeated after some finite number n of computations, a reasoner at stage n cannot know whether it has reached a point where the argument stays undefeated, or whether further computation will change its status.

Pollock's reasoning architecture

We now discuss Pollock's reasoning architecture for computing the ideally warranted propositions, i.e. the propositions that are the conclusions of ideally warranted arguments. (According to Pollock, ideal warrant is what every reasoner should ultimately strive for.) In deductive logic such an architecture would be called a `proof theory', but Pollock rejects this term. The reason is that one condition normally required of proof theories, viz. that the set of theorems is recursively enumerable, cannot in general be satisfied for a defeasible reasoner. Pollock assumes that a reasoner reasons by constantly updating its beliefs, where an update is an elementary transition from one set of propositions to the next. According to this view, a reasoner would be adequate if the resulting sequence were a recursively enumerable approximation of ideal warrant. However, this is impossible. Ideal warrant contains all theorems of predicate logic, and it is known that the set of all theorems of predicate logic is not recursive. And since in defeasible reasoning some conclusions depend on the failure to derive other conclusions, the set of defeasible conclusions is not recursively enumerable. Therefore, Pollock suggests an alternative criterion of adequacy. A reasoner is called defeasibly adequate if the resulting sequence is a defeasibly enumerable approximation of ideal warrant.

DEFINITION 55. A set A is defeasibly enumerable if there is a sequence of sets A1, A2, ... such that for all x:
1. If x ∈ A, then there is an N such that x ∈ Ai for all i > N.
2. If x ∉ A, then there is an M such that x ∉ Ai for all i > M.
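Definition 55 can be made concrete with a toy target set. In this sketch (our own example, not Pollock's), the target A is the set of even numbers, and the stages approach it from below and above simultaneously:

```python
def approx(i):
    """Stage i of a defeasibly enumerable approximation of the even numbers:
    each even number enters at its own stage and stays forever (condition 1),
    while each odd number n is wrongly included from stage n until stage
    2*n - 1 and is then permanently retracted (condition 2), so no stage is
    final for all numbers at once."""
    evens = {n for n in range(i + 1) if n % 2 == 0}
    spurious_odds = {n for n in range(i + 1) if n % 2 == 1 and i < 2 * n}
    return evens | spurious_odds
```

The point of the example is that the sequence is not monotone: retractions really happen at every stage, yet every number eventually receives its correct, permanent verdict.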

If A is recursively enumerable, then a reasoner who updates his beliefs in Pollock's way can approach A `from below': the reasoner can construct sets that are all supersets of the preceding set and subsets of A. However, when A is only defeasibly enumerable, a reasoner can only approach A from below and above simultaneously, in the sense that the sets Ai the reasoner constructs may contain elements not contained in A. Every such element must eventually be taken out of the Ai's, but there need not be any point at which they have all been removed. To ensure defeasible adequacy, Pollock introduces the following three operations:
1. The reasoner must adopt beliefs in response to constructing arguments, provided no counterarguments have already been adopted for


any step in the argument. If a defeasible inference occurs, a check must be made whether a counterargument to it has not already been adopted as a belief.
2. The reasoner must keep track of the bases upon which its beliefs are held. When a new belief is adopted that is a defeater for a previous inference step, then the reasoner must retract that inference step and all beliefs inferred from it.
3. The reasoner must keep track of defeated inferences, and when a defeater is itself retracted (2), this should reinstate the defeated inference.
To achieve the functions just described, Pollock introduces a so-called

flag-based reasoner. A flag-based reasoner consists of an inference engine that eventually produces all arguments, and a component computing the defeat status of arguments.

LOOP
BEGIN
  make-an-inference
  recompute-defeat-statuses
END

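The loop can be rendered as a generator (our own sketch, not Pollock's OSCAR implementation; `make_inferences` and `recompute_statuses` are hypothetical parameters):

```python
def flag_based_reasoner(make_inferences, recompute_statuses):
    """Sketch of a flag-based reasoner: `make_inferences` is assumed to be
    an iterator that eventually produces every argument; after each new
    argument, `recompute_statuses` maps the arguments produced so far to
    their current defeat statuses, i.e. it computes justification at each
    stage of the argumentation sequence."""
    produced = []
    for argument in make_inferences:           # make-an-inference
        produced.append(argument)
        yield recompute_statuses(produced)     # recompute-defeat-statuses

# Stub run with a placeholder status function that marks only the most
# recently produced argument `undefeated'.
def newest_wins(args):
    return {a: ('undefeated' if i == len(args) - 1 else 'defeated')
            for i, a in enumerate(args)}

history = list(flag_based_reasoner(['A1', 'A2', 'A3'], newest_wins))
```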
The procedure recompute-defeat-statuses determines which arguments are defeated outright, undefeated and provisionally defeated at each iteration of the loop. That is, at each iteration it determines justification. Pollock then identifies certain conditions under which a flag-based reasoner is defeasibly adequate. For these conditions, the reader is referred to [Pollock, 1995, ch. 4].

Evaluation

Pollock's theory of defeasible reasoning is based on more than thirty years of research in logic and epistemology. This large time span perhaps explains the richness of his theory. It includes both linear and suppositional arguments, and deductive as well as non-deductive (mainly statistical and inductive) arguments, with a corresponding distinction between two types of conflict between arguments. Pollock's definition of the status of arguments takes the multiple-status-assignments approach, being related to Dung's preferred semantics. This semantics can deal with certain types of floating statuses and conclusions, but we have seen that certain other types are still ignored. In fact, this seems to be one of the main unsolved problems in argument-based semantics. An interesting aspect of Pollock's work is his study of the resource-bounded nature of practical reasoning, with the idea of partial computation embodied in the notions of warrant and especially


justification. And for artificial intelligence it is interesting that Pollock has implemented his system as a computer program. Since Pollock focuses on epistemological issues, his system is not immediately applicable to some specific features of practical (including legal) reasoning. For instance, the use of probabilistic notions seems to make it difficult to give an account of reasoning with and about priority relations between arguments (see below in Subsection 5.7). Moreover, it would be interesting to know what Pollock would regard as suitable reasons for normative reasoning. It would also be interesting to study how, for instance, analogical and abductive arguments can be analysed in Pollock's system as giving rise to prima facie reasons.

5.3 Inheritance systems

A forerunner of argumentation systems is the work on so-called inheritance systems, especially that of Horty et al., e.g. [1990], which we shall briefly discuss. Inheritance systems determine whether an object of a certain kind has a certain property. Their language is very restricted. The network is a directed graph. Its initial nodes represent individuals and its other nodes stand for classes of individuals. There are two kinds of links, → and ↛, depending on whether something does or does not belong to a certain class. Links from an individual to a class express class membership, and links between two classes express class inclusion. A path through the graph is an inheritance path iff its only negative link is the last one. Thus the following are examples of inheritance paths.
P1: Tweety → Penguin → Bird → Canfly
P2: Tweety → Penguin ↛ Canfly
Another basic notion is that of an assertion, which is of the form x → y or x ↛ y, where y is a class. Such an assertion is enabled by an inheritance path if the path starts with x and ends with the same link to y as the assertion. Above, an assertion enabled by P1 is Tweety → Canfly, and an assertion enabled by P2 is Tweety ↛ Canfly. As the example shows, two paths can be conflicting. They are compared on specificity, which is read off from the syntactic structure of the net, resulting in relations of neutralisation and preemption between paths. The assignment of a status to a path (whether it is permitted) is similar to the recursive variant of the unique-status-assignment approach of Definition 17. This means that the system has problems with zombie paths and floating conclusions (as observed by Makinson & Schlechta [1991]). Although Horty et al. present their system as a special-purpose formalism, it clearly has all the elements of an argumentation system. An inheritance path corresponds to an argument, and an assertion enabled by a path to a conclusion of an argument. Their notion of conflicting paths


corresponds to rebutting attack. Furthermore, neutralisation and preemption correspond to defeat, while a permitted path is the same as a justified argument. Because of the restricted language and the rather complex definition of when an inheritance path is permitted, we shall not present the full system. However, Horty et al. should be credited for anticipating many distinctions and discussions in the field of defeasible argumentation. In particular, their work is a rich source of benchmark examples. We shall discuss one of them.

EXAMPLE 56. Consider four arguments A, B, C and D such that B strictly defeats A, D strictly defeats C, A and D defeat each other, and B and C defeat each other.


Here is a natural-language version (due to Horty, personal communication), in which the defeat relations are based on specificity considerations.
A = Larry is rich because he is a public defender, public defenders are lawyers, and lawyers are rich;
B = Larry is not rich because he is a public defender, and public defenders are not rich;
C = Larry is rich because he lives in Brentwood, and people who live in Brentwood are rich;
D = Larry is not rich because he rents in Brentwood, and people who rent in Brentwood are not rich.
If we apply the various semantics of the BDKT approach to this example, we see that since no argument is undefeated, none of them is in the grounded extension. Moreover, there are preferred extensions in which Larry is rich, and preferred extensions in which Larry is not rich. Yet it might be argued that since both arguments that Larry is rich are strictly defeated by an argument that Larry is not rich, the sceptical conclusion should be that Larry is not rich. This is the outcome obtained by Horty et al. [1990]. We note that if this example is represented in the way Pollock proposes for Example 19 (see the representation above), this outcome can also be obtained in the BDKT approach.
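The claims about this example can be checked with a small Dung-style computation (our own encoding, identifying defeat with Dung's attack relation):

```python
from itertools import combinations

ARGS = {'A', 'B', 'C', 'D'}
# Example 56: B strictly defeats A, D strictly defeats C,
# A and D defeat each other, B and C defeat each other.
ATTACKS = {('B', 'A'), ('D', 'C'), ('A', 'D'), ('D', 'A'),
           ('B', 'C'), ('C', 'B')}

def attackers(a):
    return {x for (x, y) in ATTACKS if y == a}

def acceptable(a, S):
    # a is acceptable with respect to S iff S attacks every attacker of a.
    return all(attackers(b) & S for b in attackers(a))

def admissible(S):
    conflict_free = not any((x, y) in ATTACKS for x in S for y in S)
    return conflict_free and all(acceptable(a, S) for a in S)

def preferred_extensions():
    # Maximal admissible sets, found by enumerating all subsets.
    adm = [set(c) for r in range(len(ARGS) + 1)
           for c in combinations(sorted(ARGS), r) if admissible(set(c))]
    return [S for S in adm if not any(S < T for T in adm)]

def grounded_extension():
    # Least fixpoint of the characteristic function.
    S = set()
    while True:
        nxt = {a for a in ARGS if acceptable(a, S)}
        if nxt == S:
            return S
        S = nxt

# The grounded extension is empty, while the preferred extensions are
# {A, C} (Larry is rich) and {B, D} (Larry is not rich).
```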


5.4 Lin and Shoham

Before the BDKT approach, an earlier attempt to provide a unifying framework for nonmonotonic logics was made by Lin & Shoham [1989]. They show how any logic, whether monotonic or not, can be reformulated as a system for constructing arguments. However, in contrast with the other theories in this section, they are not concerned with comparing incompatible arguments, and so their framework cannot be used as a theory of defeat among arguments. The basic elements of Lin & Shoham's abstract framework are an unspecified logical language, only assumed to contain a negation symbol, and an equally unspecified set of inference rules defined over the assumed language. Arguments can be constructed by chaining inference rules into trees. Inference rules are either monotonic or nonmonotonic. For instance,
Penguin(a) → Bird(a)
Penguin(a), ¬ab(penguin(a)) → ¬Fly(a)

are monotonic rules, and

True ⇒ ¬ab(penguin(a))
True ⇒ ¬ab(bird(a))

are nonmonotonic rules. Note that these inference rules are, as in default logic, domain specific. In fact, Lin & Shoham do not distinguish between general and domain-dependent inference rules, as is shown by their reconstruction of default logic, to be discussed below. Although the lack of a notion of defeat is a severe limitation, in capturing nonmonotonic consequence Lin & Shoham introduce a notion which is very relevant for defeasible argumentation, viz. that of an argument structure.

DEFINITION 57. (argument structures) A set T of arguments is an argument structure if T satisfies the following conditions:
1. The set of `base facts' (which roughly are the premises) is in T;
2. Of every argument in T all its subarguments are in T;
3. The set of conclusions of arguments in T is deductively closed and consistent.

Note that the notion of a `closed' set of arguments that we used above in Pollock's Definition 47 satisfies the first two but not the third of these conditions. Note also that, although argument structures are closed under monotonic rules, they are not closed under defeasible rules. Lin & Shoham then reformulate existing nonmonotonic logics in terms of monotonic and nonmonotonic inference rules, and show how the alternative


sets of conclusions of these logics can be captured in terms of argument structures with certain completeness properties. Bondarenko et al. [1997] remark that structures with these properties are very similar to their stable extensions. The claim that existing nonmonotonic logics can be captured by an argument system is an important one, and Lin & Shoham were among the first to make it. The remainder of this section is therefore devoted to showing with an example how Lin & Shoham accomplish this, viz. for default logic [Reiter, 1980]. In default logic (see also Subsection 2.1), a default theory is a pair Δ = (W, D), where W is a set of first-order formulas and D a set of defaults. Each default is of the form A : B1, ..., Bn / C, where A, the Bi and C are first-order formulas. Informally, a default reads as `If A is known, and B1, ..., Bn are consistent with what is known, then C may be inferred'. An extension of a default theory is any set of formulas E satisfying the following conditions: E = ⋃i≥0 Ei, where

E0 = W;
Ei+1 = Th(Ei) ∪ {C | A : B1, ..., Bn / C ∈ D, where A ∈ Ei and ¬B1, ..., ¬Bn ∉ E}

We now discuss the correspondence between default logic and argument systems by providing a global outline of the translation and proof. Lin & Shoham perform the translation as follows. Let Δ = (W, D) be a closed default theory. Define R(Δ) to be the set of the following rules:
1. True is a base fact.
2. If A ∈ W, then A is a base fact of R(Δ).
3. If A1, ..., An and B are first-order sentences and B is a consequence of A1, ..., An in first-order logic, then A1, ..., An → B is a monotonic rule.
4. If A is a first-order sentence, then ¬A → ab(A) is a monotonic rule.
5. If A : B1, ..., Bn / C is a default in D, then

A, ¬ab(B1), ..., ¬ab(Bn) → C

is a monotonic rule.
6. If B is a first-order sentence, then True ⇒ ¬ab(B) is a nonmonotonic rule.
Lin & Shoham proceed by introducing the concept of DL-complete argument structures.


DEFINITION 58. An argument structure T of R(Δ) is said to be DL-complete if for any first-order sentence A, either ab(A) or ¬ab(A) is in W(T).

Thus, a DL-complete argument structure is explicit about the abnormality of every first-order sentence. For DL-complete argument structures, the following lemma is established.

LEMMA 59. If T is a DL-complete argument structure of R(Δ), then for any first-order sentence A, ab(A) ∈ W(T) iff ¬A ∈ W(T).

On the basis of this result, Lin & Shoham are able to establish the following correspondence between default logic and argument systems.

THEOREM 60. Let E be a consistent set of first-order sentences. E is an extension of Δ iff there is a DL-complete argument structure T of R(Δ) such that E is the restriction of W(T) to the set of first-order sentences.

This theorem is proven by constructing extensions for given argument structures and vice versa. If E is an extension of Δ, Lin & Shoham define T as the set of arguments with all nodes in E′, where
E′ = E ∪ {ab(B) | ¬B ∈ E} ∪ {¬ab(B) | ¬B ∉ E}
and prove that W(T) = E′. Conversely, for a DL-complete argument structure T of R(Δ), Lin & Shoham prove that the first-order restriction E of W(T) is a default extension of Δ. This is proven by induction on the definition of an extension. Two features of the translation are worth noticing. First, default logic makes a distinction between meta-logical default rules and first-order logic, while argument systems do not. Second, the notion of groundedness of default extensions corresponds to that of an argument in argument systems, and the notion of fixed points in default logic corresponds to that of DL-completeness of argument structures. Lin & Shoham further show that, for normal default theories, the translation can be performed without second-order predicates such as ab. This result however falls beyond the scope of this chapter.
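Reiter's definition of an extension, used in the translation above, can be prototyped for a toy fragment in which all formulas are literals written 'p' or '-p', so that Th reduces to closure under the defaults themselves. This is our simplification for illustration only; `is_extension` and `neg` are names we introduce.

```python
def neg(literal):
    """Negation of a literal written as 'p' or '-p'."""
    return literal[1:] if literal.startswith('-') else '-' + literal

def is_extension(E, W, D):
    """Check Reiter's quasi-inductive definition for a toy default theory:
    E is an extension iff E equals the least set that contains W and is
    closed under every default (A, [B1, ..., Bn], C) whose prerequisite A
    has been derived and whose justifications Bi are all consistent with
    the candidate E itself (note: consistency is tested against E, not
    against the intermediate sets Ei)."""
    Ei = set(W)
    while True:
        new = {C for (A, Bs, C) in D
               if A in Ei and all(neg(B) not in E for B in Bs)}
        if new <= Ei:
            return Ei == set(E)
        Ei |= new

# The Nixon diamond: two extensions, one containing 'pacifist' and one
# containing '-pacifist', but W alone is not an extension.
W = ['quaker', 'republican']
D = [('quaker', ['pacifist'], 'pacifist'),
     ('republican', ['-pacifist'], '-pacifist')]
```

The fixpoint flavour of the definition shows up in the guess-and-verify structure: E appears on both sides, which is why extensions must be checked rather than computed by simple forward chaining.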

5.5 Vreeswijk's Abstract Argumentation Systems

Like the BDKT approach and Lin & Shoham [1989], Vreeswijk [1993a; 1997] also aims to provide an abstract framework for defeasible argumentation. His framework builds on that of Lin & Shoham, but contains the main elements that are missing in their system, namely notions of conflict and defeat between arguments. Like Lin & Shoham, Vreeswijk assumes an unspecified logical language L, only assumed to contain the symbol ⊥, denoting `falsum' or `contradiction', and an unspecified set of monotonic and


nonmonotonic inference rules (which Vreeswijk calls `strict' and `defeasible'). This also makes his system an abstract framework rather than a particular system. A point in which Vreeswijk's work differs from Lin & Shoham is that Vreeswijk's inference rules are not domain specific but general logical principles.

DEFINITION 61. (Rule of inference.) Let L be a language.
1. A strict rule of inference is a formula of the form φ1, ..., φn → φ, where φ1, ..., φn is a finite, possibly empty, sequence in L and φ is a member of L.
2. A defeasible rule of inference is a formula of the form φ1, ..., φn ⇒ φ, where φ1, ..., φn is a finite, possibly empty, sequence in L and φ is a member of L.
A rule of inference is a strict or a defeasible rule of inference.

Another aspect taken from Lin & Shoham is that in Vreeswijk's framework, arguments can be formed by chaining inference rules into trees.

DEFINITION 62. (Argument.) Let R be a set of rules. An argument σ has premises, a conclusion, sentences (or propositions), assumptions, subarguments, top arguments, a length, and a size. These are abbreviated by corresponding prefixes. An argument is
1. A member φ of L; in that case,

prem(σ) = {φ}, conc(σ) = φ, sent(σ) = {φ}, asm(σ) = ∅, sub(σ) = {σ}, top(σ) = {σ}, length(σ) = 1, and size(σ) = 1;

or
2. A formula of the form σ1, ..., σn → φ, where σ1, ..., σn is a finite, possibly empty, sequence of arguments such that conc(σ1) = φ1, ..., conc(σn) = φn for some rule φ1, ..., φn → φ in R, and φ ∉ sent(σ1) ∪ ... ∪ sent(σn); in that case,

prem(σ) = prem(σ1) ∪ ... ∪ prem(σn), conc(σ) = φ, sent(σ) = sent(σ1) ∪ ... ∪ sent(σn) ∪ {φ}, asm(σ) = asm(σ1) ∪ ... ∪ asm(σn), sub(σ) = sub(σ1) ∪ ... ∪ sub(σn) ∪ {σ}, top(σ) = {τ1, ..., τn → φ | τ1 ∈ top(σ1), ..., τn ∈ top(σn)} ∪ {σ}, length(σ) = max{length(σ1), ..., length(σn)} + 1, and size(σ) = size(σ1) + ... + size(σn) + 1;


or
3. A formula of the form σ1, ..., σn ⇒ φ, where σ1, ..., σn is a finite, possibly empty, sequence of arguments such that conc(σ1) = φ1, ..., conc(σn) = φn for some rule φ1, ..., φn ⇒ φ in R, and φ ∉ sent(σ1) ∪ ... ∪ sent(σn); for assumptions we have

asm(σ) = asm(σ1) ∪ ... ∪ asm(σn) ∪ {φ};
premises, conclusions, and the other attributes are defined as in (2).

Arguments of type (1) are atomic arguments; arguments of type (2) and (3) are composite arguments. Thus, atomic arguments are language elements. An argument σ is said to be in contradiction if conc(σ) = ⊥. An argument is defeasible if it contains at least one defeasible rule of inference; otherwise it is strict. Unlike Lin & Shoham, Vreeswijk assumes an ordering on arguments, indicating their difference in strength (on which more below). As for conflicts between arguments, a difference from all other systems of this section (except [Verheij, 1996]; see below in Subsection 5.10) is that a counterargument is in fact a set of arguments: Vreeswijk defines a set of arguments Σ to be incompatible with an argument σ iff the conclusions of Σ ∪ {σ} give rise to a strict argument for ⊥. Sets of arguments are needed because the language in Vreeswijk's framework is unspecified and therefore lacks the expressive power to `recognise' inconsistency. The consequence of this lack of expressiveness is that a set of arguments σ1, ..., σn that is incompatible with σ cannot be joined into one argument that contradicts, or is inconsistent with, σ. Therefore, it is necessary to take sets of arguments into account. Vreeswijk has no explicit notion of undercutting attack; he claims that this notion is implicitly captured by his notion of incompatibility, viz. as arguments for the denial of a defeasible conditional used by another argument. This requires some extra assumptions on the language of an abstract argumentation system, viz. that it is closed under negation (¬), conjunction (∧), material implication (⊃), and defeasible implication (>). For the latter connective Vreeswijk defines the following defeasible inference rule:
φ, φ > ψ ⇒ ψ
With these extra language elements, it is possible to express rules of inference (which are meta-linguistic notions) in the object language. Meta-level rules using → (strict rule of inference) and ⇒ (defeasible rule of inference) are then represented by the corresponding object-language implication symbols ⊃ and >. Under this condition, Vreeswijk claims to be able to define rebutting and undercutting attackers in a formal fashion. For example, let σ and τ be arguments in Vreeswijk's system with conclusions φ and ψ, respectively. Let φ1, ..., φn ⇒ φ be the top rule of σ.

284

HENRY PRAKKEN & GERARD VREESWIJK

Rebutting attack. If ψ = ¬φ, then Vreeswijk calls τ a rebutting attacker of σ. Thus, the conclusion of a rebutting attacker contradicts the conclusion of the argument it attacks.

Undercutting attack. If ψ = ¬(φ₁ ∧ … ∧ φₙ > φ), i.e. if ψ is the negation of the last rule of σ stated in the object language, then τ is said to be an undercutting attacker of σ. Thus, the conclusion of an undercutting attacker contradicts the last inference of the argument it attacks.

Vreeswijk's notion of defeat rests on two basic concepts, viz. the above-defined notion of incompatibility and the notion of undermining. An argument is said to undermine a set of arguments if it dominates at least one element of that set. Formally, a set of arguments Σ is undermined by an argument σ if τ < σ for some τ ∈ Σ. If a set of arguments is undermined by another argument, it cannot uphold or maintain all of its members in case of a conflict. Vreeswijk then defines the notion of a defeater as follows:

DEFINITION 63. (Defeater.) Let P be a base set, and let σ be an argument. A set of arguments Σ is a defeater of σ if it is incompatible with σ and not undermined by it; in this case σ is said to be defeated by Σ, and Σ defeats σ. Σ is a minimal defeater of σ if none of its proper subsets defeats σ.

As for the assessment of arguments, Vreeswijk's declarative definition (which he says is about "warrant") is similar to Pollock's definition of a defeat status assignment: both definitions have an explicit recursive structure and both lead to multiple status assignments in case of irresolvable conflicts. However, Vreeswijk's status assignments cannot be partial, for which reason Vreeswijk's definition is closer to stable semantics than to preferred semantics.

DEFINITION 64. (Defeasible entailment.) Let P be a base set. A relation |∼ between P and arguments based on P is a defeasible entailment relation if, for every argument σ based on P, we have P |∼ σ (σ is in force on the basis of P) if and only if

1. The set P contains σ; or
2. For some arguments σ₁, …, σₙ we have P |∼ σ₁, …, σₙ and σ₁, …, σₙ → σ; or
3. For some arguments σ₁, …, σₙ we have P |∼ σ₁, …, σₙ and σ₁, …, σₙ ⇒ σ, and every set of arguments Σ such that P |∼ Σ does not defeat σ.

In the Nixon Diamond of Example 3 this results in `the Quaker argument is in force iff the Republican argument is not in force'. To deal with such

LOGICS FOR DEFEASIBLE ARGUMENTATION

285

circularities Vreeswijk defines, for every |∼ satisfying the above definition, an extension

Σ(∞) = {σ | P |∼ σ}

On the basis of Definition 64 it can be proven that Σ(∞) is stable, i.e., it can be proven that σ ∉ Σ(∞) iff Σ′ defeats σ for some Σ′ ⊆ Σ(∞). With equally strong conflicting arguments, as in the Nixon Diamond, this results in multiple stable extensions (cf. Definition 38). Just as in Dung's stable semantics, in Vreeswijk's system examples with odd defeat loops might have no extensions. However, an exception holds for the special case of self-defeating arguments, since Definition 63 implies that every argument of which the conclusion strictly implies ⊥ is defeated by the empty set.

Argumentation sequences

Vreeswijk extensively studies various other characterisations of defeasible argumentation. Among other things, he develops the notion of an `argumentation sequence'. An argumentation sequence can be regarded as a sequence A₁ → A₂ → … → Aₙ → … of Lin & Shoham's [1989] argument structures, but without the condition that these structures are closed under deduction. Each following structure is constructed by applying an inference rule to the arguments in the preceding structure. An important addition to Lin & Shoham's notion is that a newly constructed argument is only appended to the sequence if it survives all counterattacks from the argument structure developed thus far. Thus the notion of an argumentation sequence embodies, like Pollock's notion of `justification', the idea of partial computation, i.e., of assessing arguments relative to the inferences made so far. Vreeswijk's argumentation sequences also resemble BDKT's procedure for computing admissible semantics. The difference is that BDKT adopt arguments that are defended (admissible semantics), while Vreeswijk's argumentation sequences adopt arguments that are not defeated (stable semantics). Vreeswijk also develops a procedural version of his framework in dialectical style. It will be discussed below in Section 6.

Plausible reasoning

Vreeswijk further discusses a distinction between two kinds of nonmonotonic reasoning, `defeasible' and `plausible' reasoning. According to him, the above definition of defeasible entailment captures defeasible reasoning, which is unsound (i.e., defeasible) reasoning from firm premises, as in `typically birds fly, Tweety is a bird, so presumably Tweety flies'. Plausible reasoning, by contrast, is sound (i.e., deductive) reasoning from uncertain premises, as in `all birds fly (we think), Tweety is a bird, so Tweety flies (we think)' [Rescher, 1976]. The difference is that in the first case a default proposition is accepted categorically, while in the second case a categorical proposition is accepted by default. In fact, Vreeswijk would regard reasoning with ordered premises, as studied in many nonmonotonic logics, not as defeasible but as plausible reasoning. One element of this distinction is that for defeasible reasoning the ordering on arguments is not part of the input theory, reflecting priority relations between, or degrees of belief in, premises, but a general ordering of types of arguments, such as `deductive arguments prevail over inductive arguments' and `statistical inductive arguments prevail over generic inductive arguments'. Accordingly, Vreeswijk assumes that the ordering on arguments is the same for all sets of premises (although relative to a set of inference rules). Vreeswijk formalises plausible reasoning independently of defeasible reasoning, with the possibility to define input orderings on the premises, and he then combines the two formal treatments. To our knowledge, Vreeswijk's framework is unique in treating these two types of reasoning in one formalism as distinct forms of reasoning; usually the two forms are regarded as alternative ways to look at the same kind of reasoning. Evaluating Vreeswijk's framework, we can say that it pays little attention to the details of comparing arguments and that, like Pollock's but in contrast to BDKT's, it formalises only one type of defeasible consequence, but that it is philosophically well motivated, and quite detailed with respect to the structure of arguments and the process of argumentation.
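Vreeswijk's warrant behaves like Dung's stable semantics, and that behaviour is easy to exhibit in a small sketch. The code below is not Vreeswijk's framework (his counterarguments are sets of arguments over an abstract language); it is a minimal Dung-style attack graph with hypothetical argument names, showing that the Nixon Diamond has two stable extensions while an odd attack cycle has none.

```python
from itertools import chain, combinations

def stable_extensions(args, attacks):
    """Enumerate stable extensions: conflict-free sets of arguments
    that attack every argument outside the set."""
    def attacks_set(s, t):
        return any((a, b) in attacks for a in s for b in t)
    subsets = chain.from_iterable(
        combinations(args, r) for r in range(len(args) + 1))
    extensions = []
    for subset in map(set, subsets):
        conflict_free = not attacks_set(subset, subset)
        covers_rest = all(attacks_set(subset, {x}) for x in set(args) - subset)
        if conflict_free and covers_rest:
            extensions.append(subset)
    return extensions

# Nixon Diamond: the Quaker argument (pacifist) and the Republican
# argument (not pacifist) attack each other.
nixon = stable_extensions(
    {"quaker", "republican"},
    {("quaker", "republican"), ("republican", "quaker")})
print(nixon)  # two extensions, one per argument

# An odd attack cycle a -> b -> c -> a has no stable extension.
print(stable_extensions({"a", "b", "c"},
                        {("a", "b"), ("b", "c"), ("c", "a")}))  # []
```

Note the exception Vreeswijk carves out for self-defeating arguments: under his Definition 63 an argument whose conclusion strictly implies ⊥ is defeated by the empty set, which a bare attack graph like this one does not capture.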

5.6 Simari & Loui

Simari & Loui [1992] present a declarative system for defeasible argumentation that combines ideas of Pollock [1987] on the interaction of arguments with ideas of Poole [1985] on specificity and ideas of Loui [1987] on defaults as two-place meta-linguistic rules. Simari & Loui divide the premises into sets of contingent first-order formulas KC and necessary first-order formulas KN, and one-directional default rules Δ, e.g.

KC = {P(a)}
KN = {∀x. P(x) ⊃ B(x)}
Δ = {B(x) > F(x), P(x) > ¬F(x)}.

Note that Simari & Loui's default rules are not three-place like Reiter's defaults, but two-place. The set of grounded instances of Δ, i.e., of defeasible rules without variables, is denoted by Δ↓. The notion of argument that Simari & Loui maintain is somewhat uncommon:


DEFINITION 65. (Arguments.) Given a context K = KN ∪ KC and a set of defeasible rules Δ, we say that a subset T of Δ↓ is an argument for h ∈ SentC(L) in the context K, denoted ⟨T, h⟩_K, if and only if

1. K ∪ T |∼ h;
2. it is not the case that K ∪ T |∼ ⊥;
3. there is no T′ ⊊ T such that K ∪ T′ |∼ h.

An argument ⟨T, h₁⟩_K is a subargument of an argument ⟨S, h₂⟩_K iff T ⊆ S. That K ∪ T |∼ h means that h is derivable from K ∪ T with first-order inferences applied to first-order formulas and modus ponens applied to defaults. Thus, an argument T is a set of grounded instances of defeasible rules containing sufficient rules to infer h (1), containing no rules irrelevant for inferring h (3), and not making it possible to infer ⊥ (2). This notion of argument is somewhat uncommon because it does not refer to a tree or chain of inference rules. Instead, Definition 65 merely demands that an argument is an unordered collection of rules that together imply a certain conclusion. Simari & Loui define conflict between arguments as follows. An argument ⟨T, h₁⟩_K counterargues an argument ⟨S, h₂⟩_K iff the latter has a subargument ⟨S′, h⟩_K such that ⟨T, h₁⟩_K disagrees with ⟨S′, h⟩_K, i.e., K ∪ {h₁, h} ⊢ ⊥. Arguments are compared with Poole's [1985] definition of specificity: an argument A defeats an argument B iff A disagrees with a subargument B′ of B and A is more specific than B′. Note that this allows for subargument defeat: this is necessary since Simari & Loui's definition of the status of arguments is not explicitly recursive. In fact, they use Pollock's theory of level-n arguments. Since they exclude self-defeating arguments by definition, they can use the version of Definition 11. An important component of Simari & Loui's system is an operator that, for each level k, returns of all the conclusions that can be argued those supported by level-k arguments. Simari & Loui prove that the conclusions for which the outputs at levels k and k + 1 coincide are justified.
The main theorem of the paper states that the set of justified conclusions is uniquely determined, and that repeated application of the operator will bring us to that set. A strong point of Simari & Loui's approach is that it combines the ideas of specificity (Poole) and level-n arguments (Pollock) into one system. Another strong point of the paper is that it presents a convenient calculus of arguments that possesses elegant mathematical properties. Finally, Simari & Loui sketch an interesting architecture for implementation, which has a dialectical form (see below, Section 6 and, for a full description, [Simari et al., 1994; Garcia et al., 1998]). However, the system also has some limitations. Most of them are addressed by Prakken & Sartor [1996; 1997b], to be discussed next.
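The three conditions of Definition 65 can be checked mechanically for the ground example above. The sketch below is our own simplified encoding, not Simari & Loui's calculus: literals are strings with a leading `-` for negation, first-order inference collapses to modus ponens over ground rules, and the specificity comparison is left out.

```python
from itertools import combinations

# Ground encoding of Simari & Loui's running example (hypothetical
# representation): rules are (premises, conclusion) pairs over literals.
FACTS = {"P(a)"}
STRICT = [(frozenset({"P(a)"}), "B(a)")]       # P(x) -> B(x), grounded
DEFAULTS = [(frozenset({"B(a)"}), "F(a)"),     # B(x) > F(x)
            (frozenset({"P(a)"}), "-F(a)")]    # P(x) > -F(x)

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def closure(defaults):
    """Close FACTS under the strict rules and the chosen defaults
    (first-order inference collapses to modus ponens here)."""
    derived = set(FACTS)
    changed = True
    while changed:
        changed = False
        for prem, concl in STRICT + list(defaults):
            if prem <= derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def arguments_for(h):
    """<T, h> pairs in the sense of Definition 65: T derives h (1),
    K + T derives no contradiction (2), and T is minimal (3)."""
    deriving = []
    for r in range(len(DEFAULTS) + 1):
        for T in combinations(DEFAULTS, r):
            derived = closure(T)
            if h in derived and all(neg(l) not in derived for l in derived):
                deriving.append(set(T))
    return [T for T in deriving if not any(S < T for S in deriving)]

print(arguments_for("F(a)"))   # one minimal argument, using B(x) > F(x)
print(arguments_for("-F(a)"))  # one minimal argument, using P(x) > -F(x)
```

The two minimal arguments disagree about F(a); in the full system Poole-style specificity would then decide the conflict in favour of the argument using P(x) > ¬F(x), its antecedent being the more specific.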


5.7 Prakken & Sartor

Inspired by legal reasoning, Prakken & Sartor [1996; 1997b] have developed an argumentation system that combines the language (but not the rest) of default logic with the grounded semantics of the BDKT approach.14 Actually, Prakken & Sartor originally used the language of extended logic programming, but Prakken [1997] generalised the system to default logic's language. Below we present the latter version. The main contributions to defeasible argumentation are a study of the relation between rebutting and assumption attack, and a formalisation of argumentation about the criteria for defeat. The use of default logic's language and grounded semantics makes Prakken & Sartor's system rather similar to Simari & Loui's. However, as just noted, they extend and revise it in a number of respects, to be indicated in more detail below. As for the logical language, the premises are divided into factual knowledge F, a set of first-order formulas subdivided into the necessary facts Fn and the contingent facts Fc, and defeasible knowledge Δ, consisting of Reiter-defaults. The set F is assumed consistent. Prakken & Sartor write defaults as follows:

d : φ₁ ∧ … ∧ φⱼ ∧ ∼φₖ ∧ … ∧ ∼φₙ ⇒ ψ

where d, a term, is the informal name of the default, and each φᵢ and ψ is a first-order formula. The part ∼φₖ ∧ … ∧ ∼φₙ corresponds to the middle part of a Reiter-default. The symbol ∼ can be informally read as `not provable that'. For each ∼φᵢ in a default, ¬φᵢ is called an assumption of the default. The language is defined such that defaults cannot be nested, nor combined with other formulas. Arguments are, as in [Simari & Loui, 1992], chains of defaults `glued' together by first-order reasoning. More precisely, consider the set R consisting of all valid first-order inference rules plus the following rule of defeasible modus ponens (DMP):

d : φ₀ ∧ … ∧ φⱼ ∧ ∼φₖ ∧ … ∧ ∼φₘ ⇒ φₙ ,   φ₀ ∧ … ∧ φⱼ
――――――――――――――――――――――――――――――――――――
φₙ

where all φᵢ are first-order formulas. Note that DMP ignores a default's assumptions; the idea is that if such an assumption is untenable, this will be reflected by a successful attack on the argument using the default. An argument is defined as follows.

DEFINITION 66. (Arguments.) Let Γ be any default theory (Fc ∪ Fn ∪ Δ). An argument based on Γ is a sequence of distinct first-order formulas and/or ground instances of defaults [φ₁, …, φₙ] such that for all φᵢ:

14 A forerunner of this system was presented in [Prakken, 1993].


- φᵢ ∈ Γ; or
- there exists an inference rule ψ₁, …, ψₘ / φᵢ in R such that ψ₁, …, ψₘ ∈ {φ₁, …, φᵢ₋₁}.

For any argument A:
- φ ∈ A is a conclusion of A iff φ is a first-order formula;
- φ is an assumption of A iff φ is an assumption of a default in A;
- A is strict iff A does not contain any default; A is defeasible otherwise.

The set of conclusions of an argument A is denoted by CONC(A) and the set of its assumptions by ASS(A). Note that unlike in Simari & Loui, arguments are not assumed consistent. Here is an example of an argument:

[a, r₁ : a ∧ ∼¬b ⇒ c, c, a ∧ c, r₂ : a ∧ c ⇒ d, d, d ∨ e]

Here CONC(A) = {a, c, a ∧ c, d, d ∨ e} and ASS(A) = {¬¬b}. The presence of assumptions in a rule gives rise to two kinds of conflicts between arguments, conclusion-to-conclusion attack and conclusion-to-assumption attack.

DEFINITION 67. (Attack.) Let A and B be two arguments. A attacks B iff
1. CONC(A) ∪ CONC(B) ∪ Fn ⊢ ⊥; or
2. CONC(A) ∪ Fn ⊢ ¬φ for some φ ∈ ASS(B).

Prakken & Sartor's notion of defeat among arguments is built up from two other notions, `rebutting' and `undercutting' an argument. An argument A rebuts an argument B iff A conclusion-to-conclusion attacks B and either A is strict and B is defeasible, or A's default rules involved in the conflict have no lower priority than B's defaults involved in the conflict. Identifying the involved defaults and applying the priorities to them requires some subtleties, for which the reader is referred to Prakken & Sartor [1996; 1997b] and Prakken [1997]. The source of the priorities will be discussed below. An argument A undercuts an argument B precisely in the case of the second kind of conflict (attack on an assumption). Note that it is not necessary that the default(s) responsible for the attack on the assumption has/have no lower priority than the default containing the assumption. Note also that Prakken & Sartor's undercutters capture a different situation than Pollock's: their undercutters attack an explicit non-provability assumption of another argument (in Section 3 called `assumption attack'), while Pollock's undercutters deny the relation between premises and conclusion in a non-deductive argument. Prakken & Sartor's notion of defeat also differs from that of Pollock [1995]. An inessential difference is that their notion allows for `subargument defeat'; this is necessary since their definition of the status of arguments is not explicitly recursive (cf. Subsection 4.1). More importantly, Prakken & Sartor regard undercutting defeat as prior to rebutting defeat.

DEFINITION 68. (Defeat.) An argument A defeats an argument B iff A = [] and B attacks itself, or else if
- A undercuts B; or
- A rebuts B and B does not undercut A.

As mentioned above in Subsection 4.1, the empty argument serves to adequately deal with self-defeating arguments. By definition the empty argument is not defeated by any other argument. The rationale for the precedence of undercutters over rebutters is explained by the following example.

EXAMPLE 69. Consider

r1 : ∼¬(Brutus is innocent) ⇒ Brutus is innocent
r2 : φ ⇒ ¬(Brutus is innocent)

Assume that for some reason r2 has no priority over r1 and consider the arguments [r1] and [..., r2].15 Then, although [r1] rebuts [..., r2], [r1] does not defeat [..., r2], since [..., r2] undercuts [r1]. So [..., r2] strictly defeats [r1]. Why should this be so? According to Prakken & Sartor, the crux is to regard the assumption of a rule as one of its conditions (albeit of a special kind) for application. Then the only way to accept both rules is to believe that Brutus is not innocent: in that case the condition of r1 is not satisfied. By contrast, if it is believed that Brutus is innocent, then r2 has to be rejected, in the sense that its conditions are believed but its consequent is not (`believing an assumption' here means not believing its negation).

15 We abbreviate arguments by omitting their conclusions and only giving the names of their defaults. Furthermore, we leave implicit that r2's antecedent φ is derived by a subargument of possibly several steps.

Note that this line of reasoning does not naturally apply to undercutters Pollock-style, which might explain why in Pollock [1995] rebutting and undercutting defeaters stand on an equal footing. Finally, we come to Prakken & Sartor's definition of the status of arguments. As remarked above, they use the grounded semantics of Definition 7. However, they change it in one important respect. This has to do with the origin of the default priorities with which conflicting arguments are compared. In artificial intelligence research the question where these priorities can be found is usually not treated as a matter of common-sense reasoning. Either a fixed ordering is simply assumed, or use is made of a specificity ordering, read off from the syntax or semantics of an input theory. However, Prakken

& Sartor want to capture the fact that in many domains of common-sense reasoning, like the law or bureaucracies, priority issues are part of the domain theory. This even holds for specificity; although checking which argument is more specific may be a logical matter, deciding to prefer the most specific argument is an extra-logical decision. Besides varying from domain to domain, the priority sources can also be incomplete or inconsistent, in the same way as `ordinary' domain information can be. In other words, reasoning about priorities is defeasible reasoning. (This is why our example of the introduction contains a priority argument, viz. A's use of (9) and (10).) For these reasons, Prakken & Sartor want the status of arguments not only to depend on the priorities, but also to determine the priorities. Accordingly, priority conclusions can be defeasibly derived within their system in the same way as conclusions like `Tweety flies'.16 To formalise this, Prakken & Sartor need a few technicalities. First the first-order part of the language is extended with a special two-place predicate ≺. That x ≺ y means that y has priority over x. The variables x and y can be instantiated with default names. This new predicate symbol should denote a strict partial order on the set of defaults, which is assumed by the metatheory of the system. For this reason, the set Fn must contain the axioms of a strict partial order:

transitivity: ∀x, y, z. x ≺ y ∧ y ≺ z ⊃ x ≺ z
asymmetry: ∀x, y. x ≺ y ⊃ ¬(y ≺ x)

For simplicity, some restrictions on the syntactic form of priority expressions are assumed. Fc may not contain any priority expressions, while in the defaults priority expressions may only occur in the consequent, and only in the form of conjunctions of literals (a literal is an atomic formula or a negated atomic formula). This excludes, for instance, disjunctive priority expressions. Next, the rebut and defeat relations must be made relative to an ordering relation that might vary during the reasoning process.

DEFINITION 70. For any set S of arguments:
- <S = {r < r′ | r ≺ r′ is a conclusion of some A ∈ S}
- A (strictly) S-defeats B iff, assuming the ordering <S on Δ, A (strictly) defeats B.

The idea is that when it must be determined whether an argument is acceptable with respect to a set S of arguments, the relevant defeat relations are verified relative to the priority conclusions drawn by the arguments in S.

16 For some non-argument-based nonmonotonic logics that deal with this phenomenon, see Grosof [1993], Brewka [1994a; 1996], Prakken [1995] and Hage [1997]; see also Gordon's [1995] use of [Geffner & Pearl, 1992].


DEFINITION 71. An argument A is acceptable with respect to a set S of arguments iff all arguments S-defeating A are strictly S-defeated by some argument in S.

Note that this definition also replaces the second occurrence of defeat in Definition 6 with strict defeat. This is because otherwise it cannot be proven that no two justified arguments are in conflict with each other. Prakken & Sartor then apply the construction of Proposition 9 with Definition 71. They prove that the resulting set of justified arguments is unique and conflict-free and that, when S is this set, the ordering <S is a strict partial order. They also prove that if an argument is justified, all its subarguments are justified. We illustrate the system with the following example.

EXAMPLE 72. Consider an input theory with empty Fc, with Fn containing the above axioms for ≺, and with Δ containing the following defaults:

r0 : ⇒ a      r4 : ⇒ r0 ≺ r3
r1 : a ⇒ b    r5 : ⇒ r3 ≺ r0
r2 : b ⇒ c    r6 : ⇒ r5 ≺ r4
r3 : ⇒ ¬a

The set of justified arguments is constructed as follows (for simplicity we ignore combinations of the listed arguments).

F0 = ∅
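How the construction proceeds from F0 = ∅ can be sketched concretely. The encoding below is a drastic simplification under our own assumptions: the seven relevant arguments A0–A6 (hypothetical names, one per topmost default), their conflicts, and the defaults involved in each conflict are hard-coded rather than derived from the first-order language; only rebutting defeat is modelled; and the transitive closure of the priority facts is omitted, since it is not needed in this example.

```python
# Each argument is abstracted to its topmost default and its conclusion;
# a pair ("r", "r'") as conclusion encodes the priority statement r < r'.
ARGS = {
    "A0": {"default": "r0", "concl": "a"},
    "A1": {"default": "r1", "concl": "b"},            # extends A0
    "A2": {"default": "r2", "concl": "c"},            # extends A1
    "A3": {"default": "r3", "concl": "-a"},
    "A4": {"default": "r4", "concl": ("r0", "r3")},   # r0 < r3
    "A5": {"default": "r5", "concl": ("r3", "r0")},   # r3 < r0
    "A6": {"default": "r6", "concl": ("r5", "r4")},   # r5 < r4
}

# Conflicts, with the default involved on each side: A3 contradicts A0
# and its superarguments A1, A2; A4 and A5 conclude opposite priorities.
CONFLICTS = [("A3", x, "r3", "r0") for x in ("A0", "A1", "A2")] + \
            [("A4", "A5", "r4", "r5")]

def priorities(s):
    """Priority facts concluded by the arguments in s (Definition 70)."""
    return {ARGS[n]["concl"] for n in s if isinstance(ARGS[n]["concl"], tuple)}

def defeats(x, y, prio):
    """Rebutting defeat: x and y conflict and x's default involved in
    the conflict is not lower in prio than y's."""
    for a, b, da, db in CONFLICTS:
        if (x, y) == (a, b) and (da, db) not in prio:
            return True
        if (x, y) == (b, a) and (db, da) not in prio:
            return True
    return False

def acceptable(x, s):
    """Definition 71: every argument s-defeating x is itself strictly
    s-defeated by some member of s."""
    prio = priorities(s)
    return all(any(defeats(z, y, prio) and not defeats(y, z, prio) for z in s)
               for y in ARGS if defeats(y, x, prio))

# Grounded construction: F0 = {}, F_{i+1} = arguments acceptable wrt F_i.
justified = set()
while True:
    step = {x for x in ARGS if acceptable(x, justified)}
    if step == justified:
        break
    justified = step
print(sorted(justified))  # ['A3', 'A4', 'A6']
```

First only A6 is acceptable; its conclusion r5 ≺ r4 then lets A4 strictly defeat A5, and the resulting fact r0 ≺ r3 in turn lets A3 strictly defeat A0, so ¬a, r0 ≺ r3 and r5 ≺ r4 end up justified while A0, A1, A2 and A5 do not.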

At the same time, it socially provided employment for generations of logicians residing in computer science, linguistics and electrical engineering departments which of course helped keep the logic community thriving. In addition to that, it so happens (perhaps not by accident) that many of the Handbook contributors became active in these application areas and took their place, as time passed on, among the most famous leading figures of applied philosophical logic of our times. Today we have a handbook with a most extraordinary collection of famous people as authors! The table below will give our readers an idea of the landscape of logic and its relation to computer science and formal language and artificial intelligence. It shows that the first edition is very close to the mark of what was needed. Two topics were not included in the first edition, even though


they were extensively discussed by all authors in a 3-day Handbook meeting. These are:

a chapter on non-monotonic logic

a chapter on combinatory logic and λ-calculus

We felt at the time (1979) that non-monotonic logic was not ready for a chapter yet and that combinatory logic and λ-calculus was too far removed.1 Non-monotonic logic is now a very major area of philosophical logic, alongside default logics, labelled deductive systems, fibring logics, multi-dimensional, multimodal and substructural logics. Intensive re-examinations of fragments of classical logic have produced fresh insights, including, at times, decision procedures and equivalence with non-classical systems. Perhaps the most impressive achievement of philosophical logic as arising in the past decade has been the effective negotiation of research partnerships with fallacy theory, informal logic and argumentation theory, attested to by the Amsterdam Conference in Logic and Argumentation in 1995, and the two Bonn Conferences in Practical Reasoning in 1996 and 1997. These subjects are becoming more and more useful in agent theory and intelligent and reactive databases. Finally, fifteen years after the start of the Handbook project, I would like to take this opportunity to put forward my current views about logic in computer science, computational linguistics and artificial intelligence. In the early 1980s the perception of the role of logic in computer science was that of a specification and reasoning tool and that of a basis for possibly neat computer languages. The computer scientist was manipulating data structures and the use of logic was one of his options. My own view at the time was that there was an opportunity for logic to play a key role in computer science and to exchange benefits with this rich and important application area and thus enhance its own evolution. The relationship between logic and computer science was perceived as very much like the relationship of applied mathematics to physics and engineering. Applied mathematics evolves through its use as an essential tool, and so we hoped for logic. Today my view has changed.
As computer science and artificial intelligence deal more and more with distributed and interactive systems, processes, concurrency, agents, causes, transitions, communication and control (to name a few), the researcher in this area is having more and more in common with the traditional philosopher who has been analysing such questions for centuries (unrestricted by the capabilities of any hardware). The principles governing the interaction of several processes, for example, are abstract and similar to principles governing the cooperation of two large organisations. A detailed rule-based, effective but rigid bureaucracy is very much similar to a complex computer program handling and manipulating data. My guess is that the principles underlying one are very much the same as those underlying the other. I believe the day is not far away in the future when the computer scientist will wake up one morning with the realisation that he is actually a kind of formal philosopher!

The projected number of volumes for this Handbook is about 18. The subject has evolved and its areas have become interrelated to such an extent that it no longer makes sense to dedicate volumes to topics. However, the volumes do follow some natural groupings of chapters. I would like to thank our authors and readers for their contributions and their commitment in making this Handbook a success. Thanks also to our publication administrator Mrs J. Spurr for her usual dedication and excellence and to Kluwer Academic Publishers for their continuing support for the Handbook.

1 I am really sorry, in hindsight, about the omission of the non-monotonic logic chapter. I wonder how the subject would have developed if the AI research community had had a theoretical model, in the form of a chapter, to look at. Perhaps the area would have developed in a more streamlined way!

Dov Gabbay
King's College London

[Table: the landscape of logic. Its rows cover logic areas — temporal logic; modal and multi-modal logics; algorithmic proof; nonmonotonic reasoning; probabilistic and fuzzy logic; intuitionistic logic; set theory, higher-order logic, λ-calculus and types; classical logic and classical fragments; labelled deductive systems; resource and substructural logics; fibring and combining logics; fallacy theory; logical dynamics; argumentation theory games; object level/metalevel; mechanisms such as abduction and default relevance; connection with neural nets; time-action-revision models; annotated logic programs. Its columns relate each area to IT and natural language processing; program control specification, verification and concurrency; artificial intelligence; logic programming; imperative vs. declarative languages; database theory; complexity theory; agent theory; and special comments looking to the future.]
DONALD NUTE AND CHARLES B. CROSS

CONDITIONAL LOGIC

Prior to 1968 several writers had explored the conditions for the truth or assertability of conditionals, but this work did not result in an attempt to provide formal models for the semantical structure of conditionals. It had also been suggested that a proper logic for conditionals might be provided by combining modal operators with material conditionals in some way, but this suggestion never led to any widely accepted formal logic for conditionals.1 Then Stalnaker [1968] provided both a formal semantics for conditionals and an axiomatic system of conditional logic. This important paper effectively inaugurated that branch of philosophical logic which we today call conditional logic. Nearly all the work on the logic of conditionals for the next ten years, and a great deal of work since then, has either followed Stalnaker's lead in investigating possible worlds semantics for conditionals or posed problems for such an approach. But in 1978, Peter Gärdenfors [1978] initiated a new line of inquiry focused on the use of conditionals to represent policies for belief revision. Thus, two main lines of development appeared, one an ontological approach concerned with truth or assertability conditions for conditionals and the other an epistemological approach focused on conditionals and change of belief.

With these two major lines of development, the material which has appeared on conditionals is prodigious. Consequently, we have had to focus upon certain aspects of conditional logic and to give other aspects less attention. We have followed the trend set in the literature and given the most attention to the analysis of so-called subjunctive conditionals as they are used in ordinary discourse and to triviality results for the Ramsey test. Accordingly, our discussion of conditionals and belief revision will be more heavily technical than our discussion of subjunctive conditionals. Other topics are discussed in less detail.
Some of the important papers which it has not been possible to review are included in the accompanying bibliography, but the bibliography itself is far from complete.

1 ONTOLOGICAL CONDITIONALS

1.1 Introduction

Conditional logic is, in the first place, concerned with the investigation of the logical and semantical properties of a certain class of sentences occurring

1 Another suggestion which has never been fully developed (but see Hunter [1980; 1982]) is that an adequate theory of ordinary conditionals may be derived from relevance logic. We will say no more about this suggestion other than that it seems to us that conditional logic and relevance logic are concerned with very different problems, and it would be a tremendous coincidence if the correct logic for the conditionals of ordinary usage should turn out to resemble some version of relevance logic at all closely.


in a natural language. We will draw our examples from English, but much of what we have to say can be applied, with due caution, to other natural languages. Paradigmatically, a conditional declarative sentence in English is one which contains the words `if' and `then'. Examples include

1. If it is raining, then we are taking a taxi.

and

2. If I were warm, then I would remove my jacket.

We could delete the occurrences of `then' in (1) and (2) and we would still have perfectly acceptable sentences of English. In the case of (2), we can omit both `if' and `then' if we change the word order. Example (2) surely says the same thing as

3. Were I warm, I would remove my jacket.

Other conditionals in which neither `if' nor `then' occurs include

4. When I find a good man, I will praise him.

and

5. You will need my number should you ever wish to call me.

Notice that all of these examples involve two component sentences or clauses, one expressing some sort of condition and another expressing some sort of claim which in some way depends upon the condition. The conditional or `if' part of a conditional sentence is called the antecedent, and the main or `then' part its consequent, even when `if' and `then' do not actually occur. Notice that the antecedent precedes the consequent in (1)-(4), but the consequent comes first in (5). These examples should give the reader a fair idea of the types of sentences with which conditional logic is concerned.

While the verbs in (1) are in the indicative mood, those in (2) are in the subjunctive mood. Researchers often rephrase (2), forming a new conditional in which the verbs contained in antecedent and consequent are in the indicative mood. This practice implicitly assumes that (2) has the same content as

6. If it were the case that I am warm, then it would be the case that I remove my jacket.

Even without the rephrasing, it is sometimes said that `I am warm' is the antecedent of both (2) and (6).
Thus the mood of the verbs in the grammatical antecedent and consequent of (2) is taken logically to be a component of the conditional construction, while the logical antecedent and consequent


are viewed as containing verbs in the indicative mood. Seen in this way, the conditional constructions in (1) and (2) look quite different, and investigators have as a consequence made a distinction between indicative conditionals like (1) and subjunctive conditionals like (2). This distinction is important because it appears that these two kinds of conditionals have different logical and semantical properties.

Much of the work done in conditional logic has focused on conditionals having antecedents and consequents which are false. Such conditionals are called counterfactuals. In actual practice, little distinction is made between counterfactuals and subjunctive conditionals which have true antecedents or consequents. Authors frequently refer to conditionals in the subjunctive mood as counterfactuals regardless of whether their antecedents or consequents are true or false. Another special kind of conditional is the so-called counterlegal conditional, whose antecedent is incompatible with physical law. An example is

7. If the gravitational constant were to take on a slightly higher value in the immediate vicinity of the earth, then people would suffer bone fractures more frequently.

Also recognized are counteridenticals like

8. If I were the pope, I would support the use of the pill in India.

and countertemporals like

9. If it were 3.00 a.m., it would be dark outside.

Analysis of these special conditionals may involve special difficulties, but we can say very little about these special problems in a paper of this length.

Two other interesting conditional constructions are the even-if construction used in

10. It would rain even if the shaman did not do his dance.

and the might construction used in

11. If you don't take the umbrella, you might get wet.

We might paraphrase (10) using the word `still' to get

12. It would still rain if the shaman did not do his dance.

Even-if and might conditionals have somewhat different properties from those of other conditionals.
It is believed by many, though, that these two kinds of conditionals can be analyzed in terms of subjunctive conditionals once we have an acceptable analysis of these. The strategy in this


paper will be to concentrate on the many proposals for subjunctive conditionals, returning later (briefly) to the topics of indicative, even-if, and might conditionals.

We will use two different symbols to represent indicative and subjunctive conditionals. For indicative conditionals we will use the double arrow ⇒, and for the subjunctive conditional we will use the corner >. (Where context makes our intention clear, we will sometimes use symbols and formulas autonymously to refer to themselves.) With these devices we may represent (1) as

13. It is raining ⇒ I am taking a taxi.

and represent (2) as

14. I am warm > I remove my jacket.

Frequently we will have no particular antecedent or consequent in mind as we discuss one or the other of these two kinds of conditionals and as we examine forms which arguments involving these conditionals may take. In these cases we will use standard notation for classical first-order logic augmented by our symbols for indicative and subjunctive conditionals to represent the forms of sentences and arguments under discussion. We assume, as have nearly all investigators, that conditionals have truth values and may therefore appear as arguments for truth-functional operators.

Students in introductory symbolic logic courses are normally taught to treat English conditionals as material conditionals. By material conditionals we mean certain truth-functional compounds of simpler sentences. A material conditional φ → ψ is true just in case φ is false or ψ is true. There can be little doubt that neither material implication nor any other truth function can be used by itself to provide an adequate representation of the logical and semantical properties of English conditionals or, presumably, the conditionals of any other language. Consider the following two examples.

15. If I were seven feet tall, then I would be over two meters tall.

16. If I were seven feet tall, then I would be less than two yards tall.
In fact one of the authors is more than two yards tall but less than two meters tall, so for him the common antecedent and the two consequents of (15) and (16) are all false. Yet surely (15) is true while (16) is false. When both the antecedent and the consequent of an English subjunctive conditional are false, the conditional may be either true or false. Now consider two more examples.

17. If I were eight feet tall, I would be less than seven feet tall.


18. If I were seven feet tall, I would be over six feet tall.

Here we have two conditionals each of which has a false antecedent and a true consequent, but the first of these conditionals is false and the second is true. The moral of these examples is that when the antecedent of an English subjunctive conditional is false, the truth value of the conditional is not determined by the truth values of the antecedent and the consequent of the conditional alone. Some other factors must be involved in determining the truth values of such conditionals.

But what about English conditionals with true antecedents? It is generally accepted that any conditional with a true antecedent and a false consequent is false, but the situation is more controversial where conditionals with true antecedents and true consequents are concerned. Some researchers have maintained that all such conditionals are true while others have claimed that such conditionals are sometimes false. Later we will consider some of the issues involved in this controversy. For now we simply recognize that there are some very good reasons for rejecting the view that all English conditionals can be represented adequately by material implication or by any other truth function.
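The failure of truth-functionality can be made concrete in a few lines of code. The following is our illustrative sketch (not part of the original text); the truth values assigned to antecedents and consequents are those the chapter attributes to examples (15)-(18) for an author between two yards and two meters tall.

```python
# The material conditional as a truth function: phi -> psi is true
# just in case phi is false or psi is true.
def material(antecedent: bool, consequent: bool) -> bool:
    return (not antecedent) or consequent

# For the author in question, the antecedents of (15)-(18) are all false,
# so the material reading makes every one of them come out true:
examples = {
    "(15) seven feet -> over two meters":  material(False, False),
    "(16) seven feet -> under two yards":  material(False, False),
    "(17) eight feet -> under seven feet": material(False, True),
    "(18) seven feet -> over six feet":    material(False, True),
}
for name, value in examples.items():
    print(name, value)   # every line prints True

# Yet intuitively (15) and (18) are true while (16) and (17) are false,
# so no truth function of antecedent and consequent can draw the line.
```

Since (15) and (16) agree in the truth values of their parts, as do (17) and (18), any truth-functional account must assign each pair the same value, contrary to the intuitive verdicts.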

1.2 Cotenability theories of conditionals

Chisholm [1946], Goodman [1955], Sellars [1958], Rescher [1964] and others have proposed accounts of conditionals which share some important features. Borrowing a term from Goodman, we can call these proposals cotenability theories of conditionals. The basic idea which these proposals share is that the conditional φ > ψ is true in case φ, together with some set of laws and true statements, entails ψ. A crucial problem for such an analysis is that of determining the appropriate set of true statements to involve in the truth condition for a particular conditional. If the antecedent of the conditional is false, then of course its negation is true. But any proposition together with its negation will entail anything. The set of true statements upon which the truth of the conditional is to depend must at least be logically compatible with the antecedent of the conditional, or the conditional will turn out to be trivially true on such an account. But logical compatibility is not enough either. We can have a true proposition χ such that χ and φ are logically compatible but such that φ > ¬χ is also true. Then we should not wish to include χ in the set of propositions upon which the evaluation of φ > ψ depends. Goodman said of such a χ that it is not cotenable with φ. So Goodman's ultimate position is that φ > ψ is true just in case ψ is entailed by φ together with the set of all physical laws and the set of all true propositions cotenable with φ, i.e. with the set of all true propositions such that no member of that set counterfactually implies the negation of φ and the negation of no member


of that set is counterfactually implied by φ. Such an account is obviously circular, since the truth conditions for counterfactuals are given in terms of cotenability, while cotenability is defined in terms of the truth values of various counterfactual conditionals. Although this is certainly a serious problem, it is not the only problem which theories of this type encounter. As a result of the role which law plays in such a theory, all counterlegal conditionals are counted as trivially true, and this is counterintuitive. Furthermore, even if we could provide a noncircular account of cotenability, another problem arises for conditionals which are not counterlegal. Suppose two true propositions χ and θ are each cotenable with φ, but that χ ∧ θ is not. In selecting the set of propositions upon which the evaluation of φ > ψ shall rest we must omit either χ or θ, since otherwise our conditional will be trivially true once again. But which of these two propositions shall we omit?

Most recent work in conditional logic is compatible with cotenability theory even though no attempt is made to define and use the notion of cotenability. We might view the resultant theories at least in part as attempts to determine, without ever specifying exactly what cotenability is, the logical and semantical properties which conditionals must have if the cotenability approach is essentially correct for conditionals without counterlegal antecedents. Indeed, the vagueness deliberately built into many of these recent theories suggests that our notion of cotenability, if we have one, varies according to our purposes and the context in which we use a conditional.2
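The point that an incompatible premise set trivializes the entailment condition can be checked mechanically. The sketch below is our illustration, not the chapter's formalism: it uses a toy propositional language in which formulas are Python functions from a valuation to a truth value, and a brute-force semantic entailment test.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force semantic entailment: the conclusion holds in every
    valuation of `atoms` in which all premises hold."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

phi      = lambda v: v["phi"]
not_phi  = lambda v: not v["phi"]
anything = lambda v: v["psi"]     # an arbitrary unrelated sentence

# phi together with its own negation entails anything whatsoever, so an
# evaluation set containing the negation of the antecedent would make
# phi > psi trivially true:
print(entails([phi, not_phi], anything, ["phi", "psi"]))   # True (vacuously)

# With a set compatible with phi, psi is no longer a free consequence:
print(entails([phi], anything, ["phi", "psi"]))            # False
```

This is why the premise set must at least be logically compatible with the antecedent; the code does not, of course, capture the further (and circular) cotenability requirement.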

1.3 Strict Conditionals

We have seen that the truth value of a conditional is not always determined by the actual truth values of its antecedent and consequent, but perhaps it is determined by the truth values which its antecedent and consequent take in some other possible worlds. One way such an analysis might be developed is suggested by the role laws play in the cotenability theories. Perhaps we should look not only at the truth values of the antecedent and the consequent in the actual world, but also at their truth values in all possible worlds which have the same laws as does our own. When two worlds obey the same physical laws, we can say that each is a physical alternative of the other. The proposal, then, is that φ > ψ is true if ψ is true at every physical alternative to the actual world at which φ is true.

Suppose we say a proposition is physically necessary if and only if it is true at every physical alternative to the actual world, and suppose we express

2 Bennett [1974] and Loewer [1978] arrive at opposite conclusions concerning the question whether Lewis's semantics is compatible with cotenability theory. Their discussions are instructive for other semantics as well.


the claim that a proposition φ is physically necessary by □φ. Then the proposal we are considering is that the following equivalence always holds:

19. (φ > ψ) ↔ □(φ → ψ).

Another way of arriving at (19) is the following. English subjunctive conditionals are not truth-functional because they say more than that the antecedent is false or the consequent is true. The additional content is a claim that there is some sort of connection between the antecedent and the consequent. The kind of connection which seems to occur to people most readily in this context is a physical or causal connection. How can we represent this additional content in our formalization of English subjunctive conditionals? One way is to interpret φ > ψ as involving the claim that it is physically impossible that φ be true and ψ false. Once again we come up with (19). A proposal resembling the one we have outlined can be found in [Burks, 1951], although we do not wish to suggest that Burks arrived at his account by exactly the same line of reasoning as we have suggested.

We can generalize the proposal represented by (19). We might suppose that the basic form of (19) is correct but that the sort of necessity involved in English subjunctive conditionals is not pure physical necessity. One reason for suspecting this is that the notion of cotenability has been ignored. It is not simply a consequence of physical law that Jane would develop hives if she were to eat strawberries; it is also in part a consequence of her having a particular physical make-up. In evaluating the claim that Jane would become ill if she were to eat strawberries, we do not count the fact that in some worlds which share the same physical laws as our own but in which Jane has a radically different physical make-up, she is able to eat strawberries with impunity, as a legitimate reason for rejecting this claim.
Another reason for seeking a different kind of necessity for the analysis of conditionals is that some conditionals may be true because of connections between their antecedents and consequents which are not physical connections at all. Consider, for example, conditionals such as `If you deserted your family you would be a cad', which seems to be founded on normative rather than physical connections. The general theory we are considering, then, is that English subjunctive conditionals are strict conditionals of some sort, i.e. that their logical form is given by the equivalence (19). There remains the problem of determining which kind of necessity is involved in these conditionals.

Regardless of the kind of necessity we choose in such an analysis of conditionals, we should expect our modal logic to have certain minimal properties. By a modal logic we mean any set L of sentences formed from the symbols of classical sentential logic together with the symbol □ in the usual ways, provided that L contains all tautologies and is closed under the rule modus ponens. We should expect that for any tautology φ our modal logic will contain □φ. We should also expect our modal logic to contain all substitution


instances of the following thesis:

20. □(φ → ψ) → (□φ → □ψ).

But when we define our conditionals according to (19), our logic will then also contain all substitution instances of the following theses:

Transitivity: [(φ > ψ) ∧ (ψ > χ)] → (φ > χ)
Contraposition: (φ > ¬ψ) → (ψ > ¬φ)
Strengthening Antecedents: (φ > χ) → [(φ ∧ ψ) > χ].

But none of these theses seems to be reliable for English subjunctive conditionals. As a counterexample to Transitivity, consider the following conditionals:

21. If Carter had not lost the election in 1980, Reagan would not have been President in 1981.

22. If Carter had died in 1979, he would not have lost the election in 1980.

23. If Carter had died in 1979, Reagan would not have been President in 1981.

(21) and (22) are true, but it is far from clear that (23) is true. As a counterexample to Contraposition, consider:

24. If it were to rain heavily at noon, the farmer would not irrigate his field at noon.

25. If the farmer were to irrigate his field at noon, it would not rain heavily at noon.

And finally, for Strengthening Antecedents, consider:

26. If the left engine were to fail, the pilot would make an emergency landing.

27. If the left engine were to fail and the right wing were to shear off, the pilot would make an emergency landing.

Since even very weak modal logics will contain all substitution instances of these three theses, and since most speakers of English find counterexamples of the sort we have considered convincing, most investigators are convinced that English conditionals are not a variety of strict conditional.
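That the strict reading of (19) cannot escape these theses can be verified exhaustively on a small model. The sketch below is ours (the four-world model is invented for illustration): propositions are sets of worlds, φ > χ is read as □(φ → χ) over a fixed set of physical alternatives, and Strengthening Antecedents is checked for every choice of propositions at every world.

```python
from itertools import product

worlds = range(4)
# alternatives[w]: the worlds obeying the same laws as w (here, all of them)
alternatives = {w: set(worlds) for w in worlds}

def strict(ant, cons):
    """phi > psi read as box(phi -> psi) per (19): true at w iff every
    alternative world where ant holds is a world where cons holds.
    Propositions are represented as sets of worlds."""
    return lambda w: all(u in cons for u in alternatives[w] if u in ant)

# Every proposition over four worlds, encoded as a frozenset of worlds:
subsets = [frozenset(w for w in worlds if mask >> w & 1)
           for mask in range(16)]

# Check (phi > chi) -> ((phi and psi) > chi) at every world, for every
# choice of phi, psi, chi (frozenset intersection is conjunction):
ok = all(
    (not strict(phi, chi)(w)) or strict(phi & psi, chi)(w)
    for phi, psi, chi in product(subsets, repeat=3)
    for w in worlds
)
print(ok)   # True
```

The check succeeds in every case, so no strict conditional can honor the intuitive verdict on (26)-(27); Transitivity and Contraposition can be confirmed the same way.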


1.4 Minimal Change Theories

While treating ordinary conditionals as strict conditionals does not seem too promising, investigators have still found the possible worlds semantics often associated with modal logic very attractive. The basic intuition, that a conditional is true just in case its consequent is true at every member of some set of worlds at which its antecedent is true, may yet be salvageable. We can avoid Transitivity, etc., if we allow that the set of worlds involved in the truth conditions for different conditionals may be different. But we do not wish to allow that this set of worlds be chosen arbitrarily for a given conditional.

Stalnaker [1968] proposes that the conditional φ > ψ is true just in case ψ is true at the world most like the actual world at which φ is true. According to Stalnaker, in evaluating a conditional we add the antecedent of the conditional to our set of beliefs and modify our set of beliefs as little as possible in order to accommodate the new belief tentatively adopted. Then we consider whether the consequent of the conditional would be true if this revised set of beliefs were all true. In the ideal case, we would have a belief about every single matter of fact before and after this operation of adding the antecedent of the conditional to our stock of beliefs. Possible worlds correspond to these epistemically ideal situations. Stalnaker's assumption, then, is that at least when the antecedent of a conditional is logically possible, there is always a unique possible world at which the antecedent is true and which is more like the actual world than is any other world at which the antecedent is true. We will call this Stalnaker's Uniqueness Assumption. On some fairly reasonable assumptions about the notion of similarity of worlds, Stalnaker's truth conditions generate a very interesting logic for conditionals.
Essentially these assumptions are that any world is more similar to itself than is any other world, that the φ-world closest to world i (that is, the world at which φ is true which is more similar to i than is any other world at which φ is true) is always at least as close as the φ ∧ ψ-world closest to i, and that if the φ-world closest to i is a ψ-world and the ψ-world closest to i is a φ-world, then the φ-world closest to i and the ψ-world closest to i are the same world. The model theory Stalnaker develops is complicated by his use of the notion of an absurd world, a world at which every sentence is true. This invention is motivated by the need to provide truth conditions for conditionals with impossible antecedents. Stalnaker's semantics can be simplified by omitting this device and adjusting the rest of the model theory accordingly. When we do this, we produce what could be called simplified Stalnaker models. Such a model is an ordered quadruple ⟨I, R, s, [ ]⟩ where I is a set of possible worlds, R is a binary reflexive (accessibility) relation on I, s is a partial world selection function which, when defined, assigns to a sentence φ and a world i in I a world s(φ, i) (the φ-world closest to i), and [ ] is a


function which assigns to each sentence φ a subset [φ] of I (all those worlds in I at which φ is true). Stalnaker's assumptions about the similarity of worlds become a set of restrictions on the items of these models:

(S1) s(φ, i) ∈ [φ];

(S2) ⟨i, s(φ, i)⟩ ∈ R;

(S3) if s(φ, i) is not defined, then for all j ∈ I such that ⟨i, j⟩ ∈ R, j ∉ [φ];

(S4) if i ∈ [φ], then s(φ, i) = i;

(S5) if s(φ, i) ∈ [ψ] and s(ψ, i) ∈ [φ], then s(φ, i) = s(ψ, i);

(S6) i ∈ [φ > ψ] if and only if s(φ, i) ∈ [ψ] or s(φ, i) is undefined.
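A toy simplified Stalnaker model can make the truth condition (S6) concrete. The sketch below is ours: the worlds, valuation, and selection function are invented for illustration (the chosen s can be checked by hand against (S1)-(S5); in particular it respects (S4) by selecting i itself whenever i is a φ-world), and only (S6) is exercised in code.

```python
worlds = {"i", "j", "k"}
R = {(w, u) for w in worlds for u in worlds}    # total accessibility

# Valuation [phi]: the set of worlds at which each sentence letter is true.
val = {"p": {"j", "k"}, "q": {"k"}}

# Partial selection function s(phi, i): the closest phi-world to i.
# Per (S4), s(phi, i) = i whenever i is itself a phi-world.
s = {
    ("p", "i"): "j", ("p", "j"): "j", ("p", "k"): "k",
    ("q", "i"): "k", ("q", "j"): "k", ("q", "k"): "k",
}

def conditional_true(phi, psi, i):
    """(S6): i is in [phi > psi] iff s(phi, i) is in [psi],
    or s(phi, i) is undefined."""
    closest = s.get((phi, i))
    return closest is None or closest in val[psi]

print(conditional_true("p", "q", "i"))   # False: closest p-world j is not a q-world
print(conditional_true("q", "p", "i"))   # True: closest q-world k is a p-world
```

Note that the failure of p > q alongside the truth of q > p at i already illustrates how the selection function, rather than any truth function, settles the conditional's value.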

Until otherwise indicated, we will understand by a conditional logic any set L of sentences which can be constructed from the symbols of classical sentential logic together with the symbol >, provided that L contains all tautologies and is closed under the inference rule modus ponens. The conditional logic determined by Stalnaker's model theory is the smallest conditional logic which is closed under the two inference rules

RCEC: from ψ ↔ χ, to infer (φ > ψ) ↔ (φ > χ)

RCK: from (ψ1 ∧ … ∧ ψn) → χ, to infer [(φ > ψ1) ∧ … ∧ (φ > ψn)] → (φ > χ), n ≥ 0

and which contains all substitution instances of the theses

ID: φ > φ

MP: (φ > ψ) → (φ → ψ)

MOD: (¬φ > φ) → (ψ > φ)

CSO: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) ↔ (ψ > χ)]

CV: [(φ > ψ) ∧ ¬(φ > ¬χ)] → [(φ ∧ χ) > ψ]

CEM: (φ > ψ) ∨ (φ > ¬ψ)

Together with modus ponens and the set of tautologies, these rules and theses can be viewed as an axiomatization of Stalnaker's logic, which he calls C2. While Stalnaker supplies a rather different axiomatization for C2, these rules and theses enjoy the advantage that they allow easy comparison of C2 with other conditional logics. Several of these rules and theses are due to Chellas [1975]. It can be shown that a sentence is a member of C2 if and only if that sentence is true at every world in every simplified


Stalnaker model. Thus we say that the class of simplified Stalnaker models determines or characterizes the conditional logic C2. None of Transitivity, Contraposition, and Strengthening Antecedents is contained in C2.

A variation of the semantics developed by Stalnaker treats the function s as taking sets of worlds rather than sentences as arguments and values. In this variation, s is a function which assigns to each subset A of I and each member i of I a subset s(A, i) of I. Then φ > ψ will be true at i just in case s([φ], i) ⊆ [ψ]. By setting our semantics up in this way, we ensure that we can substitute one antecedent for another in a conditional provided that the two antecedents are true at exactly the same worlds, and we can do this without any additional restrictions on the function s. Since many authors have called sets of worlds propositions, we could call Stalnaker's original semantics a sentential semantics and the present variation on Stalnaker's semantics a propositional semantics to represent this difference in the kind of argument the function s takes. As we look at alternatives to Stalnaker's semantics we will always consider the sentential forms of these semantics, although equivalent propositional forms will often be available. Equivalence of the two versions of a particular semantics is guaranteed so long as the conditional logic characterized by the sentential version is closed under substitution of provable equivalents, i.e. so long as it is closed under both RCEC and

RCEA: from φ ↔ ψ, to infer (φ > χ) ↔ (ψ > χ).

C2 is closed under RCEA, as is any conditional logic closed under RCK and containing all substitution instances of CSO. The difference between sentential and propositional formulations of a particular kind of model theory becomes important if we wish to consider conditional logics which are not closed under RCEA. Reasons for considering such `non-classical' logics are discussed in Section 1.7 below.
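The contrast between the two formulations can be shown in miniature. In the sketch below (all names and the tiny model are our invention), a sentence-indexed selection function is free to treat two logically equivalent antecedents differently, while a proposition-indexed one cannot, which is why the propositional formulation builds in closure under RCEA.

```python
worlds = {0, 1, 2}
# 'p' and 'p_or_p' stand for two sentences true at exactly the same worlds:
val = {"p": {1, 2}, "p_or_p": {1, 2}}

# Sentential selection: indexed by the sentence itself, so nothing
# prevents it from assigning the equivalent antecedents different worlds.
s_sent = {("p", 0): 1, ("p_or_p", 0): 2}

# Propositional selection: indexed by the proposition [phi] (a set of
# worlds), so the two antecedents are forced to get the same value.
s_prop = {(frozenset({1, 2}), 0): 1}

print(s_sent[("p", 0)] == s_sent[("p_or_p", 0)])      # False
print(s_prop[(frozenset(val["p"]), 0)]
      == s_prop[(frozenset(val["p_or_p"]), 0)])       # True
```

A sentential model like s_sent can therefore characterize a logic that fails RCEA; the propositional encoding cannot even express such a failure.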
For parallel development of sentential and propositional versions of certain kinds of model theories for conditional logics, see [Nute, 1980b].

Lewis [1973b; 1973c] questions Stalnaker's assumptions about the similarity of worlds and thus his semantics for conditionals. It is Stalnaker's Uniqueness Assumption which Lewis rejects. Lewis argues that there may be no unique φ-world which is closer to i than is any other φ-world. As an example, Lewis asks us to consider a straight line printed in a book and to suppose that this line were longer than it is. No matter what greater length we choose for the line, there is a shorter length which is still greater than the actual length of the line. The conclusion is that worlds which differ from the actual world only in the length of the sample line may be more and more like the actual world as the length of the line in those worlds comes closer to the line's actual length. But none of these worlds is the closest world at which the line is longer. In fact, examples of this sort can also be offered against an assumption about similarity of worlds which is


weaker than Stalnaker's Uniqueness Assumption. This assumption, which Lewis calls the Limit Assumption, is that, at least for a sentence φ which is logically possible, there is always at least one φ-world which is as much like i as is any other φ-world. Both the Uniqueness Assumption and the weaker Limit Assumption are highly suspect.

If we follow Lewis's advice and drop the Uniqueness Assumption, we must give up Conditional Excluded Middle (CEM). But this is exactly the feature of Stalnaker's logic which is most often cited as objectionable. Both disjuncts in CEM will be true if φ is impossible and hence s is not defined for φ and the actual world. On the other hand, if φ is possible, then ψ must be either true or false at the nearest φ-world. Lewis ([1973b], p. 80) offers the following as a counterexample to CEM:

28a It is not the case that if Bizet and Verdi were compatriots, Bizet would be Italian; and it is not the case that if Bizet and Verdi were compatriots, Bizet would not be Italian; nevertheless, if Bizet and Verdi were compatriots, Bizet either would or would not be Italian.

Lewis [1973b] admits that (28a) sounds, offhand, like a contradiction, but he insists that the cost of respecting this offhand opinion is too high:

However little there is to choose for closeness between worlds where Bizet and Verdi are compatriots by both being Italian and worlds where they are compatriots by both being French, the selection function still must choose. I do not think it can choose, not if it is based entirely on comparative similarity, anyhow. Comparative similarity permits ties, and Stalnaker's selection function does not.3
Each world selection function provides a way of evaluating conditionals. A sentence can also have the property that it is true regardless of which world selection function we use. We can call such a sentence supertrue. If we accept Stalnaker's semantics together with a multiplicity of world selection functions, it turns out that every instance of CEM is supertrue even though it may be the case that neither disjunct of some instance of CEM is supertrue. In fact, all the members of C2 are supertrue when we apply Van Fraassen's method of supervaluation, and the method mandates the following reinterpretation of the Bizet-Verdi example: 3 [Lewis,

1973b], p. 80.
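The supervaluation recipe can be made concrete in a toy model. The following is our own illustrative sketch, not Van Fraassen's formalism: the world names and the two rival selection functions are invented for the Bizet-Verdi case.

```python
# Toy supervaluation check (a hypothetical sketch of the idea).
# Worlds: 'act' (actual), 'it' (both Italian), 'fr' (both French).
WORLDS = {'act', 'it', 'fr'}
ITALIAN = {'it'}          # worlds where Bizet is Italian
COMPAT = {'it', 'fr'}     # worlds where Bizet and Verdi are compatriots

# Each selection function picks a closest antecedent-world; the two
# admissible candidates disagree about which compatriot-world is closest.
def s1(ante, i):
    return 'it' if ante == COMPAT else i

def s2(ante, i):
    return 'fr' if ante == COMPAT else i

def cond(ante, cons, i, s):
    """Stalnaker truth: phi > psi holds at i iff the selected
    closest ante-world is a cons-world."""
    return s(ante, i) in cons

def supertrue(sentence):
    """Supertrue: true on every admissible selection function."""
    return all(sentence(s) for s in (s1, s2))

would_be_italian = lambda s: cond(COMPAT, ITALIAN, 'act', s)
would_not_be = lambda s: cond(COMPAT, WORLDS - ITALIAN, 'act', s)
cem = lambda s: would_be_italian(s) or would_not_be(s)

print(supertrue(would_be_italian))  # False: s2 picks the French world
print(supertrue(would_not_be))      # False: s1 picks the Italian world
print(supertrue(cem))               # True: the CEM instance is supertrue
```

Neither disjunct is supertrue, yet their disjunction is, which is exactly the reinterpretation of the example given below.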


28b `If Bizet and Verdi were compatriots, Bizet would be Italian' is not supertrue; and `If Bizet and Verdi were compatriots, Bizet would not be Italian' is not supertrue; nevertheless, `If Bizet and Verdi were compatriots, Bizet either would or would not be Italian' is supertrue.

(The relevant instance of CEM is also supertrue: `Either Bizet would be Italian if Bizet and Verdi were compatriots, or Bizet would not be Italian if Bizet and Verdi were compatriots.') In the Bizet-Verdi example, what Lewis accounts for as a tie in comparative world similarity, the method of supervaluation accounts for as a case of indeterminacy in the choice of a closest compatriot-world. Lewis [1973b] admits that offhand opinion seems to favor CEM, but Stalnaker [1981a] shows that there is systematic intuitive evidence for CEM: the apparent absence of scope ambiguities in conditionals where Lewis' theory predicts we should find them. Consider the following dialogue (see [Stalnaker, 1981a], pp. 93-95):

X: President Carter has to appoint a woman to the Supreme Court.

Y: Who do you think he has to appoint?

X: He doesn't have to appoint any particular woman; he just has to appoint some woman or other.

There is a clear scope ambiguity in X's statement, and this scope ambiguity explains why X's response to Y makes sense: Y reads X as having intended `a woman' to have wide scope, and X's response corrects Y by making it clear that X intended `a woman' to have narrow scope. Now compare this dialogue to another, in which necessity is replaced by the past-tense operator:

X: President Carter appointed a woman to the Supreme Court.

Y: Who do you think he appointed?

X: He didn't appoint any particular woman; he just appointed some woman or other.

In this case X's response does not make sense. There is no semantically distinct narrow scope reading that X could have had in mind, so there is no room for Y to have misunderstood X's statement. Finally, consider a dialogue involving a conditional instead of a necessity or past tense statement:

X: President Carter would have appointed a woman to the Supreme Court last year if there had been a vacancy.

Y: Who do you think he would have appointed?

X: He wouldn't have appointed any particular woman; he just would have appointed some woman or other.

If Lewis' analysis of counterfactuals is correct, then in this dialogue, as in the first dialogue, one should perceive an ambiguity in the scope of `a woman' in X's statement, and X's response should make sense as a correction of Y's misinterpretation. In fact there is no room for Y to have misunderstood X's statement, and X's response simply doesn't make sense. In this respect, the third dialogue parallels the second dialogue, not the first, and the apparent lack of a scope ambiguity in X's statement in the third dialogue is evidence for CEM.4 If Stalnaker's example does not convince one to accept CEM, it is quite possible to formulate a logic and a semantics for conditionals which resembles Stalnaker's but which does not include CEM. Lewis [1971; 1973b; 1973c] suggests more than one way of doing this. The first way is to replace the Uniqueness Assumption with the weaker Limit Assumption. Instead of looking at the closest antecedent-world, we look at all closest antecedent-worlds. These functions might better be called class selection functions rather than world selection functions. It is also not necessary to incorporate the accessibility relation into our models for conditionals if we use class selection functions since, if we make a certain reasonable assumption, we can define such a relation in terms of our class selection function. The assumption is that if φ is possible at all at i, then there is at least one closest φ-world for our selection function to pick out. Our models are then ordered triples ⟨I, f, [ ]⟩ such that I and [ ] are as before and f is a function which assigns to each sentence φ and each world i in I a subset of I (all the φ-worlds closest to i). By restricting these models appropriately, we can characterize a logic very similar to Stalnaker's C2.
This logic, which Lewis calls VC, is the smallest conditional logic which is closed under the same rules as those listed for C2 and which contains all those theses used in defining C2 except that we replace CEM with CS:

(φ ∧ ψ) → (φ > ψ).

CS is contained by C2 although CEM is not contained by VC.5 A sentence

4 A different sort of argument for CEM can be found in [Cross, 1985], which adopts Bennett's [1982] analysis of `even if' conditionals and argues for the validity of CEM based on the intuitive validity of the following formulas: (φ e> ψ) → (φ > ψ) and (ψ ∧ ¬(φ > ¬ψ)) → (φ e> ψ), where (φ e> ψ) means `Even if φ, ψ'. The argument turns on the fact that in any system of conditional logic that includes classical propositional logic and RCEC, CEM is a theorem iff (ψ ∧ ¬(φ > ¬ψ)) → (φ > ψ) is a theorem.

5 This and other independence results cited in this paper are provided in Nute [1979; 1980b].


is a member of VC if and only if it is true at every world in every class selection function model which satisfies the following restrictions:

(CS1): if j ∈ f(φ, i) then j ∈ [φ];

(CS2): if i ∈ [φ] then f(φ, i) = {i};

(CS3): if f(φ, i) is empty then f(ψ, i) ∩ [φ] is also empty;

(CS4): if f(φ, i) ⊆ [ψ] and f(ψ, i) ⊆ [φ], then f(φ, i) = f(ψ, i);

(CS5): if f(φ, i) ∩ [ψ] ≠ ∅, then f(φ ∧ ψ, i) ⊆ f(φ, i);

(CS6): i ∈ [φ > ψ] iff f(φ, i) ⊆ [ψ].
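Restrictions of this kind can be checked mechanically on a finite model. The sketch below is our own hypothetical example: a three-world model whose selection function is induced by an invented total similarity ranking, so each f(φ, i) has at most one member.

```python
# A finite class selection function model (hypothetical toy example).
# Propositions are modeled extensionally as sets of worlds; f maps
# (proposition, world) to the set of closest proposition-worlds.
from itertools import combinations

I = frozenset({1, 2, 3})

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

# rank[i] lists the worlds from most to least similar to i (i itself first).
rank = {1: [1, 2, 3], 2: [2, 1, 3], 3: [3, 2, 1]}

def f(prop, i):
    """The prop-worlds closest to i; empty if prop has no worlds.
    Because the ranking is total, at most one world is selected."""
    for w in rank[i]:
        if w in prop:
            return frozenset({w})
    return frozenset()

def conditional(ante, cons, i):
    """(CS6): i is in [ante > cons] iff f(ante, i) is a subset of cons."""
    return f(ante, i) <= cons

# Spot-check (CS1) and (CS2) over all propositions and worlds.
ok_cs1 = all(f(p, i) <= p for p in powerset(I) for i in I)
ok_cs2 = all(f(p, i) == frozenset({i})
             for p in powerset(I) for i in I if i in p)
print(ok_cs1, ok_cs2)                                     # True True
print(conditional(frozenset({2, 3}), frozenset({2}), 1))  # True
```

The closest {2, 3}-world to world 1 is world 2, so the conditional in the last line comes out true at world 1.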

Although Lewis endorses VC as the proper logic for subjunctive conditionals, he finds the Limit Assumption and, hence, the version of class selection function semantics we have developed, to be no more satisfactory than the Uniqueness Assumption. Consequently, Lewis proposes an alternative semantics for subjunctive conditionals. This alternative is also based on the similarity of worlds. The difference is in the way Lewis uses similarity in giving the truth conditions for conditionals. A conditional φ > ψ with a logically possible antecedent is true at a world i, according to Lewis, if there is a φ ∧ ψ-world which is closer to i than is any φ ∧ ¬ψ-world. Lewis uses nested systems of spheres in his models to indicate the relative similarity of worlds. A system-of-spheres model is an ordered triple ⟨I, $, [ ]⟩ such that I and [ ] are as before and $ is a function which assigns to each i in I a nested set $i of subsets of I (the spheres about i). If there is some sphere S about i such that j is in S but k isn't in S, then j is closer to or more similar to i than is k. To characterize the logic VC, we must adopt the following restrictions on system-of-spheres models:

(SOS1): {i} ∈ $i;

(SOS2): i ∈ [φ > ψ] if and only if ∪$i ∩ [φ] is empty or there is an S ∈ $i such that S ∩ [φ] is not empty and S ∩ [φ] ⊆ [ψ].
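The (SOS2) clause can be run directly on a small finite model. The example below is invented for illustration (Lewis's models are of course not finite in general), with spheres around a single world 0 listed from smallest to largest.

```python
# System-of-spheres truth condition (SOS2) on a hypothetical finite model.
def sos_conditional(ante, cons, spheres):
    """phi > psi at i: vacuously true if no sphere meets [phi];
    otherwise true iff some sphere S meets [phi] with S & [phi] <= [psi]."""
    union = set().union(*spheres)
    if not (union & ante):
        return True
    return any(S & ante and (S & ante) <= cons for S in spheres)

# Nested spheres around world 0, from smallest to largest.
spheres_0 = [{0}, {0, 1}, {0, 1, 2, 3}]

print(sos_conditional({1, 2}, {1}, spheres_0))   # True
print(sos_conditional({1, 2}, {2}, spheres_0))   # False
print(sos_conditional({4}, set(), spheres_0))    # True (vacuous)
```

In the first call the sphere {0, 1} already meets the antecedent, and its only antecedent-world, 1, is a consequent-world; no larger sphere needs to be consulted, which is the "closer φ ∧ ψ-world" idea in set form.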

While Lewis rejects the Limit Assumption, it should be noted that in those cases in which there is a closest φ-world to i the conditions for a conditional with antecedent φ being true at i are exactly the same for system-of-spheres models as for the type of class selection function model we examined earlier. For this reason we classify Lewis's semantics as a minimal change semantics to contrast it with other accounts which lack this feature. Pollock [1976] also develops a minimal change semantics for conditionals. In fact, Pollock's semantics is a type of class selection function semantics. There are two primary reasons why Pollock rejects Lewis's semantics and the


conditional logic VC. First Pollock rejects the thesis CV, a thesis which is unavoidable in Lewis's semantics. Second Pollock embraces the Generalized Consequence Principle:

GCP: If Γ is a set of sentences such that φ > ψ is true for each ψ ∈ Γ, and if Γ entails χ, then φ > χ is true.

GCP does not hold in all system-of-spheres models, but it does hold in all class selection function models.6 The conditional logic SS which Pollock favors is the smallest conditional logic closed under the rules listed for VC and containing all those theses used in defining VC except that we replace CV with CA:

[(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ].

This again gives us a weaker system since CA is contained by VC while CV is not contained by SS. Obviously, SS is not determined by the class of class selection function models which satisfy conditions (CS1)-(CS6) since this class of models characterizes the logic VC. Let's replace the condition (CS5) with

(CS5′) f(φ ∨ ψ, i) ⊆ f(φ, i) ∪ f(ψ, i).

Then SS is determined by the class of all class selection function models which satisfy this new set of conditions. One reason for Pollock's lack of concern for Lewis's counterexamples to the Limit Assumption may be that Pollock conceives of what would count as a minimal change quite differently from the way Lewis does. Pollock [1976] offers a detailed account of the notion of a minimal change, an account

6 To see how GCP might fail in Lewis's semantics, consider the example Lewis uses to show that for a particular φ there may be no φ-world closest to i. The example, which we considered earlier, involves a line printed on a page of [Lewis, 1973b]. Lewis invites us to consider worlds in which this line is longer than its actual length, which we will suppose to be exactly one inch. If the only way in which these worlds differ from the actual world is in the length of Lewis's line, then it is plausible that we rank these worlds in their similarity to the actual world according to how close to one inch Lewis's line is in each of these worlds. But no matter how close to one inch the line is, so long as it is longer than one inch there will be another such world in which it is closer to an inch in length. This means that for any length m greater than one inch, there is a world in which the line is longer than one inch and in which the line does not have length m which is nearer the actual world than is any world in which the line has length m. But then Lewis's truth conditions for conditionals dictate that if the line were longer than one inch, its length would not be m, and this is true for any length m greater than the actual length of the line. Then consider the set Γ of sentences of the form `Lewis's line is not length m' where m ranges over every length greater than the actual length of Lewis's line. But Γ entails the sentence `Lewis's line is not greater than one inch in length'. Applying GCP, we conclude that if Lewis's line were greater than one inch in length, then it would not be greater than one inch in length. This conclusion is not intuitively reasonable nor is it true at any world in the system-of-spheres model which Lewis describes in his discussion.


which is later modified in [Pollock, 1981]. The later view, which avoids many problems of the earlier view, will be discussed here. While Stalnaker, Lewis and others maintain that the notions of similarity of worlds and of minimal change are vague notions which may change given different purposes and contexts, thus accounting for the vagueness we often find in the use of conditionals, Pollock claims that the similarity relation is not vague but quite definite. Pollock's account rests upon his use of two epistemological notions, that of a subjunctive generalisation and that of a simple state of affairs. Subjunctive generalisations are statements of the form `Any F would be a G'. The truth of some subjunctive generalisations like `Anyone who drank from Chisholm's bottle would die' depends upon contingent matters of fact, in this case the fact that Chisholm's bottle contains strychnine and the fact that people have a certain physical makeup. Other subjunctive generalisations like `Any creature with a physical makeup like ours who drank strychnine would die' do not depend for their truth on contingent matters of fact in the same way. Pollock calls the former `weak' subjunctive generalisations and the latter `strong' subjunctive generalisations. Some subjunctive generalisations are supposed by Pollock to be directly confirmable by their instances, and these he calls basic. The problem of confirmation is discussed in [Pollock, 1984]. The second crucial ingredient in Pollock's analysis is the notion of a simple state of affairs. A state of affairs is simple if it can be known non-inductively to be the case without first coming to know some other state(s) of affairs which entail(s) it. The actual world is supposed by Pollock to be determined by the set of true basic strong subjunctive generalisations together with the set of true simple states of affairs.
The justification conditions for a subjunctive conditional φ > ψ are stated in terms of making minimal changes in these two sets in order to accommodate φ. The first step is to generate all maximal subsets of the set of true basic strong subjunctive generalisations which are consistent with φ. For each such maximally φ-consistent set N of true basic strong subjunctive generalisations, we then generate all sets of true simple states of affairs which are maximally consistent with N ∪ {φ}. Finally, we consider every possible world at which φ, every member of some maximally φ-consistent set N of true basic strong subjunctive generalisations, and every member of some set S of true simple states of affairs maximally consistent with N ∪ {φ} are all true. If ψ is true at all such worlds, then φ > ψ is true at the actual world. The set of worlds determined by this procedure serves as the value of a class selection function. If we try to define a relative similarity relation for worlds based upon Pollock's analysis of minimal change, we come up with a partial order rather than the `complete' order assumed by Lewis and, apparently, by Stalnaker. Because we can have two worlds j and k such that their similarity to a


third world i is incomparable, the thesis CV does not hold for Pollock's semantics.7 A simple model of Pollock's sort which rejects CV as well as another thesis which has been attributed to Pollock's conditional logic SS is developed in [Mayer, 1981]. Several authors have proposed theories which resemble Pollock's in important respects. One of these is Blue [1981] who suggests that we think of subjunctive conditionals as metalinguistic statements about a certain semantic relation between an antecedent set of sentences in an object language and another sentence of the object language viewed as a consequent. A theory (set of sentences of the object language) and the set of true basic (atomic and negations of atomic) sentences of the language play roles similar to those played by laws (true basic strong subjunctive generalisations) and simple states of affairs in Pollock's account. One problem with Blue's proposal is that treating conditionals metalinguistically as he does prevents iteration of conditionals without climbing a hierarchy of metalanguages. Another problem concerns the role which temporal relations between the basic sentences play in his theory, a problem for other theories as well. (This problem is discussed in Section 1.8 below.) For a more detailed discussion of Blue's view, see [Nute, 1981c]. The similarity of an account like Pollock's or Blue's to the cotenability theories of conditionals should be obvious. A conditional is true just in case its consequent is entailed (Blue uses a somewhat different relation) by its

7 Pollock has offered various counterexamples to CV, the most recent of which involves a circuit having among its components two light bulbs L1 and L2, three simple switches A, B, and C, and a power source. These components are supposed to be wired together in such a way that bulb L1 is lit exactly when switch A is closed or both switches B and C are closed, while bulb L2 is lit exactly when switch A is closed or switch B is closed. At the moment, both bulbs are unlit and all three switches are open. Then the following conditionals are true:

(5a) ¬(L2 > ¬L1)

(5b) ¬[(L2 ∧ L1) > ¬(B ∧ C)]

The justification for (5a) is that one way to bring it about that L2 (i.e. that bulb L2 is lit) is to bring it about that A (i.e. that switch A is closed), but A > L1 is true. The justification for (5b) is that one way to make L1 and L2 both true is to close both B and C. Pollock claims that the following counterfactual is also true:

(5c) L2 > ¬(B ∧ C)

If Pollock is correct, then these three counterfactuals comprise a counterexample to CV. Pollock's argument for (5c) is that L2 requires only A or B, and to also make C the case is a gratuitous change and should therefore not be allowed. But this is an oversimplification. It is not true that only A, B, and C are involved. Other changes which must be made if L2 is to be lit include the passage of current through certain lengths of wire where no current is now passing, etc. Which path would the current take if L2 were lit? We will probably be forced to choose between current passing through a certain piece of wire or switch C being closed. It is difficult to say exactly what the choices may be without a diagram of the kind of circuit Pollock envisions, but without such a diagram it is also difficult to judge whether closing switch C is gratuitous in the case of (5c) as Pollock claims.


antecedent together with some subset of the set of laws or theoretical truths and some (cotenable) set of simple states of affairs or basic sentences. Veltman [1976] and Kratzer [1979; 1981] also propose theories of conditionals which resemble Pollock's in important respects. We will discuss Kratzer's view, although the two are similar. Kratzer suggests what can be called a premise or a partition semantics for subjunctive conditionals. Like Pollock, she associates with each world i a set Hi of propositions or states of affairs which uniquely determines that world. The set Hi is called a partition for i or a premise set for i. Kratzer proposes that we evaluate a subjunctive conditional φ > ψ by considering φ-consistent subsets of Hi. φ > ψ is true at a world i if and only if each φ-consistent subset X of Hi is contained in some φ-consistent subset Y of Hi such that Y ∪ {φ} entails ψ. Kratzer points out that if every φ-consistent subset of Hi is contained in some maximally φ-consistent subset of Hi, then this truth condition is equivalent to the condition that φ > ψ is true at i just in case ψ is entailed by X ∪ {φ} for every maximally φ-consistent subset X of Hi. If we assume that every φ-consistent subset of Hi is contained in some maximally φ-consistent subset of Hi, the specialized version of Kratzer's semantics we obtain looks very much like Pollock's. Lewis [1981a] notes that this assumption plays the same role in premise semantics that the Limit Assumption plays in class selection function semantics. In fact, Lewis shows that on this assumption Kratzer's premise semantics is formally equivalent to Pollock's semantics. Given this equivalence, these two semantics will determine exactly the same conditional logic SS.
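The specialized truth condition can be sketched on a finite premise set. The model below is a made-up example of our own: propositions are sets of worlds, and the maximal sets trivially exist because everything is finite (the finite analogue of the Limit Assumption).

```python
# Kratzer-style premise semantics on a finite premise set (toy sketch).
from itertools import combinations

WORLDS = frozenset(range(4))

def intersection(premises):
    inter = frozenset(WORLDS)
    for p in premises:
        inter &= p
    return inter

def consistent_with(premises, phi):
    """A premise set is phi-consistent when some phi-world satisfies it."""
    return bool(intersection(premises) & phi)

def entails(premises, phi, psi):
    """premises together with phi entail psi."""
    return (intersection(premises) & phi) <= psi

def maximal_consistent_subsets(H, phi):
    """All subsets of H maximally consistent with phi (these exist
    trivially for finite H)."""
    subsets = [frozenset(c) for r in range(len(H) + 1)
               for c in combinations(H, r) if consistent_with(c, phi)]
    return [s for s in subsets if not any(s < t for t in subsets)]

def kratzer_conditional(H, phi, psi):
    """phi > psi at i: every maximally phi-consistent subset of H_i,
    together with phi, entails psi."""
    return all(entails(X, phi, psi)
               for X in maximal_consistent_subsets(H, phi))

H_i = [frozenset({0, 1}), frozenset({0, 2})]   # premise set for world i
phi = frozenset({1, 2})
print(kratzer_conditional(H_i, phi, frozenset({1, 2})))  # True
print(kratzer_conditional(H_i, phi, frozenset({1})))     # False
```

Here the two premises are jointly φ-inconsistent, so there are two maximal φ-consistent subsets, one per premise; the conditional holds only when both routes of accommodation support the consequent.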
Even if we assume that the required maximal sets always exist and adopt a version of premise semantics which is formally equivalent to Pollock's semantics, Kratzer's position would still differ radically from Pollock's since she does not assign to laws and simple states of affairs a privileged role in her analysis. Nor does she prefer Blue's object language theory and basic sentences for such a role. The set of premises which we associate with a world and use in the evaluation of conditionals varies, according to Kratzer, as the purposes and circumstances of the language users vary. Thus Kratzer reintroduces the vagueness which so many investigators have observed in ordinary usage and which Pollock and Blue would deny or at least eliminate. Apparently Kratzer does not accept the Limit Assumption, in her case the assumption that the required maximal sets always exist. Yet in [Kratzer, 1981] she describes what she calls the most intuitive analysis of counterfactuals, saying that

The truth of counterfactuals depends on everything which is the case in the world under consideration: in assessing them, we have to consider all the possibilities of adding as many facts to the antecedent as consistency permits.


This certainly suggests maximal antecedent-consistent subsets of a premise set (the Limit Assumption) and a minimal change semantics. But if the Limit Assumption is unacceptable, this initial intuition must be modified. Kratzer's modification takes the form of the truth condition reported earlier. Besides the Limit Assumption, Kratzer's semantics also fails to support the GCP. One principle which does remain, a principle common to all the semantics discussed in this section, is the thesis CS. Beginning with some sort of minimalist intuition, all of these authors claim that subjunctive conditionals with true antecedents have the same truth values as their consequents. When the antecedent of the conditional is true, the actual world is the unique closest antecedent-world and hence the only world to be considered in evaluating the conditional. If Lewis's counterexamples to the Limit Assumption are conclusive, we must conclude that all the semantics for subjunctive conditionals which we have discussed in this section must be inadequate except for Lewis' system-of-spheres semantics and the general version of Kratzer's premise semantics. And if the GCP is a principle which we wish to preserve, then Lewis's semantics and Kratzer's semantics are also inadequate. Besides these difficulties, minimal change theories have been criticized because they endorse the thesis CS. As was mentioned in Section 1.1, many researchers claim that some conditionals with true antecedent and consequent are false. For an excellent polemic against the minimal change theorists on this issue, see [Bennett, 1974].

1.5 Small Change Theories

Åqvist [1973] presents a very interesting analysis of conditionals in which the conditional operator is defined in terms of material implication and some unusual monadic operators. Simplifying a bit, Åqvist's semantics involves ordered quintuples ⟨I, i, R, f, [ ]⟩ such that I and [ ] are as in other models we have discussed, i is a member of I, R is an accessibility relation on I, and f is a function which assigns to each sentence φ a subset f(φ) of [φ] such that for every member j of f(φ), ⟨i, j⟩ ∈ R. A sentence *φ whose primary connective is the monadic star operator is true at a world j in I just in case j ∈ f(φ). The usual truth conditions are provided for a necessity operator, so that □φ is true at j in I just in case for every world k such that ⟨j, k⟩ ∈ R, φ is true at k, i.e. just in case k ∈ [φ]. Finally, a conditional φ > ψ is true at a world j just in case □(*φ → ψ) is true at j. Åqvist modifies this semantics in an appendix. The modification involves a set of models of the sort described, each with the same set of possible worlds, the same accessibility relation, and the same valuation function, but each with its own designated world and selection function. The resulting semantics turns out once again to be equivalent to a version of class selection semantics.


While the interesting formal details of Åqvist's theory are quite different from those of other investigators, the most significant feature of his account may be his suggestion that a class selection function might properly pick out for a sentence φ and a world i all those φ-worlds which are `sufficiently' similar to i rather than only those φ-worlds which are `most' similar to i. By changing the intended interpretation for the class selection function, we avoid the trivialisation of the truth conditions for conditionals in all those cases where the Limit Assumption in either of its forms fails. At the same time, class selection function semantics supports the GCP. Åqvist's suggestion looks very promising. A similar approach is taken by Nute [1975a; 1975b; 1980b], but the semantics Nute proposes is explicitly a version of class selection function semantics. This model theory differs from versions of class selection function semantics we examined earlier in two important ways. First the intended interpretation is different, i.e. there is a different informal explanation to be given for the role which the class selection functions play in the models. Second the restriction (CS2) is replaced by the weaker restriction

(CS2′) if i ∈ [φ] then i ∈ f(φ, i).

The second change is related to the first. Surely any world is more similar to itself than is any other. Thus, if f picks out for φ and i the φ-worlds closest to i, and if i is itself a φ-world, then f will pick out i and nothing else for φ and i. The objection to the thesis CS, though, can be thought of as a claim that there may be other worlds sufficiently similar to the actual world so that in some cases we should consider these worlds in evaluating conditionals with true antecedents. When we modify our earlier semantics for Lewis's system VC by replacing (CS2) with (CS2′), the resulting class of models characterizes the logic which Lewis [1973b] calls VW.
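The effect of the weakening can be seen in a two-world sketch (a hypothetical model of our own): under (CS2′) the selection function may keep extra `sufficiently similar' worlds alongside i even when i is a φ-world, and an instance of CS then fails.

```python
# Why weakening (CS2) to (CS2') invalidates CS (hypothetical toy model).
I = {0, 1}
phi = {0, 1}        # [phi]: true at both worlds
psi = {0}           # [psi]: true only at world 0

def f(prop, i):
    """Satisfies (CS2'): i is in f(prop, i) whenever i is in prop, but
    every prop-world counts as sufficiently similar and is kept too."""
    return prop & I

def conditional(ante, cons, i):
    return f(ante, i) <= cons

i = 0
cs_antecedent = (i in phi) and (i in psi)   # phi ^ psi holds at world 0
cs_consequent = conditional(phi, psi, i)    # phi > psi at world 0
print(cs_antecedent, cs_consequent)         # True False: CS fails at 0
# Under the stronger (CS2) we would have f(phi, 0) == {0}, and
# phi > psi would come out true, restoring CS.
```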
VW is the smallest conditional logic which is closed under all the rules and contains all the theses listed for VC except for the thesis CS. By weakening our semantics further we can characterize a logic which is closed under all the rules and contains all the theses of VW except CV. This, of course, would give us a logic of which Pollock's SS would be a proper extension. Although many count it as an advantage of small change class selection function semantics that such theories allow us to avoid CS, it should be noted that such semantics do not commit us to a rejection of CS. As we have seen, both Lewis's VC and Pollock's SS are characterized by classes of class selection function models. For those who favor CS, it is still possible to avoid the difficulties of the Limit Assumption and embrace the GCP by adopting one of these versions of the class selection function semantics but giving a small change interpretation of the selection functions upon which such a semantics depends. It is possible to avoid CS within the restrictions of a minimal change semantics. We can do this by `coarsening' our similarity relation, to use


Lewis's phrase, counting worlds as equally similar to some third world despite fairly large differences in these worlds. For example, we might count some worlds other than i as being just as similar to i as is i itself. When we do this for a minimal change version of class selection function semantics, the formal results are exactly the same as those proposed earlier in this section and the resulting logic is VW. Of course, we must still cope with Lewis's objections to the Limit Assumption. But it is even possible to avoid CS within Lewis's system-of-spheres semantics. All we need to do is replace the restriction (SOS1) with the following:

(SOS1′) i ∈ ∩$i.

The class of all those system-of-spheres models which satisfy (SOS1′) and (SOS2) determines the conditional logic VW. While such a concession to the critics of CS is possible within the confines of Lewis's semantics, Lewis does not favor such a move. We should also remember that the resulting semantics still does not support the GCP. Since we can formulate a kind of minimal change semantics which avoids the controversial thesis CS, the only advantage we have shown for small change theories is that they avoid the problems of the Limit Assumption while giving support for the GCP. But this advantage may be illusory. Loewer [1978] shows that for many versions of class selection function semantics we can always view the selection function as picking out closest worlds. For a model of such a semantics, we can define a relative similarity relation R between the worlds of the model in terms of the model's selection function f. It can then be shown that for a sentence φ and a world i, j ∈ f(φ, i) if and only if j is a φ-world which is at least as close to i with respect to R as is any other φ-world. Consider such a model and consider a proposition φ which is true at world j just in case there are infinitely many worlds closer to i with respect to R than is j. There will be no φ-world closest to i.

Consequently f(φ, i) will be empty and any conditional with φ as antecedent will be trivially true. How seriously we view this example depends upon our attitude toward the assumption that there exists a proposition which has the properties attributed to φ. If we take propositions to be sets of worlds, then the existence of such a proposition is very plausible. We should also note that this argument involves a not so subtle change in our semantics. Until now we have been thinking of our selection functions as taking sentences as arguments rather than propositions. If we restrict ourselves to sentences it is very unlikely that our language for conditional logic will contain a sentence which expresses the troublesome proposition. Nevertheless, it is not entirely clear that every small change version of class selection function semantics will automatically avoid the problems associated with the Limit Assumption. There is another advantage which can be claimed for small change theories which doesn't involve the logic of conditionals. If, for example, VW


is the correct logic for conditionals, we have seen that it is possible to take either the minimal change or the small change approach to semantics for conditionals and still provide a semantics which determines VW. But even if agreement is reached about which sentences are valid, these two approaches are still likely to result in different assignments of truth values to contingent conditional sentences. Suppose for example that Fred's lawn is just slightly too short to come into contact with the blades of his lawnmower. Thus his lawnmower will not cut the grass at present. Suppose further that the engine on Fred's lawnmower is so weak that it will only cut about a quarter of an inch of grass. If the height of the grass is more than a quarter of an inch greater than the blade height, the mower will stall. Then is the following sentence true or false?

29. If the grass were higher, Fred's mower would cut it.

On the minimal change approach, whether we use class selection function semantics or system-of-spheres semantics, the answer to this question must be `yes', for there will be worlds at which the lawn is higher than the blade height but no more than a quarter inch higher than the blade height which are closer to the actual world than is any world at which the grass is more than a quarter inch higher than the blade height. But the correct answer to the question would seem to be `no'. If someone were to assert (29) we would likely object, `Not if the grass were much higher'. This shows that we are inclined to consider changes which are more than minimally small in our evaluations of conditionals. We might avoid particular examples of this sort by `coarsening' the similarity relation, but it may be possible to generate such examples for any similarity relation no matter how coarse. All of the small change theories we have considered propose semantics which are at least equivalent to some version of class selection function semantics.
There is, however, at least one small change theory which does not share this feature. Warmbrod [1981] presents what he calls a pragmatic theory of conditionals. This theory is based on similarity of worlds, but in a radically different way than are any of the theories we have yet examined. According to Warmbrod, the set of worlds we use in evaluating a conditional is determined not by the antecedent of that particular conditional but rather by all the antecedents of conditionals occurring in the piece of discourse containing that particular conditional. Thus a conditional is always evaluated relative to a piece of discourse rather than in isolation. For any piece of discourse D and world i we select a set of worlds S which satisfies the following conditions:

(W1) if φ > ψ occurs in D and φ is logically possible, then some world j in S is a φ-world;

(W2) for some φ > ψ occurring in D, j ∈ S if and only if j is at least as close to i as are the closest φ-worlds to i.

24

DONALD NUTE AND CHARLES B. CROSS

Condition (W1) ensures that S is what Warmbrod calls normal for D and (W2) ensures that S is what Warmbrod calls standard for some antecedent occurring in D. (Warmbrod formulates his theory in terms of an accessibility relation, but the semantics provided here is formally equivalent.) Then a conditional φ > ψ is true at i with respect to D if and only if φ → ψ is true at every world in S. The resulting semantics resembles both class selection function semantics and an analysis of conditionals as strict conditionals, but it differs from each of these approaches in important respects. Like other proposals which treat subjunctive conditionals as being strict conditionals, Warmbrod's theory validates Transitivity, Contraposition, and Strengthening Antecedents. Warmbrod argues that the evidence against these theses can be explained away. Apparent counterexamples to Transitivity, for example, depend according to Warmbrod on the use of different sets S in the evaluation of the sentences involved in the putative counterexamples. Consider the example (21)-(23) in Section 1.3 above. According to Warmbrod, this example can be a counterexample to Transitivity only if there is some set of worlds S which contains worlds at which Carter did not lose in 1980, contains some worlds at which Carter died in 1979, which is normal for these two antecedents, and for which the material conditionals corresponding to (21) and (22) are true at all members of S while the material conditional corresponding to (23) is false at some world in S. But this, Warmbrod claims, is exactly what does not happen. The apparent counterexample depends upon an equivocation, a shift of the set S during the course of the argument. Warmbrod's theory has a certain attraction. It is certainly true that Transitivity and other controversial theses are harmless in many contexts, and it is certainly true that these theses are frequently used in ordinary discourse.
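To make the discourse-relativity concrete, the evaluation procedure can be sketched as a small model checker. Everything below — the worlds, the atomic sentences, and the distance ranking — is invented for the example; Warmbrod states his theory model-theoretically, not as code.

```python
# Illustrative sketch of Warmbrod's discourse-relative evaluation.
# Worlds, atoms, and the distance ranking are our own toy choices.

WORLDS = {              # world -> set of atomic sentences true there
    "w0": set(),        # w0 plays the role of the actual world i
    "w1": {"p", "q"},
    "w2": {"p"},
    "w3": {"r"},
}
RANK = {"w0": 0, "w1": 1, "w2": 2, "w3": 3}   # distance from w0

def holds(atom, w):
    return atom in WORLDS[w]

def sphere_for(ant):
    """Worlds at least as close to w0 as the closest ant-worlds:
    a set 'standard for' ant in the sense of condition (W2)."""
    closest = min(RANK[w] for w in WORLDS if holds(ant, w))
    return {w for w in WORLDS if RANK[w] <= closest}

def normal_for(S, discourse):
    """Condition (W1): S contains an ant-world for every antecedent
    appearing in the discourse (all are possible in this toy model)."""
    return all(any(holds(a, w) for w in S) for a, _ in discourse)

def evaluate(discourse):
    """Evaluate every conditional in the discourse over one shared S."""
    for a, _ in discourse:          # find an S standard for some
        S = sphere_for(a)           # antecedent and normal for D
        if normal_for(S, discourse):
            break
    else:
        S = set(WORLDS)             # fallback for the sketch
    # ant > cons is true iff the material conditional holds throughout S
    return {(a, c): all(not holds(a, w) or holds(c, w) for w in S)
            for a, c in discourse}

# Alone, 'p > q' is checked only over the closest p-worlds: true.
print(evaluate([("p", "q")]))
# In a discourse that also contains antecedent 'r', S must grow to
# include an r-world, and the very same conditional comes out false.
print(evaluate([("p", "q"), ("r", "p")]))
```

The second call illustrates Warmbrod's diagnosis of the apparent counterexamples: the same conditional receives a different truth value once the surrounding discourse forces a larger shared set S.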
The problem is to provide an account of the difference between those situations in which the thesis is reliable and those in which it is not. Warmbrod's strategy is to consider the thesis to be always reliable and then to provide a way of falsifying the premises in unhappy cases. An alternative approach is to count these theses as being invalid and then to look for those features of context which sometimes allow us to use them with impunity. We think the second strategy is safer. It is probably better to occasionally overlook a good argument than it is to embrace a bad one. Or to put it a bit differently, it is better to force the argument to bear the burden of proof rather than to consider it sound until proven unsound. Another problem with Warmbrod's theory is that it suggests that we should find apparent counterexamples to certain theses which have until now been considered uncontroversial. For example, we should find apparent counterexamples for CA. (See [Nute, 1981b] for details.) Warmbrod's semantics also runs into difficulty with the Limit Assumption. The requirement (W2) that S be standard for some antecedent in D involves the Limit Assumption explicitly. Although Warmbrod's semantics

CONDITIONAL LOGIC

25

may tolerate small, non-minimal changes for some of the antecedents in a piece of discourse, it demands that only minimal changes be considered for at least one such antecedent. Of course, we might be able to modify (W2) in such a way as to avoid this problem. There remains, though, the nagging suspicion that none of the small change theories we have considered will in the end be able to escape the Limit Assumption, with all its difficulties, in some form or other.

1.6 Maximal Change Theories

Both minimal change theories and small change theories of conditionals are based on the premise that a conditional φ > ψ is true at i just in case ψ is true at some φ-world(s) satisfying certain conditions. The difference, of course, is that for the one approach it is sufficient that ψ be true at all closest φ-worlds while the other requires that ψ be true at all φ-worlds which are reasonably or sufficiently close to i. There is a third type of theory which shares the same basic premise as these two but which does not require that the worlds upon which the evaluation of φ > ψ at i depends be very close or similar to i at all. According to this way of looking at conditionals, all that is required is that the relevant worlds resemble i in certain very minimal respects. Otherwise the relevant worlds may differ from i to any degree whatever. We might even think of this approach as requiring us to consider worlds which differ from i maximally except for the narrowly defined features which must be shared with i. One theory of this sort is developed by Gabbay [1972]. To facilitate comparison, we will simplify Gabbay's account of conditionals rather drastically. When we do this, Gabbay's semantics for conditionals resembles the class selection function semantics we have discussed, but there are some very important differences. A simplified Gabbay model is an ordered triple ⟨I, g, [ ]⟩ such that I and [ ] are as in earlier models, and g is a function which assigns to sentences φ and ψ and world i in I a subset g(φ, ψ, i) of I. A conditional φ > ψ is true at i in such a model just in case g(φ, ψ, i) ⊆ [φ → ψ]. The difference between this and class selection function semantics of the sort we have seen previously is obvious: the selection function g takes both antecedent and consequent as arguments. This means that quite different sets of worlds might be involved in the truth conditions for two conditionals having exactly the same antecedent.
This change in the formal semantics reflects a difference in Gabbay's attitude toward conditionals and toward the way in which we evaluate conditionals. When we evaluate φ > ψ, we are not concerned to preserve as much as we can of the actual world in entertaining φ; instead we are concerned to preserve only those features of the actual world which are relevant to the truth of ψ, or perhaps to the effect φ would have on the truth of ψ. In actual practice the kind of similarity which is required is supposed by Gabbay to be determined by φ, by ψ, and


also by general knowledge and particular circumstances which hold in i at the time when the conditional is uttered. What this involves is left vague, but it is not more vague than the notions of similarity assumed in earlier theories. When we modify Gabbay's semantics in this way, we must impose three restrictions on the resulting models:

(G1) i ∈ g(φ, ψ, i);

(G2) if [φ] = [χ] and [ψ] = [θ] then g(φ, ψ, i) = g(χ, θ, i);

(G3) g(φ, ψ, i) = g(φ, ¬ψ, i) = g(¬φ, ψ, i).

With these restrictions Gabbay's semantics determines the smallest conditional logic which is closed under RCEC and the following two rules:

RCEA: from φ ↔ ψ, to infer (φ > χ) ↔ (ψ > χ).

RCE: from φ → ψ, to infer φ > ψ.
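The consequent-sensitivity of g can be seen in a toy model checker. The worlds, valuation, and the particular g below are entirely our own illustrative choices; the sketch respects (G1) but, for brevity, ignores the closure conditions (G2) and (G3).

```python
# Toy model checker for the simplified Gabbay semantics.
# Worlds, valuation, and g are invented for illustration only.

WORLDS = {
    "i": {"p": False, "q": False, "r": False},
    "j": {"p": True,  "q": True,  "r": False},
    "k": {"p": True,  "q": False, "r": True},
}

def g(ant, cons, i):
    """Two-place selection: the worlds selected depend on the
    consequent as well as the antecedent."""
    base = {i}                      # (G1): i itself is always selected
    if cons == "q":
        return base | {"j"}
    if cons == "r":
        return base | {"k"}
    return base

def conditional(ant, cons, i):
    """ant > cons is true at i iff every world in g(ant, cons, i)
    satisfies the material conditional ant -> cons."""
    return all(not WORLDS[w][ant] or WORLDS[w][cons]
               for w in g(ant, cons, i))

def strict_over(S, ant, cons):
    """For comparison: ant -> cons at every world in a fixed set S."""
    return all(not WORLDS[w][ant] or WORLDS[w][cons] for w in S)

# Same antecedent, different consequents, different selected sets:
print(conditional("p", "q", "i"))   # True, checked over {i, j}
print(conditional("p", "r", "i"))   # True, checked over {i, k}
# No single set containing both j and k verifies both strict
# conditionals, so a one-place (class) selection function could not
# deliver this pattern of truth values:
print(strict_over({"i", "j", "k"}, "p", "q"))   # False: p, not-q at k
print(strict_over({"i", "j", "k"}, "p", "r"))   # False: p, not-r at j
```

The last two lines show why the two-place g is a genuine generalisation: any single selected set for antecedent p would falsify at least one of the two conditionals that g makes true.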

We will call this logic G. At the end of [Gabbay, 1972], a conjectured axiomatisation of G is presented, but it was later shown to be unsound and incomplete in [Nute, 1977], where the axiomatisation of G presented here was conjectured to be sound and complete (see [Nute, 1980b]). Working independently, David Butcher [1978] also disproved Gabbay's conjecture, and proved the soundness and completeness of G for the Gabbay semantics (see [Butcher, 1983a]). It is obvious that G is the weakest conditional logic we have yet considered. We can characterize a stronger logic if we place additional restrictions on our Gabbay models, but we may not be able to guarantee a sufficiently strong logic without restricting our models to the point where they become formally equivalent to models we examined earlier. Consider, for example, the theses:

CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]

CM: [φ > (ψ ∧ χ)] → [(φ > ψ) ∧ (φ > χ)].

To ensure that our conditional logic contains CC and CM, we could impose the following restriction on Gabbay's semantics:

(G4) g(φ, ψ, i) = g(φ, χ, i).

Once we do this we have eliminated the most distinctive feature of Gabbay's semantics. According to David Butcher [1983a], it is possible to ensure CC and CM by adopting conditions weaker than (G4). However, Butcher has indicated that these conditions are problematic for other reasons.⁸

⁸ Many of these issues are also discussed in Butcher [1978; 1983a].


A rather different and very specialized maximal change theory has been developed in two different forms by Fetzer and Nute [1979; 1980] and by Nute [1981a]. Both forms of this theory are intended not as analyses of ordinary subjunctive conditionals as they are used in ordinary discourse, but rather as analyses of scientific, nomological, or causal conditionals, i.e. of subjunctive conditionals as they are used in the very special circumstances of scientific investigation. Formally the two theories propose class selection function semantics for scientific conditionals, but the intended interpretation is quite different from that of any theory we have yet considered. In the version of the theory developed by Fetzer and Nute, the selection function f is intended to pick out for a sentence φ and a world i the set of all those φ-worlds at which all the individuals mentioned in φ possess, in so far as the truth of φ will allow, all those dispositional properties which they permanently possess in i. This forces us to ignore all features of worlds except those assumed by the underlying theory of causality to affect the causal efficacy of the situations, events, etc., described in φ. We are forced, in other words, to consider worlds which preserve only these features and otherwise differ maximally from the world at which the scientific conditional is being evaluated. In this way we can ensure that the conditional in question is true if, but only if, the antecedent and the consequent are related causally or nomologically in an appropriate manner. Physical law statements are then analysed as universal generalisations or sets of universal generalisations of such scientific conditionals. The view subsequently developed in [Nute, 1981a] departs a bit from the requirement of maximal specificity which we seek in our scientific pronouncements and in doing so comes closer to representing a kind of conditional used in ordinary discourse.
Nute suggests that the selection function f selects for a sentence φ and a world i all those φ-worlds at which all those individuals mentioned in φ possess, so far as the truth of φ allows, not only all those dispositional properties which they permanently possess in i but also all those dispositional properties which they accidentally or as a matter of particular fact possess in i. For example, a particular piece of litmus paper permanently possesses the tendency to turn red when dipped in an acidic solution, since it could not lose this tendency and still be litmus paper, but it only accidentally possesses the tendency to reflect blue light, since it could certainly lose this disposition through being dipped in acid and yet still be litmus paper. Where it is impossible to accommodate φ without giving up some of the dispositional properties possessed by individuals mentioned in φ, preference is given to dispositions which are possessed permanently. On this account, but not on the account developed by Fetzer and Nute conjointly, the following conditional is true where x is a piece of litmus paper which is in fact blue:

30. If x were cut in half, it would be blue.


Nute [1981a] suggests that many ordinary conditionals may have such truth conditions, or may be abbreviations of other more explicit conditionals which have such truth conditions. Each of the theories presented in this section is in fact only a fragment of a more complex theory. It is impossible to discuss the larger theories here in any greater detail, and the reader is encouraged to consult the original publications. What allows us to consider them under a single category is their departure from the premise that the truth of a conditional depends upon what happens in antecedent-worlds which are very much like the actual world. Each of these theories assumes and even requires that the divergence from the actual world be rather larger than minimal or small change theories would indicate.

1.7 Disjunctive Antecedents

One thesis in particular has caused considerable controversy among the investigators of conditional logic. This thesis is Simplification of Disjunctive Antecedents:

SDA: [(φ ∨ ψ) > χ] → [(φ > χ) ∧ (ψ > χ)].

The intuitive plausibility of SDA has been suggested in [Fine, 1975], in [Nute, 1975b] and in [Ellis et al., 1977]. Unfortunately, any conditional logic which contains SDA and which is also closed under substitution of provable equivalents will also contain the objectionable thesis Strengthening Antecedents. If we add SDA to any of the logics we have discussed, then Transitivity and Contraposition will be contained in the extended logic as well.⁹ Ellis et al. suggest that the evidence for SDA is so strong and the problems involved in trying to incorporate SDA into any account of conditionals based upon possible worlds semantics are so great that the possibility of an adequate possible worlds semantics for ordinary subjunctive conditionals is quite eliminated. With all the problems which the various theories encounter, the possible worlds approach has still proven to be a powerful tool for the investigation of the logical and semantical properties of conditionals and we should be unwilling to abandon it without first trying to defend it against such a charge. The first line of defence has been a `translation lore' approach to the problem of disjunctive antecedents. It is first noted that, despite the intuitive appeal of SDA, there are examples from ordinary discourse which show that SDA is not entirely reliable. The following sentences comprise one such example:

⁹ As further evidence of the problematic character of SDA, David Butcher [1983b] has shown that any logic containing SDA and CS will contain □φ → φ, where □φ is defined as ¬φ > φ.


31a. If the United States devoted more than half of its national budget to defence or to education, it would devote more than half of its national budget to defence.

31b. If the United States devoted more than half of its national budget to education, it would devote more than half of its national budget to defence.

Contrary to what we should expect if SDA were completely reliable, it looks very much as if (31a) is true even though (31b) cannot be true.
It would be very reasonable to accept this conditional but at the same time to reject the following conditional: 34. So if the United States devoted over half of its national budget to education, my Lockheed stock would be worth much more than it is. The occurrence of the same component sentence in antecedent and consequent is not a necessary condition for the failure of SDA and cannot be used as a criterion for distinguishing those cases in which English conditional with `or' in their antecedents are of the logical form ( _ ) > from


those in which they are of the logical form (φ > χ) ∧ (ψ > χ). We cannot decide on purely syntactical grounds which of the two possible symbolisations is proper for an English conditional with `or' in its antecedent. Loewer [1976] suggests that this decision may be made on pragmatic grounds, but it is difficult to see what the distinguishing criterion is to be except that English conditionals with disjunctive antecedents are to be symbolized as (φ > χ) ∧ (ψ > χ) when simplification of their disjunctive antecedents is legitimate and to be symbolized as (φ ∨ ψ) > χ when such simplification is not legitimate. Until Loewer's suggestion concerning the pragmatic pressures which prompt one symbolisation rather than another can be provided with sufficient detail, the translation lore account of disjunctive conditionals does not provide us with an adequate solution to our problem. We find an interesting variation on the translation lore solution in [Humberstone, 1978] and in [Hilpinen, 1981]. Both suggest the use of an antecedent-forming operator like Åqvist's. We will discuss Hilpinen's theory here since it differs the most from Åqvist's view. Hilpinen's analysis utilizes two separate operators which we can represent as If and Then. The If operator attaches to a sentence φ to produce an antecedent If φ. The Then operator connects an antecedent and a sentence to form a conditional If φ Then ψ. The role of the dyadic truth functional connectives is expanded so that ∨, for example, can connect two antecedents If φ and If ψ to form a new antecedent If φ ∨ If ψ. An important difference between Hilpinen's If operator and Åqvist's is that for Åqvist the antecedent is a sentence or proposition bearing a truth value while for Hilpinen If φ is not. Finally Hilpinen proposes that sentences like (31a) be symbolized as If (φ ∨ ψ) Then χ while sentences like (32) be symbolized as (If φ ∨ If ψ) Then χ. Hilpinen then accepts a rule similar to SDA for sentences having the latter form but not for sentences having the former.
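Hilpinen's two-operator syntax can be rendered as a tiny typed grammar. The class names and the `simplify` rule below are our own rendering for illustration; Hilpinen's proposal is syntactic and semantic, not computational.

```python
# A toy grammar for Hilpinen's two-operator analysis of disjunctive
# antecedents (class names and simplify() are our own invention).
from dataclasses import dataclass

@dataclass
class If:                   # 'If phi': an antecedent, not a sentence
    body: str               # a disjunction may sit entirely inside
                            # here: narrow scope 'or'

@dataclass
class OrAnt:                # wide scope 'or': joins two antecedents
    left: object
    right: object

@dataclass
class Then:                 # antecedent + consequent = conditional
    ant: object
    cons: str

def simplify(c):
    """An SDA-like rule, licensed only when 'or' joins whole
    antecedents, i.e. for (If phi or If psi) Then chi."""
    if isinstance(c.ant, OrAnt):
        return [Then(c.ant.left, c.cons), Then(c.ant.right, c.cons)]
    return [c]              # If (phi or psi) Then chi: not licensed

# (32): 'or' joins two antecedents, so simplification applies.
wide = Then(OrAnt(If("population is smaller"),
                  If("productivity is greater")),
            "fewer people starve")
# (31a): 'or' lies inside a single antecedent; no simplification.
narrow = Then(If("budget goes to defence or to education"),
              "budget goes to defence")

print(len(simplify(wide)))     # 2: two simplified conditionals
print(len(simplify(narrow)))   # 1: left as is
```

The type distinction does the work: the simplification rule can only fire on the wide scope structure, so (31a) never licenses the inference to (31b).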
This proposal allows us to incorporate a rule like SDA into our conditional logic while avoiding Strengthening Antecedents, etc., and, unlike other versions of the translation lore approach, Hilpinen's proposal seems to suggest how it might be possible for sentences like (31a) and (32) to have a legitimate scope ambiguity in their syntactical structure, like the scope ambiguity in `President Carter has to appoint a woman'. In fact, however, the ambiguity postulated by Hilpinen's proposal does not seem simply to be a scope ambiguity. The sentence `President Carter has to appoint a woman' is ambiguous with respect to the scope of the phrase `a woman', but the phrase `a woman' has the same syntactical function and the same semantics on both readings of the sentence. The same cannot be said of the word `or' in Hilpinen's account of disjunctive antecedents: on one resolution of the ambiguity, what `or' connects in examples like (31a) and (32) are sentences; on the other resolution of the ambiguity, `or' connects phrases that are not sentences. It is difficult to see how the ambiguity in (31a) and (32) can be simply a scope ambiguity if `or' does not have the same syntactical role in both readings of a given sentence.
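The claim above, that SDA together with substitution of provable equivalents yields Strengthening Antecedents, rests on a short derivation. We reconstruct it here; it is the standard argument, not spelled out explicitly in the text.

```latex
% Derivation of Strengthening Antecedents from SDA and RCEA.
\begin{align*}
&1.\ \phi > \chi
    &&\text{premise}\\
&2.\ \phi \leftrightarrow \bigl((\phi \wedge \psi) \vee (\phi \wedge \neg\psi)\bigr)
    &&\text{tautology}\\
&3.\ \bigl((\phi \wedge \psi) \vee (\phi \wedge \neg\psi)\bigr) > \chi
    &&\text{from 1, 2 by RCEA}\\
&4.\ (\phi \wedge \psi) > \chi
    &&\text{from 3 by SDA}
\end{align*}
```

Since ψ was arbitrary, line 4 is exactly Strengthening Antecedents, which is why each response to the problem must give up either SDA (the translation lorists) or substitution of equivalent antecedents (Nute's proposal, discussed next).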


Another approach to disjunctive antecedents is developed by Nute [1975b; 1978b] and [1980b]. Formally the problem with SDA is that it together with substitution of provable equivalents results in Strengthening Antecedents and other unhappy results. The translation lorist's suggestion is that we abandon SDA. Nute's suggestion, on the other hand, is that we abandon substitution of provable equivalents, at least for antecedents of subjunctive conditionals. One fairly strong logic which does not allow substitution of provably equivalent antecedents is the smallest conditional logic which is closed under RCEC and RCK and contains ID, MP, MOD, CV, and SDA. Logics of this sort have been called `non-classical' or `hyperintensional' to contrast them with those intensional logics which are closed under substitution of provable equivalents. Classical logics (those closed under substitution of provable equivalents) are preferred by most investigators. Besides the fact that non-classical logics are much less elegant than classical logics, Nute's proposal has other very serious difficulties. First, substitution of certain provable equivalents within antecedents appears to be perfectly harmless. For example, we can surely substitute ψ ∨ φ for φ ∨ ψ in (φ ∨ ψ) > χ with impunity. How are we to decide which substitutions are to be allowed and which are not? Non-classical conditional logics which allow extensive substitutions are developed in Nute [1978b] and [1980b]. But these systems are extremely cumbersome and there still is the extra-formal problem of justifying the particular choice of substitutions which are to be allowed in the logic. Second, we are still left with the apparent counterexamples to SDA like (31a). Nute suggests a pluralist position, maintaining that there are actually several different conditionals in common use. For some of these conditionals SDA is reliable while for others it is not.
The conditional involved in (31a), it is claimed, is unusual and should not be represented in the same way as other subjunctive conditionals. While there is good reason to admit a certain pluralism, to admit, for example, the distinction between subjunctive and indicative conditionals, Nute's proposal is little more than a new translation lore in disguise. The translation lore we discussed earlier at least has the virtue that it attempts to explain the perplexities surrounding disjunctive antecedents in terms of a widely accepted set of logical operators without requiring the recognition of any new conditional operators. Non-classical logic appears to be a dead end so far as the problem of disjunctive antecedents is concerned. A completely different solution is suggested in [Nute, 1980a], a solution based upon the account of conversational score keeping developed in [Lewis, 1979b]. Basically, the proposal concerns the way in which the class selection function (or the system-of-spheres if Lewis-style semantics is employed) becomes more and more definite as a linguistic exchange proceeds. During a conversation, the participants tend to restrict the selection function which they use to interpret conditionals in such a way as to accommodate claims made by their fellow participants. This growing set of restrictions on the selection function forms part of what Lewis calls the score of the conversation at any given stage. Some accommodations, of course, will not be forthcoming since some participant will be unwilling to evaluate conditionals in the way which these accommodations would require. Each restriction on the selection function which the participants implicitly accept will also rule out other restrictions which might otherwise have been allowed. Nute's suggestion is that our inclination is to restrict the selection function in such a way as to make SDA reliable, but that this inclination can be overridden in certain circumstances by our desire to accommodate the utterance of another speaker. When we hear the utterance of a sentence like (31a), for example, we restrict our selection function so that SDA becomes unreliable for sentences which have `the United States devotes more than half its national budget to defence or education' as antecedent. Once (31a) is accommodated in this way, this restriction on the selection function remains in effect so long as the conversational context does not change. Nute completes his account by formulating some `accommodation' rules for class selection functions. By offering a pragmatic account of the way in which the selection function becomes restricted during the course of a conversation, and by paying attention to the inclination to restrict the selection function in such a way as to make SDA reliable whenever possible, it may be possible to explain the fact that SDA is usually reliable while at the same time avoiding the many difficulties involved in accepting SDA as a thesis of our conditional logic. This proposal is similar in certain respects to Loewer's [1976]. Like Loewer, Nute is recognising the important role which pragmatic features play in our use of conditionals with disjunctive antecedents.
However, Nute's use of Lewis's notion of conversational score keeping results in an account which provides more details about what these pragmatic features might be than does Loewer's account. We also notice that Nute's suggestions might provide the criterion which Loewer needs to distinguish those conditionals which should be symbolized as (φ ∨ ψ) > χ from those which should be symbolized as (φ > χ) ∧ (ψ > χ). But once the distinction is explained in terms of the evolving restrictions on class selection functions, there is no need to require that these conditionals be symbolized differently. The point of Nute's theory is that all such conditionals have the same logical form, but the reliability of SDA will depend on contextual features. There is also considerable similarity between Nute's second proposal and Warmbrod's semantics for conditionals which was discussed in Section 1.5. In fact, Warmbrod's semantics is offered at least in part as an alternative to Nute's proposed solution to the problem of disjunctive antecedents. The important similarity between the two approaches is that both recognize that the interpretation of a conditional is a function not of the conditional alone but also of the situation within which the conditional is used. The important difference is that Warmbrod's semantics makes SDA, Transitivity, Contraposition, Strengthening Antecedents, etc. valid and uses pragmatic


considerations to explain and guard us from those cases where it seems to be a mistake to rely upon these principles, while Nute ultimately rejects all of these principles, but uses pragmatic considerations to explain why it is perfectly reasonable to use at least one of these theses, SDA, in many situations. Warmbrod also offers a translation lore as part of his account. His suggestion about the way in which we should symbolize English conditionals with disjunctive antecedents is essentially that of Fine, Lewis, Loewer, and others, but he offers purely syntactic criteria for determining which symbolisation is appropriate in a particular case. His semantics is offered as a justification for his translation lore in an attempt to make his rules for symbolisation appear less ad hoc. Warmbrod points out some difficulties with Nute's rules of accommodation for class selection functions, and his translation rules might be used as a model for improving the formulation of Nute's rules. Nute's theory of disjunctive antecedents in terms of conversational score might also be proposed as an alternative justification for Warmbrod's translation rules.

1.8 The Direction of Time

We turn now to a problem alluded to in Section 1.4, a problem which concerns the role temporal relations play in the truth conditions for subjunctive conditionals. Actually, there are two different sets of problems to be considered. One of these involves the use of tensed language in conditionals and the other does not depend essentially on the use of tense and conditionals together. We will consider the latter set of problems in this section and save problems concerning tense for the next section. A particularly thorny problem for logicians working on conditionals has to do with so-called backtracking conditionals, i.e. conditionals having antecedents concerned with events or states of affairs occurring or obtaining at times later than those involved in the consequents of the conditional. It is widely held that such conditionals are rarely true, and that when they are true they usually involve much more complicated antecedents and consequents than do the more usual true non-backtracking conditionals. Consider, for example, the two conditionals:

35. If Hinckley had been a better shot, Reagan would be dead.

36. If Reagan were dead, Hinckley would have been a better shot.

The first of these two conditionals is an ordinary non-backtracking conditional, while the second is a backtracking conditional. The first is very plausible and perhaps true, while the second is surely false. The problem with (36) which makes it so much less plausible than (35) is that Reagan might have died subsequent to the assassination attempt from any number


of causes which would not involve an improvement of Hinckley's aim. The problem for the logician or semanticist is to explain why non-backtracking conditionals are more often true than are backtracking conditionals. The primary goal of Lewis [1979a] is to explain this phenomenon. Lewis's proposal makes explicit, extensive use of the technical notion of a miracle. In a certain sense miracles do not occur at all in Lewis's analysis: rather, a miracle occurs in one world relative to another world. No event ever occurs in any world which violates the physical laws of that world, but events can certainly occur in one world which violate the physical laws of some other world. These are the kinds of miracles Lewis relies upon. Assuming complete determinism, which Lewis does at least for the sake of argument, any world which shares a common history with the actual world up to a certain point in time but which diverges from the actual world after that time cannot obey the same physical laws as does the actual world. Basically Lewis proposes that the worlds most similar to the actual world in which some counterfactual sentence φ is true are those worlds which share their history with the actual world up until a brief transitional period beginning just prior to the times involved in the truth conditions for φ. In the case of (35) this might mean that everything happens exactly as it did except that Hinckley miraculously aimed better than he actually did. This might only require something as small as a neuron firing at a slightly different time than it actually did. This is about as small a miracle as we could hope for. Once this miracle occurs, events are assumed by Lewis to once again follow their lawful course with the result, perhaps, that Reagan is mortally wounded.
In the case of (36), on the other hand, Reagan might be dead if the FBI agent miraculously failed to jump in front of Reagan, if Reagan miraculously moved in such a way that the bullet struck him differently, or even if Reagan miraculously had a massive stroke at any time after the assassination attempt. Even if events followed their lawful course after any of these miracles, Hinckley's aim would not be improved. Lewis notes that the vagueness of conditionals requires that there may be various ways of determining the relative similarity of worlds, different ways being employed on different occasions. There is one way of resolving vagueness which Lewis considers to be standard, and it is this way which provides us with the explanation of (35) and (36) given above. This standard resolution of vagueness is expressed in the following guidelines for determining the relative similarity of worlds:

(L1) It is of the first importance to avoid big, complicated, varied, widespread violations of law.

(L2) It is of the second importance to maximize the spatio-temporal region throughout which perfect match of particular fact prevails.

CONDITIONAL LOGIC


(L3) It is of the third importance to avoid even small, simple, localized violations of law.

(L4) It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly.

Lewis would maintain that application of these guidelines together with his system-of-spheres semantics for subjunctive conditionals will have the desired result of making (35) at least plausible while making (36) clearly false. One major objection to Lewis's account is that once we allow miracles in order to produce a world which diverges from the actual world, there is nothing in Lewis's guidelines to prevent us from allowing another small miracle in order to get the worlds to converge once again. Since Lewis's guidelines place a higher priority on maximising the area of perfect match of particular fact over the avoidance of small, localized violations of law, we should prefer a small convergence miracle to a future which is radically different. Lewis's response to such a suggestion is that divergence miracles tend to be much smaller than convergence miracles or, what amounts to the same thing, that past events are overdetermined to a greater extent than are future events. If this is correct, then Lewis's guidelines would place greater importance on avoidance of a large convergence miracle than on maximising the area of perfect match of particular fact. Careful consideration of examples indicates that Lewis's suggestion is at least plausible, although no conclusive argument has been provided. In [Nute, 1980b] examples of very simple worlds are given in which convergence miracles could be quite small and in which Lewis's guidelines would thus dictate that for some counterfactual antecedents the nearest antecedent worlds are those in which such small convergence miracles occur. In these examples, we get the (intuitively) wrong result when we apply Lewis's standard method for resolving the vagueness of conditionals.
Lewis [1979a] warns that his guidelines might not work for very simple worlds, though, so the force of Nute's examples is uncertain. Lewis's guidelines may give an adequate explanation for our use of conditionals in the context of a complex world like the actual world, and since our intuitions are developed for such a world they may be unreliable when applied to very simple worlds. If we consider Lewis's proposal in the context of a probabilistic world, we discover that we no longer need employ the troublesome notion of a miracle. Instead of a miracle, we can accommodate a counterfactual antecedent in a probabilistic world by going back to some underdetermined states of affairs among the causal antecedents of the events or states of affairs which must be eliminated if the antecedent is to be true, and changing them accordingly. Since these states of affairs were underdetermined to begin with, they could have been otherwise without any violation of the probabilistic laws governing the


DONALD NUTE AND CHARLES B. CROSS

universe. But if we do this, Lewis's emphasis on maximising the spatio-temporal area of perfect match of particular fact would require that we always change a more recent rather than an earlier causal antecedent when we have a choice. This consequence is very much like the Requirement of Temporal Priority in [Pollock, 1976], a principle which is superseded by the more complex account to be discussed below. Such a principle is unacceptable. Suppose, for example, that Fred left his coat unattended in a certain room yesterday. Today he returned to the room and found the coat had not been disturbed. Suppose that both yesterday and earlier today a number of people have been in the room who had an opportunity to take the coat. Then a principle like Lewis's L2 or Pollock's RTP will dictate that if the coat had been taken, it would have been taken today rather than yesterday. Other things being equal, the later the coat is taken the greater the area of perfect match of particular fact. But this is counterintuitive. (In fact, experience teaches that unguarded objects tend to disappear earlier rather than later.) While Lewis's theory is intended to explain why many backtracking conditionals are false, a consequence of the theory is that some very unattractive backtracking conditionals turn out to be true. In fact, this particular problem plagues Lewis's analysis whether the world is deterministic or probabilistic. As it is presented, Lewis's account does rely upon miracles. As a result, Lewis in effect treats all counterfactual conditionals as also being counterlegals. This is the feature of his account which most writers have found objectionable. Pollock, Blue, and others place a much higher priority on preservation of law than on preservation of particular fact, no matter how large the divergence of particular fact might be.
Given such priorities, and given a deterministic world of the sort Lewis supposes, any change in what happens will result in a world which is different at every moment in the past and every moment in the future. If we adopt such a position, how can we hope to explain the asymmetry between normal and backtracking counterfactual conditionals? Probably the most sophisticated attempt to deal with these problems within the framework of a non-miraculous analysis of counterfactuals is that developed by John Pollock [1976; 1981]. Pollock has refined his account between 1976 and 1981, but we will try to explain what we take to be his latest position on conditionals and temporal relations. Pollock says that a state of affairs P has historical antecedents if there is a set Γ of true simple states of affairs such that all times of members of Γ are earlier than the time of P and Γ nomically implies P. Γ nomically implies P just in case Γ together with the set of universal generalisations of material implications corresponding to Pollock's true strong subjunctive generalisations entails P (or entails a sentence which is true just in case P obtains). Pollock next defines a nomic pyramid, which is supposed to be a set of states of affairs which contains every historical antecedent of each of its members. Then P
undercuts another state of affairs Q if and only if for every set Γ of true states of affairs such that Γ is a nomic pyramid and Q ∈ Γ, Γ nomically implies that P does not obtain. In revising his set S of true simple states of affairs to accommodate a particular counterfactual antecedent P, Pollock tells us that we are to minimize the deletion of members of S which are not undercut by P. (We hope the reader will forgive the vacillation here, since Pollock talks about entailment and other logical relations holding between states of affairs where most authors prefer to speak of sentences or propositions.) Perhaps this procedure will give us the correct results for backtracking and non-backtracking conditionals, as Pollock suggests it will, if the world is deterministic, but problems arise if we allow the possibility that there may be indeterministic states of affairs which lack historical antecedents. Consider a modified version of an example taken from [Pollock, 1981]. Suppose that protons sometimes emit photons when subjected to a strong magnetic field under a set of circumstances C, but suppose also that protons never emit photons under circumstances C if they are not also subjected to a strong magnetic field. As a background condition, let us assume that circumstances C obtain. Now let φ be true just in case a certain proton is subjected to a strong magnetic field at time t and let ψ be true just in case the same proton emits a photon shortly after t. Suppose that both φ and ψ are true. Assuming that no other states of affairs nomologically relevant to ψ obtain, we would intuitively say that ¬φ > ¬ψ is true, i.e. if the proton hadn't been subjected to the magnetic field at t, then it would not have emitted a photon shortly after t. But Pollock cannot say this. Since ψ has no historical antecedents in Pollock's sense, it cannot be undercut by ¬φ.
Because Pollock does not recognize historical antecedents of states of affairs when the nomological connection involved is merely probable, he must say that ¬φ > ψ is true. Pollock's earlier account, which included the Requirement of Temporal Priority, and Lewis's account with its principle L2, in either its original miraculous formulation or the probabilistic, non-miraculous version, both tend to make objectionable backtracking conditionals true when they are intended to explain why they should be false. Blue [1981] includes a feature in his analysis which produces the same result in much the same way. While Pollock's latest theory of counterfactuals avoids examples like that of the unattended coat, it nevertheless encounters new problems with backtracking conditionals in the context of a probabilistic universe. It makes certain backtracking counterfactuals false which our intuitions say are true, while making others true which appear to be false. Yet these are the only positive proposals known to the authors at the time of this writing. Other work in the area, such as [Nute, 1980b] and [Post, 1981], is essentially critical. An adequate explanation of the role the temporal order plays in the truth conditions for conditionals is still a very live issue.
1.9 Tense

There are relatively few papers among the large literature on conditionals which attempt an account of English sentences which involve both tense and conditional constructions. Two of the earliest are [Thomason and Gupta, 1981] and [Van Fraassen, 1981]. Both of these papers attempt the obvious, a fairly straightforward conjunction of tense and conditional operators within a single formal language. Basic items in the semantics for this language are a set of moments, an earlier-than relation on the set of moments which orders moments into tree-like structures, and an equivalence relation which holds between two moments when they are `co-present'. A branch on one of these trees plays the role of a possible world in the semantics. Such a branch is called a history, and sentences of the language are interpreted as having truth values at a moment-history pair, i.e. at a moment in a history. Note that a moment is not a clock time but rather a time-slice belonging to each history that passes through it. The tense operators in the language include two past-tense operators P and H, two future-tense operators F and G, and a `settledness' or historical necessity operator S. Pφ is true at moment i in history h just in case φ is true at some moment j in h where j is earlier than i. Hφ is true at a moment i in h if and only if φ is true at j in h for every moment j in h which is earlier than i. Fφ is true at i in h if φ is true at a moment later than i in h, and Gφ is true at i in h if φ is true at every moment later than i in h. Sφ is true at i in h if and only if φ is true at i in every history h′ which contains i. For a further discussion of semantics for such tense operators, see Burgess [1984] (Chapter 2.2 of this Handbook). In both of these papers, that part of the semantics which is used to interpret conditionals is patterned after the semantics of Stalnaker.
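The tense-operator clauses just stated can be made concrete on a small finite model. The following sketch is ours, not the authors': moments form a tree, a history is a maximal root-to-leaf branch, and a formula is represented as a Python predicate on a moment-history pair. All names are illustrative.

```python
# Toy branching-time model. child -> parent; the root has parent None.
parent = {"m0": None, "m1": "m0", "m2": "m0", "m3": "m1"}

children = {}
for c, p in parent.items():
    if p is not None:
        children.setdefault(p, []).append(c)

def branch_to(leaf):
    """The history (root-first list of moments) ending at this leaf."""
    h, m = [], leaf
    while m is not None:
        h.append(m)
        m = parent[m]
    return list(reversed(h))

# Histories are the branches ending at childless moments (leaves).
histories = [branch_to(m) for m in parent if m not in children]

def P(phi, i, h):  # phi was true at some moment of h earlier than i
    return any(phi(j, h) for j in h[:h.index(i)])

def H(phi, i, h):  # phi was true at every moment of h earlier than i
    return all(phi(j, h) for j in h[:h.index(i)])

def F(phi, i, h):  # phi will be true at some moment of h later than i
    return any(phi(j, h) for j in h[h.index(i) + 1:])

def G(phi, i, h):  # phi will be true at every moment of h later than i
    return all(phi(j, h) for j in h[h.index(i) + 1:])

def S(phi, i, h):  # settled: phi is true at i in every history through i
    return all(phi(i, h2) for h2 in histories if i in h2)
```

Note that S ignores the history argument h, reflecting the clause that settledness quantifies over all histories containing the moment i.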
A conditional φ > ψ is true at a moment i in a history h just in case ψ is true at the pair ⟨i′, h′⟩ at which φ is true and which is closest or most similar to the pair ⟨i, h⟩. Much of the discussion in the two papers is devoted to the effort to assure that certain theses which the authors favor are valid in their model theories. The measures needed to ensure some of the desired theses within the context of a Stalnakerian semantics are quite complicated, but the set of theses that represents the most important contribution of the account of [Thomason and Gupta, 1981], namely the doctrine of Past Predominance, turns out to be quite tractable model theoretically. According to Past Predominance, similarities and differences with respect to the present and past have lexical priority over similarities and differences with respect to the future in any evaluation of how close ⟨i, h⟩ is to ⟨i′, h′⟩, where i and i′ are co-present moments. This doctrine affects the interaction between the settledness operator S and the conditional. For example, Past Predominance implies the validity of the following thesis:

(¬S¬φ ∧ Sψ) → (φ > ψ).
This thesis is clearly operative in the reasoning that leads to the two-box solution to Newcomb's Problem: `If it's not settled that I won't take both boxes but it is settled that there is a million dollars in the opaque box, then if I take both boxes there will (still) be a million dollars in the opaque box.' (See [Gibbard and Harper, 1981].) Cross [1990b] shows that since, concerning the selection of a closest moment-history pair, Past Predominance places no constraints on what is true at past or future moments, Past Predominance can be formalized and axiomatized in terms of settledness and the conditional using ordinary possible worlds models in which relations of temporal priority between moments are not represented. The issue of how the conditional interacts with tense operators, such as P, H, F and G, is more problematic. The accounts presented by Thomason and Gupta and by Van Fraassen adopt the hypothesis that English sentences involving both tense and conditional constructions can be adequately represented in a formal language containing a conditional operator and the tense operators mentioned above. Nute [1983] argues that this is a mistake. Consider an example discussed in [Thomason and Gupta, 1981]:

37. If Max missed the train he would have taken the bus.

According to Thomason and Gupta, this and other English sentences of similar grammatical form are of the logical form P(φ > Fψ). Nute argues that this is not true. To see why, consider a second example. Suppose we have a computer that upon request will give us a `random' integer between 1 and 12. Suppose further that what the computer actually does is increment a certain location in memory by a certain amount every time it performs other operations of certain sorts. When asked to return a random number, it consults this memory location and uses the value stored there in its computation. Thus the `random' number one gets depends upon when one requests it. We just now left the keyboard to roll a pair of dice. If anyone cares, we rolled a 9. Consider the following conditional:

38. If we had used the computer instead of dice, we would have got a 5 instead of a 9.

It is certainly true that there is a time in the past such that if we had used the computer at that time we would have got a 5, so a sentence corresponding to (38) of the form P(φ > Fψ) is certainly true. Yet (38) itself is not true. Depending upon when we used the computer and what operations the computer had performed before we used it, we could have obtained any integer from 1 to 12. Perhaps we are simply using the wrong combination of operators. Instead of P(φ > Fψ), perhaps sentences like (37) and (38) are of the form H(φ > Fψ). A problem with this suggestion is that such conditionals do
not normally concern every time prior to the time at which they are uttered but only certain times or periods of time which are determined by context. Suppose in a football game Walker carries the ball into the end zone for a touchdown. During the course of his run, he came very close to the sideline. Consider the conditional

39. If Walker had stepped on the sideline, he would not have scored.

Can this sentence be of the form H(φ > Fψ)? Surely not, for Walker could have stepped on the sideline many times in the past, and probably did, yet he did score on this particular play. Perhaps we can patch things up further by introducing a new tense operator H* which has truth conditions similar to H except that it only concerns times going a certain distance into the past, the distance to be determined by context. Once again, Nute argues, this will not work. Consider the conditional

40. If Fred had received an invitation, he would have gone to the party.

This sentence might very well be accepted even though Fred would not have gone to the party had he received an invitation five minutes before the party began. The period of time involved does not begin with the present moment and extend back to some past moment determined by context. Indeed, if this were the case, for (40) to be true it would even have to be true that Fred would have gone to the party if he had received an invitation after the party ended. It would seem, then, that if a context-dependent operator is to be the solution to the problem Nute describes, then the contextually determined period of time involved in the truth conditions for English sentences of the sort we have been investigating must be some subset of past times, but one that need not be a continuous interval extending back from the present moment. This is the solution suggested by Thomason [1985]. (The following example may be linguistic evidence for this sort of context-dependence in tensed constructions not involving conditionals: a dean, worried about faculty absenteeism, asks a department chair, `Was Professor X always in his classroom last term?' The correct answer may be `Yes' even though Professor X was not in his classroom at times last term when his classes were not scheduled to meet.) Nute [1991] argues for a different approach: the introduction of a new tensed conditional operator, i.e. an operator which involves in its truth conditions both differences in time and differences in world. Using a class selection function semantics for this task, we could let our selection function f pick out, for a sentence φ, a moment or time i, and a history or world h, a set f(φ, i, h) of pairs ⟨i′, h′⟩ of times and histories at which φ is true and which are otherwise similar enough to ⟨i, h⟩ for our consideration. We would introduce into our formal language a new conditional operator, say ⟨PF⟩, and sentences of the form φ ⟨PF⟩ ψ would be true in an appropriate model at ⟨i, h⟩ if and only if for every pair ⟨i′, h′⟩ ∈ f(φ, i, h) such that there is a time j in h′ which is copresent with i and later than i′, ψ is true at ⟨j, h′⟩. It appears that three more operators of this sort will be needed, together with appropriate truth conditions. These operators may be represented as ⟨PP⟩, ⟨FF⟩, and ⟨FP⟩. These operators would be used to represent sentences like

41. If Fred had gone to the party, he would have had to have received an invitation.

42. If Fred were to receive an invitation, he would go to the party.

43. If Fred were to go to the party, he would have to have received an invitation.

Notice that (41) and (43) are types of backtracking conditionals. Since such conditionals are rarely true, we may use the operators ⟨PP⟩ and ⟨FP⟩ infrequently. This may also account for the cumbersomeness of the English locution which we must use to clearly express what is intended by (41) and (43). A number of other interesting problems concerning tense and conditionals occur to us. One of these is the way in which the consequent may affect the times included in the pairs picked by a class selection function. Consider the sentences

44. If he had broken his leg, he would have missed the game.

45. If he had broken his leg, the mend would have shown on his X-ray.

The times at which the leg might have been broken vary in the truth conditions for these two conditionals. This suggests that a semantics like Gabbay's, which makes both antecedent and consequent arguments for the class selection function, might after all be the preferred semantics. Another possibility is that despite its awkwardness we must introduce some sort of context-dependent tense operator like the operator H* discussed earlier. When we represent (44) as H*(φ > Fψ), H* has the whole of the conditional within its scope and can consider the consequent in determining which times are appropriate.
A third possibility is that the consequent does not figure as an argument for the selection function but it does figure as part of the context which determines the selection function which is, in fact, used during a particular piece of discourse. This sort of approach utilizes the concept of conversational score discussed in Section 1.7 of this paper. One piece of evidence in favor of this approach is the fact that it would be unusual to assert both (44) and (45) in the same conversation. Whichever of these two sentences was asserted first, the antecedent of the other would likely be modified in some appropriate way to indicate that a change in the times to be considered was required. Besides these interesting puzzles, we need
also to explain the fact that we maintain the distinction between indicative and subjunctive conditionals involving present and past tense much more carefully than we do where the future tense is concerned. These topics are considered in more detail in [Nute, 1982] and [Nute, 1991].
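Under one reading of the truth clause for the tensed conditional operator discussed above (writing the operator as ⟨PF⟩), evaluation can be sketched as follows. This is our reconstruction, with invented helper names: the selection function's output, copresence, and within-history order are all supplied as parameters.

```python
def pf_conditional(psi, selected, i, copresent, later_in):
    """phi <PF> psi at (i, h), given the pairs already selected by f(phi, i, h).

    psi(j, h2)        -- truth of the consequent at moment j of history h2
    selected          -- list of pairs (i2, h2) picked by the selection function
    copresent(j, i)   -- j and i are co-present moments
    later_in(h2, j, i2) -- j comes after i2 within history h2
    """
    for i2, h2 in selected:
        for j in h2:
            # whenever h2 has a moment copresent with i and later than the
            # selected antecedent time i2, the consequent must hold there
            if copresent(j, i) and later_in(h2, j, i2):
                if not psi(j, h2):
                    return False
    return True
```

A usage sketch: with histories as lists of moments and copresence given by equal clock times, `pf_conditional` checks the consequent at the co-present moment of each selected history.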

1.10 Other Conditionals

Besides the subjunctive conditionals we have been considering, we also want an analysis for the might conditionals, the even-if conditionals, and the indicative conditionals mentioned in Section 1.1. It is time we took another look at these important classes of conditionals. Most authors who discuss the might and the even-if conditional constructions propose that their logical structure can be defined by reference to subjunctive conditionals. Lewis [1973b] and Pollock [1976] suggest that English sentences having the form `If φ were the case, then ψ might be the case' should be symbolized as ¬(φ > ¬ψ). Stalnaker [1981a] presents strong linguistic evidence against this suggestion, but the suggestion has achieved wide acceptance nonetheless. Pollock [1976] also offers a symbolisation of even-if conditionals. English sentences of the form `ψ even if φ', he suggests, should be symbolized as ψ ∧ (φ > ψ). The adequacy of this suggestion may depend upon our choice of conditional logic and particularly upon whether we accept the thesis CS. If we accept both CS and Pollock's proposal, then `ψ even if φ' will be true whenever both φ and ψ are true. An alternative analysis of even-if conditionals is developed in [Gärdenfors, 1979]. Gärdenfors's objection to Pollock's proposal seems to be that a person who knows that both φ and ψ are true might still reject an assertion of the sentence `ψ even if φ'. Normally, says Gärdenfors, one does not assert `ψ even if φ' when one knows that φ is true; an assertion of `ψ even if φ' presupposes that ψ is true and φ is false. Even when the presupposition that φ is false turns out to be incorrect, Gärdenfors argues that there is a presumption that the falsity of φ would not interfere with the truth of ψ. Consequently, Gärdenfors suggests that `ψ even if φ' has the same truth conditions as (φ > ψ) ∧ (¬φ > ψ). Another suggestion comes from Jonathan Bennett [1982].
Bennett gives a comprehensive account of even-if conditionals, fitting them into the context of uses of `even' that don't involve `if', and uses of `if' that don't involve `even'. That is, Bennett rejects the treatment of `even if' as an idiom with no internal structure. The first of three proposals we will consider concerning the analysis of indicative conditionals, which can be found in [Lewis, 1973b; Jackson, 1987] and elsewhere, is that indicative conditionals have the same truth conditions as do material conditionals, paradoxes of implication and problems with Transitivity, Contraposition, and Strengthening Antecedents notwithstanding. It is difficult and perhaps impossible to find really persuasive
counterexamples to Transitivity and Strengthening Antecedents using only indicative conditionals, but apparent counterexamples to Contraposition are easy to construct. Consider, for example, the following two sentences:

46. If it is after 3 o'clock, it is not much after 3 o'clock.

47. If it is much after 3 o'clock, it is not after 3 o'clock.

It is easy to imagine situations in which (46) would be true or appropriate, but are there any situations in which (47) would be true or appropriate? Another problem with this analysis concerns denials of indicative conditionals. Stalnaker [1975] offers an interesting example:

48. If the butler didn't do it, then Fred did it.

Being quite sure that Fred didn't do it, we would deny this conditional. At the same time, we may believe that the butler did it, and therefore when we hear someone say what we would express by

49. Either the butler did it or Fred did it.

we might respond, `Yes, one of them did it, but it wasn't Fred'. Yet (48) and (49) are equivalent if (48) has the same truth conditions as the corresponding material conditional. One possible response to these criticisms is that we must distinguish between the truth conditions for an indicative conditional and the assertion conditions for that conditional. It may be that a conditional is true even though certain conventions make it inappropriate to assert the conditional. This might lead us to say that (47) is true even though it would be inappropriate to assert it. We might also attempt to explain away the paradoxes of implication in this way, relying on the assumed convention that it is misleading and therefore inappropriate to assert a weaker sentence ψ when we are in a position to assert a stronger sentence φ which entails ψ. For example, it is inappropriate to assert φ ∨ ψ when one knows that φ is true. Just so, the argument goes, it is inappropriate to assert φ ⇒ ψ when one is in a position to assert either ¬φ or ψ.
And in general we may reject other putative counterexamples to the proposal that indicative conditionals have the same truth conditions as material conditionals by saying that in these cases not all the assertion conditions are met for some conditional, rather than admit that the truth conditions for the conditional are not met. This line of defence is suggested, for example, by [Grice, 1967; Lewis, 1973b; Lewis, 1976] and by [Clark, 1971]. A second proposal is that indicative conditionals are Stalnaker conditionals, i.e. that Stalnaker's world selection function semantics is the correct semantics for indicative conditionals and Stalnaker's conditional logic C2 is the proper logic for these conditionals. This suggestion is found in [Stalnaker, 1975] and in [Davis, 1979]. While both Stalnaker and Davis propose
the same model theory for indicative and subjunctive conditionals, both also suggest that the properties of the world selection function appropriate to indicative conditionals are different from those of the world selection function appropriate to subjunctive conditionals. The difference for Stalnaker has to do with the presuppositions involved in the utterance of the conditional. During the course of a conversation, the participants come to share certain presuppositions. In evaluating an indicative conditional φ ⇒ ψ, Stalnaker says that we look for the closest φ-world at which all of these presuppositions are true. In the case of a subjunctive conditional, on the other hand, we may look outside this `context set' for the closest φ-world. Of course the overall closest φ-world may not be a world at which all of the presuppositions are true, since making φ true could tend to make one of the presuppositions false. This means that different worlds may be chosen by the selection function used to evaluate indicative conditionals and the selection function used to evaluate subjunctive conditionals. While accepting Stalnaker's model theory for both indicative and subjunctive conditionals, Davis offers a different distinction between the world selection function appropriate to indicative conditionals and that appropriate to subjunctive conditionals. In fact, Davis claims that Stalnaker's analysis of subjunctive conditionals is actually the correct analysis of indicative conditionals. To evaluate an indicative conditional φ → ψ, Davis says we look at the φ-world which bears the greatest overall similarity to the actual world to see if it is a ψ-world. For a subjunctive conditional φ > ψ, we look at the φ-world which most resembles the actual world up until just before what Davis calls the time of reference of φ. Apparently, the time of reference of φ is the time at which events reported by φ occur, or states of affairs described by φ obtain, etc.
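Stalnaker's contrast between the two selection functions can be sketched over a finite set of worlds. The names below are ours, and `similarity' is an assumed numeric stand-in for comparative similarity to the actual world; this is an illustration of the context-set idea, not Stalnaker's own formalism.

```python
def select_closest(candidates, similarity):
    """Pick the most similar world among the candidates; None if there are none."""
    return max(candidates, key=similarity, default=None)

def indicative_true(ant, cons, context_set, similarity):
    # Stalnaker: the indicative selection stays inside the set of worlds
    # compatible with the shared presuppositions (the context set).
    w = select_closest(ant & context_set, similarity)
    return w is not None and w in cons

def subjunctive_true(ant, cons, similarity):
    # The subjunctive selection may pick a world outside the context set.
    w = select_closest(ant, similarity)
    return w is not None and w in cons
```

With the same antecedent and consequent, the two clauses can come apart exactly when the overall closest antecedent-world lies outside the context set.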
A third proposal, due to Adams [1966; 1975b; 1975a; 1981], holds that indicative conditionals lack truth conditions altogether. They do, however, have probabilities, and these probabilities are just the corresponding standard conditional probabilities. Thus pr(φ ⇒ ψ) = pr(φ ∧ ψ)/pr(φ), at least in those cases where pr(φ) is non-zero. We must remember that Adams does not identify the probability of a conditional with the probability that that conditional is true, since he rejects the very notion of truth values for conditionals. Adams proposes that an argument involving indicative conditionals is valid just in case its structure makes it possible to ensure that the probability of the conclusion exceeds any arbitrarily chosen value less than 1 by ensuring that the probabilities of each of the premises exceed some appropriate value less than 1. In other words, we can push the probability of the conclusion arbitrarily high by pushing the probabilities of the premises suitably high. When an argument is valid in this sense, Adams says that the conclusion of the argument is `p-entailed' by its premises and the argument itself is `p-sound'. Since Adams rejects truth values for conditionals, conditionals can certainly not occur as arguments for truth functions. Given his identification of the probability of a conditional with the corresponding standard conditional probability, this further entails that conditionals may not occur within the scope of the conditional operator. Adams attempts to justify this consequence of his theory by suggesting that we don't really understand sentences which involve the embedding of one conditional within another in any case. This claim, though, is far from obvious. Such sentences as

50. If this glass will break if it is dropped on the carpet, then it will break if it is dropped on the bare wooden floor.

seem absolutely ordinary and at least as comprehensible as most other indicative conditionals. The inability to handle such conditionals must count as a disadvantage of Adams's theory. In [Adams, 1977] it is shown that p-soundness is equivalent to soundness in Lewis's system-of-spheres semantics. This implies that the proper logic for indicative conditionals is the `first-degree fragment' of Lewis's VC. By the first-degree fragment of VC we mean the set of all those sentences in VC within which no conditional operator occurs within the scope of any other operator. Since the logic Adams proposes for indicative conditionals can be supported by a semantics which also allows us to interpret sentences involving iterated conditional operators, we will need very strong reasons to accept Adams's account with its restrictions rather than some possible worlds account like Lewis's. In fact it may be possible to reconcile Lewis's view that the truth conditions for indicative conditionals are the same as those for the corresponding material conditionals with Adams's work on the probabilities of conditionals and p-entailment. Lewis [1973b] suggests that the truth conditions for φ ⇒ ψ are given by φ → ψ while the assertion conditions for φ ⇒ ψ are given by the corresponding standard conditional probability. Jackson [1987] also entertains such a possibility.
If we accept this, then we might accept Adams's theory as a basis for an adequate account of the logic of assertion conditions for indicative conditionals. Since we would be assuming that conditionals have truth values as well as probabilities, we could also overcome the restrictions of Adams's theory and assign probabilities to conditionals which have other conditionals embedded in them. One problem with this approach, though, is that it would seem to require that we identify the probability of a conditional with the probability that the conditional is true. When we do this and also take the probability of a conditional to be the corresponding standard conditional probability, serious problems arise, as is shown in [Lewis, 1976] and in [Stalnaker, 1976]. These difficulties will be discussed briefly in Section 3.
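Adams's identification pr(φ ⇒ ψ) = pr(φ ∧ ψ)/pr(φ), restricted to the case pr(φ) > 0, can be illustrated over a finite set of worlds. The sketch below is ours: propositions are sets of worlds and a probability distribution assigns each world a weight.

```python
def pr(event, worlds):
    """Probability of a proposition (set of worlds) under a finite distribution.

    worlds: dict mapping each world to its probability."""
    return sum(p for w, p in worlds.items() if w in event)

def pr_conditional(antecedent, consequent, worlds):
    """pr(phi => psi) as the conditional probability pr(psi | phi)."""
    p_ant = pr(antecedent, worlds)
    if p_ant == 0:
        # Adams restricts the identification to non-zero antecedents.
        raise ValueError("pr(antecedent) is zero; the ratio is undefined")
    return pr(antecedent & consequent, worlds) / p_ant
```

For example, with worlds weighted 0.5, 0.3, 0.2, an antecedent true at the first two and a consequent true at the first and third, the conditional gets probability 0.5/0.8 = 0.625.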

46

DONALD NUTE AND CHARLES B. CROSS

2 EPISTEMIC CONDITIONALS

The idea that there is an important connection between conditionals and belief change seems to have been inspired by this suggestion of Frank Ramsey's:

If two people are arguing "If p will q?" and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q.12

The issue of how, precisely, to formalize Ramsey's suggestion and extend it from the case where p is in doubt to the general case has received a great deal of attention, too much to permit an exhaustive survey here. We will focus here on the Gardenfors triviality result for the Ramsey test (see [Gardenfors, 1986]) and related results, and the implications of these results for the project of formalizing the Ramsey test for conditionals. Despite the narrowness of this topic our discussion will not mention all worthy contributions to the subject. Sections 2.1 and 2.2 provide a general framework for formalizing belief change and the Ramsey test. Section 2.3 makes connections between this framework and the literature on belief change and the Ramsey test. Section 2.4 presents the Ramsey test itself, and Section 2.5 presents versions of several triviality results found in the literature, including a version of Gardenfors' 1986 result that subsumes several of the definitions of triviality found in the literature. Section 2.6 examines how triviality can be avoided, and Section 2.7 examines systems of conditional logic associated with the Ramsey test. We will provide proofs for some of the results stated below and in other cases refer the reader to the literature.

2.1 Languages

By a Boolean language we will mean any logical language containing at least the propositional constant `⊥', the binary operator `∧', and the unary operator `¬'. We will assume that `⊤' is defined as `¬⊥' and that any other needed Boolean operators are defined. We do not assume anything at this stage about how the operators and propositional constant of a Boolean language are interpreted, but it will turn out that in most cases `∧', `¬', and `⊥' will receive classical truth-functional interpretations. We will use the symbol `⊢' as a variable ranging over logical inference relations. Effective immediately we will cease using quotes when mentioning formulas and logical symbols. We define a language (whether Boolean or nonBoolean) to be of type L0 iff it contains the propositional constants ⊤ and ⊥ but does not contain the binary conditional operator >. We next define two language-types for doing conditional logic. A language, whether Boolean or nonBoolean, is of type L1 iff it contains ⊤ and ⊥ and the only >-conditionals allowed as formulas are first-degree or "flat" conditionals, i.e. conditionals φ > ψ where φ, ψ are conditional-free. We define a language, whether Boolean or nonBoolean, to be of type L2 (a "full" conditional language) iff it contains ⊤ and ⊥ and allows arbitrary nesting of conditionals in formulas.

12 [Ramsey, 1990], p. 155.
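The three language-types can be made concrete with a small sketch (our own illustration, not part of the chapter's formal development): formulas as nested Python tuples, with a function computing the nesting depth of the conditional operator >. Type-L0 formulas have depth 0, the flat conditionals of a type-L1 language have depth 1, and a type-L2 language admits arbitrary depth.

```python
# Formulas as nested tuples: atoms are strings; Bot is the constant ⊥.
Bot = ("bot",)

def Neg(a): return ("not", a)
def And(a, b): return ("and", a, b)

Top = Neg(Bot)                                   # ⊤ is defined as ¬⊥
def Or(a, b): return Neg(And(Neg(a), Neg(b)))    # a derived Boolean operator
def Cond(a, b): return (">", a, b)               # the conditional operator >

def degree(f):
    """Nesting depth of > in f: 0 means conditional-free (type L0)."""
    if isinstance(f, str) or f == Bot:
        return 0
    if f[0] == ">":
        return 1 + max(degree(f[1]), degree(f[2]))
    return max(degree(g) for g in f[1:])

flat = Cond("p", "q")                  # admissible in a type-L1 language
nested = Cond(Cond("p", "q"), "r")     # requires a type-L2 language
print(degree(flat), degree(nested))    # 1 2
```

A type-L1 language, in these terms, admits exactly those formulas whose conditional subformulas all have conditional-free arguments.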

2.2 A general framework for belief change

We will describe belief change using a framework that is related to the AGM (Alchourron, Gardenfors, Makinson) framework for belief revision.13 Our framework extends that of AGM and is adapted (with further enrichment) from the notion of an enriched belief revision model introduced in [Cross, 1990a]. For a given language L containing ⊤, ⊥ as formulas, let wffL be the set of all formulas of L and let K_L be P(wffL) − {∅} (where P(wffL) is the powerset of wffL). For a given inference relation ⊢ and set Γ of formulas of L, define Cn⊢(Γ) (the ⊢-consequence set for Γ) to be {φ : Γ ⊢ φ}, and let T_L,⊢ = {Γ : Γ ⊆ wffL and Cn⊢(Γ) = Γ} be the set of all theories in L with respect to ⊢. A set Γ is ⊢-consistent iff Γ ⊬ ⊥. We next define the notion of a belief change model:

(DefBCM) A belief change model on a language L containing ⊤, ⊥ as formulas is an ordered septuple

⟨K, I, ⊢, K⊥, −, *, s⟩

whose components are as follows:

1. K ⊆ K_L and ⊢ is a subset of P(wffL) × wffL;
2. I and K⊥ are sets of formulas meeting the following requirements:
(a) K⊥ ∈ K;
(b) ⊤, ⊥ ∈ I;
(c) K⊥ is the set of all formulas of L or a fragment of L;
(d) I is the set of all formulas of L or a fragment of L, and I ⊆ K⊥;
(e) K ⊆ K⊥ for all K ∈ K.
3. − and * are binary functions mapping each K ∈ K and each φ ∈ I to sets K−φ and K*φ, respectively, where K−φ ⊆ K⊥ and K*φ ⊆ K⊥;

13 See [Alchourron et al., 1985] and [Gardenfors, 1988].


4. s is a function taking values in P(wffL), where K ⊆ dom(s) ⊆ P(wffL).

A classical belief change model is a belief change model defined on a Boolean language whose logical consequence relation ⊢ includes all classical truth-functional entailments and respects the deduction theorem for the material conditional. A deductively closed belief change model is a belief change model for which K = Cn⊢(K) ∩ K⊥ and K−φ = Cn⊢(K−φ) ∩ K⊥ and K*φ = Cn⊢(K*φ) ∩ K⊥ for all K ∈ K and all φ ∈ I. Note that in a deductively closed belief change model on language L, belief sets are theories in the fragment of L represented by K⊥ and not necessarily theories in L itself. Informally, the items in a belief change model can be described as follows. K represents the set of all possible belief states recognized by the model; often K will be a subset of T_L,⊢ but not always. I represents the set of all formulas eligible to serve as inputs for contraction and revision. ⊢ is an inference relation defined on L and will in most cases be an extension of truth-functional propositional logic. K⊥ contains all of the formulas of that fragment of L from which the belief sets in K are constructed and represents the absurd belief state; thus every belief set in K is a subset of K⊥. For each K ∈ K and each φ ∈ I, K−φ represents the result of contracting K to remove φ (if possible), whereas K*φ represents the result of revising K to include φ as a new belief. Revision is normally assumed to involve not only adding the given formula to the given belief set but also resolving any inconsistencies thereby created. For the sake of generality, we have not stipulated that K−φ, K*φ ∈ K, though this will usually be the case. Finally, s is the support function for the model, which determines for each belief state K (and perhaps for other sets, as well) the set of formulas of L supported by K. For belief sets K in belief change models for which the Ramsey test holds, s(K) will contain Ramsey test conditionals even if K does not.

2.3 Comparisons

With an eye toward our presentation of the basic triviality result for the Ramsey test we will briefly review differing positions about the elements making up a belief change model. The list of authors we mention here is not exhaustive but constitutes a representative sample of the diversity of positions taken with respect to belief change models and their elements in discussions of the Ramsey test.

Belief states: the language of the model and the set K

Segerberg places no restrictions on the language in his discussion of the triviality result in [Segerberg, 1989]. Gardenfors ([1988] and elsewhere),


Rott [1989], and Cross [1990a] all adopt a type-L2 language for their Ramsey test belief change models, whereas Makinson [1990], Morreau [1992], Hansson ([1992], section III), Arlo-Costa [1995], and Levi ([1996] and elsewhere) restrict themselves to a type-L1 language. Hansson, Arlo-Costa, and Levi allow only the type-L0 formulas of a type-L1 language to belong to the sets that individuate belief states. Makinson and Morreau, like Gardenfors, Rott, Segerberg, and Cross, do not restrict the membership of belief-state-individuating sets to type-L0 formulas. Most authors on the Ramsey test follow Gardenfors in representing the set of all possible belief states as a set of theories. One exception is Hansson [1992], who takes each possible belief state to be represented by a pair consisting of a set of formulas and a revision operator that defines the dynamic properties of the belief state. For Hansson, the set of formulas in question is a belief base, a set of conditional-free formulas that need not be deductively closed. The belief base of a belief state is a (not necessarily finite) axiom set for the belief state, the idea being to allow different belief states to be associated with the same deductively closed theory. A belief state in Hansson's model can still be individuated by means of its belief base, however, because the revision operator of a belief state is a function of that belief state's belief base. Morreau [1992] also gives a two-component analysis of belief states, but in Morreau's analysis the two components are a set of "worlds" (truth-value assignments to atomic formulas) and a selection function (of the Stalnaker-Lewis variety) that determines which conditionals are believed in the belief state.
A third exception is Rott [1991], who identifies belief states with epistemic entrenchment relations and notes that a nonabsurd belief set can be recovered from an epistemic entrenchment relation that supports at least one strict entrenchment: the belief set will be the set of all formulas strictly more entrenched than ⊥. Among those authors who take belief states to be deductively closed theories, most follow Gardenfors in assuming that not every theory corresponds to a possible belief state. On this issue Segerberg and Makinson are exceptions. In their respective extensions of Gardenfors' basic triviality result Segerberg and Makinson assume that revision is defined on all theories in a given language rather than on a nonempty subset of the set of all theories for that language.14 We note above that Hansson, Arlo-Costa, and Levi allow only conditional-free formulas into the sets that individuate belief states.15 Why exclude conditionals from these sets? In Levi's view, the formulas eligible for membership in the theories that individuate belief states are precisely those statements about which agents can be concerned to avoid error. Levi argues against including conditionals in the theories that individuate belief

14 See [Segerberg, 1989] and [Makinson, 1990].
15 See, for example, [Hansson, 1992], [Arlo-Costa, 1995], [Levi, 1996], and [Arlo-Costa and Levi, 1996].


states because in his view conditionals do not have truth conditions or truth values and so are not sentences about which agents can be concerned to avoid error.16 On Levi's view, a conditional φ > ψ in a type-L1 language is acceptable relative to a belief set K in a type-L0 language iff ¬ψ is not epistemically possible relative to the result of revising K to include φ, and the negated conditional ¬(φ > ψ) is acceptable relative to K iff ¬ψ is epistemically possible relative to the revision of K to include φ. The part of this view governing negated conditionals, the negative Ramsey test, will be discussed later.17 An important consequence of Levi's view is the thesis that conditionals are "parasitic" on conditional-free statements in the following sense: the set of conditionals supported by a given belief state is determined by the conditional-free formulas accepted in that belief state or a subset thereof. Hansson [1992] shows, however, that it is possible to motivate a parasitic account of conditionals without taking a position on whether conditionals have truth conditions or truth values. Gardenfors [1988] criticizes Levi's view of conditionals on the grounds that it fails to account for iterated conditionals, a species of conditional about which Levi has expressed skepticism, but Levi [1996] and Hansson [1992] show that iterated conditionals can be accounted for (if necessary) even if conditionals do not have truth conditions or truth values. Levi [1996] points out, however, that axiom schema (MP) fails to be valid in the sense he favors if iterated conditionals are allowed.18 In this connection Levi exploits examples like the following, which was described by McGee [1985] as a counterexample to modus ponens: Opinion polls taken just before the 1980 election showed the Republican Ronald Reagan decisively ahead of the Democrat Jimmy Carter, with the other Republican in the race, John Anderson, a distant third.
Those apprised of the poll results believed, with good reason:

If a Republican wins the election, then if it's not Reagan who wins it will be Anderson.

A Republican will win the election.

Yet they will not have good reason to believe

If it's not Reagan who wins, it will be Anderson.19

16 See [Levi, 1988] and [Levi, 1996], for example. As Arlo-Costa and Levi [1996] point out, Ramsey agreed that conditionals lack truth conditions and truth values: this is clear from the context of the quote from Ramsey with which we began Section 2.
17 For the most recent account of Levi's views on this topic, see [Levi, 1996].
18 See [Levi, 1996], pp. 105-112.
19 [McGee, 1985], p. 462.
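It is worth noting (our observation, checked by a brute-force truth table rather than anything in the text) that under the material reading of "if" the premises of McGee's argument classically entail the conclusion; the counterexample therefore turns on the embedded indicative conditional, not on material-conditional modus ponens.

```python
from itertools import product

# Atoms (names are ours): r = "Reagan wins", a = "Anderson wins";
# "a Republican wins" is modeled as r or a.

def entails(premises, conclusion, atoms):
    """Classical entailment by exhaustive truth-value assignment."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

imp = lambda x, y: (not x) or y          # material conditional
republican = lambda v: v["r"] or v["a"]

premise1 = lambda v: imp(republican(v), imp(not v["r"], v["a"]))
premise2 = republican
conclusion = lambda v: imp(not v["r"], v["a"])

print(entails([premise1, premise2], conclusion, ["r", "a"]))  # True
print(entails([premise1], conclusion, ["r", "a"]))            # False
```

The second check confirms that the second premise is doing real work: premise 1 alone does not classically entail the conclusion.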


Arlo-Costa [1998] embraces iterated conditionals and uses McGee's example to argue, via the Ramsey test, against the following principle of invariance for iterated supposition:

(K*INV) If ψ ∈ K*φ ≠ K⊥, then (K*φ)*ψ = K*φ.

Supposition, i.e. hypothetical revision of belief "for the sake of argument," is the notion of revision that Arlo-Costa [1998] and Levi [1996] both associate with Ramsey test conditionals. Since (K*INV) holds in any deductively closed classical belief change model that satisfies (K*3) and (K*4), Arlo-Costa takes McGee's example as evidence against (K*4) as a principle governing supposition.

Contraction and revision inputs: the set I

Gardenfors does not exclude conditionals from the class of formulas eligible to be inputs for belief change in the models he formulates, but Morreau, Arlo-Costa, and Levi do. In Levi's case this restriction clearly follows from his view that conditionals have neither truth conditions nor truth values, and Arlo-Costa appears to agree with this view. Morreau's exclusion of conditionals as revision inputs appears to be an artifact of the nontriviality theorem he proves for the Ramsey test ([Morreau, 1992], THEOREM 14, p. 48) rather than indicative of a philosophical position about the status of conditionals. Logical consequence and support:

⊢ and s

Most authors on the Ramsey test follow Gardenfors in assuming a compact background logic ⊢ that includes all truth-functional propositional entailments while respecting the deduction theorem for the material conditional, but there has been research on the Ramsey test in frameworks where the background logic is nonclassical or not necessarily classical. For example, Segerberg's triviality result in [Segerberg, 1989] assumes only the minimal constraints of Reflexiveness, Transitivity, and Monotony for ⊢,20 and in [Gardenfors, 1987] Gardenfors credits Peter Lavers with having established in an unpublished note a triviality result for the Ramsey test in which ⊢ is defined to be minimal logic21 instead of an extension of classical truth-functional logic. Also, Cross and Thomason [1987; 1992] investigate a four-valued system of conditional logic that is motivated by an application of the Ramsey test in the context of the nonmonotonic logic of multiple inheritance with exceptions in semantic networks.

20 See the definition of a Segerberg belief change model near the end of Section 2.3.
21 Minimal logic has modus ponens as its only inference rule and every instance of the following schemata as axioms:


Levi [1988] introduces the function RL, which maps a conditional-free belief set to a conditional-laden belief set via the Positive and Negative Ramsey tests. Cross [1990a] formulates a version of the triviality result proved in [Gardenfors, 1986] in a framework where an extension ⊢ of classical logic is coupled with a not-necessarily-monotonic consequence operation cl. Makinson [1990] does the same, calling his not-necessarily-monotonic consequence operation C. Hansson [1992] makes use of a function s which maps each belief state to the set of all formulas the belief state "supports." Support functions are also adopted by Arlo-Costa [1995] and by Arlo-Costa and Levi [1996]. Our view is that Levi's RL, Cross' cl, Makinson's C, and Hansson's s should be regarded as variations on the same theoretical construct, and we will follow Hansson in calling this construct a support function and in using s to represent it. More on this in Section 2.6 below. The following postulates are examples of requirements that might be imposed on s. Assume a belief change model on a language L, and assume that Γ ranges over dom(s), which always includes K as a subset:

(Identity over K) s(K) = K for all K ∈ K.
(Monotonicity over K) For all H, K ∈ K, if H ⊆ K then s(H) ⊆ s(K).
(Reflexivity) Γ ⊆ s(Γ).
(Closure) Cn⊢[s(Γ)] = s(Γ).
(Consistency) If Γ is ⊢-consistent then s(Γ) is ⊢-consistent.
(Superclassicality) Cn⊢(Γ) ⊆ s(Γ).
(Transitivity) s(Γ) = s[s(Γ)].
(Reasoning by Cases) s(Γ ∪ {φ}) ∩ s(Γ ∪ {¬φ}) ⊆ s(Γ) for all Γ, φ such that Γ ∪ {φ} ∈ dom(s) and Γ ∪ {¬φ} ∈ dom(s).
(Conservativeness) L has type-L0 fragment L0 and for all φ ∈ wffL0, φ ∈ s(Γ) iff φ ∈ Cn⊢(Γ).

None of Gardenfors, Morreau, or Segerberg uses the notion of a support function: they assume, in effect, that s(K) = K for all K ∈ K.

1. (φ ∧ ψ) → φ.
2. (φ ∧ ψ) → ψ.
3. φ → (φ ∨ ψ).
4. ψ → (φ ∨ ψ).
5. (φ → χ) → [(ψ → χ) → ((φ ∨ ψ) → χ)].
6. (φ → ψ) → [(φ → χ) → (φ → (ψ ∧ χ))].
7. [φ → (ψ → χ)] → [(φ → ψ) → (φ → χ)].
8. φ → (ψ → φ).

The formula ¬φ is defined to be φ → ⊥. This axiomatization is found in [Segerberg, 1968].
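Two of the support-function postulates above (Reflexivity and Superclassicality) can be checked mechanically in a toy setting. This is a sketch under simplifying assumptions of our own: formulas are Python predicates on truth-value assignments, consequence is computed by truth tables over a two-atom language, and s is the identity support function that Gardenfors, Morreau, and Segerberg in effect assume.

```python
from itertools import product

ATOMS = ["p", "q"]

def entails(gamma, phi):
    """Classical truth-table entailment over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False
    return True

def Cn(gamma, pool):
    """Consequence set of gamma, restricted to a finite candidate pool."""
    return {phi for phi in pool if entails(gamma, phi)}

s = lambda gamma: gamma   # the identity support function: s(K) = K

p = lambda v: v["p"]
q = lambda v: v["q"]
pool = {p, q}

K = Cn({p}, pool)               # a deductively closed belief set (within pool)
print(K <= s(K))                # Reflexivity: Γ ⊆ s(Γ) -> True
print(Cn(K, pool) <= s(K))      # Superclassicality on closed sets -> True
```

On deductively closed sets the identity function satisfies both postulates trivially; the postulates become substantive only for support functions that add content, such as Ramsey test conditionals.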


Contraction and revision: postulates for belief change


Since − and * are to represent functions legitimately describable as contraction and revision, respectively, it is appropriate to consider additional conditions on these functions. Which additional conditions should be imposed is a matter of dispute, and some of the additional postulates that will be under consideration are listed below. In the case of postulates (K+1), (K−1), (K−2), (K−3), (K−4), (K−5), (K−6), (K−7), (K−8), (K*1), (K*2), (K*3), (K*4), (K*5), (K*6), (K*7), (K*8), (K*L), (K*M), and (K*P) we follow the labeling used in [Gardenfors, 1988]. Please note that we have not adopted any of the postulates given below in the definition of belief change model. In each postulate, the variable K is understood to range over K; also φ, ψ are understood to range over I. We begin with a definition of a third important belief change operation: expansion.

Definition of and postulate for expansion

(Def+) K+φ = Cn⊢(K ∪ {φ}) ∩ K⊥.

(K+1) K+φ ∈ K. (K+φ is a belief set.)
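(Def+) can be made executable under simplifying assumptions of our own: formulas are Python predicates on truth-value assignments, K⊥ is a finite pool of candidate formulas standing in for the relevant fragment, and Cn⊢ is classical truth-table consequence restricted to that pool.

```python
from itertools import product

ATOMS = ["p", "q"]

def entails(gamma, phi):
    """Classical truth-table entailment over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False
    return True

def Cn(gamma, pool):
    """Consequence set of gamma, restricted to the candidate pool."""
    return {phi for phi in pool if entails(gamma, phi)}

def expand(K, phi, pool):
    """(Def+): K + phi = Cn(K ∪ {phi}) ∩ K_bot."""
    return Cn(K | {phi}, pool)

p = lambda v: v["p"]
q = lambda v: v["q"]
p_or_q = lambda v: v["p"] or v["q"]
pool = {p, q, p_or_q}            # a stand-in for K_bot

K = Cn({p_or_q}, pool)           # believe "p or q"
K_plus = expand(K, p, pool)      # expand by p
print(p in K_plus, q in K_plus)  # True False
```

Expansion simply adds the input and closes under consequence; unlike revision, it makes no attempt to restore consistency, which is why (Def+) can be stated outright while revision is constrained only by postulates.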

Postulates for contraction

(K−1) K−φ ∈ K. (K−φ is a belief set.)
(K−2) K−φ ⊆ K.
(K−3) If φ ∉ K, then K−φ = K.
(K−4) If ⊬ φ, then φ ∉ K−φ.
(K−4w) If ⊬ φ and K ≠ K⊥, then φ ∉ K−φ.
(K−5) If φ ∈ K, then K ⊆ (K−φ)+φ.
(K−6) If ⊢ φ ↔ ψ, then K−φ = K−ψ.
(K−7) K−φ ∩ K−ψ ⊆ K−(φ∧ψ).
(K−8) If φ ∉ K−(φ∧ψ), then K−(φ∧ψ) ⊆ K−φ.

Postulates for revision

(K*1) K*φ ∈ K. (K*φ is a belief set.)
(K*2) φ ∈ K*φ.
(K*3) K*φ ⊆ K+φ.
(K*4) If ¬φ ∉ K, then K+φ ⊆ K*φ.

(K*4s) If ¬φ ∉ K, then K+φ = K*φ.
(K*4ss) If K+φ ≠ K⊥, then K+φ = K*φ.
(K*4w) If φ ∈ K ≠ K⊥, then K ⊆ K*φ.
(K*5) K*φ = K⊥ iff ⊢ ¬φ.
(K*5w) If K*φ = K⊥, then ⊢ ¬φ.
(K*5ws) If K*φ = K⊥, then Cn⊢({φ}) = K⊥.
(K*C) If K ≠ K⊥ and K*φ = K⊥, then ⊢ ¬φ.
(K*6) If ⊢ φ ↔ ψ, then K*φ = K*ψ.
(K*6s) If ψ ∈ K*φ and φ ∈ K*ψ, then K*φ = K*ψ.
(K*7) K*(φ∧ψ) ⊆ (K*φ)+ψ.
(K*7′) K*φ ∩ K*ψ ⊆ K*(φ∨ψ).
(K*8) If ¬ψ ∉ K*φ, then (K*φ)+ψ ⊆ K*(φ∧ψ).
(K*L) If ¬(φ > ¬ψ) ∈ K, then (K*φ)+ψ ⊆ K*(φ∧ψ).
(K*M) If s(K) ⊆ s(K′), then K*φ ⊆ K′*φ.
(K*IM) If K*φ ≠ K⊥ ≠ K′*φ and s(K) ⊆ s(K′), then K′*φ ⊆ K*φ.
(K*T) If K ≠ K⊥, then K*⊤ = K.
(K*P) If ¬φ ∉ K, then K ⊆ K*φ.
(K*PI) If ¬φ ∉ K, then K ∩ I ⊆ K*φ ∩ I.
(LI) K*φ = (K−¬φ)+φ.

A few other postulates will be identified as needed. Our treatment of contraction and revision is not general enough to include every treatment of contraction and revision as a special case. For example, in the formalization of belief revision in [Morreau, 1992], the revision operation is nondeterministic, i.e. its value for a given belief set K and proposition φ is a set of belief sets rather than a belief set. We will not attempt to formalize nondeterministic contraction or revision. Also, for


Levi, contraction and revision are to be evaluated by means of a measure of informational value, which we do not explicitly formalize. The contraction operation, which we include in every belief change model, is not often used in the presentation of triviality results for the Ramsey test. Exceptions to this pattern include Cross [1990a] and Makinson [1990], who involve contraction explicitly in their respective formulations of triviality results for the Ramsey test.

A catalog of belief change models

We conclude our discussion of comparisons by defining several categories of belief change model that illustrate how the framework defined above can be made to reflect the differing assumptions of a subset of authors who have written on belief revision and the Ramsey test. In associating a name with a class of belief change models we do not claim that the person named defined this class of models; rather, we claim that the belief change models associated with this name are the appropriate counterpart in our framework of models that the named person did define in the context of work on the Ramsey test. Note that postulates on contraction and revision are not part of these definitions.

1. By a Gardenfors belief change model (see, for example, [Gardenfors, 1986], [Gardenfors, 1987], and [Gardenfors, 1988]) we will mean a deductively closed classical belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ defined on a type-L2 language L where I = wffL = K⊥, and dom(s) = K, and s satisfies Identity over K.

2. By a Segerberg belief change model (see [Segerberg, 1989]) we will mean a belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩, defined on any language, such that the following hold: K = T_L,⊢; I = wffL = K⊥; dom(s) = K; s satisfies Identity over K; and Cn⊢ meets the following requirements, for all Γ, Δ ⊆ wffL:

(Reflexivity for ⊢) Γ ⊆ Cn⊢(Γ).
(Monotonicity for ⊢) If Δ ⊆ Γ, then Cn⊢(Δ) ⊆ Cn⊢(Γ).
(Transitivity for ⊢) Cn⊢(Γ) = Cn⊢[Cn⊢(Γ)].

3. By a Makinson belief change model (see [Makinson, 1990]) we will mean a belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L and satisfying the following: K = {Γ : Γ ⊆ wffL and s(Γ) = Γ}; I = wffL = K⊥; ⊢ is classical propositional consequence; dom(s) = P(wffL); and s satisfies Superclassicality, Transitivity, and Reasoning by Cases.22

22 Note that in a Makinson belief change model s satisfies both Reflexivity and Closure. Closure holds since Superclassicality and Transitivity for s imply that for each Γ ⊆ wffL, we have s(Γ) ⊆ Cn⊢[s(Γ)] ⊆ s[s(Γ)] = s(Γ).


4. By a Morreau belief change model (see [Morreau, 1992]) we will mean a deductively closed classical belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: I = wffL0; K⊥ = wffL; dom(s) = K; and s satisfies Identity over K.

5. By a Hansson belief change model (see [Hansson, 1992], section 3) we will mean a classical belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: K ⊆ P(wffL0); I = wffL0 = K⊥; dom(s) = K; and s satisfies Reflexivity, Conservativeness, and Closure.

6. By an Arlo-Costa/Levi belief change model (see [Arlo-Costa, 1990], [Arlo-Costa, 1995], [Arlo-Costa and Levi, 1996], and [Levi, 1996]) we will mean a deductively closed classical belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ defined on a type-L1 language L whose type-L0 fragment is L0 and where the following hold: I = wffL0 = K⊥; dom(s) = K; and s satisfies Reflexivity, Conservativeness, and Closure.

As we have already noted, our belief change models do not capture every feature of every belief revision model appearing in the literature on the Ramsey test, and the models we associate with the names of authors in some cases omit some of the structure that these authors include in their own respective accounts of what constitutes a belief revision model. On the other hand, we have stipulated more detail for the models we associate with certain authors than do the authors themselves. For example, none of Gardenfors, Morreau, or Segerberg uses the notion of a support function s in the sources cited above, and neither Gardenfors, nor Makinson, nor Segerberg restricts the applicability of contraction and revision to a subset I of the set of formulas of the language on which the model is defined.
Finally, as was pointed out earlier, the contraction operation, which we include in every belief change model, is not often discussed in connection with the Ramsey test. In general, the stipulation of extra detail will serve to highlight tacit assumptions and make comparisons easier.

2.4 The Ramsey test for conditionals

Ramsey's original suggestion can be put as follows: if an agent's beliefs entail neither φ nor ¬φ, then the agent's beliefs support φ > ψ iff his or her initial beliefs together with φ entail ψ, i.e.

(RTR) For all K ∈ K, all φ ∈ I such that φ, ¬φ ∉ K, and all ψ ∈ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K+φ.

This suggestion covers only the case in which the epistemic status of φ is undetermined. What about the case in which the agent's initial beliefs


entail φ and the case in which the agent's initial beliefs entail ¬φ? Stalnaker [1968] suggests the following rule for evaluating a conditional in the general case:

First, add the antecedent (hypothetically) to your stock of beliefs; second, make whatever adjustments are required to maintain consistency (without modifying the hypothetical belief in the antecedent); finally, consider whether or not the consequent is true.23

Stalnaker's proposal handles the general case by substituting the operation of revision for that of expansion in Ramsey's original proposal. In our framework Stalnaker's suggestion amounts to the following:

(RT) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K*φ.

Revision postulates (K*3) and (K*4) jointly entail

(K*4s) If ¬φ ∉ K, then K+φ = K*φ.

Hence, if (K*3), (K*4) are assumed, then (RT) agrees with (RTR) in the case where neither φ nor ¬φ belongs to K. That is, if (K*3) and (K*4) hold, then (RT) can be considered an extension of Ramsey's original proposal. In [Gardenfors, 1978] and in later writings Gardenfors adopts Stalnaker's version of the Ramsey test for type-L2 languages and assumes, in addition, the following: every formula of a type-L2 language L is an eligible input for revision and an eligible member of a belief set, i.e. I = wffL = K⊥, and a conditional, like any other formula, is accepted with respect to (supported by) a belief set K iff it belongs to K, i.e. s(K) = K for all K ∈ K. We have already noted that Levi, in contrast to Gardenfors, excludes conditionals as revision inputs and as members of belief sets. Levi's view is that the conditional φ > ψ in a type-L1 language expresses the attitude of an agent for whom ¬ψ is not epistemically possible relative to K*φ, and the negated conditional ¬(φ > ψ) expresses the attitude of an agent for whom ¬ψ is epistemically possible relative to K*φ. Assuming a type-L1 language L with type-L0 fragment L0, and assuming that K ⊆ T_L0,⊢ and I = wffL0 = K⊥, Levi's view amounts in our framework to the conjunction of the following:

(PRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ K such that K ≠ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K*φ.

(NRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ K such that K ≠ K⊥, ¬(φ > ψ) ∈ s(K) iff ψ ∉ K*φ.

23 [Stalnaker, 1968], p. 44. (The page reference is to [Harper et al., 1981], where [Stalnaker, 1968] is reprinted.)
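Stalnaker's recipe can be sketched in a miniature setting of our own (formulas as Python predicates, truth-table consequence over two atoms). The revision operator below is a deliberately crude stand-in, keeping just those prior beliefs individually consistent with the input before closing under consequence; it is not one of the AGM operators, but it lets (RT) be read off directly: φ > ψ is supported by K iff ψ ∈ K*φ.

```python
from itertools import product

ATOMS = ["p", "q"]

def entails(gamma, phi):
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False
    return True

consistent = lambda gamma: not entails(gamma, lambda v: False)

def Cn(gamma, pool):
    return {chi for chi in pool if entails(gamma, chi)}

def revise(K, phi, pool):
    """Crude revision: keep prior beliefs consistent with phi, add phi, close."""
    kept = {chi for chi in K if consistent({chi, phi})}
    return Cn(kept | {phi}, pool)

def ramsey_supports(K, phi, psi, pool):
    """(RT): phi > psi is supported by K iff psi is in K revised by phi."""
    return psi in revise(K, phi, pool)

p = lambda v: v["p"]
not_p = lambda v: not v["p"]
q = lambda v: v["q"]
p_imp_q = lambda v: (not v["p"]) or v["q"]
pool = {p, not_p, q, p_imp_q}

K = Cn({not_p, p_imp_q}, pool)          # believe ¬p and p → q
print(ramsey_supports(K, p, q, pool))   # p > q supported: True
```

The example shows the general case (RT) is built for: the antecedent p contradicts the prior belief ¬p, so the hypothetical revision must retract ¬p while retaining p → q, and the conditional p > q comes out supported.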


Note that in both (PRTL) and (NRTL), unlike in (RT), K is restricted to ⊢-consistent members of K. Note also that the adoption of (RT) (or of (PRTL) without (NRTL)) places no constraints on how negated conditionals are related to belief change. Other versions of the Ramsey test appearing in the literature include the following, due to Hans Rott, who, like Gardenfors, assumes a language L of type L2 and no restrictions on which formulas can appear as members of belief sets or as revision inputs (i.e. I = wffL = K⊥):

(R1) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ K*φ and ψ ∉ K.
(R2) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ K*φ and ψ ∉ K*¬φ.
(R3) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ K iff ψ ∈ (K−ψ)*φ.

Here we follow the labeling in [Gardenfors, 1987]. The interest of (R1)-(R3) stems in part from the fact that whereas (RT) can be used with (K*3) and (K*4) to derive the following thesis (U), none of (R1)-(R3) can be so used:

(U) If φ ∈ K and ψ ∈ K, then φ > ψ ∈ K.

Thesis (U) is related to the strong centering axiom CS of VC, and Rott [1986] suggests that (U) should be rejected. Since none of (R1)-(R3) entails (K*M), one of the assumptions of Gardenfors' 1986 triviality result for the Ramsey test, (R1)-(R3) might seem worth investigating as alternatives to (RT), but Gardenfors [1987] shows that (R1)-(R3) do not avoid the problem faced by (RT). Consider the Weak Ramsey Test:

(WRT) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥ such that φ ∨ ψ ∉ K, φ > ψ ∈ K iff ψ ∈ K*φ.

Each of (R1)-(R3) entails (WRT), and Gardenfors [1987] proves a triviality result that holds for any version of the Ramsey test which entails (WRT), including (R1)-(R3) and (RT).24

2.5 Triviality results for the Ramsey test

The basic result

Many versions of the basic triviality result for the Ramsey test have appeared in the literature, all of them variations on the result proved by Gardenfors [1986]. All proofs of the basic triviality result we know of exploit the same maneuver, however, one which Hansson [1992] makes explicit:

24 See also [Gardenfors, 1988], Chapter 7, Corollary 7.15.


in terms of our framework, the finding of forking support sets within a belief change model.

(DefFORK) A belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ will be said to contain forking support sets iff there exist H, J, K ∈ K such that H = Cn⊢(H) ∩ K⊥, and J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥, and H ∩ I ⊈ J, and J ∩ I ⊈ H, and s(H) ⊆ s(K), and s(J) ⊆ s(K).

For Gardenfors and Segerberg belief change models this condition can be stated in the form in which Hansson originally formulated it:

PROPOSITION 1. A Gardenfors or Segerberg belief change model ⟨K, I, ⊢, K⊥, −, *, s⟩ contains forking support sets iff there exist H, J, K ∈ K, where H, J ⊆ K ≠ K⊥, H ⊈ J, and J ⊈ H.

This proposition follows from the fact that in Gardenfors and Segerberg belief change models (i) s(K) = K = Cn⊢(K) for all K ∈ K, and (ii) I and K⊥ both exhaust the formulas of the language of the model. Next we present the main lemmas for the basic triviality result:

LEMMA 2. If (RT) holds in a belief change model, then so does (K*M).

Proof. Trivial; left to reader.

Postulate (K*M) is a postulate of monotonicity for belief revision. We discuss Gardenfors' argument against (K*M) in Section 2.6 below.

LEMMA 3. No classical belief change model containing forking support sets satisfies (K*2), (K*C), (K*P), and (K*M).

Proof. Assume for reductio that ⟨K, I, ⊢, K⊥, −, *, s⟩ is a classical belief change model that contains forking support sets and satisfies (K*2), (K*C), (K*P), and (K*M). For clarity, we follow the example of [Rott, 1989] in numbering the steps in the reductio argument.

(1) H = Cn⊢(H) ∩ K⊥, J = Cn⊢(J) ∩ K⊥, K = Cn⊢(K) ∩ K⊥ ≠ K⊥, H ∩ I ⊈ J, J ∩ I ⊈ H, and s(H), s(J) ⊆ s(K), for some H, J, K ∈ K   [(DefFORK)]
(2) φ ∈ (H ∩ I) − J, for some φ   [(1)]
(3) ψ ∈ (J ∩ I) − H, for some ψ   [(1)]
(4) ¬(φ ∧ ψ) ∈ I   [(2), (3), (DefBCM)]
(5) ¬¬(φ ∧ ψ) ∉ H   [(3), classicality of ⊢, fact that H = Cn⊢(H) ∩ K⊥]
(6) H ⊆ H*¬(φ∧ψ)   [(4), (5), (K*P)]
(7) φ ∈ H*¬(φ∧ψ)   [(2), (6)]
(8) ¬¬(φ ∧ ψ) ∉ J   [(2), classicality of ⊢, fact that J = Cn⊢(J) ∩ K⊥]
(9) J ⊆ J*¬(φ∧ψ)   [(4), (8), (K*P)]
(10) ψ ∈ J*¬(φ∧ψ)   [(3), (9)]
(11) H*¬(φ∧ψ), J*¬(φ∧ψ) ⊆ K*¬(φ∧ψ)   [(1), (4), (K*M)]
(12) φ, ψ ∈ K*¬(φ∧ψ)   [(7), (10), (11)]
(13) ¬(φ ∧ ψ) ∈ K*¬(φ∧ψ)   [(4), (K*2)]
(14) K*¬(φ∧ψ) is ⊢-inconsistent   [(12), (13), classicality of ⊢]
(15) K is ⊢-consistent   [classicality of ⊢, fact that K = Cn⊢(K) ∩ K⊥ ≠ K⊥]
(16) ⊢ ¬¬(φ ∧ ψ)   [(14), (15), (K*C)]
(17) ⊬ ¬¬(φ ∧ ψ)   [(5), classicality of ⊢, fact that H = Cn⊢(H) ∩ K⊥]

Since (17) contradicts (16), this completes the proof.

Lemmas 2 and 3 suffice to prove the following:

THEOREM 4. No classical belief change model defined on a language of type L1 or type L2 and containing forking support sets satisfies (K∗2), (K∗C), (K∗P), and (RT).

Note that we have not assumed that K is a set of theories either in the language of the model or in the fragment thereof represented by K⊥. We have not even assumed (K∗1): that the sets produced by revision always belong to K. It is however required that the belief sets H, J, and K used in the proof be theories in the fragment of the language represented by K⊥.

Do we have a triviality result? Not yet: we do not yet have a criterion of triviality. The following criteria have appeared in the literature:

1. A belief change model is Gärdenfors nontrivial iff there is a K′ ∈ K and φ, ψ, χ ∈ I such that ¬φ, ¬ψ, ¬χ ∉ Cn⊢(K′) and ⊢ ¬(φ ∧ ψ) and ⊢ ¬(φ ∧ χ) and ⊢ ¬(ψ ∧ χ).

2. A belief change model is Rott nontrivial iff there is a K′ ∈ K and φ, ψ ∈ I such that ⊬ φ and ⊬ ψ and φ ∨ ψ, ¬φ ∨ ψ, φ ∨ ¬ψ, ¬φ ∨ ¬ψ ∉ Cn⊢(K′).

3. A belief change model is Segerberg nontrivial iff there exist φ, ψ, χ ∈ I such that φ ⊬ ψ and ψ ⊬ φ and Cn⊢({φ, ψ, χ}) = K⊥ and Cn⊢({φ}), Cn⊢({ψ}), Cn⊢({φ, ψ}) ∈ K.
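For the two-atom fragment of a classical language, criterion 1 can be checked mechanically. The following sketch (the encoding and names are ours, for illustration only) verifies that φ = A ∧ B, ψ = A ∧ ¬B, and χ = ¬A ∧ B witness Gärdenfors nontriviality relative to K′ = Cn⊢(∅):

```python
# Formulas are represented by their sets of models over valuations of the
# atoms A and B; this finite encoding is ours, chosen for illustration.
from itertools import product

WORLDS = set(product([False, True], repeat=2))  # valuations (A, B)

def models(f):
    """Set of worlds satisfying a formula given as a predicate on (A, B)."""
    return {w for w in WORLDS if f(*w)}

# Three pairwise inconsistent, individually non-refutable formulas:
phi = models(lambda a, b: a and b)          # A ∧ B
psi = models(lambda a, b: a and not b)      # A ∧ ¬B
chi = models(lambda a, b: (not a) and b)    # ¬A ∧ B

# K' = Cn(∅): the belief set of tautologies, whose models are all worlds.
k_prime = WORLDS

def entails(premise_models, conclusion_models):
    return premise_models <= conclusion_models

# ¬φ, ¬ψ, ¬χ ∉ Cn(K'): K' entails none of the three negations.
for f in (phi, psi, chi):
    assert not entails(k_prime, WORLDS - f)

# ⊢ ¬(φ∧ψ), ⊢ ¬(φ∧χ), ⊢ ¬(ψ∧χ): the formulas are pairwise inconsistent.
assert phi & psi == set() and phi & chi == set() and psi & chi == set()
print("Gärdenfors nontriviality witness checked")
```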

Recall that a support function s is monotone over K iff s(H) ⊆ s(K) for all H, K ∈ K such that H ⊆ K. A support function can be monotone

CONDITIONAL LOGIC


over K even if it is a nonmonotonic consequence operation provided that K does not exhaust dom(s). For example, s is monotone over K (but not necessarily over dom(s)) in all Makinson belief change models, since in a Makinson belief change model s(K) = K for all K ∈ K. Recall that the operation of expansion is defined by (Def+); it turns out that if (K+1) and the monotonicity of s over K are assumed, then nontriviality by any of the above criteria will imply the existence of forking support sets:

LEMMA 5. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over K and it is Gärdenfors nontrivial.^25

Proof. Suppose that the model is Gärdenfors nontrivial; we will show that it contains forking support sets. Let K′, φ, ψ, and χ be as in the definition of Gärdenfors nontriviality; also, let H = K′+(φ∨ψ); let J = K′+(φ∨χ); and let K = K′+φ. Then by (Def+) and the classicality of ⊢, H = Cn⊢(H) ∩ K⊥, J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥. (Def+) and the classicality of ⊢ also imply that H, J ⊆ K, hence by the monotonicity of s we have that s(H), s(J) ⊆ s(K). H ∩ I ⊈ J holds because φ ∨ ψ ∈ (H ∩ I) \ J; J ∩ I ⊈ H holds because φ ∨ χ ∈ (J ∩ I) \ H.

LEMMA 6. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over K and it is Rott nontrivial.^26

Proof. Like the proof of Lemma 5, but let H = K′+(φ∨ψ); let J = K′+(φ∨¬ψ); let K = K′+φ, where K′, φ, and ψ are as in the definition of Rott nontriviality. H ∩ I ⊈ J holds because φ ∨ ψ ∈ (H ∩ I) \ J; J ∩ I ⊈ H holds because φ ∨ ¬ψ ∈ (J ∩ I) \ H.

LEMMA 7. A classical belief change model defined on a language of type L1 or L2 contains forking support sets if it satisfies (K+1) and its support function is monotone over K and it is Segerberg nontrivial.

Proof. Like the proof of Lemma 5, but let H = Cn⊢({φ}), J = Cn⊢({ψ}), K = Cn⊢({φ, ψ}), where φ, ψ, and χ are as in the definition of Segerberg nontriviality.

Theorem 4 and Lemmas 5, 6, and 7 immediately imply Theorem 8, the basic triviality result for the Ramsey test:

THEOREM 8. No classical belief change model defined on a language of type L1 or L2 that satisfies (K+1), (K∗2), (K∗C), (K∗P), and (RT) and

25. See [Gärdenfors, 1986].
26. See [Rott, 1989].



whose support function is monotonic over K is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

The basic result of [Gärdenfors, 1986] can be derived by applying Theorem 8 to Gärdenfors belief change models. Gärdenfors [1987; 1988] notes that (K∗P) and (K∗2) can be replaced by (K∗4) in the triviality result he proves there, and this same replacement can be made in Theorem 8, with a corresponding change in Lemma 3 and its proof.^27 As was mentioned in Section 2.3 above, Segerberg has proved a version of the Gärdenfors result in which the constraints on ⊢ are limited to Reflexivity, Transitivity, and Monotonicity. The counterpart in our framework of Segerberg's result is the following:

THEOREM 9. No Segerberg nontrivial Segerberg belief change model satisfies (K∗M), (K∗4s), and (K∗5w).^28

If contraction and revision are assumed to be related by the Levi Identity (LI) in a deductively closed belief change model, then triviality results for the Ramsey test can be formulated in terms of contraction rather than in terms of revision. In particular, we have the following as a corollary of Theorem 8:^29

THEOREM 10. No deductively closed classical belief change model defined on a language of type L1 or L2 that satisfies (K+1), (K−3), (K−4w), (LI), and (RT) and whose support function is monotonic over K is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

Proof. It suffices to note that where ⊢ is classical, we have the following: (Def+) and (LI) jointly imply (K∗2); (LI) and (K−4w) jointly imply (K∗C); (Def+), (K−3), and (LI) jointly imply (K∗P).

Makinson [1990] proves a variant of Theorem 10 for type-L1 languages that replaces (K−4w) and weakens both (RT) and (LI) while making stronger assumptions about s than merely that it is monotone over K:

THEOREM 11. Let ⟨K, I, ⊢, K⊥, ∗, −, s⟩ be a Makinson belief change model defined on a language L of type L1. Define postulates (RTM), (K−4c), and (MI) as follows:

(RTM) For all φ, ψ ∈ wffL0, where L0 is the type-L0 fragment of L, φ > ψ ∈ s(K) iff ψ ∈ K∗φ.

27. Steps (6), (9), and (13) must be differently justified.
28. See [Segerberg, 1989]. Segerberg's version of the Gärdenfors triviality result makes no assumption about which operators are available in the language, hence χ is used in the definition of Segerberg nontriviality to play the role that ¬(φ ∧ ψ) plays in the proof of Lemma 3. Also, Segerberg's result does not assume that the language contains both ⊤ and ⊥, which we assume here in (DefBCM).
29. A similar result is proved in [Cross, 1990a].



(K−4c) If φ ∈ s(K−φ), then φ ∈ s(∅).

(MI) K−¬φ ⊆ K∗φ ⊆ s(K−¬φ ∪ {φ}).

Then we have the following: (1) Limiting Case. If (RTM), (K−4c), and (MI) hold for K = K⊥, then the model is trivial in the sense that s(∅) = K⊥. (2) Principal Case. If (RTM), (K−3), (K−4c), and (MI) hold for all K ∈ K such that K ≠ K⊥, then the model is trivial in the sense that there are no conditional-free formulas φ and ψ of L such that φ ∧ ψ ∉ s(∅) and φ ∉ s({ψ}) and ψ ∉ s({φ}) and s[s({φ}) ∪ s({ψ})] ≠ K⊥.

The Limiting Case generalizes Theorem 12 discussed below. It is the Principal Case that more closely corresponds to Theorem 10. (K−4c) neither entails nor is entailed by (K−4), its AGM counterpart, but (MI), which we will refer to as Makinson's Inequality, is the result of weakening (LI), the Levi Identity, to say that a revision of K to include φ must lie "between" K−¬φ and s(K−¬φ ∪ {φ}). Since, as we have seen, (LI), (Def+), and (K−3) entail (K∗P), one might expect that replacing (LI) with (MI) would leave (K∗P) unsupported, but this is not the case: (MI), (Def+), and (K−3) already entail (K∗P). Making up for the fact that (MI) is weaker than (LI) are Makinson's strengthened assumptions about s: that it satisfies Superclassicality, Transitivity, and Reasoning by Cases. Makinson [1990] points out that these conditions are known not to imply that s is monotone over its entire domain (P(wffL)), but since contraction and revision in a Makinson belief change model are defined only on K such that s(K) = K, s is nevertheless monotone "where it counts", namely over the set K of belief sets on which contraction and revision are defined.

Several authors (e.g. Grahne [1991], Hansson [1992], and Morreau [1992]) have concluded from Makinson's result that nonmonotonic consequence does not provide a way out of the Gärdenfors triviality result. In fact, adopting a nonmonotonic consequence operation does provide a way out, provided that this consequence relation plays the role of a support function s that is nonmonotonic over the belief sets to which contraction and revision are applied. Indeed, it is by adopting such support functions that Hansson, Arló-Costa, and Levi are able to make the Ramsey test nontrivial, though these authors do not describe the support function as a consequence operation. (See also Section 2.6 below.)

Theorem 8 and its variants pose a dilemma: which of an inconsistent set of constraints on belief change models should be rejected?
We return to this later in Section 2.6 below.
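The claim above that (MI), (Def+), and (K−3) already entail (K∗P) takes only a line or two to verify. A sketch, reading (K∗P) as in Lemma 3 (if ¬φ ∉ K then K ⊆ K∗φ), and using only (K−3) and the left half of (MI):

```latex
\begin{align*}
&\text{Assume } \neg\varphi\notin K.\\
&\text{By (K}^{-}\text{3) (vacuity): } K^{-}_{\neg\varphi}=K.\\
&\text{By the left inequality of (MI): } K^{-}_{\neg\varphi}\subseteq K^{*}_{\varphi}.\\
&\text{Hence } K=K^{-}_{\neg\varphi}\subseteq K^{*}_{\varphi},
  \text{ which is (K}^{*}\text{P)}.
\end{align*}
```

On this reading (Def+) is not needed; it enters only if (K∗P) is instead stated as requiring K+φ ⊆ K∗φ.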



The problem with (K∗5w)

In the version of Theorem 8 that Gärdenfors proves in [Gärdenfors, 1988], postulate (K∗C) is replaced by the stronger (K∗5w), but Arló-Costa [1990] proves that (K∗5w) faces problems that have nothing to do with (K∗P). Arló-Costa's result, which is not so much a triviality result as an impossibility result, can be formulated as follows in our framework:

THEOREM 12. There is no Gärdenfors belief change model defined on a language of type L1 or L2 in which ⊬ ⊥, (K∗5w), and (RT) hold.

Whereas Gärdenfors' results against the Ramsey test exploit the fact that (RT) entails (K∗M), Arló-Costa's result exploits the fact that (RT) entails the following, which Arló-Costa calls "Unsuccess":

(US) If K = K⊥, then K∗φ = K⊥.

In other words, the Ramsey test requires revision into inconsistency if the initial belief state is already inconsistent, regardless whether the revision input is a consistent proposition. Contrary to this, (K∗5w) prohibits revision into inconsistency when the revision input is a consistent proposition, regardless whether the initial belief state is consistent. The labeling of (US) as a postulate of "unsuccess" is appropriate since (K∗5w), which (US) contradicts, follows from (Def+), (LI), and the Postulate of Success for contraction (K−4). Arló-Costa's result can be strengthened to include belief change models in which s(K) = K does not always hold:

THEOREM 13. There is no deductively closed classical belief change model defined on a language of type L1 or L2 whose support function satisfies Reflexivity and Closure and in which ⊬ ⊥, (K∗5w), and (RT) hold.

Proof. Let a deductively closed classical belief change model on a language of type L1 or L2 be given and suppose for reductio that ⊬ ⊥, that s satisfies Reflexivity and Closure, and that the model satisfies (K∗5w) and (RT). First we prove (US). Let K = K⊥; then we have

K ⊆ s(K) = Cn⊢[s(K)]   [Reflexivity of s; Closure of s]

Since ⊥ ∈ K⊥ = K we have s(K) = wffL, by the classicality of ⊢. Next, let ψ ∈ K⊥, and let φ ∈ I. Since s(K) = wffL, we have φ > ψ ∈ s(K). Hence by (RT) we have ψ ∈ K∗φ. Thus K⊥ ⊆ K∗φ; the converse inclusion holds by (DefBCM), so K∗φ = K⊥, as required to show (US). By (DefBCM) K⊥ ∈ K and ¬⊥ ∈ I; by (US) we have (K⊥)∗¬⊥ = K⊥. By hypothesis ⊬ ⊥, hence by the classicality of ⊢ we have not only (K⊥)∗¬⊥ = K⊥ but also ⊬ ¬¬⊥, which contradicts (K∗5w).



The Limiting Case of Theorem 11, like Theorem 13, is a strengthening of Theorem 12. The problem posed by Theorems 12 and 13 can be solved by retreating from (K∗5w) to something weaker, such as the postulate (K∗C) mentioned in Theorem 8, or by restricting the applicability of the Ramsey test, as Arló-Costa and Levi both do by adopting (PRTL) (see Section 2.4 above), which eliminates K⊥ from the domain of belief sets to which the Ramsey test can be applied.

The negative Ramsey test

Levi has argued (see, for example, [Levi, 1988] and [Levi, 1996]) that a negated conditional ¬(φ > ψ) expresses the propositional attitude of an agent for whom ¬ψ is a serious (i.e. epistemic) possibility relative to K∗φ. Abstracting from Levi's requirements on what is allowed to be a revision input, the result is this thesis, the negative Ramsey test:

(NRTL) For all φ ∈ I and all ψ ∈ K⊥ and all K ∈ K such that K ≠ K⊥, ¬(φ > ψ) ∈ s(K) iff ψ ∉ K∗φ.

Rott [1989] takes the view that adopting both the negative Ramsey test and the Ramsey test amounts to an assumption of autoepistemic omniscience. Given the view of Gärdenfors, Rott, and others that conditionals and negated conditionals belong in belief sets along with other beliefs (so that s satisfies Identity over K), the conjunction of (RT) and (NRTL) does amount to a kind of epistemic omniscience. That is, if s satisfies Identity over K, then "closing" each belief set under (RT) and (NRTL) amounts to an idealization that parallels the idealization represented by "closing" each belief set under ⊢. On Levi's view, conditionals do not express propositions and so are not objects of belief, thus on Levi's view the positive and negative Ramsey tests cannot be said to represent an idealization concerning what beliefs an agent holds.
For Levi, what the positive and negative Ramsey tests represent is not a pair of closure conditions on the unary propositional attitude of belief but rather a definition of a binary propositional attitude toward the antecedent and consequent of a conditional that an agent is said to `accept'. Regardless how the issue of autoepistemic omniscience is resolved, the adoption of (NRTL) has consequences. Gärdenfors, Lindström, Morreau, and Rabinowicz [1991] prove what they consider to be a triviality result for (NRTL) with assumptions weaker than those needed for Gärdenfors' 1986 triviality result for (RT); in particular, (K∗P) is not needed. In our framework their result is equivalent to the following:

THEOREM 14. If ⟨K, I, ⊢, K⊥, ∗, −, s⟩ is a belief change model defined on a language of type L1 or L2 for which both (NRTL) and (K∗T) hold and for which s satisfies Identity over K, then there are no K, K′ ∈ K such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′.



The latter can be derived as a corollary of the following stronger result:

THEOREM 15. If ⟨K, I, ⊢, K⊥, ∗, −, s⟩ is a belief change model defined on a language of type L1 or L2 for which both (NRTL) and (K∗T) hold and for which s is monotone over K, then there are no K, K′ ∈ K such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′.

Proof. Suppose that ⟨K, I, ⊢, K⊥, ∗, −, s⟩ is a belief change model defined on a language of type L1 or L2 such that s is monotone over K. First we prove that (NRTL) implies (K∗IM): assume (NRTL) and suppose that K, K′ ∈ K and φ ∈ I and K ≠ K⊥ ≠ K′ and s(K) ⊆ s(K′), and let ψ ∉ K∗φ. Then by (NRTL) ¬(φ > ψ) ∈ s(K), hence ¬(φ > ψ) ∈ s(K′). By (NRTL) it follows that ψ ∉ K′∗φ, as required to establish (K∗IM). Now suppose for reductio that ⟨K, I, ⊢, K⊥, ∗, −, s⟩ satisfies both (NRTL) and (K∗T), that s is monotone over K, and there are K, K′ ∈ K such that K ≠ K⊥ ≠ K′ and K′ ≠ K ⊆ K′. Since K ⊆ K′ we have s(K) ⊆ s(K′) by the monotonicity of s. By (DefBCM) we know that ⊤ ∈ I, so we have K′∗⊤ ⊆ K∗⊤ by (K∗IM). But K∗⊤ = K and K′∗⊤ = K′ by (K∗T), hence K′ ⊆ K, which contradicts K′ ≠ K ⊆ K′, completing the reductio.

Note that neither theorem assumes that all members of K must be deductively closed, nor does either result include any assumption about ⊢. In [Gärdenfors et al., 1991] Theorem 14 is presented as a triviality result because the authors maintain that a model of belief change is trivial if it contains no consistent, conditional-laden belief sets K, K′ such that K is a proper subset of K′. As Rott [1989], Morreau [1992], and Hansson [1992] point out, however, it is a substantive (and, they argue, mistaken) assumption to hold that principles of belief revision that are justified in the context of conditional-free belief sets (e.g. the closure of K under expansions) can be carried over without modification to conditional-laden belief sets. More on this in Section 2.6 below. One might therefore respond to Theorem 14 by questioning the criterion of triviality: perhaps a model of belief change whose belief sets are conditional-laden should not be classified as trivial simply because it contains no consistent K, K′ such that K is a proper subset of K′, even though a belief change model with conditional-free belief sets would be trivial in that case.
But if the criterion of triviality espoused by Gärdenfors et al. [1991] is appropriate for conditional-free belief sets, then what about Theorem 15, which does cover belief change models with conditional-free belief sets? Our discussion in Section 2.6 below may appear to suggest that the problem raised by Theorems 14 and 15 might ultimately



be solved by giving up the monotonicity of s over K, but even that is not guaranteed to be enough. Consider this result:^30

THEOREM 16. No classical belief change model defined on a language of type L1 or type L2 that satisfies (K∗T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure, is Gärdenfors nontrivial or Rott nontrivial or Segerberg nontrivial.

Proof. By Lemmas 5, 6, and 7, it suffices to show that no classical belief change model defined on a language of type L1 or type L2 that satisfies (K∗T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure, contains forking support sets. Consider a classical belief change model ⟨K, I, ⊢, K⊥, ∗, −, s⟩ defined on a language of type L1 or type L2 that satisfies (K∗T), (PRTL), and (NRTL), and whose support function satisfies Conservativeness and Closure. Note first that since s satisfies Conservativeness and Closure, s must also satisfy Consistency. Suppose the model contains forking support sets. Then there exist H, J, K ∈ K such that H = Cn⊢(H) ∩ K⊥, and J = Cn⊢(J) ∩ K⊥, and K = Cn⊢(K) ∩ K⊥ ≠ K⊥, and H ∩ I ⊈ J, and J ∩ I ⊈ H, and s(H) ⊆ s(K), and s(J) ⊆ s(K). Since J ∩ I ⊈ H we have ψ ∈ J but ψ ∉ H for some conditional-free ψ. By (K∗T) we have H = H∗⊤ and J = J∗⊤ and K = K∗⊤, hence ψ ∈ J∗⊤ and ψ ∉ H∗⊤. By (PRTL) we have ⊤ > ψ ∈ s(J), and by (NRTL) we have ¬(⊤ > ψ) ∈ s(H). We also have ⊤ > ψ, ¬(⊤ > ψ) ∈ s(K), since s(J) ⊆ s(K) and s(H) ⊆ s(K); hence s(K) is not ⊢-consistent. This contradicts the ⊢-consistency of K, since s satisfies Consistency.

As Theorem 16 shows, (NRTL), (K∗T), and (PRTL) cannot be nontrivially combined, even in a broad category of models where s is not monotone over K, unless we abandon the Rott, Gärdenfors, and Segerberg criteria of nontriviality.

2.6 Resolving the conflict

On giving up (RT)

Gärdenfors interprets Theorem 8 as forcing a choice between (K∗P) and the Ramsey test (RT), and he has argued (see, e.g., [Gärdenfors, 1986], pp. 86-87 and [Gärdenfors, 1988], p. 59 and p. 159) that (K∗M) and with it (RT) should be rejected. In this connection he offers the following example:

Let us assume that Miss Julie, in her present state of belief K, believes that her own blood group is O and that Johan is her

30. The authors thank Horacio Arló-Costa for showing us the proof of this result in correspondence.



father, but she does not know anything about Johan's blood group. Let A be the proposition that Johan's blood group is AB and C the proposition that Johan is Miss Julie's father. If she were to revise her beliefs by adding the proposition A, she would still believe that C, that is, C ∈ K∗A. But in fact she now learns that a person with blood group AB can never have a child with blood group O. This information, which entails C → ¬A, is consistent with her present state of belief K, and thus her new state of belief, call it K′, is an expansion of K. If she then revises K′ by adding the information that Johan's blood group is AB, she will no longer believe that Johan is her father, that is, C ∉ K′∗A. Thus (K∗M) is violated. ([Gärdenfors, 1986], pp. 86-87)

The example assumes that s satisfies Identity over K, so let us assume that as well. In reply to Gärdenfors one might say that if (RT) and Identity over K are assumed, then the presence of conditionals in belief sets prevents this example from being a counterexample to (K∗M): if (RT) and Identity over K are assumed, then since we have C ∈ K∗A and C ∉ K′∗A it follows that A > C ∈ K and A > C ∉ K′, in which case K ⊈ K′, i.e. K′ is not an expansion of K. But to accept this, Gärdenfors argues, would violate certain intuitions:

[I]f we assume (RT) and not only (K∗M), then Miss Julie would have believed A > C in K. But then the information that a person with blood group AB can never have a child with blood group O, would contradict her beliefs in K, which violates our intuitions that this information is indeed consistent with her beliefs in K. ([Gärdenfors, 1986], p. 87)

Let B stand for the statement that a person with blood group AB can never have a child with blood group O. Gärdenfors has claimed in a context where s satisfies Identity over K that if (RT) holds, then Miss Julie's beliefs in K contradict B, but this claim requires further justification: how exactly does K contradict B?
We might suppose that B entails L(C → ¬A), where L is an alethic nomological necessity operator expressing the modal force of B. Assuming (RT) and Identity over K, the question whether K contradicts B depends on whether the set {C, B, A > C} is consistent, and this can be assumed to depend on whether the set {C, L(C → ¬A), A > C} is consistent. But the latter set is consistent if the semantics for > is not tied to nomological necessity. For example, given a selection function semantics for > and an accessibility relation semantics for L, all of C, A > C, and L(C → ¬A) can be true at possible world w if none of the A-worlds selected relative to w happen to be nomologically accessible at w.^31 So in

31. For a less abstract version of essentially this point, see [Cross, 1990a], pp. 229-232.



order to sustain Gärdenfors' claim in the passage cited above, the claim that K contradicts B if (RT) holds (and if s satisfies Identity), we would have to assume the right sort of semantic connection between Ramsey test conditionals and the nomological modality in B, but the case for assuming that connection is not at all obvious: Ramsey test conditionals, after all, are epistemic. And if K indeed does not contradict B, then the same conditional that prevents the example from being a counterexample to (K∗M) makes the example a counterexample to (K∗P): since K′ = K∗B, and since C ∈ K∗A but C ∉ K′∗A, it follows that if (RT) holds, then A > C ∈ K ⊈ K∗B ∌ A > C even though ¬B ∉ K. So it might be argued that the presence of Ramsey test conditionals in belief sets will render (K∗M) intuitively innocuous while providing perfectly reasonable counterexamples to (K∗P).

Still, the arguments in favor of (K∗P) seem strong. One argument appeals to the Bayesian model of rationality. Suppose that an agent's belief state is represented as a probability function P. According to Bayesian doctrine, upon becoming certain of φ, a rational agent in belief state P revises her belief state by conditionalizing on φ, assuming P(φ) > 0. If this doctrine is correct and if an agent's belief set consists of those statements to which she assigns unit probability, then (K∗P) reduces to a theorem of probability theory: if P(¬φ) ≠ 1 and P(ψ) = 1, then P(ψ|φ) = 1. A second argument appeals to the doctrine that revision can be defined in terms of contraction and expansion via the Levi Identity (LI), which prescribes the following: to revise with φ, first contract relative to ¬φ and then expand with φ.
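The probabilistic fact cited in the first argument, that conditionalization preserves certainties whenever the input is not certainly false, is easy to confirm numerically; the toy distribution below is ours, chosen only for illustration:

```python
# The probabilistic counterpart of (K*P): if P(¬φ) ≠ 1 and P(ψ) = 1,
# then P(ψ | φ) = 1.  Worlds and probabilities here are an arbitrary
# example (powers of two, so floating-point sums stay exact).
WORLDS = {"w1", "w2", "w3", "w4"}
P = {"w1": 0.25, "w2": 0.25, "w3": 0.5, "w4": 0.0}

phi = {"w1", "w2"}            # revision input: P(φ) = 0.5, so P(¬φ) ≠ 1
psi = {"w1", "w2", "w3"}      # a current certainty: P(ψ) = 1

def prob(event):
    return sum(P[w] for w in event)

def conditionalize(event, given):
    # Bayesian revision: P(event | given), defined when P(given) > 0
    return prob(event & given) / prob(given)

assert prob(psi) == 1.0 and prob(WORLDS - phi) != 1.0
assert conditionalize(psi, phi) == 1.0   # ψ survives revision by φ
print("certainties are preserved under conditionalization")
```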
If ⊢ is classical and if (LI), (Def+), and (K−3) hold, then (K∗P) follows, the role of (K−3) being to require any contraction of K to be vacuous if the proposition contracted does not belong to K: if one does not believe a given proposition, then no prior belief need be discarded when one contracts one's beliefs to exclude that proposition; it is already excluded. As Theorem 10 shows, we can incorporate the second of these arguments for (K∗P) directly into the triviality result by recasting Theorem 8 in terms of an inconsistency between (RT), (LI), and postulates (K+1), (K−3), and (K−4w). (K−3) deals with what is in some sense the degenerate case of contraction: contraction with respect to an absent proposition. Postulate (K−4w) is similarly weak: it requires only that a contraction should really be a contraction in any case where a logically contingent proposition is contracted from a logically consistent belief set. Both postulates are very weak constraints on contraction, and their weakness makes the case against (RT) seem strong, as long as we assume that s satisfies Identity over K or at least Monotonicity over K.

Is there a weakened version of (RT) that is compatible with the other postulates mentioned in Theorem 8? Lindström and Rabinowicz [1992] show that there is. They suggest replacing (RT) with a condition whose counterpart in our framework is the following:



(SRT) For all K ∈ K and all φ ∈ I and all ψ ∈ K⊥, φ > ψ ∈ s(K) iff ψ ∈ K′∗φ for all K′ ∈ K such that K ⊆ K′.

Like Gärdenfors, Lindström and Rabinowicz do not distinguish between K and s(K), and in a context where this distinction is not made (i.e. where s satisfies Identity over K), replacing (RT) with (SRT) has the effect of excluding from belief sets many of the conditionals that must be present in them if (RT) and Identity over K are assumed. For example, in Gärdenfors' Miss Julie case, (RT) and Identity over K force the conclusion that A > C ∈ K and A > C ∉ K′, since C ∈ K∗A and C ∉ K′∗A, giving us a counterexample to (K∗P) since ¬B ∉ K and K′ = K∗B. If (SRT) and Identity over K are assumed instead, then the falsity of (K∗P) no longer follows. The question whether A > C belongs to K depends not simply on K∗A but on the revision behaviour of all belief sets that include K as a subset, and similarly for the question whether A > C belongs to K′.

On giving up (K∗P)

Should the Ramsey test be preserved at the expense of (K∗P)? The answer is certainly yes if the Ramsey test is applied to the notion of theory change to which Katsuno and Mendelzon in [Katsuno and Mendelzon, 1992] attach the label update. They write:

... [U]pdate consists of bringing the knowledge base up to date when the world described by it changes. For example, most database updates are of this variety, e.g. "increase Joe's salary by 5%". Another example is the incorporation into the knowledge base of changes caused in the world by the actions of a robot.^32

Update, according to Katsuno and Mendelzon, contrasts with revision:^33

... [R]evision, is used when we are obtaining new information about a static world. For example, we may be trying to diagnose a faulty circuit and want to incorporate into the knowledge base the results of successive tests, where newer results may contradict old ones. We claim the AGM postulates describe only revision.^34

Katsuno and Mendelzon represent knowledge bases as formulas and introduce a binary modal connective to represent the update operation. Following [Grahne, 1991] we will use the symbol `◦' for this operation; then the

32. [Katsuno and Mendelzon, 1992], p. 183.
33. See also [Winslett, 1990].
34. Ibid.



formula φ ◦ ψ is the knowledge base that results from updating knowledge base φ with new information ψ. Grahne [1991] provides an interpretation of the Ramsey test in terms of update. Given a type-L2 language that includes the binary connective `◦' Grahne simply adds to Lewis' system VCU the following (validity preserving) rule of inference:

RR: From χ → (φ > ψ) infer (χ ◦ φ) → ψ, and from (χ ◦ φ) → ψ infer χ → (φ > ψ).

Grahne calls the resulting logical system VCU2. In Grahne's framework, the formula φ ◦ ψ is true in possible world w iff w belongs to the set of closest worlds to w′ in which ψ is true, for at least one world w′ in which φ is true. Grahne proves soundness, completeness, decidability, and nontriviality results for VCU2, and he notes that VCU2 fails to satisfy the following principle:

(U4s) If ⊬ ¬(φ ∧ ψ), then ⊢ (φ ◦ ψ) ↔ (φ ∧ ψ).

(U4s) states that if ψ is consistent with knowledge base φ, then the result of updating φ with ψ is a formula logically equivalent to φ ∧ ψ. Grahne cites the following example to illustrate the failure of (U4s), which is the update counterpart of revision postulate (K∗4s):

A room has two objects in it, a book and a magazine. Suppose p1 means that the book is on the floor, and p2 means that the magazine is on the floor. Let the knowledge base be (p1 ∨ p2) ∧ ¬(p1 ∧ p2), i.e. either the book or the magazine is on the floor, but not both. Now we order a robot to put the book on the floor, that is, our new piece of knowledge is p1. If this change is taken as a revision [so that (K∗4s) is assumed], then we find that since the knowledge base is consistent with p1, our new knowledge base will be equivalent to p1 ∧ ¬p2, i.e. the book is on the floor and the magazine is not. But the above change is inadequate. After the robot moves the book to the floor, all we know is that the book is on the floor; why should we conclude that the magazine is not on the floor?^35

That is, upon updating to include p1, we should give up something we believed in our initial epistemic state, namely ¬(p1 ∧ p2), even though the new information p1 is consistent with our initial epistemic state. Apparently, then, we have made a belief change using a method that does not satisfy an appropriate counterpart of (K∗P). Isaac Levi disagrees. The mechanism which underlies update is imaging: the "image" of a set S of possible worlds

35. [Grahne, 1991], pp. 274-275.



under φ is the set of worlds each of which is one of the closest φ-worlds to some world belonging to S. Levi [Levi, 1996] argues that while imaging may be useful for describing how changes over time in the state of a system (such as the room in Grahne's example) are regulated, such changes are not an example of belief change. We may, of course, have beliefs about how changes in a system over time are regulated, but an analysis of Grahne's example along the lines recommended by Levi would show it to be a straightforward case in which belief revision took place via expansion: if t is a time before the book was moved and t′ is a time just after the book is moved, and if propositional variables p1 and p2 are replaced by formulas containing predicates P1 and P2, where Pᵢu means that pᵢ is true at time u, then our initial epistemic state can be represented as (P1t ∨ P2t) ∧ ¬(P1t ∧ P2t), and upon learning of the change in the position of the book our new epistemic state is (P1t ∨ P2t) ∧ ¬(P1t ∧ P2t) ∧ P1t′.

On giving up (K+1)

Gärdenfors interprets Theorem 8 as forcing a choice between (RT) and (K∗P), but Rott [1989], Morreau [1992], and Hansson [1992] have argued that (K+1) is the real culprit. Postulates (K∗3) and (K∗4) entail (K∗4s): if φ is consistent with K, then a revision to accept φ should be the result of expanding K with φ. Rott [1989] argues that (K∗4s), while fine for belief revision in a type-L0 language, is an inappropriate requirement on belief revision in a language with Ramsey test conditionals. Once (K∗4s) is rejected in the context of Ramsey test conditionals, Rott argues, (K+1) is robbed of any intuitive basis: the only reason for thinking that belief change models should be closed under expansion would be the assumption that expansion is a species of revision. Why think that expansion is a species of revision in the first place? One could justify (K∗4s) as the qualitative analog of the Bayesian doctrine that upon becoming certain of φ, a rational agent whose belief state is represented by probability function P revises her belief state by conditionalizing on φ if P(φ) > 0. This doctrine supports (K∗4s) because if P(φ) > 0, then the set {ψ : P(ψ|φ) = 1} is precisely the result of expanding the set {ψ : P(ψ) = 1} with φ. But, Morreau [1992] counters, Bayesian doctrine supports (K∗4s) in this way only for belief sets over a type-L0 language. Still, one might argue, regardless whether revision ever leads from a belief set to one of its expansions, should not the expansion of every belief set in a belief change model be available in the model as a possible starting point for revision? Not so, argues Morreau [1992]: a belief change model over which (RT) holds and in which belief sets contain conditionals incorporates the idealizing assumption that the conditionals an agent believes form a complete and correct record of how the agent would revise his or her beliefs.

CONDITIONAL LOGIC

73

Not just any collection of theories in a conditional language can be the belief sets of a Ramsey test respecting belief change model, because not just any theory will conform to the idealization. Morreau interprets the Gärdenfors triviality result as showing in particular that the idealization required by the Ramsey test cannot be achieved in a nontrivial belief change model that respects (K+1) while incorporating conditionals in belief sets. But are there in fact nontrivial belief change models containing conditional-laden belief sets in which (RT) holds but (K+1) does not? Morreau's Example 6 ([Morreau, 1992], p. 41), which we adapt to our framework, confirms that there are. Let L be a type-L1 language and let L0 be its type-L0 fragment. Assume that L0 contains at least two distinct atomic formulas. Let ⊢₀ be truth-functional consequence, assume (Def+), and let 𝒦₀ be the set of all ⊢₀-theories in L0. Define a belief revision operation * as follows for all Γ ∈ 𝒦₀ and all formulas φ of L0: Γ*φ = Γ+φ if ¬φ ∉ Γ, and Γ*φ = Cn⊢₀({φ}) otherwise; and for each Γ ∈ 𝒦₀ let KΓ be the closure under truth-functional consequence of Γ ∪ {φ > ψ : ψ ∈ Γ*φ}. Let 𝒦 = {KΓ : Γ ∈ 𝒦₀}; let (KΓ)*φ = KΓ*φ; let I = wffL0 (as before); let K⊥ = wffL; let dom(s) = 𝒦; and let s(KΓ) = KΓ for all KΓ ∈ 𝒦. Letting contraction (−) again be arbitrary, ⟨𝒦, I, ⊢₀, K⊥, *, −, s⟩ satisfies (K*2), (K*C), and (RT), but this model, unlike the first, does not satisfy (K+1) or (K*P).36 For example, let A, B be distinct atomic formulas of L0, and let Γ₀ = Cn⊢₀({B}); thus Γ₀ ∈ 𝒦₀. Since ¬A ∉ Γ₀, we have that Γ₀*A = Γ₀+A = Cn⊢₀({A, B}). Accordingly, A > B ∈ KΓ₀ ∈ 𝒦, but note that (KΓ₀)+¬A does not belong to 𝒦, for there is no Γ ∈ 𝒦₀ such that both ¬A and A > B belong to KΓ. The second belief change model constructed above is the result of closing the first model under the Ramsey test (restricted to non-nested conditionals), and both models are Gärdenfors nontrivial. The Gärdenfors nontriviality of the second model is established by KCn⊢₀(∅), which belongs to 𝒦, and A ∧ ¬B, B ∧ ¬A, and A ∧ B, which belong to L.
In addition to this example, Morreau provides a general recipe for constructing nontrivial models of belief revision in a type-L1 language L whose type-L0 fragment is L0 and where (K*1), (K*2), (K*C), (RT), and (K*P_I) hold and I = wffL0.


36 The model does satisfy a weakened version of (K*P), however, as Morreau points out: (K*P_I) For all φ ∈ I, if ¬φ ∉ K then K ∩ I ⊆ K*φ ∩ I.

74

DONALD NUTE AND CHARLES B. CROSS

Where does the triviality proof break down when applied to Morreau's example? Hansson [1992] proves a theorem that provides the answer: Morreau's example is one of a set of belief change models in which forking support sets cannot be constructed. The counterpart of Hansson's theorem in our framework is the following:

THEOREM 17. Suppose that ⟨𝒦, I, ⊢, K⊥, *, −, s⟩ is a belief change model defined on a language L of type L1 or L2 and that ⊢ includes all truth-functional entailments and respects the deduction theorem for the material conditional. Suppose also that dom(s) = 𝒦, that s(K) ⊆ wffL for all K ∈ 𝒦, and that s satisfies the following for all K ∈ 𝒦, all φ ∈ I and all ψ, χ ∈ K⊥:

1. If ψ ∈ s(K) and ψ ⊢ χ, then χ ∈ s(K).
2. If K is ⊢-consistent and ψ ∈ s(K), then ¬ψ ∉ s(K).
3. If K is ⊢-consistent and ⊬ ¬φ and φ > ψ, φ > χ ∈ s(K), then ⊬ ¬(ψ ∧ χ).
4. If φ > (ψ ∧ φ) ∈ wffL and ψ ∈ s(K) and φ ∉ s(K) and ¬φ ∉ s(K), then φ > (ψ ∧ φ) ∈ s(K).

Suppose that K₁, K₂, K ∈ 𝒦 − {K⊥} and that s(K₁) and s(K₂) are both subsets of s(K). Then either s(K₁) ⊆ s(K₂) or s(K₂) ⊆ s(K₁).

Conditions 1 and 2 are equivalent to Closure and Consistency for s, respectively. Note that the Ramsey test itself is not assumed: the point is that simply having conditionals in a belief change model that meets these four conditions ensures that forking support sets cannot be constructed.

On giving up the monotonicity of s over 𝒦

Rott [1989; 1991] suggests that nonmonotonic reasoning may provide a solution to the dilemma posed by the Gärdenfors triviality result, and Cross [1990a] argues that Gärdenfors' triviality result should be interpreted as showing not that the Ramsey test should be abandoned but that, given the Ramsey test, s must be nonmonotonic over 𝒦, i.e. for some H, K ∈ 𝒦, H ⊆ K but s(H) ⊈ s(K).37 Makinson counters in [Makinson, 1990] with a triviality result for models in which s is permitted to be nonmonotonic, but Makinson's result does not bear on the suggestion endorsed by Cross and by Rott. More on this below. Other authors have brought nonmonotonic reasoning into the discussion of the Ramsey test without advertising it as such. For example, in [Hansson, 1992] Hansson writes:

37 Since the monotonicity of s is not assumed in Theorem 13, however, it is clear that the problem for (RT) posed by (K*5) cannot be solved by making s nonmonotonic.


… the addition of an indicative sentence that is compatible with all previously supported indicative sentences typically withdraws the support of conditional sentences that were previously supported.38

The type-L1 statements that are in Hansson's sense supported by a given "indicative" (i.e. conditional-free) belief base K represent what Cross (and possibly Rott) would classify as the nonmonotonic consequences of K. Hansson and Cross both think of the sets that individuate belief states as belief bases and define contraction and revision as operations on these sets, but Cross' belief bases differ from Hansson's in two respects: first, whereas Hansson's belief bases are conditional-free, Cross' are not; secondly, whereas Hansson's belief bases are closed neither under ⊢ nor under s, in Cross' enriched belief revision models belief bases are closed under ⊢, though not under s. That is, for Hansson, belief states are individuated in terms of sets that function as belief bases with respect to both ⊢ and s, whereas for Cross, belief states are individuated in terms of sets that function as belief bases only with respect to s. For Hansson, belief bases need not be closed under ⊢ and are never closed under s. Makinson [1990], like Cross [1990a], supplements the classical ⊢ with a not-necessarily-monotonic s,39 and like Cross, Makinson explicitly advertises s as a consequence operation. But in Makinson's discussion revision and contraction are defined only on K that are closed under s, and the proof of Makinson's triviality theorem, whose counterpart here is Theorem 11 above, requires a belief change model containing three belief sets closed under s. Makinson's triviality result does not apply to belief change models in which 𝒦 contains no K such that s(K) = K, and such authors as Arló-Costa and Levi (see [Levi, 1988], [Arló-Costa, 1995], and [Arló-Costa and Levi, 1996]) avoid Makinson's triviality result precisely by requiring s(K) ≠ K for all K ∈ 𝒦.
As we noted above, Hansson does not explicitly speak of the support function as a consequence operation, nor does Arló-Costa or Levi. Yet, if one looks at the conditions that Hansson, Arló-Costa, and Levi place on the support function, mirrored here in the definitions of Hansson and Arló-Costa/Levi belief change models as the requirements of Reflexivity, Conservativeness, and Closure, it seems natural to think of s as a nonmonotonic consequence operation. But if we do think of s in a Hansson or Arló-Costa/Levi belief change model as a nonmonotonic consequence operation, what sort of nonmonotonic reasoning does it represent? In [Moore, 1983] Robert Moore distinguishes two types of nonmonotonic reasoning:

By default reasoning, we mean drawing plausible inferences from less than conclusive evidence in the absence of any information

38 [Hansson, 1992], p. 526.
39 Makinson uses the symbol C and Cross the symbol cl for s.


to the contrary. The examples about birds being able to fly are of this type.40

He continues:

Default reasoning is nonmonotonic because, to use a term from philosophy, it is defeasible. Its conclusions are tentative, so, given better information, they may be withdrawn.41

Default reasoning, according to Moore, contrasts with autoepistemic reasoning, or reasoning about one's state of belief. Moore writes:

Autoepistemic reasoning is nonmonotonic because the meaning of an autoepistemic statement is context-sensitive; it depends on the theory in which the statement is embedded.42

For example, if ◊φ is defined as being accepted in belief state K just in case ¬φ is not accepted in K, then ◊φ is an autoepistemic statement in Moore's sense. If the support function s in a belief change model is thought of as a nonmonotonic consequence operation, then how should s be classified with respect to Moore's distinction? It depends on the properties s is assumed to have. If a belief change model satisfies some version of the Ramsey test (e.g. (RT), (PRTL), or (NRTL)), then the support function of that model is at least a form of autoepistemic reasoning. This is clear since the acceptability of a Ramsey test conditional for a given agent is in part a function of the agent's current epistemic state, and this holds true regardless of whether conditionals themselves are objects of belief.43 Moreover, the context sensitivity to which Moore refers in the passage just quoted is clearly present in the support function of any belief change model that satisfies (RT), and indeed this context sensitivity was exploited by Morreau [1992] in his construction of a nontrivial Ramsey test, by Lindström and Rabinowicz [1995] and Lindström [1996] in a proposed indexical interpretation of conditionals,44 by Hansson [1992] in his accounts of type-L1 conditionals and iterated conditionals, and by Boutilier and Goldszmidt [1995] in their account of the revision of conditional belief sets.

Given a support function s for a belief change model that satisfies a version of the Ramsey test, can s be not only a mechanism for autoepistemic reasoning but a mechanism for default reasoning, too? This depends on whether s can be used to make ampliative inferences to conclusions that

40 [Moore, 1983], p. 273.
41 [Moore, 1983], p. 274.
42 [Moore, 1983], p. 274.
43 Rott [1989] and Morreau [1992] explicitly adopt the view that Ramsey test conditionals are autoepistemic.
44 See also [Döring, 1997].


are not epistemically context-sensitive from premises that are not epistemically context-sensitive. Since Hansson, Arló-Costa, and Levi assume that s satisfies Conservativeness, it is clear that for them s is an operation of autoepistemic reasoning but not an operation of default reasoning: s(K) will contain conditionals, i.e. autoepistemic statements, that are not logical consequences of K, but no conditional-free formula gets into s(K) without being a logical consequence of K, which is itself conditional-free for any K ∈ dom(s) according to Hansson, Arló-Costa, and Levi. Cross and Makinson, on the other hand, do not require the support function to satisfy Conservativeness; accordingly, they allow belief change models in which s supports default reasoning. No distinction between s(K) and K exists for Morreau, Gärdenfors, and Segerberg, hence the issue of the status of s does not arise in their respective cases.
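The contrast between a Conservativeness-respecting support function and an ampliative, default-supporting one can be made concrete in a small Python sketch. This is our own illustration: the "birds fly" rule and the support function s are stipulated, and the string "bird > flies" merely stands in for a Ramsey test conditional.

```python
# A stipulated support function s: it supports the default conclusion "flies"
# whenever "bird" is believed and "penguin" is not.

def s(K: frozenset) -> frozenset:
    out = set(K)
    if "penguin" not in K:
        out.add("bird > flies")  # autoepistemic: depends on the whole state K
        if "bird" in K:
            out.add("flies")     # ampliative default conclusion: "flies" is a
                                 # conditional-free formula that is not a logical
                                 # consequence of K, so Conservativeness fails
    return frozenset(out)

H = frozenset({"bird"})
K = frozenset({"bird", "penguin"})
# H is a subset of K, yet s(H) is not a subset of s(K): s is nonmonotonic.
```

The same example also shows why a default-supporting s must be nonmonotonic over belief sets: enlarging the base from H to K withdraws the supported conclusion "flies".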

2.7 Logics for Ramsey test conditionals

Gärdenfors [1978] proves the soundness and completeness of David Lewis' system of conditional logic VC with respect to an epistemic, Ramsey test semantics for the conditional. Several other authors have proposed variants of Gärdenfors' Ramsey test semantics, including variants that generalize Gärdenfors' semantics, but it will be convenient for our purposes to adopt formalisms similar to those of [Arló-Costa, 1995] and [Arló-Costa and Levi, 1996].

Primitive belief revision models

Since the conditional is to be given a semantics in terms of belief revision, the notion of a belief set must be defined in terms that do not assume a logic for the conditional. To this end we define primitive belief sets, primitive expansion, and primitive belief revision models. For a Boolean language L of type L1 or L2, let a primitive belief set defined on this language be any set K of formulas of L meeting three requirements:

(pBS1) K ≠ ∅;
(pBS2) if φ ∈ K and ψ ∈ K, then φ ∧ ψ ∈ K;
(pBS3) if φ ∈ K and φ → ψ is a truth-functional tautology, then ψ ∈ K.

If K ⊆ K′ and K, K′ are both primitive belief sets, then K′ is a primitive expansion of K. The operation of primitive expansion is defined as follows:

(DEF+) K+φ = {ψ : φ → ψ ∈ K}.
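The definition (DEF+) can be checked mechanically in a small propositional setting. In the Python sketch below (our own illustration), a finitely axiomatized belief set is represented by the set of valuations it leaves open, so membership of ψ in K+φ and membership of φ → ψ in K can both be computed by brute force and compared.

```python
from itertools import product

ATOMS = ("a", "b")
VALUATIONS = [dict(zip(ATOMS, bits))
              for bits in product((True, False), repeat=len(ATOMS))]

# Formulas: an atom name, or a tuple ("not", f), ("and", f, g), ("imp", f, g).
def ev(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not ev(f[1], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    if op == "imp":
        return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def models(assumptions):
    """Valuations left open by a finitely axiomatized belief set."""
    return [v for v in VALUATIONS if all(ev(f, v) for f in assumptions)]

def holds_in(f, mods):
    """f belongs to the belief set determined by the valuations mods."""
    return all(ev(f, v) for v in mods)

K = models(["a"])             # the belief set Cn({a})
K_plus_b = models(["a", "b"])  # its expansion by b

# (DEF+): psi is in K+b  iff  b -> psi is in K.
for psi in ("a", "b", ("and", "a", "b"), ("not", "a")):
    assert holds_in(psi, K_plus_b) == holds_in(("imp", "b", psi), K)
```

The loop verifies the two-way reading of (DEF+) on a handful of formulas; any formula over the chosen atoms could be tested the same way.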

It is easy to see that if K is a primitive belief set, then so is K+φ. Finally, let us define the notion of a primitive belief revision model on L:


(DefpBRM) A primitive belief revision model (or pBRM) on a Boolean language L is an ordered quadruple ⟨𝒦, *, K⊥, s⟩ whose components are as follows:
1. K⊥ = wffL0, where L0 is L or a fragment of L;
2. 𝒦 is a nonempty set of primitive belief sets defined on L0, and if K ∈ 𝒦 then 𝒦 contains every primitive expansion of K on L0;
3. * is a function mapping each K ∈ 𝒦 and each formula φ ∈ K⊥ to a primitive belief set K*φ belonging to 𝒦;
4. s is a function mapping each K ∈ 𝒦 to a primitive belief set s(K) of formulas of L, where s satisfies the following:
(a) if φ ∈ K⊥ and φ ∈ s(K), then φ ∈ K;
(b) K ⊆ s(K) if K⊥ ≠ K ∈ 𝒦.

Note that while K*φ and s(K) must both be primitive belief sets, they need not be primitive belief sets of the same language. When referring to the belief revision postulates (K*1), (K*2), etc., in the context of primitive belief revision models we will assume that φ and ψ range over K⊥. A primitive belief revision model ⟨𝒦, *, K⊥, s⟩ defined on a Boolean language L is a Gärdenfors pBRM iff L is of type L2 and K⊥ = wffL and s is the identity function on 𝒦 and s satisfies the following unrestricted version of the positive Ramsey test:

(pRTG) For all K ∈ 𝒦, if φ, ψ ∈ K⊥, then (φ > ψ) ∈ s(K) iff ψ ∈ K*φ.

A primitive belief revision model ⟨𝒦, *, K⊥, s⟩ defined on a Boolean language L is an Arló-Costa/Levi pBRM iff L is of type L1 and K⊥ is the set of all formulas of the largest conditional-free fragment of L and s satisfies the following versions of both the positive and negative Ramsey tests:

(pPRT) For all K ∈ 𝒦 such that K ≠ K⊥, if φ, ψ ∈ K⊥, then (φ > ψ) ∈ s(K) iff ψ ∈ K*φ.

(pNRT) For all K ∈ 𝒦 such that K ≠ K⊥, if φ, ψ ∈ K⊥, then ¬(φ > ψ) ∈ s(K) iff ψ ∉ K*φ.

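The difference between the positive and the negative Ramsey test can be seen in a toy Python sketch. This is our own illustration: formulas are strings, there is a single belief set K = Cn({b}), and the revision operation is given by a stipulated table.

```python
# Stipulated revision table for one belief set K = Cn({b}).
REVISION = {
    "a": frozenset({"a", "b"}),  # revising by a preserves b
    "c": frozenset({"c"}),       # revising by c forces b to be given up
}

def support(revision):
    """Build s(K) from the two Ramsey tests over the revision table."""
    s_K = set()
    for ant, revised in revision.items():
        for cons in ("a", "b", "c"):
            if cons in revised:
                s_K.add(f"{ant} > {cons}")      # (pPRT): consequent survives revision
            else:
                s_K.add(f"~({ant} > {cons})")   # (pNRT): consequent does not survive
    return s_K

S = support(REVISION)
# (pPRT) puts "a > b" into s(K); (pNRT) puts "~(c > b)" into s(K).
```

Note that (pNRT) is a genuine addition: a model satisfying only (pPRT) could simply fail to support "~(c > b)" rather than being forced to include it.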
Positive and negative validity

In [Arló-Costa, 1995] (and in [Arló-Costa and Levi, 1996], with Isaac Levi) Arló-Costa distinguishes between positive and negative concepts of validity. The concepts are distinct in Arló-Costa/Levi pBRMs, though not in Gärdenfors pBRMs. A formula φ is positively valid (PV) relative to ⟨𝒦, *, K⊥, s⟩, where the latter is a primitive belief revision model, iff φ ∈ s(K) for all K such that


K⊥ ≠ K ∈ 𝒦; and φ is positively valid relative to a set of belief revision models iff φ is positively valid relative to each member of the set. φ is negatively valid (NV) relative to ⟨𝒦, *, K⊥, s⟩ iff ¬φ ∉ s(K) for each K such that K⊥ ≠ K ∈ 𝒦; and φ is negatively valid relative to a set of belief revision models iff φ is negatively valid relative to each member of the set. Notions of entailment can be associated with positive and negative validity, respectively. Given a set Γ of formulas of a type-L1 or type-L2 language L, and a formula φ of L, Γ positively entails φ (Γ ⊨⁺ φ) with respect to a primitive belief revision model ⟨𝒦, *, K⊥, s⟩ iff φ ∈ s(K) for every K such that Γ ⊆ s(K) and K⊥ ≠ K ∈ 𝒦. By contrast, Γ negatively entails φ (Γ ⊨⁻ φ) with respect to a primitive belief revision model ⟨𝒦, *, K⊥, s⟩ iff there is no K such that K⊥ ≠ K ∈ 𝒦 and Γ ∪ {¬φ} ⊆ s(K). In a Gärdenfors pBRM, positive and negative validity coincide:

PROPOSITION 18. Relative to any Gärdenfors pBRM defined on a language L of type L2, a formula φ of L is positively valid iff φ is negatively valid.

Proof. Let ⟨𝒦, *, K⊥, s⟩ be a Gärdenfors pBRM defined on a language L of type L2, and let φ be a formula of L. First, suppose that φ is positively valid relative to ⟨𝒦, *, K⊥, s⟩ and choose an arbitrary K such that K⊥ ≠ K ∈ 𝒦. Then φ ∈ s(K), but since s(K) = K ≠ K⊥ we have ¬φ ∉ s(K), as required. Conversely, assume that φ is negatively valid relative to ⟨𝒦, *, K⊥, s⟩ and choose an arbitrary K such that K⊥ ≠ K ∈ 𝒦. Assume for reductio that φ ∉ s(K). Since s is the identity function, we have that φ ∉ K, in which case K+¬φ ≠ K⊥. Since 𝒦 is closed under primitive expansions, we have in addition that K+¬φ ∈ 𝒦. Thus, ¬φ ∈ s(K+¬φ) and K⊥ ≠ K+¬φ ∈ 𝒦, which is contrary to the negative validity of φ.

Positive and negative validity do not coincide in Arló-Costa/Levi pBRMs, however. The negative Ramsey test prevents it. Consider the following pair of lemmas regarding the thesis (CS):

LEMMA 19. For any type-L1 language L, if φ, ψ are conditional-free, then (φ ∧ ψ) → (φ > ψ) is negatively valid in an Arló-Costa/Levi pBRM defined on L iff the model satisfies (K*4w).

The latter is equivalent to Observation 4.7 of [Arló-Costa and Levi, 1996].

LEMMA 20. For any type-L1 language L containing at least one atomic formula other than ⊤ and ⊥, there are conditional-free φ and ψ such that (φ ∧ ψ) → (φ > ψ) is not positively valid in any Arló-Costa/Levi pBRM defined on L that satisfies (K*3) and contains a primitive belief set K where ¬ψ, ψ ∉ K.

Proof. Suppose L is a type-L1 language containing at least one atomic formula different from ⊤ and ⊥, and consider an Arló-Costa/Levi pBRM


⟨𝒦, *, K⊥, s⟩ defined on L that satisfies (K*3) and contains a primitive belief set K such that ¬ψ, ψ ∉ K. Suppose for reductio that (φ ∧ ψ) → (φ > ψ) is positively valid for all conditional-free φ and ψ. Then, in particular, (⊤ ∧ ψ) → (⊤ > ψ) is positively valid relative to ⟨𝒦, *, K⊥, s⟩. By (K*3) we have K*⊤ ⊆ K+⊤ = K. Since, in addition, ¬ψ, ψ ∉ K we have that ¬ψ, ψ ∉ K*⊤. Since ψ ∉ K*⊤, it follows by the Negative Ramsey test that ¬(⊤ > ψ) ∈ s(K), hence by the positive validity of (⊤ ∧ ψ) → (⊤ > ψ) relative to ⟨𝒦, *, K⊥, s⟩ we have that ¬(⊤ ∧ ψ) ∈ s(K). Since ¬(⊤ ∧ ψ) is conditional-free, it follows that ¬(⊤ ∧ ψ) ∈ K. Since primitive belief sets are deductively closed, we have ¬ψ ∈ K, contrary to assumption.

The proof just given is derived from that given by Arló-Costa for Observation 3.14 in [Arló-Costa, 1995]. Finally, we state the following obvious but necessary lemma:

LEMMA 21. For some type-L1 language L, there is an Arló-Costa/Levi pBRM defined on L that satisfies (K*3) and (K*4w) and also contains a primitive belief set K where ¬φ, φ ∉ K for some conditional-free formula φ of L.

These three lemmas suffice to show the following:

THEOREM 22. There are Arló-Costa/Levi pBRMs relative to which at least some formulas of the form (φ ∧ ψ) → (φ > ψ) are negatively valid but not positively valid.

Interestingly, despite Theorem 22, {φ ∧ ψ} ⊨⁺ φ > ψ holds relative to every Arló-Costa/Levi pBRM that satisfies (K*4w).45

Belief revision models for VC

Gärdenfors provides an epistemic semantics for VC based on negative validity. He begins with a minimal conditional logic CM defined as follows:

Axiom schemata

Taut: all truth-functional tautologies;
CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)];
CN: φ > ⊤.

Rules of inference

Modus Ponens: from φ and φ → ψ to infer ψ;
RCM: from ψ → χ to infer (φ > ψ) → (φ > χ).

45 See OBSERVATION 3.15 in [Arló-Costa, 1995].


Gärdenfors [1978] proves a soundness/completeness theorem for CM that is equivalent to the following:

THEOREM 23. A formula of any type-L2 language is a theorem of CM iff it is negatively valid in every Gärdenfors pBRM.

Gärdenfors then proves the following:

THEOREM 24. A formula is a theorem of VC iff it is derivable from CM together with (ID), (CSO′), (CS), (MP), (CA), and (CV) as additional axiom schemata:

ID: φ > φ
CS: (φ ∧ ψ) → (φ > ψ)
CSO′: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) → (ψ > χ)]
MP: (φ > ψ) → (φ → ψ)
CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]
CV: [(φ > χ) ∧ ¬(φ > ¬ψ)] → [(φ ∧ ψ) > χ]

An epistemic semantics for VC is obtained by restricting attention to Gärdenfors pBRMs that satisfy constraints corresponding to axioms ID, CSO′, CS, MP, CA, and CV. Gärdenfors [1978] proves lemmas equivalent to the following:

LEMMA 25. Where M is any Gärdenfors pBRM,
1. all instances of ID are negatively valid in M iff M satisfies (K*2);
2. all instances of CSO′ are negatively valid in M iff M satisfies (K*6s);
3. all instances of CS are negatively valid in M iff M satisfies (K*4w);
4. all instances of MP are negatively valid in M iff M satisfies (K*3);
5. if M satisfies (K*2), (K*6s), (K*4w), and (K*3), then all instances of CA are negatively valid in M if M satisfies (K*7);
6. if all instances of ID, CSO′, CS, and MP are negatively valid in M, then M satisfies (K*7) if all instances of CA are negatively valid in M;
7. if M satisfies (K*2), (K*6s), (K*4w), and (K*3), then all instances of CV are negatively valid in M if M satisfies (K*L);
8. if all instances of ID, CSO′, CS, and MP are negatively valid in M, then M satisfies (K*L) if all instances of CV are negatively valid in M.


Theorem 24 and Lemma 25 allow the soundness and completeness result of Theorem 23 to be extended to yield the following:

THEOREM 26. A formula is a theorem of VC iff it is negatively valid in all Gärdenfors pBRMs that satisfy (K*2), (K*3), (K*4w), (K*6s), (K*7), and (K*L).

This theorem shows that if VC is translated into a theory of belief revision on Gärdenfors pBRMs using that version of the Ramsey test which is built into the notion of a Gärdenfors pBRM, then the resulting theory of belief revision is defined by (K*1) (which is built into the definition of a pBRM), (K*2), (K*3), (K*4w), (K*6s), (K*7), and (K*L). The absence of (K*5w) should not be surprising, given Theorem 13. Since in a Gärdenfors pBRM (K*3), (K*4w), and (DEF+) imply (K*T), and since (K*T) together with (DEF+), (K*6s) and (K*8) imply (K*P), and given Theorem 8, the absence of (K*8) should not be surprising.

A conditional logic that approximates AGM belief revision

Whereas Gärdenfors [1978] sets out to find epistemic models for Lewis's system VC of conditional logic, Arló-Costa [1995] sets out to find a system of conditional logic whose primitive belief revision models are defined at least approximately by the AGM belief revision postulates for transitive relational partial meet contraction (see [Gärdenfors, 1988], Chapters 3-4). The result is the system EF, which is defined only on languages of type L1 (languages of flat conditionals). EF has the following axioms and rules, where φ, ψ, and χ are conditional-free:

Axiom schemata

Taut: all truth-functional tautologies
ID: φ > φ
MP: (φ > ψ) → (φ → ψ)
CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]
CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]
CV: [(φ > χ) ∧ ¬(φ > ¬ψ)] → [(φ ∧ ψ) > χ]
CN: φ > ⊤
CD: ¬(φ > ⊥) for all non-tautologous φ


Rules of inference

Modus Ponens: from φ and φ → ψ to infer ψ.
RCM: from ψ → χ to infer (φ > ψ) → (φ > χ).
RCEA: from φ ↔ ψ to infer (φ > χ) ↔ (ψ > χ).

One obvious difference between VC and Arló-Costa's EF is that EF is defined for type-L1 languages only, whereas VC is defined for type-L2 languages. Another difference is that CS, an axiom of VC, is not a theorem of EF. A third difference is that CD, an axiom of EF, is not a theorem of VC. Arló-Costa's epistemic semantics for EF is crucially different from Gärdenfors' epistemic semantics for VC in that the semantics of EF is defined in terms of positive validity over Arló-Costa/Levi pBRMs rather than in terms of negative validity over Gärdenfors pBRMs. Positive and negative validity coincide in Gärdenfors pBRMs (see Proposition 18) but not in Arló-Costa/Levi pBRMs (see Theorem 22). Which notion of validity should then be adopted? Arló-Costa and Levi argue that positive validity should be adopted rather than negative validity, both because positive validity is more intuitive and because in Arló-Costa/Levi models, which satisfy the Negative Ramsey Test favored by Arló-Costa and Levi, the inference rule modus ponens does not preserve negative validity.46

Consider a type-L1 language L; relative to L the logical system Flat CM is the smallest set of formulas of L that contains all instances of the axiom schemata of Gärdenfors' CM and is closed under the rules of CM. Note that EF is an extension of Flat CM. Arló-Costa [1995] proves the completeness of EF with respect to an epistemic semantics (based on positive validity) by proving a result equivalent to Theorem 31 below. We begin with a series of results to be used as lemmas for Theorem 31:47

THEOREM 27. A formula of any type-L1 language L is a theorem of Flat CM iff it is positively valid in every Arló-Costa/Levi pBRM defined on L.

THEOREM 28. Let CM⁺ be the result of extending Flat CM by adding the rule (RCEA) (restricted to the conditionals of a type-L1 language). A formula is derivable in CM⁺ iff it is positively valid in the class of all Arló-Costa/Levi pBRMs that satisfy (K*6).

THEOREM 29.
Let CMU⁺ be the result of extending CM⁺ by adding ¬(φ > ⊥) for every non-tautologous conditional-free φ. A formula is derivable in CMU⁺ iff it is positively valid in the class of all Arló-Costa/Levi pBRMs that satisfy (K*6) and (K*C).

46 See [Arló-Costa and Levi, 1996], pp. 239-240.
47 Our formulation of these results reflects the organization found in [Arló-Costa and Levi, 1996].


LEMMA 30. Where M is any Arló-Costa/Levi pBRM,

1. all instances of ID are positively valid in M iff M satisfies (K*2);
2. all instances of MP are positively valid in M iff M satisfies (K*3);
3. all instances of CA are positively valid in M iff M satisfies (K*7′);
4. if all instances of ID are positively valid in M, then all instances of CV are positively valid in M if M satisfies (K*8).

Theorems 27, 28, and 29, together with Lemma 30 and the fact that (K*7) and (K*7′) are equivalent in any pBRM that satisfies (K*2) and (K*6), yield the following completeness theorem for EF:48

THEOREM 31. A formula of any type-L1 language L is a theorem of EF iff it is positively valid in every Arló-Costa/Levi pBRM defined on L satisfying (K*2), (K*3), (K*C), (K*6), (K*7), and (K*8).

Postulates (K*1), (K*2), (K*3), (K*4), (K*5), (K*6), (K*7), and (K*8) jointly capture that notion of revision that is derivable via the Levi Identity (LI) from the AGM notion of transitively relational partial meet contraction (AGM Revision, for short).49 Since (K*1) holds in all pBRMs, EF comes very close to capturing AGM Revision, but (K*1) and the postulates mentioned in Theorem 31 define a notion of revision (EF Revision, for short) that is strictly weaker than AGM Revision in two respects.

First, whereas AGM Revision includes (K*4), EF Revision does not. It turns out that (K*4) does not correspond to the positive validity of any type-L1 formula. Still, (K*4) does correspond to a certain positive entailment, as Arló-Costa [1995] shows:

PROPOSITION 32. If M is an Arló-Costa/Levi pBRM defined on a type-L1 language L, then M satisfies (K*4) iff

{φ → ψ, ¬(⊤ > ¬φ)} ⊨⁺ φ > ψ

holds in M for all conditional-free formulas φ and ψ of L.

This result is equivalent to OBSERVATION 3.16 of [Arló-Costa, 1995]. Note that Proposition 32 does not establish conditions for the positive validity of (φ → ψ) → [¬(⊤ > ¬φ) → (φ > ψ)]. But if nesting of conditionals is allowed, then (K*4) can be associated with the positive validity of nested conditionals of the form [(φ → ψ) ∧ ¬(⊤ > ¬φ)] > (φ > ψ) (see THEOREM 8.1 and OBSERVATION 8.3 in [Arló-Costa, 1995]). In general, Γ ∪ {φ} ⊨⁺ ψ is not equivalent to Γ ⊨⁺ φ > ψ in an Arló-Costa/Levi pBRM, but this equivalence does hold for certain φ and ψ when Γ = ∅ (see OBSERVATION 3.17 of [Arló-Costa, 1995]).

A second difference between EF Revision and AGM Revision is this: whereas AGM Revision includes (K*5), which entails (K*5w), EF Revision includes neither (K*5) nor (K*5w) but instead includes (K*C). The only difference between (K*5w) and (K*C) is that (K*5w) places a constraint on the revision of all belief sets that (K*C) places just on the revision of consistent belief sets. In particular, where φ is nontautologous, (K*5w) requires (K⊥)*φ to be distinct from K⊥ (and therefore, actually, a contraction of K⊥), whereas (K*C) implies no such requirement. Theorem 29 reveals that (K*C) is secured in Arló-Costa/Levi pBRMs via (pNRT) and the positive validity of negated conditionals of the form ¬(φ > ⊥), where φ is nontautologous. These negated conditionals also belong to K⊥, of course, but allowing K to take K⊥ as a value in (pNRT) is not an option. Allowing K to take K⊥ as a value in (pPRT) also does not help: Theorem 13 shows that (K*5w) and a consistent underlying logic cannot be combined with the positive Ramsey test in that case. Still, leaving aside the revision of K⊥, it is true, as Arló-Costa [1995] has shown, that AGM revision of nonabsurd belief sets can be specified in terms of positive validity in a type-L2 language or in terms of positive validity and positive entailment in a type-L1 language.

48 Arló-Costa [1995] notes that Theorems 27 and 29 and Lemma 30 suffice to yield a completeness theorem for the type-L1 fragment of David Lewis' system VW.
49 See, for example, [Gärdenfors, 1988], Chapters 3 and 4.

3 OTHER TOPICS

Our discussion of the major kinds of conditionals is far from exhaustive. We have looked at several different approaches to the problem of providing an adequate formal semantics and logic for various kinds of conditionals without being able to demonstrate that one approach is clearly superior to all the others. Furthermore, there are many problems involved in the analysis of conditionals which we either have not discussed at all or have only just mentioned in passing. In this section we will look at several of these, giving each the very briefest attention.

One issue which has received much attention is the relationship between conditionals and probability. Stalnaker [1970] proposed that the probability that a conditional is true should be identical with the standard conditional probability. Lewis demonstrates in [Lewis, 1976], however, that this assumption can only be true if we restrict our probability functions to those which assign only a small finite number of distinct values to propositions. Stalnaker [1976] provides a different proof for a similar result, a proof which does not depend upon certain assumptions which Lewis used and which some investigators have questioned. Van Fraassen [1976] avoids


these Triviality Results for a weakened, non-classical version of Stalnaker's conditional logic C2. Lewis [1976] shows, however, that we can embrace a result which resembles Stalnaker's while avoiding the Triviality Result. Lewis's suggestion depends upon a technique which he calls imaging. This technique, which provides an alternative method for determining conditional probabilities, requires that in conditionalising a probability assignment with respect to φ, i.e. in modifying the assignment in a way which produces a new assignment which assigns probability 1 to φ, all the probability which was originally assigned to each ¬φ-world i would be transferred to the φ-world closest to i. Lewis demonstrates that if we accept Stalnaker's semantics and if we assign conditional probabilities in this new, non-standard way, then the probability that a conditional is true turns out to be identical with the conditional probability even when the probabilities of truth for conditionals take on infinitely many different values. Lewis's imaging techniques can be adapted to semantics other than Stalnaker's. Nute [1980b] adapts Lewis's imaging technique to class selection function semantics, producing a notion of subjunctive probability which differs from both the standard conditional probability and the probability that the corresponding conditional is true. While promising in some ways, Nute's account is extremely cumbersome. Gärdenfors [1982] presents a generalized form of imaging and shows that conditional probability cannot be described even in terms of generalized imaging. Other papers on conditionals and probability include [Döring, 1994; Fetzer and Nute, 1979; Fetzer and Nute, 1980; Hájek, 1994; Hall, 1994; Lance, 1991; Lewis, 1981b; Lewis, 1986; McGee, 1989; Nute, 1981a; Stalnaker and Jeffrey, 1994]. For a careful and comprehensive survey of results relating the probabilities of conditionals to conditional probabilities see [Hájek and Hall, 1994].
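Lewis's imaging construction can be illustrated numerically. The sketch below is our own toy example (four worlds over atoms a and b, a uniform prior, and a stipulated Stalnaker-style selection function): it computes the imaged probability of b on a, the standard conditional probability P(b|a), and the probability that the conditional a > b is true. The first and third coincide, while the second differs.

```python
from fractions import Fraction

# Four worlds over atoms a, b, with a uniform prior.
WORLDS = [
    {"a": True,  "b": True},
    {"a": True,  "b": False},
    {"a": False, "b": True},
    {"a": False, "b": False},
]
P = [Fraction(1, 4)] * 4

def closest_a_world(i):
    """Stipulated Stalnaker selection: an a-world is its own closest a-world;
    for both not-a worlds the closest a-world is world 0 (a-and-b)."""
    return i if WORLDS[i]["a"] else 0

def image_on_a(P):
    """Lewis imaging: shift each world's probability to its closest a-world."""
    Q = [Fraction(0)] * len(P)
    for i, p in enumerate(P):
        Q[closest_a_world(i)] += p
    return Q

def conditionalize_on_a(P):
    """Standard conditionalization on a."""
    total = sum(p for i, p in enumerate(P) if WORLDS[i]["a"])
    return [p / total if WORLDS[i]["a"] else Fraction(0)
            for i, p in enumerate(P)]

prob_b_imaged = sum(p for i, p in enumerate(image_on_a(P)) if WORLDS[i]["b"])
prob_b_given_a = sum(p for i, p in enumerate(conditionalize_on_a(P)) if WORLDS[i]["b"])
# On Stalnaker's semantics, "a > b" is true at a world iff b holds at its
# closest a-world, so its probability of truth is:
prob_a_gt_b = sum(p for i, p in enumerate(P) if WORLDS[closest_a_world(i)]["b"])
```

With this selection function the imaged probability of b and the probability that a > b is true are both 3/4, while P(b|a) is 1/2 — imaging, not conditionalization, is what matches the probability of the conditional here.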
The relationship between causation and conditionals has certainly not been overlooked either. Many authors like Jackson [1977] and Kvart [1980; 1986] assign a special role to causation in their analyses of counterfactual conditionals. Others like Lewis [1973a] and Swain [1978] attempt to provide analyses of causation in terms of counterfactual dependence. Still others like Fetzer and Nute [1979; 1980] have tried to develop a semantics for a special kind of causal conditional. These special causal conditionals have then been employed in the formulation of a single-case propensity interpretation of law statements. Conditional logic also has applications in deontic logic (see, for example, [Hilpinen, 1981]), in decision theory (see, for example, [Gibbard and Harper, 1981; Stalnaker, 1981a]), and in nonmonotonic logic (for a summary of some of this work, see [Nute, 1994]). In addition, there has been significant attention in recent years to the issue of whether so-called future indicative conditionals (e.g. `If Oswald doesn't shoot President Kennedy, then someone else will') should be classified as indicative or as subjunctive (see, for example, [Bennett, 1988; Bennett, 1995; Dudman, 1984; Dudman, 1989; Dudman, 1994; Jackson, 1990]). For a careful and comprehensive review of this and other recent topics of discussion see [Edgington, 1995]. It is not possible in this essay to discuss or even to list all of the material that can be found in the literature on conditional logic and its applications.

4 LIST OF SOME IMPORTANT RULES, THESES, AND LOGICS

In this section we collect some of the most important rules and theses of conditional logic together with definitions for a few of the better known conditional logics.

Rules

RCEC: from ψ ↔ χ, to infer (φ > ψ) ↔ (φ > χ).

RCK: from (ψ₁ ∧ … ∧ ψₙ) → χ, to infer [(φ > ψ₁) ∧ … ∧ (φ > ψₙ)] → (φ > χ), n ≥ 0.

RCEA: from φ ↔ ψ, to infer (φ > χ) ↔ (ψ > χ).

RCE: from φ → ψ, to infer φ > ψ.

RCM: from ψ → χ, to infer (φ > ψ) → (φ > χ).

RR: from χ → (φ > ψ), infer (χ ◦ φ) → ψ; and from (χ ◦ φ) → ψ, infer χ → (φ > ψ).

Theses

Transitivity: [(φ > ψ) ∧ (ψ > χ)] → (φ > χ)

Contraposition: (φ > ¬ψ) → (ψ > ¬φ)

Strengthening Antecedents: (φ > ψ) → [(φ ∧ χ) > ψ]

ID: φ > φ

MP: (φ > ψ) → (φ → ψ)

MOD: (¬φ > φ) → (ψ > φ)

CSO: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) ↔ (ψ > χ)]

CSO′: [(φ > ψ) ∧ (ψ > φ)] → [(φ > χ) → (ψ > χ)]

CV: [(φ > ψ) ∧ ¬(φ > ¬χ)] → [(φ ∧ χ) > ψ]

CEM: (φ > ψ) ∨ (φ > ¬ψ)

CS: (φ ∧ ψ) → (φ > ψ)

CC: [(φ > ψ) ∧ (φ > χ)] → [φ > (ψ ∧ χ)]

CM: [φ > (ψ ∧ χ)] → [(φ > ψ) ∧ (φ > χ)]

CA: [(φ > χ) ∧ (ψ > χ)] → [(φ ∨ ψ) > χ]

SDA: [(φ ∨ ψ) > χ] → [(φ > χ) ∧ (ψ > χ)]

CN: φ > ⊤

CT: (¬φ > ⊥) → φ

CU: ¬(φ > ⊥) → ((φ > ⊥) > ⊥)

CD: ¬(φ > ⊥), for all non-tautologous ¬φ
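How such theses behave can be checked mechanically in a small class selection function model. The following sketch is our own illustration (the worlds, propositions, and distance-based selection function are stipulated for the example, not drawn from the chapter): a conditional A > B is counted true at a world w when every selected A-world at w is a B-world, and the model verifies ID everywhere while falsifying Strengthening Antecedents:

```python
# A toy class-selection-function model.  Worlds, propositions, and the
# distance-based notion of closeness are stipulated for illustration.

worlds = {1, 2, 3}
A, B, C = {1, 2}, {2, 3}, {1}

def closest(antecedent, w):
    """Selected antecedent-worlds at w: those minimising |v - w|."""
    d = min(abs(v - w) for v in antecedent)
    return {v for v in antecedent if abs(v - w) == d}

def true_at(w, antecedent, consequent):
    """A > B is true at w iff every selected A-world is a B-world."""
    return closest(antecedent, w) <= consequent

# ID (A > A) holds at every world, since the selected worlds are A-worlds.
assert all(true_at(w, A, A) for w in worlds)

# Strengthening Antecedents fails at world 1: A > C is true there,
# but (A and B) > C is not, since the closest A-and-B world is 2, not in C.
assert true_at(1, A, C)
assert not true_at(1, A & B, C)
```

The same kind of finite check can be run on the other theses to see which constraints on the selection function each one corresponds to.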

Recall that in Section 1 we defined a conditional logic as any collection L of sentences formed in the usual way from the symbols of classical sentential logic together with a conditional operator >, such that L is closed under modus ponens and L contains every tautology. We now modify this definition as follows, adopting the terminology of Section 2, to take different language types for conditional logic into account: let a conditional logic on a Boolean language L of type L1 or type L2 be any collection L of sentences of L such that L is closed under modus ponens and L contains every tautology.

Logics for full conditional languages

For a given Boolean language L of type L2, each of the following is the smallest conditional logic on L closed under all the rules and containing all the theses associated with it below.

CM: RCM, CC, CN

VW: RCEC, RCK; ID, MOD, CSO, MP, CV

SS: RCEC, RCK; ID, MOD, CSO, MP, CA, CS

VC: RCEC, RCK; ID, MOD, CSO, MP, CV, CS

VCU: RCEC, RCK; ID, MOD, CSO, MP, CV, CS, CT, CU

VCU2: RCEC, RCK, RR; ID, MOD, CSO, MP, CV, CS, CT, CU (with ◦ as an additional binary operator)

C2: RCEC, RCK; ID, MOD, CSO, MP, CV, CEM

Neither VW nor SS is an extension of the other, and neither VCU nor C2 is an extension of the other. VCU2 is an extension of VCU, and C2 and VCU are both extensions of VC, which is an extension of both VW and SS. VW and SS are both extensions of CM. For the definitions


of several weaker conditional logics, see [Lewis, 1973b; Chellas, 1975; Nute, 1980b].

Logics for languages of "flat" conditionals

If L is a Boolean language of type L1, then each of the following logics is the smallest conditional logic on L closed under all the rules and containing all the theses associated with it below.

Flat CM: RCM, CC, CN

Flat VW: RCEC, RCK; ID, MOD, CSO, MP, CV

Flat VC: RCEC, RCK; ID, MOD, CSO, MP, CV, CS

EF: RCM, RCEA, ID, MP, CC, CA, CV, CN, CD

Flat CM is contained in Flat VW, which is contained in both Flat VC and EF, but neither Flat VC nor EF is contained in the other. For a discussion of the logic of flat conditionals aimed at being as true as possible to Ramsey's ideas, see [Levi, 1996], Chapter 4.

Acknowledgements

Sections 1, 3 and 4 are primarily the work of the first author, but revised from the first edition of this Handbook with input from the second author. Section 2 is primarily the work of the second author. We are grateful to Lennart Åqvist, Horacio Arló-Costa, Ermanno Bencivenga, John Burgess, David Butcher, Michael Dunn, Dov Gabbay, Christopher Gauker, Franz Guenthner, Hans Kamp, David Lewis and Christian Rohrer for their helpful comments and suggestions on material contained in this paper. We are also grateful to Richmond Thomason for help in assembling our list of references. Finally, we thank Kluwer and the editor of the Journal of Philosophical Logic for permission to use material from [Nute, 1981b] in this article.

Donald Nute
University of Georgia, USA.

Charles B. Cross
University of Georgia, USA.

BIBLIOGRAPHY

[Adams, 1966] E. Adams. Probability and the logic of conditionals. In J. Hintikka and P. Suppes, editors, Aspects of Inductive Logic. North Holland, Amsterdam, 1966.
[Adams, 1975a] E. Adams. Counterfactual conditionals and prior probabilities. In A. Hooker and W. Harper, editors, Proceedings of International Congress on the Foundations of Statistics. Reidel, Dordrecht, 1975.


[Adams, 1975b] E. Adams. The Logic of Conditionals; An Application of Probability to Deductive Logic. Reidel, Dordrecht, 1975.
[Adams, 1977] E. Adams. A note on comparing probabilistic and modal logics of conditionals. Theoria, 43:186–194, 1977.
[Adams, 1981] E. Adams. Transmissible improbabilities and marginal essentialness of premises in inferences involving indicative conditionals. Journal of Philosophical Logic, 10:149–177, 1981.
[Adams, 1995] E. Adams. Remarks on a theorem of McGee. Journal of Philosophical Logic, 24:343–348, 1995.
[Adams, 1996] E. Adams. Four probability preserving properties of inferences. Journal of Philosophical Logic, 25:1–24, 1996.
[Adams, 1997] E. Adams. A Primer of Probability Logic. Cambridge University Press, Cambridge, England, 1997.
[Alchourrón et al., 1985] C. Alchourrón, P. Gärdenfors, and D. Makinson. On the logic of theory change: partial meet contraction and revision functions. The Journal of Symbolic Logic, 50:510–530, 1985.
[Alchourrón, 1994] C. Alchourrón. Philosophical foundations of deontic logic and the logic of defeasible conditionals. In J.J. Meyer and R.J. Wieringa, editors, Deontic Logic in Computer Science: Normative System Specification, pages 43–84. John Wiley and Sons, New York, 1994.
[Appiah, 1984] A. Appiah. Generalizing the probabilistic semantics of conditionals. Journal of Philosophical Logic, 13:351–372, 1984.
[Appiah, 1985] A. Appiah. Assertion and Conditionals. Cambridge University Press, Cambridge, England, 1985.
[Åqvist, 1973] Lennart Åqvist. Modal logic with subjunctive conditionals and dispositional predicates. Journal of Philosophical Logic, 2:1–76, 1973.
[Arló-Costa and Levi, 1996] H. Arló-Costa and I. Levi. Two notions of epistemic validity: epistemic models for Ramsey's conditionals. Synthese, 109:217–262, 1996.
[Arló-Costa and Segerberg, 1998] H. Arló-Costa and K. Segerberg. Conditionals and hypothetical belief revision (abstract). Theoria, 1998. Forthcoming.
[Arló-Costa, 1990] H. Arló-Costa. Conditionals and monotonic belief revisions: the success postulate. Studia Logica, 49:557–566, 1990.
[Arló-Costa, 1995] H. Arló-Costa. Epistemic conditionals, snakes, and stars. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Arló-Costa, 1998] H. Arló-Costa. Belief revision conditionals: Basic iterated systems. Annals of Pure and Applied Logic, 1998. Forthcoming.
[Asher and Morreau, 1991] N. Asher and M. Morreau. Commonsense entailment: a modal theory of nonmonotonic reasoning. In J. Mylopoulos and R. Reiter, editors, Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Los Altos, California, 1991. Morgan Kaufmann.
[Asher and Morreau, 1995] N. Asher and M. Morreau. What some generic sentences mean. In Gregory Carlson and Francis Jeffrey Pelletier, editors, The Generic Book. Chicago University Press, Chicago, IL, 1995.
[Balke and Pearl, 1994a] A. Balke and J. Pearl. Counterfactual probabilities: Computational methods, bounds, and applications. In R. Lopez de Mantaras and D. Poole, editors, Uncertainty in Artificial Intelligence 10. Morgan Kaufmann, San Mateo, California, 1994.
[Balke and Pearl, 1994b] A. Balke and J. Pearl. Counterfactuals and policy analysis in structural models. In P. Besnard and S. Hanks, editors, Uncertainty in Artificial Intelligence 11. Morgan Kaufmann, San Francisco, 1994.
[Balke and Pearl, 1994c] A. Balke and J. Pearl. Probabilistic evaluation of counterfactual queries. In B. Hayes-Roth and R. Korf, editors, Proceedings of the Twelfth National Conference on Artificial Intelligence, Menlo Park, California, 1994. American Association for Artificial Intelligence, AAAI Press.
[Barwise, 1986] J. Barwise. Conditionals and conditional information. In E. Traugott, A. ter Meulen, J. Reilly, and C. Ferguson, editors, On Conditionals. Cambridge University Press, Cambridge, England, 1986.


[Bell, 1988] J. Bell. Predictive Conditionals, Nonmonotonicity, and Reasoning About the Future. Ph.D. dissertation, University of Essex, Colchester, 1988.
[Benferat et al., 1997] S. Benferat, D. Dubois, and H. Prade. Nonmonotonic reasoning, conditional objects, and possibility theory. Artificial Intelligence, 92:259–276, 1997.
[Bennett, 1974] J. Bennett. Counterfactuals and possible worlds. Canadian Journal of Philosophy, 4:381–402, 1974.
[Bennett, 1982] J. Bennett. Even if. Linguistics and Philosophy, 5:403–418, 1982.
[Bennett, 1984] J. Bennett. Counterfactuals and temporal direction. Philosophical Review, 43:57–91, 1984.
[Bennett, 1988] J. Bennett. Farewell to the phlogiston theory of conditionals. Mind, 97:509–527, 1988.
[Bennett, 1995] J. Bennett. Classifying conditionals: the traditional way is right. Mind, 104:331–354, 1995.
[Bigelow, 1976] J. C. Bigelow. If-then meets the possible worlds. Philosophia, 6:215–236, 1976.
[Bigelow, 1980] J. Bigelow. Review of [Pollock, 1976]. Linguistics and Philosophy, 4:129–139, 1980.
[Blue, 1981] N. A. Blue. A metalinguistic interpretation of counterfactual conditionals. Journal of Philosophical Logic, 10:179–200, 1981.
[Boutilier and Goldszmidt, 1995] C. Boutilier and M. Goldszmidt. On the revision of conditional belief sets. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Boutilier, 1990] C. Boutilier. Conditional logics of normality as modal systems. In T. Dietterich and W. Swartout, editors, Proceedings of the Eighth National Conference on Artificial Intelligence, Menlo Park, California, 1990. American Association for Artificial Intelligence, AAAI Press.
[Boutilier, 1992] C. Boutilier. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, Computer Science Department, University of Toronto, Toronto, Ontario, 1992.
[Boutilier, 1993a] C. Boutilier. Belief revision and nested conditionals. In R. Bajcsy, editor, Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, San Mateo, California, 1993. Morgan Kaufmann.
[Boutilier, 1993b] C. Boutilier. A modal characterization of defeasible deontic conditionals and conditional goals. In Working Notes of the AAAI Spring Symposium on Reasoning about Mental States, Menlo Park, California, 1993. American Association for Artificial Intelligence.
[Boutilier, 1993c] C. Boutilier. Revision by conditional beliefs. In R. Fikes and W. Lehnert, editors, Proceedings of the Eleventh National Conference on Artificial Intelligence, Menlo Park, California, 1993. American Association for Artificial Intelligence, AAAI Press.
[Boutilier, 1996] C. Boutilier. Iterated revision and minimal change of conditional beliefs. Journal of Philosophical Logic, 25:263–305, 1996.
[Bowie, 1979] G. Lee Bowie. The similarity approach to counterfactuals: some problems. Noûs, 13:477–497, 1979.
[Burgess, 1979] J. P. Burgess. Quick completeness proofs for some logics of conditionals. Notre Dame Journal of Formal Logic, 22:76–84, 1979.
[Burgess, 1984] J. P. Burgess. Chapter II.2: Basic tense logic. In Handbook of Philosophical Logic. Reidel, Dordrecht, 1984.
[Burks, 1951] A. W. Burks. The logic of causal propositions. Mind, 60:363–382, 1951.
[Butcher, 1978] D. Butcher. Subjunctive conditional modal logic. Ph.D. dissertation, Stanford, 1978.
[Butcher, 1983a] D. Butcher. Consequent-relative subjunctive implication, 1983. Unpublished.
[Butcher, 1983b] D. Butcher. An incompatible pair of subjunctive conditional modal axioms. Philosophical Studies, 44:71–110, 1983.
[Chellas, 1975] B. F. Chellas. Basic conditional logic. Journal of Philosophical Logic, 4:133–153, 1975.


[Chisholm, 1946] R. Chisholm. The contrary-to-fact conditional. Mind, 55:289–307, 1946.
[Clark, 1971] M. Clark. Ifs and hooks. Analysis, 32:33–39, 1971.
[Costello, 1996] T. Costello. Modeling belief change using counterfactuals. In L. Carlucci Aiello, J. Doyle, and S. Shapiro, editors, KR'96: Principles of Knowledge Representation and Reasoning. Morgan Kaufmann, San Francisco, California, 1996.
[Creary and Hill, 1975] L. G. Creary and C. S. Hill. Review of [Lewis, 1973b]. Philosophy of Science, 43:431–344, 1975.
[Crocco and Fariñas del Cerro, 1996] G. Crocco and L. Fariñas del Cerro. Counterfactuals: Foundations for nonmonotonic inferences. In A. Fuhrmann and H. Rott, editors, Logic, Action, and Information: Essays on Logic in Philosophy and Artificial Intelligence. Walter de Gruyter, Berlin, 1996.
[Cross and Thomason, 1987] C. Cross and R. Thomason. Update and conditionals. In Z. Ras and M. Zemankova, editors, Methodologies for Intelligent Systems. North-Holland, Amsterdam, 1987.
[Cross and Thomason, 1992] C. Cross and R. Thomason. Conditionals and knowledge-base update. In P. Gärdenfors, editor, Cambridge Tracts in Theoretical Computer Science: Belief Revision, volume 29. Cambridge University Press, Cambridge, England, 1992.
[Cross, 1985] C. Cross. Jonathan Bennett on `even if'. Linguistics and Philosophy, 8:353–357, 1985.
[Cross, 1990a] C. Cross. Belief revision, nonmonotonic reasoning, and the Ramsey test. In H. Kyburg and R. Loui, editors, Knowledge Representation and Defeasible Reasoning. Kluwer, Boston, 1990.
[Cross, 1990b] C. Cross. Temporal necessity and the conditional. Studia Logica, 49:345–363, 1990.
[Daniels and Freeman, 1980] B. Daniels and J. B. Freeman. An analysis of the subjunctive conditional. Notre Dame Journal of Formal Logic, 21:639–655, 1980.
[Darwiche and Pearl, 1994] A. Darwiche and J. Pearl. On the logic of iterated belief revision. In Ronald Fagin, editor, Theoretical Aspects of Reasoning About Knowledge: Proceedings of the Fifth Conference, San Francisco, 1994. Morgan Kaufmann.
[Davis, 1979] W. Davis. Indicative and subjunctive conditionals. Philosophical Review, 88:544–564, 1979.
[Decew, 1981] J. Decew. Conditional obligation and counterfactuals. Journal of Philosophical Logic, 10(1):55–72, 1981.
[Delgrande, 1988] J. Delgrande. An approach to default reasoning based on a first-order conditional logic: Revised report. Artificial Intelligence, 36:63–90, 1988.
[Delgrande, 1995] J. Delgrande. Syntactic conditional closures for defeasible reasoning. In C. Mellish, editor, Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, San Francisco, 1995. Morgan Kaufmann.
[Döring, 1994] F. Döring. On the probabilities of conditionals. Philosophical Review, 103:689–699, 1994. See Philosophical Review, 105:231, 1996, for corrections.
[Döring, 1997] F. Döring. The Ramsey test and conditional semantics. Journal of Philosophical Logic, 26:359–376, 1997.
[Dudman, 1983] V. Dudman. Tense and time in English verb clusters of the primary pattern. Australasian Journal of Linguistics, 3:25–44, 1983.
[Dudman, 1984] V. Dudman. Parsing `if'-sentences. Analysis, 44:145–153, 1984.
[Dudman, 1984a] V. Dudman. Conditional interpretations of if-sentences. Australian Journal of Linguistics, 4:143–204, 1984.
[Dudman, 1986] V. Dudman. Antecedents and consequents. Theoria, 52:168–199, 1986.
[Dudman, 1989] V. Dudman. Vive la revolution! Mind, 98:591–603, 1989.
[Dudman, 1991] V. Dudman. Jackson classifying conditionals. Analysis, 51:131–136, 1991.
[Dudman, 1994] V. Dudman. On conditionals. Journal of Philosophy, 91:113–128, 1994.
[Dudman, 1994a] V. Dudman. Against the indicative. Australasian Journal of Philosophy, 72:17–26, 1994.
[Dudman, 2000] V. Dudman. Classifying `conditionals': the traditional way is wrong. Analysis, 60:147, 2000.


[Edgington, 1995] D. Edgington. On conditionals. Mind, 104:235–329, 1995.
[Eells and Skyrms, 1994] E. Eells and B. Skyrms, editors. Probability and Conditionals: Belief Revision and Rational Decision. Cambridge University Press, Cambridge, England, 1994.
[Eiter and Gottlob, 1993] T. Eiter and G. Gottlob. The complexity of nested counterfactuals and iterated knowledge base revision. In R. Bajcsy, editor, Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, San Mateo, California, 1993. Morgan Kaufmann.
[Ellis et al., 1977] B. Ellis, F. Jackson, and R. Pargetter. An objection to possible worlds semantics for counterfactual logics. Journal of Philosophical Logic, 6:355–357, 1977.
[Ellis, 1978] Brian Ellis. A unified theory of conditionals. Journal of Philosophical Logic, 7:107–124, 1978.
[Farkas and Sugioka, 1987] D. Farkas and Y. Sugioka. Restrictive if/when clauses. Linguistics and Philosophy, 6:225–258, 1987.
[Fetzer and Nute, 1979] J. H. Fetzer and D. Nute. Syntax, semantics and ontology: a probabilistic causal calculus. Synthese, 40:453–495, 1979.
[Fetzer and Nute, 1980] J. H. Fetzer and D. Nute. A probabilistic causal calculus: conflicting conceptions. Synthese, 44:241–246, 1980.
[Fine, 1975] K. Fine. Review of [Lewis, 1973b]. Mind, 84:451–458, 1975.
[Friedman and Halpern, 1994] N. Friedman and J. Halpern. Conditional logics for belief change. In B. Hayes-Roth and R. Korf, editors, Proceedings of the Twelfth National Conference on Artificial Intelligence, Menlo Park, California, 1994. American Association for Artificial Intelligence, AAAI Press.
[Fuhrmann and Levi, 1994] A. Fuhrmann and I. Levi. Undercutting and the Ramsey test for conditionals. Synthese, 101:157–169, 1994.
[Gabbay, 1972] D. M. Gabbay. A general theory of the conditional in terms of a ternary operator. Theoria, 38:97–104, 1972.
[Galles and Pearl, 1997] D. Galles and J. Pearl. An axiomatic characterization of causal counterfactuals. Technical report, Computer Science Department, UCLA, Los Angeles, California, 1997.
[Gärdenfors et al., 1991] P. Gärdenfors, S. Lindström, M. Morreau, and R. Rabinowicz. The negative Ramsey test: another triviality result. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change. Cambridge University Press, Cambridge, England, 1991.
[Gärdenfors, 1978] P. Gärdenfors. Conditionals and changes of belief. In I. Niiniluoto and R. Tuomela, editors, The Logic and Epistemology of Scientific Change. North Holland, Amsterdam, 1978.
[Gärdenfors, 1979] P. Gärdenfors. Even if. In F. V. Jensen, B. H. Mayoh, and K. K. Moller, editors, Proceedings from 5th Scandinavian Logic Symposium. Aalborg University Press, Aalborg, 1979.
[Gärdenfors, 1982] P. Gärdenfors. Imaging and conditionalization. Journal of Philosophy, 79:747–760, 1982.
[Gärdenfors, 1986] P. Gärdenfors. Belief revisions and the Ramsey test for conditionals. Philosophical Review, 95:81–93, 1986.
[Gärdenfors, 1987] P. Gärdenfors. Variations on the Ramsey test: more triviality results. Studia Logica, 46:321–327, 1987.
[Gärdenfors, 1988] P. Gärdenfors. Knowledge in Flux. MIT Press, Cambridge, MA, 1988.
[Gibbard and Harper, 1981] A. Gibbard and W. Harper. Counterfactuals and two kinds of expected utility. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Ginsberg, 1985] M. Ginsberg. Counterfactuals. In A. Joshi, editor, Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Altos, California, 1985. Morgan Kaufmann.
[Goodman, 1955] N. Goodman. Fact, Fiction and Forecast. Harvard, Cambridge, MA, 1955.
[Grahne and Mendelzon, 1994] G. Grahne and A. Mendelzon. Updates and subjunctive queries. Information and Computation, 116:241–252, 1994.


[Grahne, 1991] G. Grahne. Updates and counterfactuals. In J. Allen, R. Fikes, and E. Sandewall, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference. Morgan Kaufmann, Los Altos, 1991.
[Grice, 1967] P. Grice. Logic and conversation, 1967. The William James Lectures, given at Harvard University.
[Hájek and Hall, 1994] A. Hájek and N. Hall. The hypothesis of the conditional construal of conditional probability. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hájek, 1989] A. Hájek. Probabilities of conditionals–revisited. Journal of Philosophical Logic, 18:423–428, 1989.
[Hájek, 1994] A. Hájek. Triviality on the cheap. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hall, 1994] N. Hall. Back in the CCCP. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994.
[Hansson, 1992] S. Hansson. In defense of the Ramsey test. Journal of Philosophy, 89:499–521, 1992.
[Harper et al., 1981] W. Harper, R. Stalnaker, and G. Pearce, editors. Ifs: conditionals, belief, decision, chance, and time. D. Reidel, Dordrecht, 1981.
[Harper, 1975] W. Harper. Rational belief change, Popper functions and the counterfactuals. Synthese, 30:221–262, 1975.
[Hausman, 1996] D. Hausman. Causation and counterfactual dependence reconsidered. Noûs, 30:55–74, 1996.
[Hawthorne, 1996] J. Hawthorne. On the logic of nonmonotonic conditionals and conditional probabilities. Journal of Philosophical Logic, 25:185–218, 1996.
[Herzberger, 1979] H. Herzberger. Counterfactuals and consistency. Journal of Philosophy, 76:83–88, 1979.
[Hilpinen, 1981] R. Hilpinen. Conditionals and possible worlds. In G. Fløistad, editor, Contemporary Philosophy: A New Survey, volume I: Philosophy of Language/Philosophical Logic. Martinus Nijhoff, The Hague, 1981.
[Hilpinen, 1982] R. Hilpinen. Disjunctive permissions and conditionals with disjunctive antecedents. In I. Niiniluoto and Esa Saarinen, editors, Proceedings of the Second Soviet–Finnish Logic Conference, Moscow, December 1979. Acta Philosophica Fennica, 1982.
[Horgan, 1981] T. Horgan. Counterfactuals and Newcomb's problem. Journal of Philosophy, 78:331–356, 1981.
[Horty and Thomason, 1991] J. Horty and R. Thomason. Conditionals and artificial intelligence. Fundamenta Informaticae, 15:301–324, 1991.
[Humberstone, 1978] I. L. Humberstone. Two merits of the circumstantial operator language for conditional logic. Australasian Journal of Philosophy, 56:21–24, 1978.
[Hunter, 1980] G. Hunter. Conditionals, indicative and subjunctive. In J. Dancey, editor, Papers on Language and Logic. Keele University Library, 1980.
[Hunter, 1982] G. Hunter. Review of [Nute, 1980b]. Mind, 91:136–138, 1982.
[Jackson, 1977] F. Jackson. A causal theory of counterfactuals. Australasian Journal of Philosophy, 55:3–21, 1977.
[Jackson, 1979] F. Jackson. On assertion and indicative conditionals. Philosophical Review, 88:565–589, 1979.
[Jackson, 1987] F. Jackson. Conditionals. Blackwell, Oxford, 1987.
[Jackson, 1990] F. Jackson. Classifying conditionals. Analysis, 50:134–147, 1990.
[Jackson, 1991] F. Jackson, editor. Conditionals. Oxford University Press, Oxford, 1991.
[Jackson, 1991a] F. Jackson. Classifying conditionals II. Analysis, 51:137–143, 1991.
[Katsuno and Mendelzon, 1992] H. Katsuno and A. Mendelzon. Updates and counterfactuals. In P. Gärdenfors, editor, Cambridge Tracts in Theoretical Computer Science: Belief Revision, volume 29. Cambridge University Press, Cambridge, England, 1992.
[Kim, 1973] J. Kim. Causes and counterfactuals. Journal of Philosophy, 70:570–572, 1973.
[Kratzer, 1979] A. Kratzer. Conditional necessity and possibility. In R. Bäuerle, U. Egli, and A. von Stechow, editors, Semantics from Different Points of View. Springer-Verlag, Berlin, 1979.


[Kratzer, 1981] A. Kratzer. Partition and revision: the semantics of counterfactuals. Journal of Philosophical Logic, 10:201–216, 1981.
[Kremer, 1987] M. Kremer. `If' is unambiguous. Noûs, 21:199–217, 1987.
[Kvart, 1980] I. Kvart. Formal semantics for temporal logic and counterfactuals. Logique et Analyse, 23:35–62, 1980.
[Kvart, 1986] I. Kvart. A Theory of Counterfactuals. Hackett, Indianapolis, 1986.
[Kvart, 1987] I. Kvart. Putnam's counterexample to `A theory of counterfactuals'. Philosophical Papers, 16:235–239, 1987.
[Kvart, 1991] I. Kvart. Counterfactuals and causal relevance. Pacific Philosophical Quarterly, 72:314–337, 1991.
[Kvart, 1992] I. Kvart. Counterfactuals. Erkenntnis, 36:139–179, 1992.
[Kvart, 1994] I. Kvart. Counterfactual ambiguities, true premises and knowledge. Synthese, 100:133–164, 1994.
[Lance, 1991] M. Lance. Probabilistic dependence among conditionals. Philosophical Review, 100:269–276, 1991.
[Lehmann and Magidor, 1992] D. Lehmann and M. Magidor. What does a conditional knowledge base entail? Artificial Intelligence, 55:1–60, 1992.
[Levi, 1977] I. Levi. Subjunctives, dispositions and chances. Synthese, 34:423–455, 1977.
[Levi, 1988] I. Levi. Iteration of conditionals and the Ramsey test. Synthese, 76:49–81, 1988.
[Levi, 1996] I. Levi. For the Sake of the Argument: Ramsey Test Conditionals, Inductive Inference, and Nonmonotonic Reasoning. Cambridge University Press, Cambridge, England, 1996.
[Lewis, 1971] D. Lewis. Completeness and decidability of three logics of counterfactual conditionals. Theoria, 37:74–85, 1971.
[Lewis, 1973a] D. Lewis. Causation. Journal of Philosophy, 70:556–567, 1973.
[Lewis, 1973b] D. Lewis. Counterfactuals. Harvard, Cambridge, MA, 1973.
[Lewis, 1973c] D. Lewis. Counterfactuals and comparative possibility. Journal of Philosophical Logic, 2:418–446, 1973.
[Lewis, 1976] D. Lewis. Probabilities of conditionals and conditional probabilities. Philosophical Review, 85:297–315, 1976.
[Lewis, 1977] D. Lewis. Possible world semantics for counterfactual logics: a rejoinder. Journal of Philosophical Logic, 6:359–363, 1977.
[Lewis, 1979a] D. Lewis. Counterfactual dependence and time's arrow. Noûs, 13:455–476, 1979.
[Lewis, 1979b] D. Lewis. Scorekeeping in a language game. Journal of Philosophical Logic, 8:339–359, 1979.
[Lewis, 1981a] D. Lewis. Ordering semantics and premise semantics for counterfactuals. Journal of Philosophical Logic, 10:217–234, 1981.
[Lewis, 1981b] D. Lewis. A subjectivist's guide to objective chance. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981.
[Lewis, 1986] D. Lewis. Probabilities of conditionals and conditional probabilities II. Philosophical Review, 95:581–589, 1986.
[Lindström and Rabinowicz, 1992] S. Lindström and W. Rabinowicz. Belief revision, epistemic conditionals, and the Ramsey test. Synthese, 91:195–237, 1992.
[Lindström and Rabinowicz, 1995] S. Lindström and W. Rabinowicz. The Ramsey test revisited. In G. Crocco, L. Fariñas del Cerro, and A. Herzig, editors, Conditionals: From Philosophy to Computer Science. Oxford University Press, Oxford, 1995.
[Lindström, 1996] S. Lindström. The Ramsey test and the indexicality of conditionals: A proposed resolution of Gärdenfors' paradox. In A. Fuhrmann and H. Rott, editors, Logic, Action, and Information: Essays on Logic in Philosophy and Artificial Intelligence. Walter de Gruyter, Berlin, 1996.
[Loewer, 1976] B. Loewer. Counterfactuals with disjunctive antecedents. Journal of Philosophy, 73:531–536, 1976.
[Loewer, 1978] B. Loewer. Cotenability and counterfactual logics. Journal of Philosophical Logic, 8:99–116, 1978.
[Lowe, 1991] E.J. Lowe. Jackson on classifying conditionals. Analysis, 51:126–130, 1991.


[Makinson, 1989] D. Makinson. General theory of cumulative inference. In M. Reinfrank, J. de Kleer, and M. Ginsberg, editors, Lecture Notes in Artificial Intelligence: Non-Monotonic Reasoning, volume 346. Springer-Verlag, Berlin, 1989.
[Makinson, 1990] D. Makinson. The Gärdenfors impossibility theorem in nonmonotonic contexts. Studia Logica, 49:1–6, 1990.
[Mayer, 1981] J. C. Mayer. A misplaced thesis of conditional logic. Journal of Philosophical Logic, 10:235–238, 1981.
[McDermott, 1996] M. McDermott. On the truth conditions of certain `if'-sentences. The Philosophical Review, 105:1–37, 1996.
[McGee, 1981] V. McGee. Finite matrices and the logic of conditionals. Journal of Philosophical Logic, 10:349–351, 1981.
[McGee, 1985] V. McGee. A counterexample to modus ponens. Journal of Philosophy, 82:462–471, 1985.
[McGee, 1989] V. McGee. Conditional probabilities and compounds of conditionals. Philosophical Review, 98:485–541, 1989.
[McGee, 2000] V. McGee. To tell the truth about conditionals. Analysis, 60:107–111, 2000.
[McKay and Inwagen, 1977] T. McKay and P. Van Inwagen. Counterfactuals with disjunctive antecedents. Philosophical Studies, 31:353–356, 1977.
[Mellor, 1993] D.H. Mellor. How to believe a conditional. Journal of Philosophy, 90:233–248, 1993.
[Moore, 1983] R. Moore. Semantical considerations on nonmonotonic logic. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, volume 1. Morgan Kaufmann, San Mateo, 1983.
[Morreau, 1992] M. Morreau. Epistemic semantics for counterfactuals. Journal of Philosophical Logic, 21:33–62, 1992.
[Morreau, 1997] M. Morreau. Fainthearted conditionals. The Journal of Philosophy, 94:187–211, 1997.
[Nute and Mitcheltree, 1982] D. Nute and W. Mitcheltree. Review of [Adams, 1975b]. Noûs, 15:432–436, 1982.
[Nute, 1975a] D. Nute. Counterfactuals. Notre Dame Journal of Formal Logic, 16:476–482, 1975.
[Nute, 1975b] D. Nute. Counterfactuals and the similarity of worlds. Journal of Philosophy, 72:773–778, 1975.
[Nute, 1977] D. Nute. Scientific law and nomological conditionals. Technical report, National Science Foundation, 1977.
[Nute, 1978a] D. Nute. An incompleteness theorem for conditional logic. Notre Dame Journal of Formal Logic, 19:634–636, 1978.
[Nute, 1978b] D. Nute. Simplification and substitution of counterfactual antecedents. Philosophia, 7:317–326, 1978.
[Nute, 1979] D. Nute. Algebraic semantics for conditional logics. Reports on Mathematical Logic, 10:79–101, 1979.
[Nute, 1980a] D. Nute. Conversational scorekeeping and conditionals. Journal of Philosophical Logic, 9:153–166, 1980.
[Nute, 1980b] D. Nute. Topics in Conditional Logic. Reidel, Dordrecht, 1980.
[Nute, 1981a] D. Nute. Causes, laws and law statements. Synthese, 48:347–370, 1981.
[Nute, 1981b] D. Nute. Introduction. Journal of Philosophical Logic, 10:127–147, 1981.
[Nute, 1981c] D. Nute. Review of [Pollock, 1976]. Noûs, 15:212–219, 1981.
[Nute, 1982 and 1991] D. Nute. Tense and conditionals. Technical report, Deutsche Forschungsgemeinschaft and the University of Georgia, 1982 and 1991.
[Nute, 1983] D. Nute. Review of [Harper et al., 1981]. Philosophy of Science, 50:518–520, 1983.
[Nute, 1991] D. Nute. Historical necessity and conditionals. Noûs, 25:161–175, 1991.
[Nute, 1994] D. Nute. Defeasible logic. In D. Gabbay and C. Hogger, editors, Handbook of Logic for Artificial Intelligence and Logic Programming, volume III. Oxford University Press, Oxford, 1994.

CONDITIONAL LOGIC

97

[Pearl, 1994] J. Pearl. From Adams' conditionals to default expressions, causal conditionals, and counterfactuals. In E. Eells and B. Skyrms, editors, Probability and Conditionals: Belief Revision and Rational Decision. Cambridge University Press, Cambridge, England, 1994. [Pearl, 1995] J. Pearl. Causation, action, and counterfactuals. In A. Gammerman, editor, Computational Learning and Probabilistic Learning. John Wiley and Sons, New York, 1995. [Pollock, 1976] J. Pollock. Subjunctive Reasoning. Reidel, Dordrecht, 1976. [Pollock, 1981] J. Pollock. A re ned theory of counterfactuals. Journal of Philosophical Logic, 10:239{266, 1981. [Pollock, 1984] J. Pollock. Knowledge and Justi cation. Princeton University Press, Princeton, 1984. [Posch, 1980] G. Posch. Zur Semantik der Kontrafaktischen Konditionale. Narr, Tuebingen, 1980. [Post, 1981] J. Post. Review of [Pollock, 1976]. Philosophia, 9:405{420, 1981. [Ramsey, 1990] F. Ramsey. Philosophical Papers. Cambridge University Press, Cambridge, England, 1990. [Rescher, 1964] N. Rescher. Hypothetical Reasoning. Reidel, Dordrecht, 1964. [Rott, 1986] H. Rott. Ifs, though, and because. Erkenntnis, 25:345{370, 1986. [Rott, 1989] H. Rott. Conditionals and theory change: revisions, expansions, and additions. Synthese, 81:91{113, 1989. [Rott, 1991] H. Rott. A nonmonotonic conditional logic for belief revision. In A. Fuhrmann and M. Morreau, editors, The Logic of Theory Change. Cambridge University Press, Cambridge, England, 1991. [Sanford, 1992] D. Sanford. If P then Q. Routledge, London, 1992. [Schlechta and Makinson, 1994] K. Schlechta and D. Makinson. Local and global metrics for the semantics of counterfactual conditionals. Journal of applied Non-Classical Logics, 4:129{140, 1994. [Segerberg, 1968] K. Segerberg. Propositional logics related to Heyting's and Johansson's. Theoria, 34:26{61, 1968. [Segerberg, 1989] K. Segerberg. A note on an impossibility theorem of Gardenfors. No^us, 23:351{354, 1989. [Sellars, 1958] W. S. 
Sellars. Counterfactuals, dispositions and the causal modalities. In Feigl, Scriven, and Maxwell, editors, Minnesota Studies in the Philosophy of Science, volume 2. University of Minnesota, Minneapolis, 1958. [Slote, 1978] M. A. Slote. Time and counterfactuals. Philosophical Review, 87:3{27, 1978. [Stalnaker and Jerey, 1994] R. Stalnaker and R. Jerey. Conditionals as random variables. In E. Eells and B. Skyrms, editors, Probability and Conditionals. Cambridge University Press, Cambridge, England, 1994. [Stalnaker and Thomason, 1970] R. Stalnaker and R. Thomason. A semantical analysis of conditional logic. Theoria, 36:23{42, 1970. [Stalnaker, 1968] R. Stalnaker. A theory of conditionals. In N. Rescher, editor, Studies in Logical Theory. American Philosophical quarterly Monograph Series, No. 2, Blackwell, Oxford, 1968. Reprinted in [Harper et al., 1981]. [Stalnaker, 1970] R. Stalnaker. Probabilities and conditionals. Philosophy of Science, 28:64{80, 1970. [Stalnaker, 1975] R. Stalnaker. Indicative conditionals. Philosophia, 5:269{286, 1975. [Stalnaker, 1976] R. Stalnaker. Stalnaker to Van Fraassen. In C. Hooker and W. Harper, editors, Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science. Reidel, Dordrecht, 1976. [Stalnaker, 1981a] R. Stalnaker. A defense of conditional excluded middle. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981. [Stalnaker, 1981b] R. Stalnaker. Letter to David Lewis. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981. [Stalnaker, 1984] R. Stalnaker. Inquiry. MIT Press, Cambridge, MA, 1984. [Swain, 1978] M. Swain. A counterfactual analysis of event causation. Philosophical Studies, 34:1{19, 1978.

98

DONALD NUTE AND CHARLES B. CROSS

[Thomason and Gupta, 1981] R. Thomason and A. Gupta. A theory of conditionals in the context of branching time. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981. [Thomason, 1985] R. Thomason. Note on tense and subjunctive conditionals. Philosophy of Science, pages 151{153, 1985. [Traugott et al., 1986] E. Traugott, A. ter Meulen, J. Reilly, and C. Ferguson, editors. On Conditionals. Cambridge University Press, Cambridge, England, 1986. [Turner, 1981] R. Turner. Counterfactuals without possible worlds. Journal of Philosophical Logic, 10:453{493, 1981. [van Benthem, 1984] J. van Benthem. Foundations of conditional logic. Journal of Philosophical Logic, 13:303{349, 1984. [Van Fraassen, 1974] B. C. Van Fraassen. Hidden variables in conditional logic. Theoria, 40:176{190, 1974. [Van Fraassen, 1976] B. C. Van Fraassen. Probabilities of conditionals. In C. Hooker and W. Harper, editors, Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science. Reidel, Dordrecht, 1976. [Van Fraassen, 1981] B. C. Van Fraassen. A temporal framework for conditionals and chance. In W. Harper, R. Stalnaker, and G. Pearce, editors, Ifs. Reidel, Dordrecht, 1981. [Veltman, 1976] F. Veltman. Prejudices, presuppositions and the theory of conditionals. In J. Groenendijk and M. Stokhof, editors, Amsterdam Papers in Formal Grammar. Vol. 1, Centrale Interfaculteit, Universiteit van Amsterdam, 1976. [Veltman, 1985] Frank Veltman. Logics for Conditionals. Ph.D. dissertation, University of Amsterdam, Amsterdam, 1985. [Warmbrod, 1981] Warmbrod. Counterfactuals and substitution of equivalent antecedents. Journal of Philosophical Logic, 10:267{289, 1981. [Winslett, 1990] M. Winslett. Updating Logical Databases. Cambridge University Press, Cambridge, England, 1990. [Woods, 1997] M. Woods. Conditionals. Oxford University Press, Oxford, 1997. Published posthumously. Edited by D. Wiggins, with a commentary by D. Edgington.

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

DYNAMIC LOGIC

PREFACE

Dynamic Logic (DL) is a formal system for reasoning about programs. Traditionally, this has meant formalizing correctness specifications and proving rigorously that those specifications are met by a particular program. Other activities fall into this category as well: determining the equivalence of programs, comparing the expressive power of various programming constructs, synthesizing programs from specifications, etc. Formal systems too numerous to mention have been proposed for these purposes, each with its own peculiarities. DL can be described as a blend of three complementary classical ingredients: first-order predicate logic, modal logic, and the algebra of regular events. These components merge to form a system of remarkable unity that is theoretically rich as well as practical.

The name Dynamic Logic emphasizes the principal feature distinguishing it from classical predicate logic. In the latter, truth is static: the truth value of a formula φ is determined by a valuation of its free variables over some structure. The valuation and the truth value of φ it induces are regarded as immutable; there is no formalism relating them to any other valuations or truth values. In Dynamic Logic, there are explicit syntactic constructs called programs whose main role is to change the values of variables, thereby changing the truth values of formulas. For example, the program x := x + 1 over the natural numbers changes the truth value of the formula "x is even".

Such changes occur on a metalogical level in classical predicate logic. For example, in Tarski's definition of truth of a formula, if u : {x, y, …} → N is a valuation of variables over the natural numbers N, then the formula ∃x x² = y is defined to be true under the valuation u iff there exists an a ∈ N such that the formula x² = y is true under the valuation u[x/a], where u[x/a] agrees with u everywhere except x, on which it takes the value a.
This definition involves a metalogical operation that produces u[x/a] from u for all possible values a ∈ N. This operation becomes explicit in DL in the form of the program x := ?, called a nondeterministic or wildcard assignment. This is a rather unconventional program, since it is not effective; however, it is quite useful as a descriptive tool. A more conventional way to obtain a square root of y, if it exists, would be the program

(1) x := 0 ; while x² < y do x := x + 1.
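Program (1) can be transcribed directly into a conventional language. A minimal Python sketch (the function wrapper and the name `sqrt_search` are ours, not from the text):

```python
def sqrt_search(y):
    """Program (1): x := 0 ; while x^2 < y do x := x + 1.

    Over the natural numbers this halts with the least x such that
    x * x >= y; when y is a perfect square, that x is its square root.
    """
    x = 0
    while x * x < y:
        x = x + 1
    return x
```

On inputs that are not perfect squares the loop still halts, but with the least x whose square meets or exceeds y, which is why the text says "a square root of y, if it exists".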

In DL, such programs are first-class objects on a par with formulas, complete with a collection of operators for forming compound programs inductively


from a basis of primitive programs. To discuss the effect of the execution of a program α on the truth of a formula φ, DL uses a modal construct ⟨α⟩φ, which intuitively states, "It is possible to execute α starting from the current state and halt in a state satisfying φ." There is also the dual construct [α]φ, which intuitively states, "If α halts when started in the current state, then it does so in a state satisfying φ." For example, the first-order formula ∃x x² = y is equivalent to the DL formula ⟨x := ?⟩ x² = y. In order to instantiate the quantifier effectively, we might replace the nondeterministic assignment inside the ⟨ ⟩ with the while program (1); over N, the two formulas would be equivalent.

Apart from the obvious heavy reliance on classical logic, computability theory and programming, the subject has its roots in the work of [Thiele, 1966] and [Engeler, 1967] in the late 1960s, who were the first to advance the idea of formulating and investigating formal systems dealing with properties of programs in an abstract setting. Research in program verification flourished thereafter with the work of many researchers, notably [Floyd, 1967], [Hoare, 1969], [Manna, 1974], and [Salwicki, 1970]. The first precise development of a DL-like system was carried out by [Salwicki, 1970], following [Engeler, 1967]. This system was called Algorithmic Logic. A similar system, called Monadic Programming Logic, was developed by [Constable, 1977]. Dynamic Logic, which emphasizes the modal nature of the program/assertion interaction, was introduced by [Pratt, 1976].

Background material on mathematical logic, computability, formal languages and automata, and program verification can be found in [Shoenfield, 1967] (logic), [Rogers, 1967] (recursion theory), [Kozen, 1997a] (formal languages, automata, and computability), [Keisler, 1971] (infinitary logic), [Manna, 1974] (program verification), and [Harel, 1992; Lewis and Papadimitriou, 1981; Davis et al., 1994] (computability and complexity). Much of this introductory material as it pertains to DL can be found in the authors' text [Harel et al., 2000].

There are by now a number of books and survey papers treating logics of programs, program verification, and Dynamic Logic [Apt and Olderog, 1991; Backhouse, 1986; Harel, 1979; Harel, 1984; Parikh, 1981; Goldblatt, 1982; Goldblatt, 1987; Knijnenburg, 1988; Cousot, 1990; Emerson, 1990; Kozen and Tiuryn, 1990]. In particular, much of this chapter is an abbreviated summary of material from the authors' text [Harel et al., 2000], to which we refer the reader for a more complete treatment. Full proofs of many of the theorems cited in this chapter can be found there, as well as extensive introductory material on logic and complexity along with numerous examples and exercises.


1 REASONING ABOUT PROGRAMS

1.1 Programs

For us, a program is a recipe written in a formal language for computing desired output data from given input data.

EXAMPLE 1. The following program implements the Euclidean algorithm for calculating the greatest common divisor (gcd) of two integers. It takes as input a pair of integers in variables x and y and outputs their gcd in variable x:

while y ≠ 0 do begin z := x mod y; x := y; y := z end

The value of the expression x mod y is the (nonnegative) remainder obtained when dividing x by y using ordinary integer division.

Programs normally use variables to hold input and output values and intermediate results. Each variable can assume values from a specific domain of computation, which is a structure consisting of a set of data values along with certain distinguished constants, basic operations, and tests that can be performed on those values, as in classical first-order logic. In the program above, the domain of x, y, and z might be the integers Z along with basic operations including integer division with remainder and tests including ≠. In contrast with the usual use of variables in mathematics, a variable in a program normally assumes different values during the course of the computation. The value of a variable x may change whenever an assignment x := t is performed with x on the left-hand side.

In order to make these notions precise, we will have to specify the programming language and its semantics in a mathematically rigorous way. In this section we give a brief introduction to some of these languages and the role they play in program verification.
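For concreteness, the while program of Example 1 can be transcribed into Python. This is a sketch (the wrapper and the name `euclid_gcd` are ours); note that Python's `%` yields the nonnegative remainder when the divisor is positive, matching `mod` above:

```python
def euclid_gcd(x, y):
    """Example 1: input in x and y; the gcd is left in x when the loop exits."""
    while y != 0:
        z = x % y  # x mod y
        x = y
        y = z
    return x
```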

1.2 States and Executions

As mentioned above, a program can change the values of variables as it runs. However, if we could freeze time at some instant during the execution of the program, we could presumably read the values of the variables at that instant, and that would give us an instantaneous snapshot of all information that we would need to determine how the computation would proceed from that point. This leads to the concept of a state: intuitively, an instantaneous description of reality.


Formally, we will define a state to be a function that assigns a value to each program variable. The value for variable x must belong to the domain associated with x. In logic, such a function is called a valuation. At any given instant in time during its execution, the program is thought to be "in" some state, determined by the instantaneous values of all its variables. If an assignment statement is executed, say x := 2, then the state changes to a new state in which the new value of x is 2 and the values of all other variables are the same as they were before. We assume that this change takes place instantaneously; note that this is a mathematical abstraction, since in reality basic operations take some time to execute.

A typical state for the gcd program above is (15, 27, 0, …), where (say) the first, second, and third components of the sequence denote the values assigned to x, y, and z respectively. The ellipsis "…" refers to the values of the other variables, which we do not care about, since they do not occur in the program.

A program can be viewed as a transformation on states. Given an initial (input) state, the program will go through a series of intermediate states, perhaps eventually halting in a final (output) state. A sequence of states that can occur from the execution of a program starting from a particular input state is called a trace. As a typical example of a trace for the program above, consider the initial state (15, 27, 0) (we suppress the ellipsis). The program goes through the following sequence of states:

(15,27,0), (15,27,15), (27,27,15), (27,15,15), (27,15,12), (15,15,12), (15,12,12), (15,12,3), (12,12,3), (12,3,3), (12,3,0), (3,3,0), (3,0,0).

The value of x in the last (output) state is 3, the gcd of 15 and 27.
The binary relation consisting of the set of all pairs of the form (input state, output state) that can occur from the execution of a program α, or in other words, the set of all first and last states of traces of α, is called the input/output relation of α. For example, the pair ((15, 27, 0), (3, 0, 0)) is a member of the input/output relation of the gcd program above, as is the pair ((−6, 4, 303), (2, 0, 0)). The values of other variables besides x, y, and z are not changed by the program. These values are therefore the same in the output state as in the input state. In this example, we may think of the variables x and y as the input variables, x as the output variable, and z as a work variable, although formally there is no distinction between any of the variables, including the ones not occurring in the program.
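Both notions, traces and the input/output relation, can be illustrated by instrumenting the gcd program to record its states. A sketch (helper name ours), recording a new state after each assignment; the first and last entries of the returned trace form a pair in the input/output relation:

```python
def gcd_trace(x, y, z=0):
    """Return the trace of the gcd program: the list of states (x, y, z)
    visited, one new state per assignment executed."""
    states = [(x, y, z)]
    while y != 0:
        z = x % y; states.append((x, y, z))
        x = y;     states.append((x, y, z))
        y = z;     states.append((x, y, z))
    return states

trace = gcd_trace(15, 27)
io_pair = (trace[0], trace[-1])  # ((15, 27, 0), (3, 0, 0))
```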

1.3 Programming Constructs

In subsequent sections we will consider a number of programming constructs. In this section we introduce some of these constructs and define a few general classes of languages built on them.


In general, programs are built inductively from atomic programs and tests using various program operators.

While Programs

A popular choice of programming language in the literature on DL is the family of deterministic while programs. This language is a natural abstraction of familiar imperative programming languages such as Pascal or C. Different versions can be defined depending on the choice of tests allowed and whether or not nondeterminism is permitted.

The language of while programs is defined inductively. There are atomic programs and atomic tests, as well as program constructs for forming compound programs from simpler ones. In the propositional version of Dynamic Logic (PDL), atomic programs are simply letters a, b, … from some alphabet. Thus PDL abstracts away from the nature of the domain of computation and studies the pure interaction between programs and propositions. For the first-order versions of DL, atomic programs are simple assignments x := t, where x is a variable and t is a term. In addition, a nondeterministic or wildcard assignment x := ? or nondeterministic choice construct may be allowed.

Tests can be atomic tests, which for propositional versions are simply propositional letters p, and for first-order versions are atomic formulas p(t1, …, tn), where t1, …, tn are terms and p is an n-ary relation symbol in the vocabulary of the domain of computation. In addition, we include the constant tests 1 and 0. Boolean combinations of atomic tests are often allowed, although this adds no expressive power. These versions of DL are called poor test. More complicated tests can also be included. These versions of DL are sometimes called rich test. In rich test versions, the families of programs and tests are defined by mutual induction.

Compound programs are formed from the atomic programs and tests by induction, using the composition, conditional, and while operators. Formally, if φ is a test and α and β are programs, then the following are programs:

α ; β
if φ then α else β
while φ do α.

We can also parenthesize with begin … end where necessary. The gcd program of Example 1 above is an example of a while program. The semantics of these constructs is defined to correspond to the ordinary operational semantics familiar from common programming languages.


Regular Programs

Regular programs are more general than while programs, but not by much. The advantage of regular programs is that they reduce the relatively more complicated while program operators to much simpler constructs. The deductive system becomes comparatively simpler too. They also incorporate a simple form of nondeterminism. For a given set of atomic programs and tests, the set of regular programs is defined as follows:

(i) any atomic program is a program;
(ii) if φ is a test, then φ? is a program;
(iii) if α and β are programs, then α ; β is a program;
(iv) if α and β are programs, then α ∪ β is a program;
(v) if α is a program, then α* is a program.

These constructs have the following intuitive meaning:

(i) Atomic programs are basic and indivisible; they execute in a single step. They are called atomic because they cannot be decomposed further.
(ii) The program φ? tests whether the property φ holds in the current state. If so, it continues without changing state. If not, it blocks without halting.
(iii) The operator ; is the sequential composition operator. The program α ; β means, "Do α, then do β."
(iv) The operator ∪ is the nondeterministic choice operator. The program α ∪ β means, "Nondeterministically choose one of α or β and execute it."
(v) The operator * is the iteration operator. The program α* means, "Execute α some nondeterministically chosen finite number of times."

Keep in mind that these descriptions are meant only as intuitive aids. A formal semantics will be given in Section 2.2, in which programs will be interpreted as binary input/output relations and the programming constructs above as operators on binary relations. The operators ∪, ;, * may be familiar from automata and formal language theory (see [Kozen, 1997a]), where they are interpreted as operators on sets of strings over a finite alphabet. The language-theoretic and relation-theoretic semantics share much in common; in fact, they have the same equational theory, as shown in [Kozen, 1994a].


The operators of deterministic while programs can be defined in terms of the regular operators:

(2) if φ then α else β  def=  (φ? ; α) ∪ (¬φ? ; β)
(3) while φ do α  def=  (φ? ; α)* ; ¬φ?
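Definitions (2) and (3) can be read as purely syntactic transformations. A sketch, using an ad hoc tuple encoding of regular programs (the encoding and function names are ours, not from the text):

```python
def if_then_else(phi, alpha, beta):
    """(2): if phi then alpha else beta = (phi? ; alpha) ∪ (¬phi? ; beta)."""
    return ('choice', ('seq', ('test', phi), alpha),
                      ('seq', ('test', ('not', phi)), beta))

def while_do(phi, alpha):
    """(3): while phi do alpha = (phi? ; alpha)* ; ¬phi?."""
    return ('seq', ('star', ('seq', ('test', phi), alpha)),
                   ('test', ('not', phi)))
```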

The class of while programs is equivalent to the subclass of the regular programs in which the program operators ∪, ?, and * are constrained to appear only in these forms.

Recursion

Recursion can appear in programming languages in several forms. Two such manifestations are recursive calls and stacks. Under certain very general conditions, the two constructs can simulate each other. It can also be shown that recursive programs and while programs are equally expressive over the natural numbers, whereas over arbitrary domains, while programs are strictly weaker. While programs correspond to what is often called tail recursion or iteration.

R.E. Programs

A finite computation sequence of a program α, or seq for short, is a finite-length string of atomic programs and tests representing a possible sequence of atomic steps that can occur in a halting execution of α. Seqs are denoted σ, τ, …. The set of all seqs of a program α is denoted CS(α). We use the word "possible" loosely: CS(α) is determined by the syntax of α alone. Because of tests that evaluate to false, CS(α) may contain seqs that are never executed under any interpretation.

The set CS(α) is a subset of A*, where A is the set of atomic programs and tests occurring in α. For while programs, regular programs, or recursive programs, we can define the set CS(α) formally by induction on syntax. For example, for regular programs,

CS(a)  def=  {a},  a an atomic program or test
CS(skip)  def=  {ε}
CS(fail)  def=  ∅
CS(α ; β)  def=  {σ ; τ | σ ∈ CS(α), τ ∈ CS(β)}
CS(α ∪ β)  def=  CS(α) ∪ CS(β)
CS(α*)  def=  ⋃_{n≥0} CS(αⁿ)


where

α⁰  def=  skip
αⁿ⁺¹  def=  αⁿ ; α.

For example, if a is an atomic program and p an atomic formula, then the program

while p do a  =  (p? ; a)* ; ¬p?

has as seqs all strings of the form

(p? ; a)ⁿ ; ¬p?  =  p? ; a ; p? ; a ; ⋯ ; p? ; a ; ¬p?  (with n occurrences of p? ; a)

for all n ≥ 0. Note that each seq σ of a program α is itself a program, and CS(σ) = {σ}.
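The inductive definition of CS can be run directly, except that CS(α*) is infinite. The sketch below (the tuple encoding and the name `seqs` are ours) unrolls the star a bounded number of times, so it enumerates only the seqs with at most `depth` iterations of each loop:

```python
def seqs(prog, depth=2):
    """Enumerate computation sequences of a regular program, encoded as
    nested tuples ('atom', a), ('test', p), ('seq', a, b), ('choice', a, b),
    ('star', a). Each seq is a tuple of atomic steps; '*' is unrolled at
    most `depth` times, so this is CS restricted to bounded iteration."""
    op = prog[0]
    if op == 'atom':
        return {(prog[1],)}
    if op == 'test':
        return {(prog[1] + '?',)}
    if op == 'seq':
        return {s + t for s in seqs(prog[1], depth) for t in seqs(prog[2], depth)}
    if op == 'choice':
        return seqs(prog[1], depth) | seqs(prog[2], depth)
    if op == 'star':
        body, result = seqs(prog[1], depth), {()}
        for _ in range(depth):
            result |= {s + t for s in result for t in body}
        return result
    raise ValueError('unknown operator: %r' % op)
```

For while p do a, encoded as in (3), depth 2 yields exactly three seqs, corresponding to zero, one, and two executions of the loop body.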

While programs and regular programs give rise to regular sets of seqs, and recursive programs give rise to context-free sets of seqs. Taking this a step further, we can define an r.e. program to be simply a recursively enumerable set of seqs. This is the most general programming language we will consider in the context of DL; it subsumes all the others in expressive power.

Nondeterminism

We should say a few words about the concept of nondeterminism and its role in the study of logics and languages, since this concept often presents difficulty the first time it is encountered. In some programming languages we will consider, the traces of a program need not be uniquely determined by their start states. When this is possible, we say that the program is nondeterministic. A nondeterministic program can have both divergent and convergent traces starting from the same input state, and for such programs it does not make sense to say that the program halts on a certain input state or that it loops on a certain input state; there may be different computations starting from the same input state that do each.

There are several concrete ways nondeterminism can enter into programs. One construct is the nondeterministic or wildcard assignment x := ?. Intuitively, this operation assigns an arbitrary element of the domain to the variable x, but it is not determined which one.¹ Another source of nondeterminism is the unconstrained use of the choice operator ∪ in regular

¹ This construct is often called random assignment in the literature. This terminology is misleading, because it has nothing at all to do with probability.


programs. A third source is the iteration operator * in regular programs. A fourth source is r.e. programs, which are just r.e. sets of seqs; initially, the seq to execute is chosen nondeterministically. For example, over N, the r.e. program

{x := n | n ≥ 0}

is equivalent to the regular program

x := 0 ; (x := x + 1)*.

Nondeterministic programs provide no explicit mechanism for resolving the nondeterminism. That is, there is no way to determine which of many possible next steps will be taken from a given state. This is hardly realistic. So why study nondeterminism at all if it does not correspond to anything operational? One good answer is that nondeterminism is a valuable tool that helps us understand the expressiveness of programming language constructs. It is useful in situations in which we cannot necessarily predict the outcome of a particular choice, but we may know the range of possibilities. In reality, computations may depend on information that is out of the programmer's control, such as input from the user or actions of other processes in the system. Nondeterminism is useful in modeling such situations. The importance of nondeterminism is not limited to logics of programs. Indeed, the most important open problem in the field of computational complexity theory, the P = NP problem, is formulated in terms of nondeterminism.

1.4 Program Verification

Dynamic Logic and other program logics are meant to be useful tools for facilitating the process of producing correct programs. One need only look at the miasma of buggy software to understand the dire need for such tools. But before we can produce correct software, we need to know what it means for it to be correct. It is not good enough to have some vague idea of what is supposed to happen when a program is run or to observe it running on some collection of inputs. In order to apply formal verification tools, we must have a formal specification of correctness for the verification tools to work with.

In general, a correctness specification is a formal description of how the program is supposed to behave. A given program is correct with respect to a correctness specification if its behavior fulfills that specification. For the gcd program of Example 1, the correctness might be specified informally by the assertion

If the input values of x and y are positive integers c and d, respectively, then


(i) the output value of x is the gcd of c and d, and
(ii) the program halts.

Of course, in order to work with a formal verification system, these properties must be expressed formally in a language such as first-order logic. The assertion (ii) is part of the correctness specification because programs do not necessarily halt, but may produce infinite traces for certain inputs. A finite trace, as for example the one produced by the gcd program above on input state (15,27,0), is called halting, terminating, or convergent. Infinite traces are called looping or divergent. For example, the program

while x > 7 do x := x + 3

loops on input state (8, …), producing the infinite trace (8, …), (11, …), (14, …), ….

Dynamic Logic can reason about the behavior of a program that is manifested in its input/output relation. It is not well suited to reasoning about program behavior manifested in intermediate states of a computation (although there are close relatives, such as Process Logic and Temporal Logic, that are). This is not to say that all interesting program behavior is captured by the input/output relation, and that other types of behavior are irrelevant or uninteresting. Indeed, the restriction to input/output relations is reasonable only when programs are supposed to halt after a finite time and yield output results. This approach will not be adequate for dealing with programs that normally are not supposed to halt, such as operating systems.

For programs that are supposed to halt, correctness criteria are traditionally given in the form of an input/output specification consisting of a formal relation between the input and output states that the program is supposed to maintain, along with a description of the set of input states on which the program is supposed to halt. The input/output relation of a program carries all the information necessary to determine whether the program is correct relative to such a specification. Dynamic Logic is well suited to this type of verification.

It is not always obvious what the correctness specification ought to be. Sometimes, producing a formal specification of correctness is as difficult as producing the program itself, since both must be written in a formal language. Moreover, specifications are as prone to bugs as programs. Why bother then? Why not just implement the program with some vague specification in mind? There are several good reasons for taking the effort to produce formal specifications:

DYNAMIC LOGIC

109

1. Often when implementing a large program from scratch, the programmer may have been given only a vague idea of what the finished product is supposed to do. This is especially true when producing software for a less technically inclined employer. There may be a rough informal description available, but the minor details are often left to the programmer. It is very often the case that a large part of the programming process consists of taking a vaguely specified problem and making it precise. The process of formulating the problem precisely can be considered a definition of what the program is supposed to do. And it is just good programming practice to have a very clear idea of what we want to do before we start doing it.

2. In the process of formulating the specification, several unforeseen cases may become apparent, for which it is not clear what the appropriate action of the program should be. This is especially true with error handling and other exceptional situations. Formulating a specification can define the action of the program in such situations and thereby tie up loose ends.

3. The process of formulating a rigorous specification can sometimes suggest ideas for implementation, because it forces us to isolate the issues that drive design decisions. When we know all the ways our data are going to be accessed, we are in a better position to choose the right data structures that optimize the tradeoffs between efficiency and generality.

4. The specification is often expressed in a language quite different from the programming language. The specification is functional (it tells what the program is supposed to do) as opposed to imperative (how to do it). It is often easier to specify the desired functionality independent of the details of how it will be implemented. For example, we can quite easily express what it means for a number x to be the gcd of y and z in first-order logic without even knowing how to compute it.

5. Verifying that a program meets its specification is a kind of sanity check. It allows us to give two solutions to the problem, once as a functional specification and once as an algorithmic implementation, and lets us verify that the two are compatible. Any incompatibilities between the program and the specification are either bugs in the program, bugs in the specification, or both. The cycle of refining the specification, modifying the program to meet the specification, and reverifying until the process converges can lead to software in which we have much more confidence.


Partial and Total Correctness

Typically, a program is designed to implement some functionality. As mentioned above, that functionality can often be expressed formally in the form of an input/output specification. Concretely, such a specification consists of an input condition or precondition φ and an output condition or postcondition ψ. These are properties of the input state and the output state, respectively, expressed in some formal language such as the first-order language of the domain of computation. The program is supposed to halt in a state satisfying the output condition whenever the input state satisfies the input condition.

We say that a program is partially correct with respect to a given input/output specification φ, ψ if, whenever the program is started in a state satisfying the input condition φ, then if and when it ever halts, it does so in a state satisfying the output condition ψ. The definition of partial correctness does not stipulate that the program halts; this is what we mean by partial. A program is totally correct with respect to an input/output specification φ, ψ if

it is partially correct with respect to that speci cation; and

it halts whenever it is started in a state satisfying the input condition '.

The input/output specification imposes no requirements when the input state does not satisfy the input condition φ: the program might as well loop infinitely or erase memory. This is the "garbage in, garbage out" philosophy. If we really do care what the program does on some of those input states, then we had better rewrite the input condition to include them and say formally what we want to happen in those cases. For example, in the gcd program of Example 1, the output condition ψ might be the condition stating that the output value of x is the gcd of the input values of x and y. We can express this completely formally in the language of first-order number theory. We may try to start off with the input specification φ0 = 1 (true); that is, no restrictions on the input state at all. Unfortunately, if the initial value of y is 0 and x is negative, the final value of x will be the same as the initial value, thus negative. If we expect all gcds to be positive, this would be wrong. Another problematic situation arises when the initial values of x and y are both 0; in this case the gcd is not defined. Therefore, the program as written is not partially correct with respect to the specification φ0, ψ. We can remedy the situation by providing an input specification that rules out these troublesome input values. We can limit the input states to those in which x and y are both nonnegative and not both zero by taking

DYNAMIC LOGIC

111

the input specification

  φ1 = (x ≥ 0 ∧ y > 0) ∨ (x > 0 ∧ y ≥ 0).

The gcd program of Example 1 above would be partially correct with respect to the specification φ1, ψ. It is also totally correct, since the program halts on all inputs satisfying φ1. Perhaps we want to allow any input in which not both x and y are zero. In that case, we should use the input specification φ2 = ¬(x = 0 ∧ y = 0). But then the program of Example 1 is not partially correct with respect to φ2, ψ; we must amend the program to produce the correct (positive) gcd on negative inputs.
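The discussion above can be replayed in code. The sketch below assumes that Example 1 is the familiar remainder-based Euclidean program (the actual program text appears earlier in the chapter, so this is a guess); phi1 and psi transcribe the specifications just given.

```python
def gcd_program(x, y):
    # Our guess at the gcd program of Example 1: remainder-based Euclid.
    # On input y = 0 it halts immediately with x unchanged.
    while y != 0:
        x, y = y, x % y
    return x

def phi1(x, y):
    # Input condition phi_1.
    return (x >= 0 and y > 0) or (x > 0 and y >= 0)

def psi(g, x0, y0):
    # Output condition: g is the positive gcd of the input values.
    if g <= 0 or x0 % g != 0 or y0 % g != 0:
        return False
    return all(g % d == 0
               for d in range(1, max(abs(x0), abs(y0)) + 1)
               if x0 % d == 0 and y0 % d == 0)

# Total correctness w.r.t. phi_1, psi on a finite sample of inputs:
correct_on_phi1 = all(psi(gcd_program(x, y), x, y)
                      for x in range(0, 15) for y in range(0, 15)
                      if phi1(x, y))

# The troublesome input ruled out by phi_1: y = 0 with x negative
# leaves x unchanged, hence negative, violating psi.
bad_output = gcd_program(-5, 0)
```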

1.5 Exogenous and Endogenous Logics

There are two main approaches to modal logics of programs: the exogenous approach, exemplified by Dynamic Logic and its precursor Hoare Logic [Hoare, 1969], and the endogenous approach, exemplified by Temporal Logic and its precursor, the invariant assertions method of [Floyd, 1967]. A logic is exogenous if its programs are explicit in the language. Syntactically, a Dynamic Logic program is a well-formed expression built inductively from primitive programs using a small set of program operators. Semantically, a program is interpreted as its input/output relation. The relation denoted by a compound program is determined by the relations denoted by its parts. This aspect of compositionality allows analysis by structural induction. The importance of compositionality is discussed in [van Emde Boas, 1978].

In Temporal Logic, the program is fixed and is considered part of the structure over which the logic is interpreted. The current location in the program during execution is stored in a special variable for that purpose, called the program counter, and is part of the state along with the values of the program variables. Instead of program operators, there are temporal operators that describe how the program variables, including the program counter, change with time. Thus Temporal Logic sacrifices compositionality for a less restricted formalism. We discuss Temporal Logic further in Section 14.2.

2 PROPOSITIONAL DYNAMIC LOGIC (PDL)

Propositional Dynamic Logic (PDL) plays the same role in Dynamic Logic that classical propositional logic plays in classical predicate logic. It describes the properties of the interaction between programs and propositions that are independent of the domain of computation. Since PDL is a subsystem of first-order DL, we can be sure that all properties of PDL that we discuss in this section will also be valid in first-order DL.


Since there is no domain of computation in PDL, there can be no notion of assignment to a variable. Instead, primitive programs are interpreted as arbitrary binary relations on an abstract set of states K. Likewise, primitive assertions are just atomic propositions and are interpreted as arbitrary subsets of K. Other than this, no special structure is imposed. This level of abstraction may at first appear too general to say anything of interest. On the contrary, it is a very natural level of abstraction at which many fundamental relationships between programs and propositions can be observed. For example, consider the PDL formula

(4) [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ.

The left-hand side asserts that the formula φ ∧ ψ must hold after the execution of program α, and the right-hand side asserts that φ must hold after execution of α and so must ψ. The formula (4) asserts that these two statements are equivalent. This implies that to verify a conjunction of two postconditions, it suffices to verify each of them separately. The assertion (4) holds universally, regardless of the domain of computation and the nature of the particular α, φ, and ψ. As another example, consider

(5) [α; β]φ ↔ [α][β]φ.

The left-hand side asserts that after execution of the composite program α; β, φ must hold. The right-hand side asserts that after execution of the program α, [β]φ must hold, which in turn says that after execution of β, φ must hold. The formula (5) asserts the logical equivalence of these two statements. It holds regardless of the nature of α, β, and φ. Like (4), (5) can be used to simplify the verification of complicated programs. As a final example, consider the assertion

(6) [α]p ↔ [β]p

where p is a primitive proposition symbol and α and β are programs. If this formula is true under all interpretations, then α and β are equivalent in the sense that they behave identically with respect to any property expressible in PDL or any formal system containing PDL as a subsystem. This is because the assertion will hold for any substitution instance of (6). For example, the two programs

  α = if φ then γ else δ
  β = if ¬φ then δ else γ

are equivalent in the sense of (6).


2.1 Syntax

Syntactically, PDL is a blend of three classical ingredients: propositional logic, modal logic, and the algebra of regular expressions. There are several versions of PDL, depending on the choice of program operators allowed. In this section we will introduce the basic version, called regular PDL. Variations of this basic version will be considered in later sections.

The language of regular PDL has expressions of two sorts: propositions or formulas φ, ψ, … and programs α, β, γ, …. There are countably many atomic symbols of each sort. Atomic programs are denoted a, b, c, … and the set of all atomic programs is denoted Π0. Atomic propositions are denoted p, q, r, … and the set of all atomic propositions is denoted Φ0. The set of all programs is denoted Π and the set of all propositions is denoted Φ. Programs and propositions are built inductively from the atomic ones using the following operators:

Propositional operators:
  →    implication
  0    falsity

Program operators:
  ;    composition
  ∪    choice
  *    iteration

Mixed operators:
  [ ]  necessity
  ?    test

The definition of programs and propositions is by mutual induction. All atomic programs are programs and all atomic propositions are propositions. If φ, ψ are propositions and α, β are programs, then

  φ → ψ    propositional implication
  0        propositional falsity
  [α]φ     program necessity

are propositions and

  α; β     sequential composition
  α ∪ β    nondeterministic choice
  α*       iteration
  φ?       test

are programs. In more formal terms, we define the set Π of all programs and the set Φ of all propositions to be the smallest sets such that


  Π0 ⊆ Π and Φ0 ⊆ Φ;
  if φ, ψ ∈ Φ, then φ → ψ ∈ Φ and 0 ∈ Φ;
  if α, β ∈ Π, then α; β ∈ Π, α ∪ β ∈ Π, and α* ∈ Π;
  if α ∈ Π and φ ∈ Φ, then [α]φ ∈ Φ;
  if φ ∈ Φ, then φ? ∈ Π.

Note that the inductive definitions of programs and propositions are intertwined and cannot be separated. The definition of propositions depends on the definition of programs because of the construct [α]φ, and the definition of programs depends on the definition of propositions because of the construct φ?. Note also that we have allowed all formulas as tests. This is the rich test version of PDL. Compound programs and propositions have the following intuitive meanings:

  [α]φ    "It is necessary that after executing α, φ is true."
  α; β    "Execute α, then execute β."
  α ∪ β   "Choose either α or β nondeterministically and execute it."
  α*      "Execute α a nondeterministically chosen finite number of times (zero or more)."
  φ?      "Test φ; proceed if true, fail if false."

We avoid parentheses by assigning precedence to the operators: unary operators, including [α], bind tighter than binary ones, and ; binds tighter than ∪. Thus the expression

  [α; β* ∪ γ]φ ∨ ψ

should be read

  ([(α; (β*)) ∪ γ]φ) ∨ ψ.

Of course, parentheses can always be used to enforce a particular parse of an expression or to enhance readability. Also, under the semantics to be given in the next section, the operators ; and ∪ will turn out to be associative, so we may write α; β; γ and α ∪ β ∪ γ without ambiguity. We often omit the symbol ; and write the composition α; β as αβ. The propositional operators ∧, ∨, ¬, ↔, and 1 can be defined from → and 0 in the usual way.
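The mutually inductive definition above maps directly onto a pair of recursive datatypes. The sketch below (class names are ours) represents propositions and programs as Python dataclasses and builds the defined operators ¬ and ⟨ ⟩ from → and 0 as in the text; parenthesization is then automatic, since every expression is fully structured.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Atom:                # atomic proposition p, q, r, ...
    name: str

@dataclass(frozen=True)
class Falsum:              # the constant 0
    pass

@dataclass(frozen=True)
class Implies:             # phi -> psi
    lhs: Any
    rhs: Any

@dataclass(frozen=True)
class Box:                 # [alpha]phi
    prog: Any
    prop: Any

@dataclass(frozen=True)
class AProg:               # atomic program a, b, c, ...
    name: str

@dataclass(frozen=True)
class Seq:                 # alpha ; beta
    left: Any
    right: Any

@dataclass(frozen=True)
class Choice:              # alpha ∪ beta
    left: Any
    right: Any

@dataclass(frozen=True)
class Star:                # alpha*
    prog: Any

@dataclass(frozen=True)
class Test:                # phi?
    prop: Any

def Not(p):                # defined operator: not phi = phi -> 0
    return Implies(p, Falsum())

def Diamond(a, p):         # defined operator: <alpha>phi = not [alpha] not phi
    return Not(Box(a, Not(p)))

# [a; (b ∪ c)*](p -> q), fully parenthesized by construction:
example = Box(Seq(AProg("a"), Star(Choice(AProg("b"), AProg("c")))),
              Implies(Atom("p"), Atom("q")))
```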


The possibility operator ⟨ ⟩ is the modal dual of the necessity operator [ ]. It is defined by

  ⟨α⟩φ = ¬[α]¬φ.

The propositions [α]φ and ⟨α⟩φ are read "box φ" and "diamond φ," respectively. The latter has the intuitive meaning, "There is a computation of α that terminates in a state satisfying φ." One important difference between ⟨ ⟩ and [ ] is that ⟨α⟩φ implies that α terminates, whereas [α]φ does not. Indeed, the formula [α]0 asserts that no computation of α terminates, and the formula [α]1 is always true, regardless of α. In addition, we define

  skip = 1?
  fail = 0?
  if φ1 → α1 | ··· | φn → αn fi = φ1?; α1 ∪ ··· ∪ φn?; αn
  do φ1 → α1 | ··· | φn → αn od = (φ1?; α1 ∪ ··· ∪ φn?; αn)*; (¬φ1 ∧ ··· ∧ ¬φn)?
  if φ then α else β = if φ → α | ¬φ → β fi = φ?; α ∪ ¬φ?; β
  while φ do α = do φ → α od = (φ?; α)*; ¬φ?
  repeat α until φ = α; while ¬φ do α = α; (¬φ?; α)*; φ?
  {φ} α {ψ} = φ → [α]ψ.

The programs skip and fail are the program that does nothing (no-op) and the failing program, respectively. The ternary if-then-else operator and the binary while-do operator are the usual conditional and while loop constructs found in conventional programming languages. The constructs if-fi and do-od are the alternative guarded command and iterative guarded command constructs, respectively. The construct {φ} α {ψ} is the Hoare partial correctness assertion. We will argue later that the formal definitions of these operators given above correctly model their intuitive behavior.

2.2 Semantics

The semantics of PDL comes from the semantics for modal logic. The structures over which programs and propositions of PDL are interpreted


are called Kripke frames in honor of Saul Kripke, the inventor of the formal semantics of modal logic. A Kripke frame is a pair K = (K, mK), where K is a set of elements u, v, w, … called states and mK is a meaning function assigning a subset of K to each atomic proposition and a binary relation on K to each atomic program. That is,

  mK(p) ⊆ K,       p ∈ Φ0
  mK(a) ⊆ K × K,   a ∈ Π0.

We will extend the definition of the function mK by induction below to give a meaning to all elements of Φ and Π such that

  mK(φ) ⊆ K,       φ ∈ Φ
  mK(α) ⊆ K × K,   α ∈ Π.

Intuitively, we can think of the set mK(φ) as the set of states satisfying the proposition φ in the model K, and we can think of the binary relation mK(α) as the set of input/output pairs of states of the program α. Formally, the meanings mK(φ) of φ ∈ Φ and mK(α) of α ∈ Π are defined by mutual induction on the structure of φ and α. The basis of the induction, which specifies the meanings of the atomic symbols p ∈ Φ0 and a ∈ Π0, is already given in the specification of K. The meanings of compound propositions and programs are defined as follows.

  mK(φ → ψ) = (K − mK(φ)) ∪ mK(ψ)
  mK(0) = ∅
  mK([α]φ) = K − (mK(α) ∘ (K − mK(φ)))
           = {u | for all v ∈ K, if (u, v) ∈ mK(α) then v ∈ mK(φ)}
  (7) mK(α; β) = mK(α) ∘ mK(β)
           = {(u, v) | there exists w ∈ K with (u, w) ∈ mK(α) and (w, v) ∈ mK(β)}
  mK(α ∪ β) = mK(α) ∪ mK(β)
  (8) mK(α*) = mK(α)* = ⋃_{n≥0} mK(α)^n
  mK(φ?) = {(u, u) | u ∈ mK(φ)}.

The operator ∘ in (7) is relational composition. In (8), the first occurrence of * is the iteration symbol of PDL, and the second is the reflexive transitive closure operator on binary relations. Thus (8) says that the program α* is interpreted as the reflexive transitive closure of mK(α). We write K, u ⊨ φ and u ∈ mK(φ) interchangeably, and say that u satisfies φ in K, or that φ is true at state u in K. We may omit the K and write u ⊨ φ


when K is understood. The notation u ⊭ φ means that u does not satisfy φ, or in other words that u ∉ mK(φ). In this notation, we can restate the definition above equivalently as follows:

  u ⊨ φ → ψ iff u ⊨ φ implies u ⊨ ψ
  u ⊭ 0
  u ⊨ [α]φ iff for all v, if (u, v) ∈ mK(α) then v ⊨ φ
  (u, v) ∈ mK(α; β) iff there exists w with (u, w) ∈ mK(α) and (w, v) ∈ mK(β)
  (u, v) ∈ mK(α ∪ β) iff (u, v) ∈ mK(α) or (u, v) ∈ mK(β)
  (u, v) ∈ mK(α*) iff there exist n ≥ 0 and states u0, …, un with u = u0, v = un, and (ui, ui+1) ∈ mK(α) for 0 ≤ i ≤ n − 1
  (u, v) ∈ mK(φ?) iff u = v and u ⊨ φ.

The defined operators inherit their meanings from these definitions:

  mK(φ ∨ ψ) = mK(φ) ∪ mK(ψ)
  mK(φ ∧ ψ) = mK(φ) ∩ mK(ψ)
  mK(¬φ) = K − mK(φ)
  mK(⟨α⟩φ) = mK(α) ∘ mK(φ) = {u | there exists v ∈ K with (u, v) ∈ mK(α) and v ∈ mK(φ)}
  mK(1) = K
  mK(skip) = mK(1?) = the identity relation
  mK(fail) = mK(0?) = ∅.

In addition, the if-then-else, while-do, and guarded commands inherit their semantics from the above definitions, and the input/output relations given by the formal semantics capture their intuitive operational meanings. For example, the relation associated with the program while φ do α is the set of pairs (u, v) for which there exist states u0, u1, …, un, n ≥ 0, such that u = u0, v = un, ui ∈ mK(φ) and (ui, ui+1) ∈ mK(α) for 0 ≤ i < n, and un ∉ mK(φ).

This version of PDL is usually called regular PDL and the elements of Π are called regular programs because of the primitive operators ∪, ;, and *, which are familiar from regular expressions. Programs can be viewed as regular expressions over the atomic programs and tests. In fact, it can be shown that if p is an atomic proposition symbol, then any two test-free programs α, β are equivalent as regular expressions (that is, they represent the same regular set) if and only if the formula ⟨α⟩p ↔ ⟨β⟩p is valid.
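Over a finite Kripke frame, the semantic clauses above are directly executable. Below is a small sketch (ours): expressions are nested tuples, mform computes mK(φ) ⊆ K, mprog computes mK(α) ⊆ K × K, and while_do builds (φ?; α)*; ¬φ? exactly as defined earlier.

```python
def compose(R, S):
    # Relational composition R ∘ S.
    return {(u, v) for (u, w1) in R for (w2, v) in S if w1 == w2}

def star(R, K):
    # Reflexive transitive closure of R over the state set K.
    S = {(u, u) for u in K} | set(R)
    while True:
        T = S | compose(S, S)
        if T == S:
            return S
        S = T

def mprog(alpha, K, m):
    # m_K on programs; atomic programs are looked up in m.
    op = alpha[0]
    if op == 'atom': return m[alpha[1]]
    if op == ';':    return compose(mprog(alpha[1], K, m), mprog(alpha[2], K, m))
    if op == 'u':    return mprog(alpha[1], K, m) | mprog(alpha[2], K, m)
    if op == '*':    return star(mprog(alpha[1], K, m), K)
    if op == '?':    return {(u, u) for u in mform(alpha[1], K, m)}
    raise ValueError(op)

def mform(phi, K, m):
    # m_K on formulas; atomic propositions are looked up in m.
    op = phi[0]
    if op == 'atom': return m[phi[1]]
    if op == '0':    return set()
    if op == '->':   return (K - mform(phi[1], K, m)) | mform(phi[2], K, m)
    if op == 'box':
        R, A = mprog(phi[1], K, m), mform(phi[2], K, m)
        return {u for u in K if all(v in A for (x, v) in R if x == u)}
    raise ValueError(op)

def while_do(phi, alpha):
    # while phi do alpha  =  (phi?; alpha)*; (not phi)?
    return (';', ('*', (';', ('?', phi), alpha)), ('?', ('->', phi, ('0',))))

# A four-state frame where 'dec' decrements and 'pos' means "nonzero":
K = {0, 1, 2, 3}
m = {'pos': {1, 2, 3}, 'dec': {(1, 0), (2, 1), (3, 2)}}
loop = mprog(while_do(('atom', 'pos'), ('atom', 'dec')), K, m)
```

As the operational reading predicts, the while loop relates every state to 0, and after it terminates ¬pos holds everywhere.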


EXAMPLE 2. Let p be an atomic proposition, let a be an atomic program, and let K = (K, mK) be a Kripke frame with

  K = {u, v, w}
  mK(p) = {u, v}
  mK(a) = {(u, v), (u, w), (v, w), (w, v)}.

(The accompanying diagram, omitted here, shows a-arrows from u to v, from u to w, and between v and w in both directions, with p true at u and v.)

In this structure, u ⊨ ⟨a⟩p ∧ ⟨a⟩¬p, but v ⊨ [a]¬p and w ⊨ [a]p. Moreover, every state of K satisfies ⟨aa*⟩p ∧ ⟨aa*⟩¬p: from every state, both a p-state and a ¬p-state are reachable in one or more steps of a.
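Claims like those in Example 2 are mechanically checkable. A small sketch (ours): succ computes the a-successors of a state, and reach_plus the states reachable in one or more a-steps, i.e., via a; a*.

```python
K = {'u', 'v', 'w'}
p = {'u', 'v'}
a = {('u', 'v'), ('u', 'w'), ('v', 'w'), ('w', 'v')}

def succ(s):
    # One a-step from state s.
    return {y for (x, y) in a if x == s}

def reach_plus(s):
    # States reachable from s in one or more a-steps (i.e., via a;a*).
    seen, frontier = set(), succ(s)
    while frontier:
        seen |= frontier
        frontier = {y for x in frontier for y in succ(x)} - seen
    return seen

# Box facts: [a]phi holds at s iff every a-successor of s satisfies phi.
box_a_not_p = {s for s in K if succ(s) <= K - p}
box_a_p = {s for s in K if succ(s) <= p}
```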

2.3 Computation Sequences

Let α be a program. Recall from Section 1.3 that a finite computation sequence of α is a finite-length string of atomic programs and tests representing a possible sequence of atomic steps that can occur in a halting execution of α. These strings are called seqs and are denoted σ, τ, …. The set of all such sequences is denoted CS(α). We use the word "possible" here loosely; CS(α) is determined by the syntax of α alone, and may contain strings that are never executed in any interpretation. The formal definition of CS(α) was given in Section 1.3. Note that each finite computation sequence σ of a program α is itself a program, and CS(σ) = {σ}. Moreover, the following proposition is not difficult to prove by induction on the structure of α:

PROPOSITION 3.

  mK(α) = ⋃_{σ ∈ CS(α)} mK(σ).

2.4 Satisfiability and Validity

The definitions of satisfiability and validity of propositions come from modal logic. Let K = (K, mK) be a Kripke frame and let φ be a proposition. We have defined in Section 2.2 what it means for K, u ⊨ φ. If K, u ⊨ φ for some u ∈ K, we say that φ is satisfiable in K. If φ is satisfiable in some K, we say that φ is satisfiable. If K, u ⊨ φ for all u ∈ K, we write K ⊨ φ and say that φ is valid in K. If K ⊨ φ for all Kripke frames K, we write ⊨ φ and say that φ is valid. If Σ is a set of propositions, we write K ⊨ Σ if K ⊨ φ for all φ ∈ Σ. A proposition ψ is said to be a logical consequence of Σ if K ⊨ ψ whenever K ⊨ Σ, in which case we write Σ ⊨ ψ. (Note that this is not the same as saying that K, u ⊨ ψ whenever K, u ⊨ Σ.) We say that an inference rule

  φ1, …, φn
  ─────────
  φ

is sound if φ is a logical consequence of {φ1, …, φn}. Satisfiability and validity are dual in the same sense that ∃ and ∀ are dual and ⟨ ⟩ and [ ] are dual: a proposition is valid (in K) if and only if its negation is not satisfiable (in K).

EXAMPLE 4. Let p, q be atomic propositions, let a, b be atomic programs, and let K = (K, mK) be a Kripke frame with

  K = {s, t, u, v}
  mK(p) = {u, v}
  mK(q) = {t, v}
  mK(a) = {(t, v), (v, t), (s, u), (u, s)}
  mK(b) = {(u, v), (v, u), (s, t), (t, s)}.

The following figure illustrates K.

(Figure omitted: a connects s ↔ u and t ↔ v; b connects s ↔ t and u ↔ v; p holds at u and v, and q holds at t and v.)

The following formulas are valid in K:

  p ↔ [(aba)*]p
  q ↔ [(bab)*]q.

Also, let γ be the program

  γ = (aa ∪ bb ∪ (ab ∪ ba)(aa ∪ bb)*(ab ∪ ba))*.

Thinking of γ as a regular expression, γ generates all words over the alphabet {a, b} with an even number of occurrences of each of a and b. It can be shown that for any proposition φ, the proposition φ ↔ [γ]φ is valid in K.
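The claim about γ can be checked by brute force. In the frame of Example 4, a and b act as commuting involutions (a Klein four-group action on the states), so a word over {a, b} returns every state to itself exactly when it contains an even number of a's and an even number of b's; the sketch below (ours) verifies this for all words up to length 5.

```python
from itertools import product

# Deterministic actions of a and b, read off from m_K(a) and m_K(b):
step = {
    'a': {'s': 'u', 'u': 's', 't': 'v', 'v': 't'},
    'b': {'s': 't', 't': 's', 'u': 'v', 'v': 'u'},
}

def run(state, word):
    # Follow the word letter by letter from the given state.
    for ch in word:
        state = step[ch][state]
    return state

def even_counts(word):
    return word.count('a') % 2 == 0 and word.count('b') % 2 == 0

all_words_match = all(
    even_counts(word) == all(run(s, word) == s for s in 'stuv')
    for n in range(6)
    for word in map(''.join, product('ab', repeat=n)))
```

Since γ denotes exactly the even/even words, mK(γ) is the identity relation on K, which is why φ ↔ [γ]φ is valid in this frame.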


EXAMPLE 5. The formula

  p ∧ [a*]((p → [a]¬p) ∧ (¬p → [a]p)) ↔ [(aa)*]p ∧ [a(aa)*]¬p

is valid. Both sides assert in different ways that p is alternately true and false along paths of execution of the atomic program a.

2.5 Basic Properties

THEOREM 6. The following are valid formulas of PDL:

(i) ⟨α⟩(φ ∨ ψ) ↔ ⟨α⟩φ ∨ ⟨α⟩ψ
(ii) [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ
(iii) ⟨α⟩φ ∧ [α]ψ → ⟨α⟩(φ ∧ ψ)
(iv) [α](φ → ψ) → ([α]φ → [α]ψ)
(v) ⟨α⟩(φ ∧ ψ) → ⟨α⟩φ ∧ ⟨α⟩ψ
(vi) [α]φ ∨ [α]ψ → [α](φ ∨ ψ)
(vii) ⟨α⟩0 ↔ 0
(viii) [α]φ ↔ ¬⟨α⟩¬φ
(ix) ⟨α ∪ β⟩φ ↔ ⟨α⟩φ ∨ ⟨β⟩φ
(x) [α ∪ β]φ ↔ [α]φ ∧ [β]φ
(xi) ⟨α; β⟩φ ↔ ⟨α⟩⟨β⟩φ
(xii) [α; β]φ ↔ [α][β]φ
(xiii) ⟨φ?⟩ψ ↔ φ ∧ ψ
(xiv) [φ?]ψ ↔ (φ → ψ).

THEOREM 7. The following are sound rules of inference of PDL:

(i) Modal generalization:

  φ
  ────
  [α]φ

(ii) Monotonicity of ⟨ ⟩:

  φ → ψ
  ───────────
  ⟨α⟩φ → ⟨α⟩ψ

(iii) Monotonicity of [ ]:

  φ → ψ
  ───────────
  [α]φ → [α]ψ

The converse operator

is a program operator ⁻ with semantics

  mK(α⁻) = mK(α)⁻ = {(v, u) | (u, v) ∈ mK(α)}.

Intuitively, the converse operator allows us to "run a program backwards"; semantically, the input/output relation of the program α⁻ is the output/input relation of α. Although this is not always possible to realize in practice, it is nevertheless a useful expressive tool. For example, it gives us a convenient way to talk about backtracking, or rolling back a computation to a previous state.

THEOREM 8. For any programs α and β,

(i) mK((α ∪ β)⁻) = mK(α⁻ ∪ β⁻)
(ii) mK((α; β)⁻) = mK(β⁻; α⁻)
(iii) mK(φ?⁻) = mK(φ?)
(iv) mK((α*)⁻) = mK((α⁻)*)
(v) mK((α⁻)⁻) = mK(α).

THEOREM 9. The following are valid formulas of PDL:

(i) φ → [α]⟨α⁻⟩φ
(ii) φ → [α⁻]⟨α⟩φ
(iii) ⟨α⟩[α⁻]φ → φ
(iv) ⟨α⁻⟩[α]φ → φ.
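Validities like Theorem 9(i) can be sanity-checked by brute force over small finite frames. The sketch below (ours) evaluates φ → [α]⟨α⁻⟩φ with mK(α) an arbitrary relation R and mK(φ) an arbitrary set A, over random frames.

```python
import random

def box(K, R, S):
    # States all of whose R-successors lie in S.
    return {u for u in K if all(v in S for (x, v) in R if x == u)}

def diamond(K, R, S):
    # States with some R-successor in S.
    return {u for u in K if any(v in S for (x, v) in R if x == u)}

def thm9i(K, R, A):
    # States satisfying  phi -> [alpha]<alpha^->phi,
    # with m(alpha) = R and m(phi) = A.
    conv = {(v, u) for (u, v) in R}
    return (K - A) | box(K, R, diamond(K, conv, A))

random.seed(0)
K = set(range(5))
always_valid = True
for _ in range(200):
    R = {(u, v) for u in K for v in K if random.random() < 0.3}
    A = {u for u in K if random.random() < 0.5}
    if thm9i(K, R, A) != K:
        always_valid = False
```

This is only a test, not a proof; but failing it would immediately refute the claimed validity.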

The iteration operator * is interpreted as the reflexive transitive closure operator on binary relations. It is the means by which iteration is coded in PDL. This operator differs from the other operators in that it is infinitary in nature, as reflected by its semantics:

  mK(α*) = mK(α)* = ⋃_{n≥0} mK(α)^n

(see Section 2.2). This introduces a level of complexity to PDL beyond the other operators. Because of it, PDL is not compact: the set

  (9) {⟨α*⟩φ} ∪ {¬φ, ¬⟨α⟩φ, ¬⟨α²⟩φ, …}


is finitely satisfiable but not satisfiable. Because of this infinitary behavior, it is rather surprising that PDL should be decidable and that there should be a finitary complete axiomatization. The properties of the * operator of PDL come directly from the properties of the reflexive transitive closure operator on binary relations. In a nutshell, for any binary relation R, R* is the ⊆-least reflexive and transitive relation containing R.

THEOREM 10. The following are valid formulas of PDL:

(i) [α*]φ → φ
(ii) φ → ⟨α*⟩φ
(iii) [α*]φ → [α]φ
(iv) ⟨α⟩φ → ⟨α*⟩φ
(v) [α*]φ ↔ [α*; α*]φ
(vi) ⟨α*⟩φ ↔ ⟨α*; α*⟩φ
(vii) [α*]φ ↔ [α**]φ
(viii) ⟨α*⟩φ ↔ ⟨α**⟩φ
(ix) [α*]φ ↔ φ ∧ [α][α*]φ
(x) ⟨α*⟩φ ↔ φ ∨ ⟨α⟩⟨α*⟩φ
(xi) [α*]φ ↔ φ ∧ [α*](φ → [α]φ)
(xii) ⟨α*⟩φ ↔ φ ∨ ⟨α*⟩(¬φ ∧ ⟨α⟩φ).

Semantically, α* is a reflexive and transitive relation containing α, and Theorem 10 captures this. That α* is reflexive is captured in (ii); that it is transitive is captured in (vi); and that it contains α is captured in (iv). These three properties are captured by the single property (x).

Reflexive Transitive Closure and Induction

To prove properties of iteration, it is not enough to know that α* is a reflexive and transitive relation containing α. So is the universal relation K × K, and that is not very interesting. We also need some way of capturing the idea that α* is the least reflexive and transitive relation containing α. There are several equivalent ways this can be done:

DYNAMIC LOGIC

123

(RTC) The reflexive transitive closure rule:

  (φ ∨ ⟨α⟩ψ) → ψ
  ──────────────
  ⟨α*⟩φ → ψ

(LI) The loop invariance rule:

  ψ → [α]ψ
  ──────────
  ψ → [α*]ψ

(IND) The induction axiom (box form):

  φ ∧ [α*](φ → [α]φ) → [α*]φ

(IND) The induction axiom (diamond form):

  ⟨α*⟩φ → φ ∨ ⟨α*⟩(¬φ ∧ ⟨α⟩φ)

The rule (RTC) is called the reflexive transitive closure rule. Its importance is best described in terms of its relationship to the valid PDL formula of Theorem 10(x). Observe that the right-to-left implication of this formula is obtained by substituting ⟨α*⟩φ for R in the expression

  (10) (φ ∨ ⟨α⟩R) → R.

Theorem 10(x) implies that ⟨α*⟩φ is a solution of (10); that is, (10) is valid when ⟨α*⟩φ is substituted for R. The rule (RTC) says that ⟨α*⟩φ is the least such solution with respect to logical implication. That is, it is the least PDL-definable set of states that when substituted for R in (10) results in a valid formula.

The dual propositions labeled (IND) are jointly called the PDL induction axiom. Intuitively, the box form of (IND) says, "If φ is true initially, and if, after any number of iterations of the program α, the truth of φ is preserved by one more iteration of α, then φ will be true after any number of iterations of α." The diamond form of (IND) says, "If it is possible to reach a state satisfying φ in some number of iterations of α, then either φ is true now, or it is possible to reach a state in which φ is false but becomes true after one more iteration of α."
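The least-solution reading has a direct computational analogue: mK(α*) is the least fixpoint of S ↦ Id ∪ (mK(α) ∘ S), reached by Kleene iteration from the empty relation. A sketch (ours):

```python
def rtc(R, K):
    # Least fixpoint of  S |-> Id ∪ (R ∘ S),  computed by Kleene
    # iteration from the empty relation; the limit is R*.
    S = set()
    while True:
        T = ({(u, u) for u in K}
             | {(u, v) for (u, w1) in R for (w2, v) in S if w1 == w2})
        if T == S:
            return S
        S = T

K = {0, 1, 2}
R = {(0, 1), (1, 2)}
Rstar = rtc(R, K)
```

The universal relation K × K is also a reflexive and transitive relation containing R, but the iteration above never adds a pair that is not forced, which is exactly the minimality that (RTC) and (IND) express.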


Note that the box form of (IND) bears a strong resemblance to the induction axiom of Peano arithmetic:

  φ(0) ∧ ∀n (φ(n) → φ(n + 1)) → ∀n φ(n).

Here φ(0) is the basis of the induction and ∀n (φ(n) → φ(n + 1)) is the induction step, from which the conclusion ∀n φ(n) can be drawn. In the PDL axiom (IND), the basis is φ and the induction step is [α*](φ → [α]φ), from which the conclusion [α*]φ can be drawn.

2.6 Encoding Hoare Logic

The Hoare partial correctness assertion {φ} α {ψ} is encoded as φ → [α]ψ in PDL. The following theorem says that under this encoding, Dynamic Logic subsumes Hoare Logic.

THEOREM 11. The following rules of Hoare Logic are derivable in PDL:

(i) Composition rule:

  {φ} α {σ}, {σ} β {ψ}
  ─────────────────────
  {φ} α; β {ψ}

(ii) Conditional rule:

  {φ ∧ σ} α {ψ}, {¬φ ∧ σ} β {ψ}
  ──────────────────────────────
  {σ} if φ then α else β {ψ}

(iii) While rule:

  {φ ∧ ψ} α {ψ}
  ──────────────────────────
  {ψ} while φ do α {¬φ ∧ ψ}

(iv) Weakening rule:

  φ′ → φ, {φ} α {ψ}, ψ → ψ′
  ──────────────────────────
  {φ′} α {ψ′}
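Under the encoding, each Hoare rule becomes a valid implication of PDL; for instance, the composition rule follows from [α; β]ψ ↔ [α][β]ψ. The sketch below (ours, over one arbitrary small frame) checks the composition rule semantically.

```python
def box(K, R, S):
    # States all of whose R-successors lie in S.
    return {u for u in K if all(v in S for (x, v) in R if x == u)}

def compose(R1, R2):
    # Relational composition R1 ∘ R2.
    return {(u, v) for (u, w1) in R1 for (w2, v) in R2 if w1 == w2}

def hoare(K, pre, R, post):
    # {pre} R {post}  encoded as  pre -> [R]post, evaluated semantically.
    return pre <= box(K, R, post)

K = set(range(4))
alpha = {(0, 1), (1, 2)}
beta = {(1, 2), (2, 3)}
phi, sigma, psi = {0}, {1}, {2}

# Premises {phi} alpha {sigma} and {sigma} beta {psi} ...
premises = hoare(K, phi, alpha, sigma) and hoare(K, sigma, beta, psi)
# ... yield the conclusion {phi} alpha;beta {psi}.
conclusion = hoare(K, phi, compose(alpha, beta), psi)
```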


3 FILTRATION AND DECIDABILITY

The small model property for PDL says that if φ is satisfiable, then it is satisfied at a state in a Kripke frame with no more than 2^|φ| states, where |φ| is the number of symbols of φ. This result and the technique used to prove it, called filtration, come directly from modal logic. This immediately gives a naive decision procedure for the satisfiability problem for PDL: to determine whether φ is satisfiable, construct all Kripke frames with at most 2^|φ| states and check whether φ is satisfied at some state in one of them. Considering only interpretations of the primitive formulas and primitive programs appearing in φ, there are roughly 2^(2^|φ|) such models, so this algorithm is too inefficient to be practical. A more efficient algorithm will be described in Section 5.

3.1 The Fischer-Ladner Closure

Many proofs in simpler modal systems use induction on the well-founded subformula relation. In PDL, the situation is complicated by the simultaneous inductive definitions of programs and propositions and by the behavior of the * operator, which make the induction proofs somewhat tricky. Nevertheless, we can still use the well-founded subexpression relation in inductive proofs. Here an expression can be either a program or a proposition. Either one can be a subexpression of the other because of the mixed operators [ ] and ?. We start by defining two functions

  FL : Φ → 2^Φ
  FL2 : {[α]φ | α ∈ Π, φ ∈ Φ} → 2^Φ

by simultaneous induction. The set FL(φ) is called the Fischer-Ladner closure of φ. The filtration construction for PDL uses the Fischer-Ladner closure of a given formula where the corresponding proof for propositional modal logic would use the set of subformulas. The functions FL and FL2 are defined inductively as follows:

(a) FL(p) = {p}, p an atomic proposition
(b) FL(φ → ψ) = {φ → ψ} ∪ FL(φ) ∪ FL(ψ)
(c) FL(0) = {0}
(d) FL([α]φ) = FL2([α]φ) ∪ FL(φ)
(e) FL2([a]φ) = {[a]φ}, a an atomic program
(f) FL2([α ∪ β]φ) = {[α ∪ β]φ} ∪ FL2([α]φ) ∪ FL2([β]φ)
(g) FL2([α; β]φ) = {[α; β]φ} ∪ FL2([α][β]φ) ∪ FL2([β]φ)
(h) FL2([α*]φ) = {[α*]φ} ∪ FL2([α][α*]φ)
(i) FL2([ψ?]φ) = {[ψ?]φ} ∪ FL(ψ).

This definition is apparently quite a bit more involved than for mere subexpressions. In fact, at first glance it may appear circular because of the rule (h). The auxiliary function FL2 is introduced for the express purpose of avoiding any such circularity. It is defined only for formulas of the form [α]φ and intuitively produces those elements of FL([α]φ) obtained by breaking down α and ignoring φ.

LEMMA 12.

(i) If [α]ψ ∈ FL(φ), then ψ ∈ FL(φ).
(ii) If [ψ?]χ ∈ FL(φ), then ψ ∈ FL(φ).
(iii) If [α ∪ β]ψ ∈ FL(φ), then [α]ψ ∈ FL(φ) and [β]ψ ∈ FL(φ).
(iv) If [α; β]ψ ∈ FL(φ), then [α][β]ψ ∈ FL(φ) and [β]ψ ∈ FL(φ).
(v) If [α*]ψ ∈ FL(φ), then [α][α*]ψ ∈ FL(φ).

Even after convincing ourselves that the definition is noncircular, it may not be clear how the size of FL(φ) depends on the length of φ. Indeed, the right-hand side of rule (h) involves a formula that is larger than the formula on the left-hand side. However, it can be shown by induction on subformulas that the relationship is linear:

LEMMA 13.

(i) For any formula φ, #FL(φ) ≤ |φ|.
(ii) For any formula [α]φ, #FL2([α]φ) ≤ |α|.
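FL and FL2 can be implemented exactly as written. The sketch below (ours) encodes formulas and programs as nested tuples, e.g. ('box', ('*', alpha), f) for [α*]φ; the seemingly circular rule (h) terminates because the recursion always descends into a strictly smaller program.

```python
def FL(phi):
    # Formulas: ('p', name), ('0',), ('->', f, g), ('box', prog, f).
    op = phi[0]
    if op == '->':
        return {phi} | FL(phi[1]) | FL(phi[2])
    if op == 'box':
        return FL2(phi) | FL(phi[2])        # rule (d)
    return {phi}                            # atomic proposition or 0

def FL2(boxed):
    # Programs: ('a', name), (';', a, b), ('u', a, b), ('*', a), ('?', f).
    _, alpha, f = boxed
    op = alpha[0]
    if op == 'u':
        return {boxed} | FL2(('box', alpha[1], f)) | FL2(('box', alpha[2], f))
    if op == ';':
        return ({boxed}
                | FL2(('box', alpha[1], ('box', alpha[2], f)))
                | FL2(('box', alpha[2], f)))
    if op == '*':
        return {boxed} | FL2(('box', alpha[1], boxed))   # rule (h)
    if op == '?':
        return {boxed} | FL(alpha[1])
    return {boxed}                          # atomic program

# [ (a;b)* ] p  has a five-element closure, within the bound of Lemma 13:
f = ('box', ('*', (';', ('a', 'a'), ('a', 'b'))), ('p', 'p'))
closure = FL(f)
```

For this f the closure is {[(a;b)*]p, [a;b][(a;b)*]p, [a][b][(a;b)*]p, [b][(a;b)*]p, p}, matching the lemma's linear bound.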

3.2 Filtration

Given a PDL proposition φ and a Kripke frame K = (K, mK), we define a new frame K/FL(φ) = (K/FL(φ), m_{K/FL(φ)}), called the filtration of K by FL(φ), as follows. Define a binary relation ≡ on states of K by:

  u ≡ v iff for all ψ ∈ FL(φ), u ∈ mK(ψ) ⟺ v ∈ mK(ψ).


In other words, we collapse states u and v if they are not distinguishable by any formula of FL(φ). Let

  [u] = {v | v ≡ u}
  K/FL(φ) = {[u] | u ∈ K}
  m_{K/FL(φ)}(p) = {[u] | u ∈ mK(p)}, p an atomic proposition
  m_{K/FL(φ)}(a) = {([u], [v]) | (u, v) ∈ mK(a)}, a an atomic program.

The map m_{K/FL(φ)} is extended inductively to compound propositions and programs as described in Section 2.2. The following key lemma relates K and K/FL(φ). Most of the difficulty in the following lemma is in the correct formulation of the induction hypotheses in the statement of the lemma. Once this is done, the proof is a fairly straightforward induction on the well-founded subexpression relation.

LEMMA 14 (Filtration Lemma). Let K be a Kripke frame and let u, v be states of K.

(i) For all ψ ∈ FL(φ), u ∈ mK(ψ) iff [u] ∈ m_{K/FL(φ)}(ψ).
(ii) For all [α]ψ ∈ FL(φ),
  (a) if (u, v) ∈ mK(α), then ([u], [v]) ∈ m_{K/FL(φ)}(α);
  (b) if ([u], [v]) ∈ m_{K/FL(φ)}(α) and u ∈ mK([α]ψ), then v ∈ mK(ψ).

Using the filtration lemma, we can prove the small model theorem easily.

THEOREM 15 (Small Model Theorem). Let φ be a satisfiable formula of PDL. Then φ is satisfied in a Kripke frame with no more than 2^|φ| states.

Proof. If φ is satisfiable, then there is a Kripke frame K and state u ∈ K with u ∈ mK(φ). Let FL(φ) be the Fischer-Ladner closure of φ. By the filtration lemma (Lemma 14), [u] ∈ m_{K/FL(φ)}(φ). Moreover, K/FL(φ) has no more states than the number of truth assignments to formulas in FL(φ), which by Lemma 13(i) is at most 2^|φ|.

It follows immediately that the satisfiability problem for PDL is decidable, since there are only finitely many possible Kripke frames of size at most 2^|φ| to check, and there is a polynomial-time algorithm to check whether a given formula is satisfied at a given state in a given Kripke frame. A more efficient algorithm exists (see Section 5). The completeness proof for PDL also makes use of the filtration lemma (Lemma 14), but in a somewhat stronger form: we need to know that it holds for nonstandard Kripke frames as well as the standard Kripke frames defined in Section 2.2.


A nonstandard Kripke frame is any structure N = (N, mN) that is a Kripke frame in the sense of Section 2.2 in every respect, except that mN(α*) need not be the reflexive transitive closure of mN(α), but only a reflexive, transitive binary relation containing mN(α) and satisfying the PDL axioms for * (Axioms 17(vii) and (viii) of Section 4.1).

LEMMA 16 (Filtration for Nonstandard Models). Let N be a nonstandard Kripke frame and let u, v be states of N.

(i) For all ψ ∈ FL(φ), u ∈ mN(ψ) iff [u] ∈ m_{N/FL(φ)}(ψ).
(ii) For all [α]ψ ∈ FL(φ),
  (a) if (u, v) ∈ mN(α), then ([u], [v]) ∈ m_{N/FL(φ)}(α);
  (b) if ([u], [v]) ∈ m_{N/FL(φ)}(α) and u ∈ mN([α]ψ), then v ∈ mN(ψ).

4 DEDUCTIVE COMPLETENESS OF PDL

4.1 A Deductive System

The following list of axioms and rules constitutes a sound and complete Hilbert-style deductive system for PDL.

Axiom System 17.

(i) Axioms for propositional logic
(ii) [α](φ → ψ) → ([α]φ → [α]ψ)
(iii) [α](φ ∧ ψ) ↔ [α]φ ∧ [α]ψ
(iv) [α ∪ β]φ ↔ [α]φ ∧ [β]φ
(v) [α; β]φ ↔ [α][β]φ
(vi) [ψ?]φ ↔ (ψ → φ)
(vii) φ ∧ [α][α*]φ ↔ [α*]φ
(viii) φ ∧ [α*](φ → [α]φ) → [α*]φ

In PDL with converse ⁻, we also include

(ix) φ → [α]⟨α⁻⟩φ
(x) φ → [α⁻]⟨α⟩φ


Rules of Inference '; ' ! (MP) (GEN)

' []'

2

The axioms (ii) and (iii) and the two rules of inference are not particular to PDL, but come from modal logic. The rules (MP) and (GEN) are called modus ponens and (modal) generalization , respectively. Axiom (viii) is called the PDL induction axiom . Intuitively, (viii) says: \Suppose ' is true in the current state, and suppose that after any number of iterations of , if ' is still true, then it will be true after one more iteration of . Then ' will be true after any number of iterations of ." In other words, if ' is true initially, and if the truth of ' is preserved by the program , then ' will be true after any number of iterations of . We write ` ' if the proposition ' is a theorem of this system, and say that ' is consistent if 0 :'; that is, if it is not the case that ` :'. A set of propositions is consistent if all nite conjunctions of elements of are consistent. The soundness of these axioms and rules over Kripke frames can be established by elementary arguments in relational algebra using the semantics of Section 2.2. We write ` ' if the formula ' is provable in this deductive system. A formula ' is consistent if 0 :', that is, if it is not the caseV that ` :'; that a nite set of formulas is consistent if its conjunction is consistent; and that an in nite set of formulas is consistent if every nite subset is consistent. Axiom System 17 is complete: all valid formulas of PDL are theorems. This fact can be proved by constructing a nonstandard Kripke frame from maximal consistent sets of formulas, then using the ltration lemma for nonstandard models (Lemma 16) to collapse this nonstandard model to a nite standard model. THEOREM 18 (Completeness of PDL). If ' then ` '. In classical logics, a completeness theorem of the form of Theorem 18 can be adapted to handle the relation of logical consequence ' j= between formulas because of the deduction theorem, which says

φ ⊢ ψ  ⟺  ⊢ φ → ψ.

Unfortunately, the deduction theorem fails in PDL, as can be seen by taking ψ = [a*]p and φ = p. However, the following result allows Theorem


DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

18, as well as the deterministic exponential-time satisfiability algorithm described in the next section, to be extended to handle the logical consequence relation: THEOREM 19. Let φ and ψ be any PDL formulas. Then

φ ⊨ ψ  ⟺  ⊨ [(a1 ∪ … ∪ an)*]φ → ψ,

where a1, …, an are all atomic programs appearing in φ or ψ. Allowing infinitary conjunctions, if Γ is a set of formulas in which only finitely many atomic programs appear, then

Γ ⊨ ψ  ⟺  ⊨ ⋀{[(a1 ∪ … ∪ an)*]φ | φ ∈ Γ} → ψ,

where a1, …, an are all atomic programs appearing in Γ or ψ.

5 COMPLEXITY OF PDL

The small model theorem (Theorem 15) gives a naive deterministic algorithm for the satisfiability problem: construct all Kripke frames with at most 2^|φ| states and check whether φ is satisfied at some state in one of them. Although checking whether a given formula is satisfied in a given state of a given Kripke frame can be done quite efficiently, the naive satisfiability algorithm is highly inefficient. For one thing, the models constructed are of exponential size in the length of the given formula; for another, there are 2^{2^{O(|φ|)}} of them. Thus the naive satisfiability algorithm takes double exponential time in the worst case. There is a more efficient algorithm [Pratt, 1979b] that runs in deterministic single-exponential time. One cannot expect to improve this significantly, due to a corresponding lower bound.

THEOREM 20. There is an exponential-time algorithm for deciding whether a given formula of PDL is satisfiable.

THEOREM 21. The satisfiability problem for PDL is EXPTIME-complete.

COROLLARY 22. There is a constant c > 1 such that the satisfiability problem for PDL is not solvable in deterministic time c^{n/log n}, where n is the size of the input formula.

EXPTIME-hardness can be established by constructing a formula of PDL whose models encode the computation of a given linear-space-bounded one-tape alternating Turing machine M on a given input x of length n over M's input alphabet. Since the membership problem for alternating polynomial-space machines is EXPTIME-hard [Chandra et al., 1981], so is the satisfiability problem for PDL.
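To make the "efficient check" step of the naive algorithm concrete, here is a miniature PDL model checker of our own devising; the tuple encoding of formulas and programs is an assumption for illustration, not the chapter's notation.

```python
# A miniature PDL model checker (our own tuple encoding, not the chapter's
# notation): a Kripke frame is a set of states plus interpretations of
# atomic programs (relations) and atomic propositions (valuations).

def rel(prog, states, atomic, sat):
    """Compute m_K(prog) as a set of state pairs."""
    op = prog[0]
    if op == 'atom':                    # atomic program
        return atomic[prog[1]]
    if op == 'seq':                     # alpha ; beta  (relational composition)
        r = rel(prog[1], states, atomic, sat)
        s = rel(prog[2], states, atomic, sat)
        return {(u, w) for (u, v) in r for (v2, w) in s if v == v2}
    if op == 'union':                   # alpha ∪ beta
        return rel(prog[1], states, atomic, sat) | rel(prog[2], states, atomic, sat)
    if op == 'star':                    # alpha* : reflexive transitive closure
        r = {(u, u) for u in states}
        step = rel(prog[1], states, atomic, sat)
        while True:
            new = r | {(u, w) for (u, v) in r for (v2, w) in step if v == v2}
            if new == r:
                return r
            r = new
    if op == 'test':                    # phi?
        return {(u, u) for u in sat(prog[1])}

def sat_set(phi, states, atomic, val):
    """Set of states of the frame satisfying phi."""
    op = phi[0]
    if op == 'prop':
        return {u for u in states if phi[1] in val[u]}
    if op == 'not':
        return states - sat_set(phi[1], states, atomic, val)
    if op == 'and':
        return sat_set(phi[1], states, atomic, val) & sat_set(phi[2], states, atomic, val)
    if op == 'box':                     # [alpha]phi
        r = rel(phi[1], states, atomic, lambda f: sat_set(f, states, atomic, val))
        target = sat_set(phi[2], states, atomic, val)
        return {u for u in states if all(v in target for (u2, v) in r if u2 == u)}
```

The naive satisfiability procedure would enumerate all frames with at most 2^|φ| states and call a checker like this on each; the point of Theorem 20 is that one can do exponentially better.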


It is interesting to compare the complexity of satisfiability in PDL with the complexity of satisfiability in propositional logic. In the latter, satisfiability is NP-complete; but at present it is not known whether the two complexity classes EXPTIME and NP differ. Thus, as far as current knowledge goes, the satisfiability problem is no easier in the worst case for propositional logic than for its far richer superset PDL. As we have seen, current knowledge does not permit a significant difference to be observed between the complexity of satisfiability in propositional logic and in PDL. However, there is one easily verified and important behavioral difference: propositional logic is compact, whereas PDL is not. Compactness has significant implications regarding the relation of logical consequence. If a propositional formula φ is a consequence of a set Γ of propositional formulas, then it is already a consequence of some finite subset of Γ; but this is not true in PDL. Recall that we write Γ ⊨ φ and say that φ is a logical consequence of Γ if φ is satisfied in any state of any Kripke frame K all of whose states satisfy all the formulas of Γ; that is, if K ⊨ Γ, then K ⊨ φ. An alternative interpretation of logical consequence, not equivalent to the above, is that in any Kripke frame, the formula φ holds in any state satisfying all formulas in Γ. Allowing infinite conjunctions, we might write this as ⊨ ⋀Γ → φ. This is not the same as Γ ⊨ φ, since ⊨ ⋀Γ → φ implies Γ ⊨ φ, but not necessarily vice versa. A counterexample is provided by Γ = {p} and φ = [a]p. However, if Γ contains only finitely many atomic programs, we can reduce the problem Γ ⊨ φ to the problem ⊨ ⋀Γ′ → φ for a related Γ′, as shown in Theorem 19. Under either interpretation, compactness fails:

THEOREM 23. There is an infinite set Γ of formulas and a formula φ such that ⊨ ⋀Γ → φ (hence Γ ⊨ φ), but for no proper subset Γ′ ⊊ Γ is it the case that Γ′ ⊨ φ (hence neither is it the case that ⊨ ⋀Γ′ → φ).

As shown in Theorem 19, logical consequences Γ ⊨ φ for finite Γ are no more difficult to decide than validity of single formulas. But what if Γ is infinite? Here compactness is the key factor. If Γ is an r.e. set and the logic is compact, then the consequence problem is r.e.: to check whether Γ ⊨ φ, the finite subsets of Γ can be effectively enumerated, and checking Γ₀ ⊨ φ for finite Γ₀ is a decidable problem. Since compactness fails in PDL, this observation does us no good, even when Γ is known to be recursively enumerable. However, the following result shows that the situation is much worse than we might expect: even if Γ is taken to be the set of substitution instances of a single formula of PDL, the consequence problem becomes very highly undecidable. This is a rather striking manifestation of PDL's lack of compactness.


Let φ be a given formula. The set S_φ of substitution instances of φ is the set of all formulas obtained by substituting a formula for each atomic proposition appearing in φ.

THEOREM 24. The problem of deciding whether S_φ ⊨ ψ is Π¹₁-complete. The problem is Π¹₁-hard even for a particular fixed φ.

6 NONREGULAR PDL

In this section we enrich the class of regular programs in PDL by introducing programs whose control structure requires more than a finite automaton. For example, the class of context-free programs requires a pushdown automaton (PDA), and moving up from regular to context-free programs is really going from iterative programs to ones with parameterless recursive procedures. Several questions arise when enriching the class of programs of PDL, such as whether the expressive power of the logic grows, and if so, whether the resulting logics are still decidable. It turns out that any nonregular program increases PDL's expressive power and that the validity problem for PDL with context-free programs is undecidable. The bulk of the section is then devoted to the difficult problem of trying to characterize the borderline between decidable and undecidable extensions. On the one hand, validity for PDL with the addition of even a single extremely simple nonregular program is already Π¹₁-complete; but on the other hand, when we add another equally simple program, the problem remains decidable. Besides these results, which pertain to very specific extensions, we discuss some broad decidability results that cover many languages, including some that are not even context-free. Since no similarly general undecidability results are known, we also address the weaker issue of whether nonregular extensions admit the finite model property and present a negative result that covers many cases.

6.1 Nonregular Programs

Consider the following self-explanatory program:

(11) while p do a; now do b the same number of times

This program is meant to represent the following set of computation sequences:

{(p?; a)^i ; ¬p? ; b^i | i ≥ 0}.

Viewed as a language over the alphabet {a, b, p?, ¬p?}, this set is not regular, and thus cannot be programmed in PDL. However, it can be represented by the following parameterless recursive procedure:


proc V {
    if p then { a; call V; b }
    else return
}

The set of computation sequences of this program is captured by the context-free grammar

V → ¬p?  |  p? a V b
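As a small illustration of ours (the function name is hypothetical), a recursive Python generator enumerates the computation sequences this grammar derives, up to a bounded recursion depth:

```python
def seqs(depth):
    """Computation sequences of V -> ¬p? | p? a V b, nesting at most `depth` times."""
    if depth == 0:
        return ['¬p?']
    # Either stop now, or apply the recursive production once more.
    return ['¬p?'] + ['p? a ' + w + ' b' for w in seqs(depth - 1)]
```

Every generated word contains equally many a's and b's, which is the intuitive reason no finite automaton can accept the set.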

We are thus led to the idea of allowing context-free programs inside the boxes and diamonds of PDL. From a pragmatic point of view, this amounts to extending the logic with the ability to reason about parameterless recursive procedures. The particular representation of the context-free programs is unimportant; we can use pushdown automata, context-free grammars, recursive procedures, or any other formalism that can be effectively translated into these. In the rest of this section, a number of specific programs will be of interest, and we employ special abbreviations for them. For example, we define:

a^Δ b a^Δ =def {a^i b a^i | i ≥ 0}
a^Δ b^Δ =def {a^i b^i | i ≥ 0}
b^Δ a^Δ =def {b^i a^i | i ≥ 0}.

Note that a^Δ b^Δ is really just a nondeterministic version of the program (11) in which there is simply no p to control the iteration. In fact, (11) could have been written in this notation as (p? a)^Δ ¬p? b^Δ.² In programming terms, we can compare the regular program (ab)* with the nonregular one a^Δ b^Δ by observing that if a is "purchase a loaf of bread" and b is "pay $1.00," then the former program captures the process of paying for each loaf when purchased, while the latter captures the process of paying for them all at the end of the month. It turns out that enriching PDL with even a single arbitrary nonregular program increases expressive power. If L is any language over atomic programs and tests, then PDL + L is defined exactly as PDL, but with the additional syntax rule stating that for any formula φ, the expression [L]φ is a new formula. The semantics of PDL + L is like that of PDL with the addition of the clause

m_K(L) =def ⋃_{σ ∈ L} m_K(σ).

²It is noteworthy that the results of this section do not depend on nondeterminism. For example, the negative Theorem 28 holds for the deterministic version (11) too. Also, most of the results in this section involve nonregular programs over atomic programs only, but can be generalized to allow tests as well.
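Although no finite automaton accepts them, membership in these languages is trivial to decide; a sketch (our helper functions, not the chapter's):

```python
def in_aba(w):
    """w ∈ a^Δ b a^Δ = {a^i b a^i | i ≥ 0}?"""
    i = 0
    while w[i:i + 1] == 'a':            # strip the leading block of a's
        i += 1
    return w[i:] == 'b' + 'a' * i       # what remains must be b followed by a^i

def in_ab(w):
    """w ∈ a^Δ b^Δ = {a^i b^i | i ≥ 0}?"""
    i = w.count('a')
    return w == 'a' * i + 'b' * i
```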


Note that PDL + L does not allow L to be used as a formation rule for new programs or to be combined with other programs. It is added to the programming language as a single new stand-alone program only. If PDL1 and PDL2 are two extensions of PDL, we say that PDL1 is as expressive as PDL2 if for each formula φ of PDL2 there is a formula ψ of PDL1 such that ⊨ φ ↔ ψ. If PDL1 is as expressive as PDL2 but PDL2 is not as expressive as PDL1, we say that PDL1 is strictly more expressive than PDL2. Thus, one version of PDL is strictly more expressive than another if anything the latter can express the former can too, but there is something the former can express that the latter cannot. A language is test-free if it is a subset of Σ0*; that is, if its seqs contain no tests.

THEOREM 25. If L is any nonregular test-free language, then PDL + L is strictly more expressive than PDL.

We can view the decidability of regular PDL as showing that propositional-level reasoning about iterative programs is computable. We now wish to know if the same is true for recursive procedures. We define context-free PDL to be PDL extended with context-free programs, where a context-free program is one whose seqs form a context-free language. The precise syntax is unimportant, but for definiteness we might take as programs the set of context-free grammars G over atomic programs and tests and define

m_K(G) =def ⋃_{σ ∈ CS(G)} m_K(σ),

where CS(G) is the set of computation sequences generated by G, as described in Section 1.3.

THEOREM 26. The validity problem for context-free PDL is undecidable.

Theorem 26 leaves several interesting questions unanswered. What is the level of undecidability of context-free PDL? What happens if we want to add only a small number of specific nonregular programs? The first of these questions arises from the fact that the equivalence problem for context-free languages is co-r.e.-complete, or complete for Π⁰₁ in the arithmetic hierarchy. Hence, all Theorem 26 shows is that the validity problem for context-free PDL is Π⁰₁-hard, while it might in fact be worse. The second question is far more general. We might be interested in reasoning only about deterministic or linear context-free programs,³ or we might be interested only in a few special context-free programs such as a^Δ b a^Δ or a^Δ b^Δ. Perhaps PDL

³A linear program is one whose seqs are generated by a context-free grammar in which there is at most one nonterminal symbol on the right-hand side of each rule. This corresponds to a family of recursive procedures in which there is at most one recursive call in each procedure.


remains decidable when these programs are added. The general question is to determine the borderline between the decidable and the undecidable when it comes to enriching the class of programs allowed in PDL. Interestingly, if we wish to consider such simple nonregular extensions as PDL + a^Δ b a^Δ or PDL + a^Δ b^Δ, we will not be able to prove undecidability by the technique used for context-free PDL in Theorem 26, since standard problems that are undecidable for context-free languages, such as equivalence and inclusion, are decidable for classes containing the regular languages and the likes of a^Δ b a^Δ and a^Δ b^Δ. Moreover, we cannot prove decidability by the technique used for PDL in Section 3.2, since logics like PDL + a^Δ b a^Δ and PDL + a^Δ b^Δ do not enjoy the finite model property. Thus, if we want to determine the decidability status of such extensions, we will have to work harder.

THEOREM 27. There is a satisfiable formula in PDL + a^Δ b^Δ that is not satisfied in any finite structure.

For PDL + a^Δ b a^Δ, the news is worse than mere undecidability:

THEOREM 28. The validity problem for PDL + a^Δ b a^Δ is Π¹₁-complete.

The Π¹₁ result holds also for PDL extended with the two programs a^Δ b^Δ and b^Δ a^Δ. It is easy to show that the validity problem for context-free PDL in its entirety remains in Π¹₁. Together with the fact that a^Δ b a^Δ is a context-free language, this yields an answer to the first question mentioned earlier: context-free PDL is Π¹₁-complete. As to the second question, Theorem 28 shows that the high undecidability phenomenon starts occurring even with the addition of one very simple nonregular program. We now turn to nonregular programs over a single letter. Consider the language of powers of 2:

a^{2^Δ} =def {a^{2^i} | i ≥ 0}.

Here we have:

THEOREM 29. The validity problem for PDL + a^{2^Δ} is undecidable.

It is actually possible to prove this result for powers of any fixed k ≥ 2. Thus PDL with the addition of any language of the form {a^{k^i} | i ≥ 0} for fixed k ≥ 2 is undecidable.
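Deciding membership in such one-letter languages is easy — the difficulty lies entirely in the logic. A quick sketch (our own helper functions; the Fibonacci-like seeds below, used for the sequences considered next, are example values):

```python
def in_powers_of_2(n):
    """a^n ∈ a^{2^Δ}  iff  n is a power of 2 (bit trick: one set bit)."""
    return n >= 1 and n & (n - 1) == 0

def in_fib_language(n, f0=1, f1=2):
    """a^n ∈ a^F for the recurrence f_i = f_{i-1} + f_{i-2}, seeds f0 < f1.
    Generate terms until we reach or pass n."""
    a, b = f0, f1
    while a < n:
        a, b = b, a + b
    return a == n
```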
Another class of one-letter extensions that has been proven to be undecidable consists of Fibonacci-like sequences:

THEOREM 30. Let f0, f1 be arbitrary elements of ℕ with f0 < f1, and let F be the sequence f0, f1, f2, … generated by the recurrence f_i = f_{i-1} + f_{i-2} for i ≥ 2. Let a^F =def {a^{f_i} | i ≥ 0}. Then the validity problem for PDL + a^F is undecidable.

In both these theorems, the fact that the sequences of a's in the programs grow exponentially is crucial to the proofs. Indeed, we know of no


undecidability results for any one-letter extension in which the lengths of the sequences of a's grow subexponentially. Particularly intriguing are the cases of squares and cubes:

a^{Δ²} =def {a^{i²} | i ≥ 0},
a^{Δ³} =def {a^{i³} | i ≥ 0}.

Are PDL + a^{Δ²} and PDL + a^{Δ³} undecidable? There is a decidability result for a slightly restricted version of the squares extension, which seems to indicate that the full unrestricted version PDL + a^{Δ²} is decidable too. However, we conjecture that for cubes the problem is undecidable. Interestingly, several classical open problems in number theory reduce to instances of the validity problem for PDL + a^{Δ³}. For example, while no one knows whether every integer greater than 10000 is the sum of five cubes, the following formula is valid if and only if the answer is yes:

[(a^{Δ³})⁵]p → [a^{10001}; a*]p.

(The 5-fold and 10001-fold iterations have to be written out in full, of course.) If PDL + a^{Δ³} were decidable, then we could compute the answer in a simple manner, at least in principle.
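The number-theoretic side of this example is easy to explore by brute force; the following sketch (ours, restricted to nonnegative cubes as suggested by the iteration counts in the formula) tests whether a given integer is a sum of five cubes:

```python
def sum_of_five_cubes(n):
    """Is n a sum of five nonnegative integer cubes? (naive brute force)"""
    cubes = []
    k = 0
    while k ** 3 <= n:
        cubes.append(k ** 3)
        k += 1
    cube_set = set(cubes)
    # Four nested sums over cubes (ascending, so we can break early),
    # plus a set-membership test for the fifth cube.
    for a in cubes:
        for b in cubes:
            if a + b > n:
                break
            for c in cubes:
                if a + b + c > n:
                    break
                for d in cubes:
                    s = a + b + c + d
                    if s > n:
                        break
                    if n - s in cube_set:
                        return True
    return False
```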

6.2 Decidable Extensions

We now turn to positive results. Theorem 27 states that PDL + a^Δ b^Δ does not have the finite model property. Nevertheless, we have the following:

THEOREM 31. The validity problem for PDL + a^Δ b^Δ is decidable.

When contrasted with Theorem 28, the decidability of PDL + a^Δ b^Δ is very surprising. We have two of the simplest nonregular languages, a^Δ b a^Δ and a^Δ b^Δ, which are extremely similar, yet the addition of one to PDL yields high undecidability while the other leaves the logic decidable. Theorem 31 was proved originally by showing that, although PDL + a^Δ b^Δ does not always admit finite models, it does admit finite pushdown models, in which transitions are labeled not only with atomic programs but also with push and pop instructions for a particular kind of stack. A close study of the proof (which relies heavily on the idiosyncrasies of the language a^Δ b^Δ) suggests that the decidability or undecidability has to do with the manner in which an automaton accepts the languages involved. For example, in the usual way of accepting a^Δ b a^Δ, a pushdown automaton (PDA) reading an a will carry out a push or a pop, depending upon its location in the input word. However, in the standard way of accepting a^Δ b^Δ, the a's are always pushed and the b's are always popped, regardless of the location; the input symbol alone determines what the automaton does. More recent work, which we now set out to describe, has yielded a general decidability result


that confirms this intuition. It is of special interest due to its generality, since it does not depend on specific programs. Let M = (Q, Σ, Γ, q0, z0, δ) be a PDA that accepts by empty stack. We say that M is simple-minded if, whenever δ(q, σ, γ) = (p, b), then for each q′ and γ′, either δ(q′, σ, γ′) = (p, b) or δ(q′, σ, γ′) is undefined. A context-free language is said to be simple-minded (a simple-minded CFL) if there exists a simple-minded PDA that accepts it. In other words, the action of a simple-minded automaton is determined uniquely by the input symbol; the state and stack symbol are used only to help determine whether the machine halts (rejecting the input) or continues. Note that such an automaton is necessarily deterministic. It is noteworthy that simple-minded PDAs accept a large fragment of the context-free languages, including a^Δ b^Δ and b^Δ a^Δ, as well as all balanced parenthesis languages (Dyck sets) and many of their intersections with regular languages.

THEOREM 32. If L is accepted by a simple-minded PDA, then PDL + L is decidable.

We can obtain another general decidability result involving languages accepted by deterministic stack automata. A stack automaton is a one-way PDA whose head can travel up and down the stack reading its contents, but can make changes only at the top of the stack. Stack automata can accept non-context-free languages such as a^Δ b^Δ c^Δ and its generalizations a1^Δ a2^Δ … an^Δ for any n, as well as many variants thereof. It would be nice to be able to prove decidability of PDL when augmented by any language accepted by such a machine, but this is not known. What has been proven, however, is that if each word in such a language is preceded by a new symbol to mark its beginning, then the enriched PDL is decidable:

THEOREM 33. Let e ∉ Σ0, and let L be a language over Σ0 that is accepted by a deterministic stack automaton. If we let eL denote the language {eu | u ∈ L}, then PDL + eL is decidable.
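As an illustration of the definition (our own sketch, using one of the Dyck sets mentioned above), the following acceptor's stack action is a function of the input symbol alone — '(' always pushes and ')' always pops — with acceptance mirroring "accept by empty stack":

```python
# The stack action depends only on the input symbol, as required of a
# simple-minded PDA; an impossible pop plays the role of an undefined
# transition.
ACTION = {'(': 'push', ')': 'pop'}

def simple_minded_accepts(w):
    """Accept the Dyck language of balanced parentheses."""
    stack = []
    for ch in w:
        act = ACTION.get(ch)
        if act == 'push':
            stack.append('A')
        elif act == 'pop':
            if not stack:
                return False         # undefined transition: reject
            stack.pop()
        else:
            return False             # symbol outside the alphabet
    return not stack                 # empty stack = accept
```

The language {a^i b^i} is accepted the same way, with a pushing and b popping, plus a check that no a follows a b.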
While Theorems 32 and 33 are general and cover many languages, they do not prove decidability of PDL + a^Δ b^Δ c^Δ, which may be considered the simplest non-context-free extension of PDL. Nevertheless, the constructions used in the proofs of the two general results have been combined to yield:

THEOREM 34. PDL + a^Δ b^Δ c^Δ is decidable.

As explained, we know of no undecidable extension of PDL with a polynomially growing language, although we conjecture that the cubes extension is undecidable. Since the decidability status of such extensions seems hard to determine, we now address a weaker notion: the presence or absence of a finite model property. The technique used in Theorem 27 to show that PDL + a^Δ b^Δ violates the finite model property does not work for one-letter alphabets. Nevertheless, we now state a general result leading to many one-letter extensions that violate the finite model property. In particular, the theorem will yield the following:

PROPOSITION 35 (squares and cubes). The logics PDL + a^{Δ²} and PDL + a^{Δ³} do not have the finite model property.

PROPOSITION 36 (polynomials). For every polynomial of the form

p(n) = c_i n^i + c_{i-1} n^{i-1} + … + c_0 ∈ ℤ[n]

with i ≥ 2 and positive leading coefficient c_i > 0, let S_p = {p(m) | m ∈ ℕ} ∩ ℕ. Then PDL + a^{S_p} does not have the finite model property.

PROPOSITION 37 (sums of primes). Let p_i be the i-th prime (with p_1 = 2), and define

S_sop =def {∑_{i=1}^{n} p_i | n ≥ 1}.

Then PDL + a^{S_sop} does not have the finite model property.

PROPOSITION 38 (factorials). Let S_fac =def {n! | n ∈ ℕ}. Then PDL + a^{S_fac} does not have the finite model property.

The finite model property fails for any sufficiently fast-growing integer linear recurrence, not just the Fibonacci sequence, although we do not know whether these extensions also render PDL undecidable. A kth-order integer linear recurrence is an inductively defined sequence

(12) ℓ_n =def c_1 ℓ_{n-1} + … + c_k ℓ_{n-k} + c_0,  n ≥ k,

where k ≥ 1, c_0, …, c_k ∈ ℕ, c_k ≠ 0, and ℓ_0, …, ℓ_{k-1} ∈ ℕ are given.

PROPOSITION 39 (linear recurrences). Let S_lr = {ℓ_n | n ≥ 0} be the set defined inductively by (12). The following conditions are equivalent:

(i) a^{S_lr} is nonregular;
(ii) PDL + a^{S_lr} does not have the finite model property;
(iii) not all ℓ_0, …, ℓ_{k-1} are zero and ∑_{i=1}^{k} c_i > 1.

7 OTHER VARIANTS OF PDL

7.1 Deterministic Programs Nondeterminism arises in PDL in two ways:

atomic programs can be interpreted in a structure as (not necessarily single-valued) binary relations on states; and


the programming constructs ∪ and * involve nondeterministic choice.

Many modern programming languages have facilities for concurrency and distributed computation, certain aspects of which can be modeled by nondeterminism. Nevertheless, the majority of programs written in practice are still deterministic. Here we investigate the effect of eliminating either one or both of these sources of nondeterminism from PDL. A program α is said to be (semantically) deterministic in a Kripke frame K if its traces are uniquely determined by their first states. If α is an atomic program a, this is equivalent to the requirement that m_K(a) be a partial function; that is, if both (s, t) and (s, t′) are in m_K(a), then t = t′. A deterministic Kripke frame K = (K, m_K) is one in which all atomic programs a are semantically deterministic. The class of deterministic while programs, denoted DWP, is the class of programs in which

the operators ∪, ?, and * may appear only in the context of the conditional test, while loop, skip, or fail;

tests in the conditional test and while loop are purely propositional; that is, there is no occurrence of the ⟨ ⟩ or [ ] operators.

The class of nondeterministic while programs, denoted WP, is the same, except that unconstrained use of the nondeterministic choice construct ∪ is allowed. It is easily shown that if α and β are semantically deterministic in K, then so are if φ then α else β and while φ do α. By restricting either the syntax or the semantics or both, we obtain the following logics:

DPDL (deterministic PDL), which is syntactically identical to PDL, but interpreted over deterministic structures only;

SPDL (strict PDL), in which only deterministic while programs are allowed; and

SDPDL (strict deterministic PDL), in which both restrictions are in force.

Validity and satisfiability in DPDL and SDPDL are defined just as in PDL, but with respect to deterministic structures only. If φ is valid in PDL, then φ is also valid in DPDL, but not conversely: the formula

(13) ⟨a⟩φ → [a]φ

is valid in DPDL but not in PDL. Also, SPDL and SDPDL are strictly less expressive than PDL or DPDL, since the formula

(14) φ


is not expressible in SPDL, as shown in [Halpern and Reif, 1983].

THEOREM 40. If the axiom scheme

(15) ⟨a⟩φ → [a]φ,  a ∈ Σ0

is added to Axiom System 17, then the resulting system is sound and complete for DPDL.

THEOREM 41. Validity in DPDL is deterministic exponential-time complete.

Now we turn to SPDL, in which atomic programs can be nondeterministic but can be composed into larger programs only with deterministic constructs.

THEOREM 42. Validity in SPDL is deterministic exponential-time complete.

The final version of interest is SDPDL, in which both the syntactic restrictions of SPDL and the semantic ones of DPDL are adopted. The exponential-time lower bound fails here, and we have:

THEOREM 43. The validity problem for SDPDL is complete in polynomial space.

The question of relative power of expression is of interest here. Is DPDL < PDL? Is SDPDL < DPDL? The first of these questions is inappropriate, since the syntax of both languages is the same but they are interpreted over different classes of structures. Considering the second, we have:

THEOREM 44. SDPDL < DPDL and SPDL < PDL.

In summary, we have the following diagram describing the relations of expressiveness between these logics. The solid arrows indicate added expressive power and the broken ones a difference in semantics. The validity problem is exponential-time complete for all but SDPDL, for which it is PSPACE-complete. Straightforward variants of Axiom System 17 are complete for all versions.
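The semantic requirement on deterministic structures — that each m_K(a) be a partial function — is straightforward to check on a finite frame; a sketch (ours):

```python
def is_deterministic(relation):
    """relation: set of (s, t) pairs interpreting an atomic program a.
    True iff the relation is a partial function, i.e. each state has at
    most one a-successor."""
    succ = {}
    for s, t in relation:
        if succ.setdefault(s, t) != t:
            return False             # two distinct a-successors of s
    return True
```

A frame is deterministic exactly when this check passes for every atomic program.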

[Diagram: the four logics arranged in a square; solid arrows (added expressive power) lead from SPDL to PDL and from SDPDL to DPDL, and broken arrows (difference in semantics) connect PDL with DPDL and SPDL with SDPDL.]

7.2 Representation by Automata A PDL program represents a regular set of computation sequences. This same regular set could possibly be represented exponentially more succinctly


by a finite automaton. The difference between these two representations corresponds roughly to the difference between while programs and flowcharts. Since finite automata are exponentially more succinct in general, the upper bound of Section 5 could conceivably fail if finite automata were allowed as programs. Moreover, we must also rework the deductive system of Section 4.1. However, it turns out that the completeness and exponential-time decidability results of PDL are not sensitive to the representation and still go through in the presence of finite automata as programs, provided the deductive system of Section 4.1 and the techniques of Sections 4 and 5 are suitably modified, as shown in [Pratt, 1979b; Pratt, 1981b] and [Harel and Sherman, 1985]. In recent years, the automata-theoretic approach to logics of programs has yielded significant insight into propositional logics more powerful than PDL, as well as substantial reductions in the complexity of their decision procedures. Especially enlightening are the connections with automata on infinite strings and infinite trees. By viewing a formula as an automaton and a treelike model as an input to that automaton, the satisfiability problem for a given formula becomes the emptiness problem for a given automaton. Logical questions are thereby transformed into purely automata-theoretic questions. We assume that nondeterministic finite automata are given in the form

(16) M = (n, i, j, δ),

where n = {0, …, n−1} is the set of states, i, j ∈ n are the start and final states respectively, and δ assigns a subset of Σ0 ∪ {φ? | φ ∈ Φ} to each pair of states. Intuitively, when visiting state ℓ and seeing symbol a, the automaton may move to state k if a ∈ δ(ℓ, k). The fact that the automata (16) have only one accept state is without loss of generality.
If M is an arbitrary nondeterministic finite automaton with accept states F, then the set accepted by M is the union of the sets accepted by M_k for k ∈ F, where M_k is identical to M except that it has the unique accept state k. A desired formula [M]φ can be written as a conjunction

⋀_{k ∈ F} [M_k]φ

with at most quadratic growth. We now obtain a new logic APDL (automata PDL) by defining Φ and Π inductively, using the clauses for Φ from Section 2.1 and letting Π = Σ0 ∪ {φ? | φ ∈ Φ} ∪ F, where F is the set of automata of the form (16).
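Here is our own sketch of the representation (16), with δ encoded as a Python dict mapping state pairs to symbol sets (an assumption for illustration), together with the splitting of a multi-accept automaton into single-accept copies:

```python
def nfa_accepts(M, word):
    """Simulate M = (n, i, j, delta) on a word of atomic-program symbols.
    delta maps a state pair (l, k) to the set of symbols allowing l -> k."""
    n, i, j, delta = M
    current = {i}
    for sym in word:
        current = {k for l in current for k in range(n)
                   if sym in delta.get((l, k), set())}
    return j in current

def split_accepts(n, i, F, delta, word):
    """Acceptance by a multi-accept NFA, as the union over the
    single-accept copies M_k for k in F (mirroring [M]phi = /\\ [M_k]phi)."""
    return any(nfa_accepts((n, i, k, delta), word) for k in F)
```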


Axioms 17(iv), (v), and (vii) are replaced by:

(17) [n, i, j, δ]φ ↔ ⋀_{k ∈ n, σ ∈ δ(i,k)} [σ][n, k, j, δ]φ,  i ≠ j;

(18) [n, i, i, δ]φ ↔ φ ∧ ⋀_{k ∈ n, σ ∈ δ(i,k)} [σ][n, k, i, δ]φ.

The induction axiom 17(viii) becomes

(19) (⋀_{k ∈ n} [n, i, k, δ](φ_k → ⋀_{m ∈ n, σ ∈ δ(k,m)} [σ]φ_m)) → (φ_i → [n, i, j, δ]φ_j).

These and other similar changes can be used to prove: THEOREM 45. Validity in APDL is decidable in exponential time. THEOREM 46. The axiom system described above is complete for APDL.

7.3 Converse

The converse operator ⁻ is a program operator that allows a program to be "run backwards":

m_K(α⁻) =def {(s, t) | (t, s) ∈ m_K(α)}.

PDL with converse is called CPDL.
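Semantically, converse is just relational transposition; a two-line sketch of ours, which can also be used to confirm identities like (α; β)⁻ = β⁻; α⁻ on concrete relations:

```python
def converse(R):
    """Converse of a binary relation: swap every pair."""
    return {(t, s) for (s, t) in R}

def compose(R, S):
    """Relational composition R ; S."""
    return {(s, u) for (s, t) in R for (t2, u) in S if t == t2}
```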

The following identities allow us to assume without loss of generality that the converse operator is applied to atomic programs only:

(α; β)⁻ ≡ β⁻; α⁻
(α ∪ β)⁻ ≡ α⁻ ∪ β⁻
(α*)⁻ ≡ (α⁻)*.

The converse operator strictly increases the expressive power of PDL, since the formula ⟨a⁻⟩1 is not expressible without it.

THEOREM 47. PDL < CPDL.

Proof. Consider the structure described in the following figure: three states s, t, u, whose only transition is an a-edge from t to s, while u is isolated.

[Figure: t —a→ s; u isolated]

In this structure, s ⊨ ⟨a⁻⟩1 but u ⊭ ⟨a⁻⟩1. On the other hand, it can be shown by induction on the structure of formulas that if s and u agree on all atomic formulas, then no formula of PDL can distinguish between the two. More interestingly, the presence of the converse operator implies that the operator ⟨α⟩ is continuous, in the sense that if A is any (possibly infinite) family of formulas possessing a join ⋁A, then the join ⋁{⟨α⟩φ | φ ∈ A} exists and is logically equivalent to ⟨α⟩⋁A. In the absence of the converse operator, one can construct nonstandard models for which this fails. The completeness and exponential-time decidability results of Sections 4 and 5 can be extended to CPDL provided the following two axioms are added:

φ → [α]⟨α⁻⟩φ
φ → [α⁻]⟨α⟩φ.

The filtration lemma (Lemma 14) still holds in the presence of ⁻, as does the finite model property.

7.4 Well-foundedness

If α is a deterministic program, the formula φ → ⟨α⟩ψ asserts the total correctness of α with respect to pre- and postconditions φ and ψ, respectively. For nondeterministic programs, however, this formula does not express the right notion of total correctness. It asserts that φ implies that there exists a halting computation sequence of α yielding ψ, whereas we would really like to assert that φ implies that all computation sequences of α terminate and yield ψ. Let us denote the latter property by TC(φ, α, ψ).

Unfortunately, this is not expressible in PDL. The problem is intimately connected with the notion of well-foundedness. A program α is said to be well-founded at a state u0 if there exists no infinite sequence of states u0, u1, u2, … with (u_i, u_{i+1}) ∈ m_K(α) for all i ≥ 0. This property is not expressible in PDL either, as we will see. Several very powerful logics have been proposed to deal with this situation. The most powerful is perhaps the propositional μ-calculus, which is essentially propositional modal logic augmented with a least fixpoint operator μ. Using this operator, one can express any property that can be formulated as the least fixpoint of a monotone transformation on sets of states defined by the PDL operators. For example, the well-foundedness of a program α is expressed


(20) μX.[α]X

in this logic. Two somewhat weaker ways of capturing well-foundedness without resorting to the full μ-calculus have been studied. One is to add to PDL an explicit predicate wf α for well-foundedness:

m_K(wf α) =def {s0 | ¬∃ s1, s2, … ∀i ≥ 0 (s_i, s_{i+1}) ∈ m_K(α)}.

Another is to add an explicit predicate halt α, which asserts that all computations of α terminate. The predicate halt can be defined inductively from wf as follows:

(21) halt a ⇔def 1, for a an atomic program or test;
(22) halt (α; β) ⇔def halt α ∧ [α] halt β;
(23) halt (α ∪ β) ⇔def halt α ∧ halt β;
(24) halt α* ⇔def wf α ∧ [α*] halt α.

These constructs have been investigated under the various names loop, repeat, and Δ. The predicates loop and repeat are just the complements of halt and wf, respectively:

loop α ⇔def ¬halt α
repeat α ⇔def ¬wf α.

Clause (24) is equivalent to the assertion

loop α* ⇔def repeat α ∨ ⟨α*⟩ loop α.

It asserts that a nonhalting computation of α* consists of either an infinite sequence of halting computations of α or a finite sequence of halting computations of α followed by a nonhalting computation of α. Let RPDL and LPDL denote the logics obtained by augmenting PDL with the wf and halt predicates, respectively.⁴ It follows from the preceding discussion that

PDL ≤ LPDL ≤ RPDL ≤ the propositional μ-calculus.

Moreover, all these inclusions are known to be strict. The logic LPDL is powerful enough to express the total correctness of nondeterministic programs. The total correctness of α with respect to precondition φ and postcondition ψ is expressed

TC(φ, α, ψ) ⇔def φ → halt α ∧ [α]ψ.

⁴The L in LPDL stands for "loop" and the R in RPDL stands for "repeat." We retain these names for historical reasons.
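On a finite frame, α is well-founded at u0 precisely when no cycle is reachable from u0 along α-edges, since any infinite path through finitely many states must repeat a state; a DFS sketch of ours:

```python
def well_founded_at(R, u0):
    """No infinite R-path starting at u0.  In a finite graph this is
    equivalent to 'no cycle reachable from u0' (white/gray/black DFS)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for (s, t) in R:
            if s == u:
                c = color.get(t, WHITE)
                if c == GRAY:
                    return False      # back edge: reachable cycle
                if c == WHITE and not dfs(t):
                    return False
        color[u] = BLACK
        return True
    return dfs(u0)
```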

DYNAMIC LOGIC

145

Conversely, halt can be expressed in terms of TC:

halt α ⟺ TC(1, α, 1).
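Clauses (21)-(24) can be evaluated concretely over a finite frame by structural recursion on programs. The sketch below is our own construction (tuple-encoded regular programs without tests, relations as sets of pairs), not from the text:

```python
# Sketch: evaluating halt for regular programs on a finite Kripke frame.
# Programs are nested tuples, e.g. ("star", ("atom", "a")); tests are
# omitted for brevity.  This encoding is an assumption.

def rel(prog, states, atoms):
    """Input/output relation mK(prog) as a set of state pairs."""
    kind = prog[0]
    if kind == "atom":
        return atoms[prog[1]]
    if kind == "seq":
        r = rel(prog[1], states, atoms)
        s = rel(prog[2], states, atoms)
        return {(u, w) for (u, v) in r for (v2, w) in s if v == v2}
    if kind == "union":
        return rel(prog[1], states, atoms) | rel(prog[2], states, atoms)
    if kind == "star":
        r = rel(prog[1], states, atoms)
        closure = {(u, u) for u in states}   # reflexive transitive closure
        while True:
            step = closure | {(u, w) for (u, v) in closure
                              for (v2, w) in r if v == v2}
            if step == closure:
                return closure
            closure = step
    raise ValueError(prog)

def wf(r, states):
    """States at which r is well-founded: least fixpoint of X -> [r]X."""
    X = set()
    while True:
        X2 = {s for s in states if all(t in X for (u, t) in r if u == s)}
        if X2 == X:
            return X
        X = X2

def box(r, P, states):
    """[r]P: states all of whose r-successors lie in P."""
    return {s for s in states if all(t in P for (u, t) in r if u == s)}

def halt(prog, states, atoms):
    kind = prog[0]
    if kind == "atom":                          # clause (21): halt a <=> 1
        return set(states)
    if kind == "seq":                           # clause (22)
        return halt(prog[1], states, atoms) & box(
            rel(prog[1], states, atoms), halt(prog[2], states, atoms), states)
    if kind == "union":                         # clause (23)
        return halt(prog[1], states, atoms) & halt(prog[2], states, atoms)
    if kind == "star":                          # clause (24)
        a = prog[1]
        return wf(rel(a, states, atoms), states) & box(
            rel(prog, states, atoms), halt(a, states, atoms), states)
    raise ValueError(prog)

states = {0, 1}
atoms = {"a": {(0, 0), (1, 0)}}     # a loops forever at state 0
loop_forever = ("star", ("atom", "a"))
```

On this frame, halt a holds everywhere (clause (21)), but halt a* holds nowhere, since a is not well-founded at either state.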

THEOREM 48. PDL < LPDL.

THEOREM 49. LPDL < RPDL.

It is possible to extend Theorem 49 to versions CRPDL and CLPDL in which converse is allowed in addition to wf or halt. Also, the proof of Theorem 47 goes through for LPDL and RPDL, so that <a⁻>1 is not expressible in either. Theorem 48 goes through for the converse versions too. We obtain the situation illustrated in the following figure, in which the arrows indicate < and the absence of a path between two logics means that each can express properties that the other cannot.

(Figure: the six logics ordered by <, with PDL at the bottom; arrows lead from PDL to CPDL and LPDL, from LPDL to RPDL and CLPDL, from CPDL to CLPDL, and from RPDL and CLPDL to CRPDL at the top.)

The filtration lemma fails for all the halt and wf versions, as in Theorem 48. However, satisfiable formulas of the μ-calculus (hence of RPDL and LPDL) do have finite models. This finite model property is not shared by CLPDL or CRPDL.

THEOREM 50. The CLPDL formula

¬halt a* ∧ [a*]halt (a⁻)*

is satisfiable but has no finite model.

As it turns out, Theorem 50 does not prevent CRPDL from being decidable.


THEOREM 51. The validity problems for CRPDL, CLPDL, RPDL, LPDL, and the propositional μ-calculus are all decidable in deterministic exponential time.

Obviously, the simpler the logic, the simpler the arguments needed to show exponential-time decidability. Over the years, all these logics have gradually been shown to be decidable in exponential time by various authors using various techniques. Here we point to the exponential-time decidability of the propositional μ-calculus with forward and backward modalities, proved in [Vardi, 1998b], from which all of these results can easily be seen to follow. The proof in [Vardi, 1998b] is carried out by exhibiting an exponential-time decision procedure for two-way alternating automata on infinite trees. As mentioned above, RPDL possesses the finite (but not necessarily the small and not the collapsed) model property.

THEOREM 52. Every satisfiable formula of RPDL, LPDL, and the propositional μ-calculus has a finite model.

CRPDL and CLPDL are extensions of PDL that, like PDL + ab (Theorems 27 and 31), are decidable despite lacking a finite model property. Complete axiomatizations for RPDL and LPDL can be obtained by embedding them into the μ-calculus (see Section 14.4).

7.5 Concurrency

Another interesting extension of PDL concerns concurrent programs. One can define an intersection operator ∩ such that the binary relation on states corresponding to the program α ∩ β is the intersection of the binary relations corresponding to α and β. This can be viewed as a kind of concurrency operator that admits transitions to those states that both α and β would have admitted. Here we consider a different and perhaps more natural notion of concurrency. The interpretation of a program will not be a binary relation on states, which relates initial states to possible final states, but rather a relation between states and sets of states. Thus mK(α) will relate a start state u to a collection of sets of states U. The intuition is that, starting in state u, the (concurrent) program α can be run with its concurrent execution threads ending in the set of final states U. The basic concurrency operator will be denoted here by ∧, although in the original work on concurrent Dynamic Logic [Peleg, 1987b; Peleg, 1987c; Peleg, 1987a] the notation ∩ is used. The syntax of concurrent PDL is the same as that of PDL, with the addition of the clause:

if α, β ∈ Π, then α ∧ β ∈ Π.

The program α ∧ β means, intuitively, "execute α and β in parallel."


The semantics of concurrent PDL is defined on Kripke frames K = (K, mK) as with PDL, except that for programs α,

mK(α) ⊆ K × 2^K.

Thus the meaning of α is a collection of reachability pairs of the form (u, U), where u ∈ K and U ⊆ K. In this brief description of concurrent PDL, we require that structures assign to atomic programs a sequential, non-parallel meaning; that is, for each a ∈ Π0, we require that if (u, U) ∈ mK(a), then #U = 1. True parallelism will stem from applying the concurrency operator to build larger sets U in the reachability pairs of compound programs. For details, see [Peleg, 1987b; Peleg, 1987c]. The relevant results for this logic are the following:

THEOREM 53. PDL < concurrent PDL.

THEOREM 54. The validity problem for concurrent PDL is decidable in deterministic exponential time.

Axiom System 17, augmented with the following axiom, can be shown to be complete for concurrent PDL:

<α ∧ β>φ ↔ <α>φ ∧ <β>φ.
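The reachability-pair semantics of the ∧ operator can be sketched concretely. The encoding below (final-state sets as frozensets) is our own assumption, not from the original work:

```python
# Sketch: reachability-pair semantics for the concurrency operator.
# mK(alpha) is a set of pairs (u, U), with U a frozenset of final states.

def conc(m_alpha, m_beta):
    """alpha ^ beta: from a common start state, run both threads and
    unite their sets of final states."""
    return {(u, U | V) for (u, U) in m_alpha for (v, V) in m_beta if u == v}

# Atomic programs get a sequential meaning: #U = 1 (singleton final sets).
m_a = {("u", frozenset({"v1"}))}
m_b = {("u", frozenset({"v2"})), ("w", frozenset({"v3"}))}
```

Only the pairs sharing the start state u combine, which mirrors the completeness axiom <α ∧ β>φ ↔ <α>φ ∧ <β>φ.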

8 FIRST-ORDER DYNAMIC LOGIC (DL)

In this section we begin the study of first-order Dynamic Logic. The main difference between first-order DL and the propositional version PDL discussed in the previous sections is the presence of a first-order structure A, called the domain of computation, over which first-order quantification is allowed. States are no longer abstract points, but valuations of a set of variables over A, the carrier of A. Atomic programs in DL are no longer abstract binary relations, but assignment statements of various forms, all based on assigning values to variables during the computation. The most basic example of such an assignment is the simple assignment x := t, where x is a variable and t is a term. The atomic formulas of DL are generally taken to be atomic first-order formulas. In addition to the constructs of PDL, the basic DL syntax contains individual variables ranging over A, function and predicate symbols for distinguished functions and predicates of A, and quantifiers ranging over A, exactly as in classical first-order logic. More powerful versions of the logic contain array and stack variables and other constructs, as well as primitive operations for manipulating them and assignments for changing their values. Sometimes the introduction of a new construct increases expressive power and sometimes not; sometimes it has an effect on the complexity of


deciding satisfiability and sometimes not. Indeed, one of the central goals of research has been to classify these constructs in terms of their relative expressive power and complexity. In this section we lay the groundwork by defining the various logical and programming constructs we shall need.

8.1 Basic Syntax

The language of first-order Dynamic Logic is built upon classical first-order logic. There is always an underlying first-order vocabulary Σ of function symbols and predicate (or relation) symbols. On top of this vocabulary, we define a set of programs and a set of formulas. These two sets interact by means of the modal construct [ ] exactly as in the propositional case. Programs and formulas are usually defined by mutual induction. Let Σ = {f, g, ..., p, r, ...} be a finite first-order vocabulary. Here f and g denote typical function symbols of Σ, and p and r denote typical relation symbols. Associated with each function and relation symbol of Σ is a fixed arity (number of arguments), although we do not represent the arity explicitly. We assume that Σ always contains the equality symbol =, whose arity is 2. Functions and relations of arity 0, 1, 2, 3, and n are called nullary, unary, binary, ternary, and n-ary, respectively. Nullary functions are also called constants. We shall be using a countable set of individual variables V = {x0, x1, ...}. We always assume that Σ contains at least one function symbol of positive arity. A vocabulary is polyadic if it contains a function symbol of arity greater than one. Vocabularies whose function symbols are all unary are called monadic. A vocabulary is rich if either it contains at least one predicate symbol besides the equality symbol or the sum of the arities of its function symbols is at least two. Examples of rich vocabularies are: two unary function symbols, or one binary function symbol, or one unary function symbol and one unary predicate symbol. A vocabulary that is not rich is poor. Hence a poor vocabulary has just one unary function symbol and possibly some constants, but no relation symbols other than equality.
The main difference between rich and poor vocabularies is that the former admit exponentially many pairwise non-isomorphic structures of a given finite cardinality, whereas the latter admit only polynomially many. We say that a vocabulary is mono-unary if it contains no function symbols other than a single unary one. It may contain constants and predicate symbols. The definitions of DL programs and formulas below depend on the vocabulary Σ, but in general we shall not make this dependence explicit unless we have some specific reason for doing so.


Atomic Formulas and Programs

In all versions of DL that we will consider, atomic formulas are atomic formulas of the first-order vocabulary Σ; that is, formulas of the form r(t1, ..., tn), where r is an n-ary relation symbol of Σ and t1, ..., tn are terms of Σ. As in PDL, programs are defined inductively from atomic programs using various programming constructs. The meaning of a compound program is given inductively in terms of the meanings of its constituent parts. Different classes of programs are obtained by choosing different classes of atomic programs and programming constructs. In the basic version of DL, an atomic program is a simple assignment x := t, where x ∈ V and t is a term of Σ. Intuitively, this program assigns the value of t to the variable x. This is the same form of assignment found in most conventional programming languages. More powerful forms of assignment, such as stack and array assignments and nondeterministic "wildcard" assignments, will be discussed later. The precise choice of atomic programs will be made explicit when needed, but for now we use the term atomic program to cover all of these possibilities.

Tests

As in PDL, DL contains a test operator ?, which turns a formula into a program. In most versions of DL that we shall discuss, we allow only quantifier-free first-order formulas as tests. We sometimes call these versions poor test. Alternatively, we might allow any first-order formula as a test. Most generally, we might place no restrictions on the form of tests, allowing any DL formula whatsoever, including those that contain other programs, perhaps containing other tests, etc. These versions of DL are labeled rich test, as in Section 2.1. Whereas programs can be defined independently of formulas in poor test versions, rich test versions require a mutually inductive definition of programs and formulas. As with atomic programs, the precise logic we consider at any given time depends on the choice of tests we allow.
We will make this explicit when needed, but for now, we use the term test to cover all possibilities.

Regular Programs

For a given set of atomic programs and tests, the set of regular programs is defined as in PDL (see Section 2.1):

any atomic program or test is a program;

if α and β are programs, then α; β is a program;
if α and β are programs, then α ∪ β is a program;
if α is a program, then α* is a program.


While Programs

Much of the literature on DL is concerned with the class of while programs (see Section 2.1). Formally, deterministic while programs form the subclass of the regular programs in which the program operators ∪, ?, and * are constrained to appear only in the forms

skip =def 1?
fail =def 0?
(25) if φ then α else β =def (φ?; α) ∪ (¬φ?; β)
(26) while φ do α =def (φ?; α)*; ¬φ?

The class of nondeterministic while programs is the same, except that we allow unrestricted use of the nondeterministic choice construct ∪. Of course, unrestricted use of the sequential composition operator is allowed in both languages. Restrictions on the form of atomic programs and tests apply as with regular programs. For example, if we are allowing only poor tests, then the φ occurring in the programs (25) and (26) must be a quantifier-free first-order formula. The class of deterministic while programs is important because it captures the basic programming constructs common to many real-life imperative programming languages. Over the standard structure of the natural numbers N, deterministic while programs are powerful enough to define all partial recursive functions, and thus over N they are as expressive as regular programs. A similar result holds for a wide class of models similar to N, for a suitable definition of "partial recursive functions" in these models. However, it is not true in general that while programs, even nondeterministic ones, are universally expressive. We discuss these results in Section 12.

Formulas

A formula of DL is defined in a way similar to that of PDL, with the addition of a rule for quantification. Equivalently, we might say that a formula of DL is defined in a way similar to that of first-order logic, with the addition of a rule for modality. The basic version of DL is defined with regular programs:

the false formula 0 is a formula;
any atomic formula is a formula;
if φ and ψ are formulas, then φ → ψ is a formula;
if φ is a formula and x ∈ V, then ∀x φ is a formula;

DYNAMIC LOGIC

151

if φ is a formula and α is a program, then [α]φ is a formula.

The only missing rule in the definition of the syntax of DL is the rule for tests. In our basic version we would have:

if φ is a quantifier-free first-order formula, then φ? is a test.

For the rich test version, the definitions of programs and formulas are mutually dependent, and the rule defining tests is simply:

if φ is a formula, then φ? is a test.

We will use the same notation as in propositional logic: ¬φ stands for φ → 0. As in first-order logic, the existential quantifier ∃ is considered a defined construct: ∃x φ abbreviates ¬∀x ¬φ. Similarly, the modal construct < > is considered a defined construct as in Section 2.1, since it is the modal dual of [ ]. The other propositional constructs ∧, ∨, and ↔ are defined as in Section 2.1. Of course, we use parentheses where necessary to ensure unique readability. Note that the individual variables in V serve a dual purpose: they are both program variables and logical variables.

8.2 Richer Programs

Seqs and R.E. Programs

Some classes of programs are most conveniently defined as certain sets of seqs. Recall from Section 2.3 that a seq is a program of the form σ1; ...; σk, where each σi is an assignment statement or a quantifier-free first-order test. Each regular program α is associated with a unique set of seqs CS(α) (Section 2.3). These definitions were made in the propositional context, but they apply equally well to the first-order case; the only difference is in the form of atomic programs and tests. Construing the word in the broadest possible sense, we can consider a program to be an arbitrary set of seqs. Although this makes sense semantically (we can assign an input/output relation to such a set in a meaningful way), such programs can hardly be called executable. At the very least we should require that the set of seqs be recursively enumerable, so that there will be some effective procedure that can list all possible executions of a given program. However, there is a subtle issue that arises with this notion. Consider the set of seqs

{xi := f^i(c) | i ∈ N}.

This set satisfies the above restriction, yet it can hardly be called a program. It uses infinitely many variables, and as a consequence it might change a


valuation at infinitely many places. Another pathological example is the set of seqs

{xi+1 := f(xi) | i ∈ N},

which not only could change a valuation at infinitely many locations, but also depends on infinitely many locations of the input valuation. In order to avoid such pathologies, we will require that each program use only finitely many variables. This gives rise to the following definition of r.e. programs, which is the most general family of programs we will consider. Specifically, an r.e. program α is a Turing machine that enumerates a set of seqs over a finite set of variables. The set of seqs enumerated will be called CS(α). By FV(α) we will denote the finite set of variables that occur in seqs of CS(α). An important issue connected with r.e. programs is that of bounded memory. The assignment statements or tests in an r.e. program may have infinitely many terms with increasingly deep nesting of function symbols (although, as discussed, these terms only use finitely many variables), and these could require an unbounded amount of memory to compute. We define a set of seqs to be bounded memory if the depth of terms appearing in it is bounded. In fact, without sacrificing computational power, we could require that all terms in a bounded-memory set of seqs be of the form f(x1, ..., xn).

Arrays and Stacks

Interesting variants of the programming language we use in DL arise from allowing auxiliary data structures. We shall define versions with arrays and stacks, as well as a version with a nondeterministic assignment statement called wildcard assignment. Besides these, one can imagine augmenting while programs with many other kinds of constructs, such as blocks with declarations, recursive procedures with various parameter-passing mechanisms, higher-order procedures, concurrent processes, etc. It is easy to arrive at a family consisting of thousands of programming languages, giving rise to thousands of logics. Obviously, we have had to restrict ourselves.
It is worth mentioning, however, that certain kinds of recursive procedures are captured by our stack operations, as explained below.

Arrays

To handle arrays, we include a countable set of array variables

Varray = {F0, F1, ...}.

Each array variable has an associated arity, or number of arguments, which we do not represent explicitly. We assume that there are countably many


variables of each arity n ≥ 0. In the presence of array variables, we equate the set V of individual variables with the set of nullary array variables; thus V ⊆ Varray. The variables in Varray of arity n will range over n-ary functions with arguments and values in the domain of computation. In our exposition, elements of the domain of computation play two roles: they are used both as indices into an array and as values that can be stored in an array. One might equally well introduce a separate sort for array indices; although conceptually simple, this would complicate the notation and would give no new insight. We extend the set of first-order terms to allow the unrestricted occurrence of array variables, provided arities are respected. The classes of regular programs with arrays and deterministic and nondeterministic while programs with arrays are defined similarly to the classes without, except that we allow array assignments in addition to simple assignments. Array assignments are similar to simple assignments, but on the left-hand side we allow a term in which the outermost symbol is an array variable:

F(t1, ..., tn) := t.

Here F is an n-ary array variable and t1, ..., tn, t are terms, possibly involving other array variables. Note that when n = 0, this reduces to the ordinary simple assignment.

Recursion via an Algebraic Stack

We now consider DL in which the programs can manipulate a stack. The literature in automata theory and formal languages often distinguishes a stack from a pushdown store. In the former, the automaton is allowed to inspect the contents of the stack but to make changes only at the top. We shall use the term stack to denote the more common pushdown store, where the only inspection allowed is at the top of the stack. The motivation for this extension is to be able to capture recursion. It is well known that recursive procedures can be modeled using a stack, and for various technical reasons we prefer to extend the data-manipulation capabilities of our programs rather than to introduce new control constructs. When it encounters a recursive call, the stack simulation of recursion will push the return location and the values of local variables and parameters onto the stack. It will pop them upon completion of the call. The LIFO (last-in-first-out) nature of stack storage fits the order in which control executes recursive calls. To handle the stack in our stack version of DL, we add two new atomic programs

push(t) and pop(y),


where t is a term and y ∈ V. Intuitively, push(t) pushes the current value of t onto the top of the stack, and pop(y) pops the top value off the stack and assigns that value to the variable y. If the stack is empty, the pop operation does not change anything. We could have added a test for stack emptiness, but it can be shown to be redundant. Formally, the stack is simply a finite string of elements of the domain of computation. The classes of regular programs with stack and deterministic and nondeterministic while programs with stack are obtained by augmenting the respective classes of programs with the push and pop operations as atomic programs, in addition to simple assignments. In contrast to the case of arrays, here there is only a single stack. In fact, expressiveness changes dramatically when two or more stacks are allowed. Also, in order to be able to simulate recursion, the domain must have at least two distinct elements so that return addresses can be properly encoded in the stack. One way of doing this is to store the return address itself in unary using one element of the domain, then store one occurrence of the second element as a delimiter symbol, followed by domain elements constituting the current values of parameters and local variables. The kind of stack described here is often termed algebraic, since it contains elements from the domain of computation. It should be contrasted with the Boolean stack described next.

Parameterless Recursion via a Boolean Stack

An interesting special case is when the stack can contain only two distinct elements. This version of our programming language can be shown to capture recursive procedures without parameters or local variables. This is because we only need to store return addresses, but no actual data items from the domain of computation. This can be achieved using two values, as described above. We thus arrive at the idea of a Boolean stack. To handle such a stack in this version of DL, we add three new kinds of atomic programs and one new test. The atomic programs are

push-1, push-0, and pop,

and the test is simply top?. Intuitively, push-1 and push-0 push the corresponding distinct Boolean values onto the stack, pop removes the top element, and the test top? evaluates to true iff the top element of the stack is 1, with no side effect. With the test top? only, there is no explicit operator that distinguishes a stack with top element 0 from the empty stack. We might have defined such an operator, and in a more realistic language we would certainly do so. However, it is mathematically redundant, since it can be simulated with the operators we already have.


Wildcard Assignment

The nondeterministic assignment x := ? is a device that arises in the study of fairness; see [Apt and Plotkin, 1986]. It has often been called random assignment in the literature, although it has nothing to do with randomness or probability. We shall call it wildcard assignment. Intuitively, it operates by assigning a nondeterministically chosen element of the domain of computation to the variable x. This construct together with the [ ] modality is similar to the first-order universal quantifier, since it will follow from the semantics that the two formulas [x := ?]φ and ∀x φ are equivalent. However, wildcard assignment may appear in programs and can therefore be iterated.

8.3 Semantics

In this section we assign meanings to the syntactic constructs described in the previous sections. We interpret programs and formulas over a first-order structure A. Variables range over the carrier of this structure. We take an operational view of program semantics: programs change the values of variables by sequences of simple assignments x := t or other assignments, and flow of control is determined by the truth values of tests performed at various times during the computation.

States as Valuations

An instantaneous snapshot of all relevant information at any moment during the computation is determined by the values of the program variables. Thus our states will be valuations u, v, ... of the variables V over the carrier of the structure A. Our formal definition will associate the pair (u, v) of such valuations with the program α if it is possible to start in valuation u, execute the program α, and halt in valuation v. In this case, we will call (u, v) an input/output pair of α and write (u, v) ∈ mA(α). This will result in a Kripke frame exactly as in Section 2. Let A = (A, mA) be a first-order structure for the vocabulary Σ. We call A the domain of computation. Here A is a set, called the carrier of A, and mA is a meaning function such that mA(f) : A^n → A is an n-ary function interpreting the n-ary function symbol f of Σ, and mA(r) ⊆ A^n is an n-ary relation interpreting the n-ary relation symbol r of Σ. The equality symbol = is always interpreted as the identity relation. For n ≥ 0, let A^n → A denote the set of all n-ary functions in A. By convention, we take A^0 → A = A. Let A* denote the set of all finite-length strings over A. The structure A determines a Kripke frame, which we will also denote by A, as follows. A valuation over A is a function u assigning an n-ary function over A to each n-ary array variable. It also assigns meanings to


the stacks as follows. We shall use the two unique variable names STK and BSTK to denote the algebraic stack and the Boolean stack, respectively. The valuation u assigns a finite-length string of elements of A to STK and a finite-length string of Boolean values 1 and 0 to BSTK. Formally:

u(F) ∈ A^n → A, if F is an n-ary array variable;
u(STK) ∈ A*;
u(BSTK) ∈ {1, 0}*.

By our convention A^0 → A = A, and assuming that V ⊆ Varray, the individual variables (that is, the nullary array variables) are assigned elements of A under this definition: u(x) ∈ A if x ∈ V. The valuation u extends uniquely to terms t by induction. For an n-ary function symbol f and an n-ary array variable F,

u(f(t1, ..., tn)) =def mA(f)(u(t1), ..., u(tn))
u(F(t1, ..., tn)) =def u(F)(u(t1), ..., u(tn)).

The function-patching operator is defined as follows: if X and D are sets, f : X → D is any function, x ∈ X, and d ∈ D, then f[x/d] : X → D is the function defined by

f[x/d](y) =def d if y = x, and f(y) otherwise.

We will be using this notation in several ways, both at the logical and metalogical levels. For example: If u is a valuation, x is an individual variable, and a ∈ A, then u[x/a] is the new valuation obtained from u by changing the value of x to a and leaving the values of all other variables intact. If F is an n-ary array variable and f : A^n → A, then u[F/f] is the new valuation that assigns the same value as u to the stack variables and to all array variables other than F, and u[F/f](F) = f.

If f : A^n → A is an n-ary function, ā = a1, ..., an ∈ A^n, and a ∈ A, then the expression f[ā/a] denotes the n-ary function that agrees with f everywhere except for input ā, on which it takes the value a. More precisely,

f[ā/a](b̄) =def a if b̄ = ā, and f(b̄) otherwise.
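The patching operator has a direct functional rendering. The following one-liner is our own sketch, not from the text:

```python
# Sketch: the function-patching operator f[x/d], which agrees with f
# everywhere except at x, where it returns d.

def patch(f, x, d):
    """Return f[x/d] as a new function; f itself is left unchanged."""
    return lambda y: d if y == x else f(y)

square = lambda n: n * n
g = patch(square, 3, -1)      # g = square[3/-1]
```

The same idea applies at every level the text mentions: patching a valuation at a variable, or patching a stored array function at a tuple of arguments.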


We call valuations u and v finite variants of each other if u(F)(a1, ..., an) = v(F)(a1, ..., an) for all but finitely many array variables F and n-tuples a1, ..., an ∈ A^n. In other words, u and v differ on at most finitely many array variables, and for those F on which they do differ, the functions u(F) and v(F) differ on at most finitely many values. The relation "is a finite variant of" is an equivalence relation on valuations. Since a halting computation can run for only a finite amount of time, it can execute only finitely many assignments. It will therefore not be able to cross equivalence class boundaries; that is, in the binary relation semantics given below, if the pair (u, v) is an input/output pair of the program α, then v is a finite variant of u. We are now ready to define the states of our Kripke frame. For a ∈ A, let wa be the valuation in which the stacks are empty and all array and individual variables are interpreted as constant functions taking the value a everywhere. A state of A is any finite variant of a valuation wa. The set of states of A is denoted S^A. Call a state initial if it differs from some wa only at the values of individual variables. It is meaningful, and indeed useful in some contexts, to take as states the set of all valuations. Our purpose in restricting our attention to states as defined above is to prevent arrays from being initialized with highly complex oracles that would compromise the value of the relative expressiveness results of Section 12.

Assignment Statements

As in Section 2.2, with every program α we associate a binary relation

mA(α) ⊆ S^A × S^A

(called the input/output relation of α), and with every formula φ we associate a set

mA(φ) ⊆ S^A.

The sets mA(α) and mA(φ) are defined by mutual induction on the structure of α and φ. For the basis of this inductive definition, we first give the semantics of all the assignment statements discussed earlier.

The array assignment F(t1, ..., tn) := t is interpreted as the binary relation

mA(F(t1, ..., tn) := t) =def {(u, u[F/u(F)[u(t1), ..., u(tn)/u(t)]]) | u ∈ S^A}.


In other words, starting in state u, the array assignment has the effect of changing the value of F on input u(t1), ..., u(tn) to u(t), and leaving the value of F on all other inputs and the values of all other variables intact. For n = 0, this definition reduces to the following definition of simple assignment:

mA(x := t) =def {(u, u[x/u(t)]) | u ∈ S^A}.

The push operations, push(t) for the algebraic stack and push-1 and push-0 for the Boolean stack, are interpreted as the binary relations

mA(push(t)) =def {(u, u[STK/(u(t) · u(STK))]) | u ∈ S^A}
mA(push-1) =def {(u, u[BSTK/(1 · u(BSTK))]) | u ∈ S^A}
mA(push-0) =def {(u, u[BSTK/(0 · u(BSTK))]) | u ∈ S^A},

respectively. In other words, push(t) changes the value of the algebraic stack variable STK from u(STK) to the string u(t) · u(STK), the concatenation of the value u(t) with the string u(STK), and everything else is left intact. The effects of push-1 and push-0 are similar, except that the special constants 1 and 0 are concatenated with u(BSTK) instead of u(t).

The pop operations, pop(y) for the algebraic stack and pop for the Boolean stack, are interpreted as the binary relations

mA(pop(y)) =def {(u, u[STK/tail(u(STK))][y/head(u(STK), u(y))]) | u ∈ S^A}
mA(pop) =def {(u, u[BSTK/tail(u(BSTK))]) | u ∈ S^A},

respectively, where

tail(a · σ) =def σ
tail(ε) =def ε
head(a · σ, b) =def a
head(ε, b) =def b

and ε is the empty string. In other words, if u(STK) ≠ ε, this operation changes the value of STK from u(STK) to the string obtained by


deleting the first element of u(STK), and assigns that element to the variable y. If u(STK) = ε, then nothing is changed. Everything else is left intact. The Boolean stack operation pop changes the value of BSTK only, with no additional changes. We do not include explicit constructs to test whether the stacks are empty, since these can be simulated. However, we do need to be able to refer to the value of the top element of the Boolean stack, hence we include the top? test.

The Boolean test program top? is interpreted as the binary relation

mA(top?) =def {(u, u) | u ∈ S^A, head(u(BSTK)) = 1}.

In other words, this test changes nothing at all, but allows control to proceed iff the top of the Boolean stack contains 1.
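The stack semantics above can be sketched operationally. The encoding is our own assumption (a valuation as a dict, stacks as tuples with the top element first):

```python
# Sketch: the effect of push, pop, and top? on valuations.  Valuations are
# dicts; "STK" and "BSTK" hold tuples whose first element is the stack top.

def push(u, value, stk="STK"):
    """mA(push(t)): prepend the value to the stack; all else unchanged."""
    v = dict(u)
    v[stk] = (value,) + u[stk]
    return v

def pop(u, y):
    """mA(pop(y)): move the head of STK into y; on the empty stack, no change."""
    v = dict(u)
    if u["STK"]:
        v[y] = u["STK"][0]          # head
        v["STK"] = u["STK"][1:]     # tail
    return v

def top_test(u):
    """mA(top?): control proceeds iff the top of BSTK is 1; no side effect."""
    return bool(u["BSTK"]) and u["BSTK"][0] == 1

u0 = {"x": 0, "STK": (), "BSTK": (1,)}
u1 = pop(push(u0, 7), "x")          # push 7, then pop it into x
```

Note that pop on the empty stack returns the valuation unchanged, matching the convention in the text.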

The wildcard assignment x := ? for x ∈ V is interpreted as the relation

mA(x := ?) =def {(u, u[x/a]) | u ∈ S^A, a ∈ A}.

As a result of executing this statement, x will be assigned some arbitrary value of the carrier set A, and the values of all other variables will remain unchanged.

Programs and Formulas

The meanings of compound programs α and formulas φ are defined by mutual induction on their structure, exactly as in the propositional case (see Section 2.2).

Seqs and R.E. Programs

Recall that an r.e. program is a Turing machine enumerating a set CS(α) of seqs. If α is an r.e. program, we define

m_A(α) def= ⋃_{σ ∈ CS(α)} m_A(σ).

Thus, the meaning of α is defined to be the union of the meanings of the seqs σ in CS(α). The meaning m_A(σ) of a seq σ is determined by the meanings of atomic programs and tests and the sequential composition operator.

There is an interesting point here regarding the translation of programs using other programming constructs into r.e. programs. This can be done for arrays and stacks (for Boolean stacks, even into r.e. programs with bounded memory), but not for wildcard assignment. Since later in the book we shall be referring to the r.e. set of seqs associated with such programs, it is important to be able to carry out this translation. To see how this is done for the case of arrays, for example, consider an algorithm for simulating the execution of a program by generating only ordinary assignments and tests. It does not generate an array assignment of the form F(t1, ..., tn) := t, but rather "remembers" it, and when it reaches an assignment of the form x := F(t1, ..., tn) it aims at generating x := t instead. This requires care, since we must keep track of changes in the variables inside t and t1, ..., tn and incorporate them into the generated assignments.

Formulas

Here are the semantic definitions for the constructs of formulas of DL. The semantics of atomic first-order formulas is the standard semantics of classical first-order logic.

(27) m_A(0)      def= ∅
(28) m_A(φ → ψ)  def= { u | if u ∈ m_A(φ) then u ∈ m_A(ψ) }
(29) m_A(∀x φ)   def= { u | ∀a ∈ A, u[x/a] ∈ m_A(φ) }
(30) m_A([α]φ)   def= { u | ∀v, if (u, v) ∈ m_A(α) then v ∈ m_A(φ) }.

Equivalently, we could define the first-order quantifiers ∀ and ∃ in terms of the wildcard assignment:

(31) ∀x φ ↔ [x := ?]φ
(32) ∃x φ ↔ <x := ?>φ.
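To make clause (30) concrete, here is a small finite-state sketch (ours, with a made-up state space and relation) computing the set of states satisfying [α]φ from the relation m_A(α):

```python
# Sketch of clause (30): [alpha]phi holds at u iff every alpha-successor
# of u satisfies phi (vacuously, when u has no successors).

def box(alpha, phi, states):
    """alpha: set of (input, output) state pairs; phi: set of states."""
    return {u for u in states
            if all(v in phi for (w, v) in alpha if w == u)}

states = {0, 1, 2}
alpha = {(0, 1), (0, 2), (1, 2)}   # a nondeterministic program as a relation
phi = {2}                          # the states where phi holds

# 0 fails via the successor 1; 1 succeeds; 2 succeeds vacuously.
assert box(alpha, phi, states) == {1, 2}
```

The same comprehension with `any` in place of `all` would give the diamond <α>φ.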

Note that for deterministic programs (for example, those obtained by using the while programming language instead of regular programs and disallowing wildcard assignments), m_A(α) is a partial function from states to states; that is, for every state u, there is at most one v such that (u, v) ∈ m_A(α). The partiality of the function arises from the possibility that α may not halt when started in certain states. For example, m_A(while 1 do skip) is the empty relation. In general, the relation m_A(α) need not be single-valued.

If K is a given set of syntactic constructs, we refer to the version of Dynamic Logic with programs built from these constructs as Dynamic Logic with K, or simply as DL(K). Thus, we have DL(r.e.), DL(array), DL(stk), DL(bstk), DL(wild), and so on. As a default, these logics are the poor-test versions, in which only quantifier-free first-order formulas may appear as tests. The unadorned DL is used to abbreviate DL(reg), and we use DL(dreg) to denote DL with while programs, which are really deterministic regular programs. Again, while programs use only poor tests. Combinations such as DL(dreg+wild) are also allowed.


8.4 Satisfiability and Validity

The concepts of satisfiability, validity, etc. are defined as for PDL in Section 2, or as for first-order logic under the standard semantics. Let A = (A, m_A) be a structure, and let u be a state in S^A. For a formula φ, we write A, u ⊨ φ if u ∈ m_A(φ) and say that u satisfies φ in A. We sometimes write u ⊨ φ when A is understood. We say that φ is A-valid, and write A ⊨ φ, if A, u ⊨ φ for all u in A. We say that φ is valid, and write ⊨ φ, if A ⊨ φ for all A. We say that φ is satisfiable if A, u ⊨ φ for some A, u. For a set of formulas Σ, we write A ⊨ Σ if A ⊨ φ for all φ ∈ Σ.

Informally, A, u ⊨ [α]φ iff every terminating computation of α starting in state u terminates in a state satisfying φ, and A, u ⊨ <α>φ iff there exists a computation of α starting in state u and terminating in a state satisfying φ. For a pure first-order formula φ, the metastatement A, u ⊨ φ has the same meaning as in first-order logic.

9 RELATIONSHIPS WITH STATIC LOGICS

9.1 Uninterpreted Reasoning

In contrast to the propositional version PDL discussed in Sections 2-7, DL formulas involve variables, functions, predicates, and quantifiers; a state is a mapping from variables to values in some domain; and atomic programs are assignment statements. To give semantic meaning to these constructs requires a first-order structure A over which to interpret the function and predicate symbols. Nevertheless, we are not obliged to assume anything special about A or the nature of the interpretations of the function and predicate symbols, except as dictated by first-order semantics. Any conclusions we draw from this level of reasoning will be valid under all possible interpretations. Uninterpreted reasoning refers to this style of reasoning. For example, the formula

p(f(x), g(y, f(x))) → <z := f(x)> p(z, g(y, z))

is true over any domain, irrespective of the interpretations of p, f , and g. Another example of a valid formula is

z = y ∧ ∀x f(g(x)) = x → [while p(y) do y := g(y)]<while y ≠ z do y := f(y)>1.

Note the use of [ ] applied to < >. This formula asserts that under the assumption that f "undoes" g, any computation consisting of applying g some number of times to z can be backtracked to the original z by applying f some number of times to the result.
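A concrete instance (our choice, not the chapter's) makes the assertion tangible: take f(x) = x - 1 and g(x) = x + 1 over the integers, so that f undoes g, and let p(y) be the test y < 5:

```python
# f "undoes" g, so the second loop backtracks the first one (our example).
f = lambda x: x - 1
g = lambda x: x + 1
p = lambda y: y < 5

z = y = 0
while p(y):          # [while p(y) do y := g(y)]
    y = g(y)
while y != z:        # <while y != z do y := f(y)>
    y = f(y)

assert y == z == 0   # the computation returns to the original z
```

Both loops terminate here, matching the box-then-diamond reading: however the first loop ends, the second can restore z.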


We now observe that three basic properties of classical (uninterpreted) first-order logic fail for even fairly weak versions of DL: the Löwenheim-Skolem theorem, completeness, and compactness. The Löwenheim-Skolem theorem for classical first-order logic states that if a formula φ has an infinite model then it has models of all infinite cardinalities. Because of this theorem, classical first-order logic cannot define the structure of elementary arithmetic

N = (ω, +, ·, 0, 1, =)

up to isomorphism. That is, there is no first-order sentence that is true in a structure A if and only if A is isomorphic to N. However, this can be done in DL.

PROPOSITION 55. There exists a formula θ_N of DL(dreg) that defines N up to isomorphism.

The Löwenheim-Skolem theorem does not hold for DL, because θ_N has an infinite model (namely N), but all its models are isomorphic to N and are therefore countable. Besides the Löwenheim-Skolem theorem, compactness fails in DL as well. Consider the following countable set Γ of formulas:

{ <while p(x) do x := f(x)>1 } ∪ { p(f^n(x)) | n ≥ 0 }.

It is easy to see that Γ is not satisfiable, but it is finitely satisfiable; i.e., each finite subset of it is satisfiable. Worst of all, completeness cannot hold for any deductive system as we normally think of it (a finite effective system of axiom schemes and finitary inference rules). The set of theorems of such a system would be r.e., since they could be enumerated by writing down the axioms and systematically applying the rules of inference in all possible ways. However, the set of valid statements of DL is not recursively enumerable. In fact, we will describe in Section 10 exactly how bad the situation is. This is not to say that we cannot say anything meaningful about proofs and deduction in DL. On the contrary, there is a wealth of interesting and practical results on axiom systems for DL that we will cover in Section 11.

In this section we investigate the power of DL relative to classical static logics on the uninterpreted level. In particular, rich test DL of r.e. programs is equivalent to the infinitary language L^CK_{ω₁ω}. Some consequences of this fact are drawn in later sections. First we introduce a definition that allows us to compare different variants of DL. Let us recall from Section 8.3 that a state is initial if it differs from a constant state w_a only at the values of individual variables. If DL1 and DL2 are two variants of DL over the same vocabulary, we say that DL2 is as expressive as DL1, and write DL1 ≤ DL2, if for each formula φ in DL1 there is a formula ψ in DL2 such that A, u ⊨ φ ↔ ψ for all structures A and initial


states u. If DL2 is as expressive as DL1 but DL1 is not as expressive as DL2, we say that DL2 is strictly more expressive than DL1, and write DL1 < DL2. If DL2 is as expressive as DL1 and DL1 is as expressive as DL2, we say that DL1 and DL2 are of equal expressive power, or are simply equivalent, and write DL1 ≡ DL2. We will also use these notions for comparing versions of DL with static logics such as L_ωω.

There is a technical reason for the restriction to initial states in the above definition. If DL1 and DL2 have access to different sets of data types, then they may be trivially incomparable for uninteresting reasons, unless we are careful to limit the states on which they are compared. We shall see examples of this in Section 12. Also, in the definition of DL(K) given in Section 8.4, the programming language K is an explicit parameter. Actually, the particular first-order vocabulary Σ over which DL(K) and K are considered should be treated as a parameter too. It turns out that the relative expressiveness of versions of DL is sensitive not only to K, but also to Σ. This second parameter is often ignored in the literature, creating a source of potential misinterpretation of the results. For now, we assume a fixed first-order vocabulary Σ.

Rich Test Dynamic Logic of R.E. Programs

We are about to introduce the most general version of DL we will ever consider. This logic is called rich test Dynamic Logic of r.e. programs, and it will be denoted DL(rich-test r.e.). Programs of DL(rich-test r.e.) are r.e. sets of seqs as defined in Section 8.2, except that the seqs may contain tests φ? for any previously constructed formula φ. The formal definition is inductive. All atomic programs are programs and all atomic formulas are formulas. If φ, ψ are formulas, α, β are programs, {α_n | n ∈ ω} is an r.e. set of programs over a finite set of variables (free or bound), and x is a variable, then

0    φ → ψ    [α]φ    ∀x φ

are formulas and

α; β    {α_n | n ∈ ω}    φ?


are programs. The set CS(α) of computation sequences of a rich test r.e. program α is defined as usual.

The language L_{ω₁ω} is the language with the formation rules of the first-order language L_ωω, but in which countably infinite conjunctions ⋀_{i∈I} φ_i and disjunctions ⋁_{i∈I} φ_i are also allowed. In addition, if {φ_i | i ∈ I} is required to be recursively enumerable, then the resulting language is denoted L^CK_{ω₁ω} and is sometimes called constructive L_{ω₁ω}.

PROPOSITION 56. DL(rich-test r.e.) ≤ L^CK_{ω₁ω}.

Since r.e. programs as defined in Section 8.2 are clearly a special case of general rich-test r.e. programs, it follows that DL(rich-test r.e.) is as expressive as DL(r.e.). In fact, they are not of the same expressive power.

THEOREM 57. DL(r.e.) < DL(rich-test r.e.).

Henceforth, we shall assume that the first-order vocabulary Σ contains at least one function symbol of positive arity. Under this assumption, DL can easily be shown to be strictly more expressive than L_ωω:

THEOREM 58. L_ωω < DL.

COROLLARY 59. L_ωω < DL < DL(r.e.) < DL(rich-test r.e.) ≡ L^CK_{ω₁ω}.

The situation with the intermediate versions of DL, e.g. DL(stk), DL(bstk), DL(wild), etc., is of interest. We deal with the relative expressive power of these in Section 12.

9.2 Interpreted Reasoning

Arithmetical Structures

This is the most detailed level we will consider. It is the closest to the actual process of reasoning about concrete, fully specified programs. Syntactically, the programs and formulas are as on the uninterpreted level, but here we assume a fixed structure or class of structures. In this framework, we can study programs whose computational behavior depends on (sometimes deep) properties of the particular structures over which they are interpreted. In fact, almost any task of verifying the correctness of an actual program falls under the heading of interpreted reasoning. One specific structure we will look at carefully is the natural numbers with the usual arithmetic operations:

N = (ω, 0, 1, +, ·, =).

Let − denote the (first-order-definable) operation of subtraction and let gcd(x, y) denote the first-order-definable operation giving the greatest common divisor of x and y. The following formula of DL is N-valid, i.e., true in all states of N:

(33) x = x0 ∧ y = y0 ∧ xy ≥ 1 → <β>(x = gcd(x0, y0))

where β is the while program of Example 1 or the regular program

(x ≠ y?; ((x > y?; x := x − y) ∪ (x < y?; y := y − x)))*; x = y?.
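As a quick concrete check (our own simulation, not part of the text), the subtraction loop of β can be run directly and compared with the mathematical gcd:

```python
# Direct simulation (ours) of the program in (33): repeated subtraction
# computes the greatest common divisor, as the formula asserts.
from math import gcd

def beta(x, y):
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x            # on termination, x = y = gcd of the inputs

x0, y0 = 12, 18
assert beta(x0, y0) == gcd(x0, y0) == 6
```

The precondition xy ≥ 1 in (33) matters: the loop would not terminate if one of the inputs were 0.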

Formula (33) states the correctness and termination of an actual program over N computing the greatest common divisor. As another example, consider the following formula over N :

∀x ≥ 1 <(if even(x) then x := x/2 else x := 3x + 1)*>(x = 1).

Here x/2 denotes integer division, and even(·) is the relation that tests whether its argument is even. Both of these are first-order definable. This innocent-looking formula asserts that starting with an arbitrary positive integer and repeating the following two operations, we will eventually reach 1:

if the number is even, divide it by 2;
if the number is odd, triple it and add 1.
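The two operations can be iterated directly (our sketch; since termination for every input is exactly the open problem, we bound the number of steps and only run small cases):

```python
# The "3x + 1" iteration, bounded so the check always terminates (ours).
def collatz_reaches_one(x, max_steps=10_000):
    steps = 0
    while x != 1 and steps < max_steps:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return x == 1

# Known to hold for small inputs; the general claim is open.
assert all(collatz_reaches_one(n) for n in range(1, 100))
```
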

The truth of this formula is as yet unknown, and it constitutes a problem in number theory (dubbed "the 3x + 1 problem") that has been open for over 60 years. The formula ∀x ≥ 1 <α>1, where α is

while x ≠ 1 do if even(x) then x := x/2 else x := 3x + 1,

says this in a slightly different way.

The specific structure N can be generalized, resulting in the class of arithmetical structures. Briefly, a structure A is arithmetical if it contains a first-order-definable copy of N and has first-order definable functions for coding finite sequences of elements of A into single elements and for the corresponding decoding. Arithmetical structures are important because (i) most structures arising naturally in computer science (e.g., discrete structures with recursively defined data types) are arithmetical, and (ii) any structure can be extended to an arithmetical one by adding appropriate encoding and decoding capabilities. While most of the results we present for the interpreted level are given in terms of N alone, many of them hold for any arithmetical structure, so their significance is greater.

Expressive Power over N

The results of Corollary 59 establishing that

L_ωω < DL < DL(r.e.) < DL(rich-test r.e.)


were on the uninterpreted level, where all structures are taken into account. Thus first-order logic, regular DL, and DL(rich-test r.e.) form a sequence of increasingly more powerful logics when interpreted uniformly over all structures. What happens if one fixes a structure, say N? Do these differences in expressive power still hold? We now address these questions. First, we introduce notation for comparing expressive power over N. If DL1 and DL2 are variants of DL (or static logics, such as L_ωω) and are defined over the vocabulary of N, we write DL1 ≤_N DL2 if for each φ ∈ DL1 there is ψ ∈ DL2 such that N ⊨ φ ↔ ψ. We define φ, where φ is a first-order formula.

Inference Rules

modus ponens:

  φ, φ → ψ
  --------
     ψ


We denote provability in Axiom System S1 by ⊢_S1.

THEOREM 71. For any DL formula of the form φ → <α>ψ, for first-order φ and ψ and program α containing first-order tests only,

⊨ φ → <α>ψ  iff  ⊢_S1 φ → <α>ψ.

Given the high undecidability of validity in DL, we cannot hope for a complete axiom system in the usual sense. Nevertheless, we do want to provide an orderly axiomatization of valid DL formulas, even if this means that we have to give up the finitary nature of standard axiom systems. Below we present a complete infinitary axiomatization S2 of DL that includes an inference rule with infinitely many premises.

Before doing so, however, we must get a certain technical complication out of the way. We would like to be able to consider valid first-order formulas as axiom schemes, but instantiated by general formulas of DL. In order to make formulas amenable to first-order manipulation, we must be able to make sense of such notions as "a free occurrence of x in φ" and the substitution φ[x/t]. For example, we would like to be able to use the axiom scheme of the predicate calculus ∀x φ → φ[x/t], even if φ contains programs. The problem arises because the dynamic nature of the semantics of DL may cause a single occurrence of a variable in a DL formula to act as both a free and a bound occurrence. For example, in the formula <while x ≤ 99 do x := x + 1>1, the occurrence of x in the expression x + 1 acts as both a free occurrence (for the first assignment) and as a bound occurrence (for subsequent assignments). There are several reasonable ways to deal with this, and we present one for definiteness. Without loss of generality, we assume that whenever required, all programs appear in the special form

(34) <z := x; α′; x := z>φ

where x = (x1, ..., xn) and z = (z1, ..., zn) are tuples of variables, z := x stands for z1 := x1; ...; zn := xn (and similarly for x := z), the xi do not appear in α′, and the zi are new variables appearing nowhere in the relevant context outside of the program α′. The idea is to make programs act on the "local" variables zi by first copying the values of the xi into the zi, thus freezing the xi, executing the program with the zi, and then restoring the xi. This form can be easily obtained from any DL formula by consistently changing all variables of any program to new ones and adding the appropriate assignments that copy and then restore the values. Clearly, the new formula is equivalent to the old. Given a DL formula in this form, the following are bound occurrences of variables:


all occurrences of x in a subformula of the form ∃x φ;

all occurrences of xi in a subformula of the form (34), except for its occurrence in the assignment zi := xi;

all occurrences of zi in a subformula of the form (34) (note, though, that zi does not occur in φ at all).

Every occurrence of a variable that is not bound is free. Our axiom system will have an axiom that enables free translation into the special form discussed, and in the sequel we assume that the special form is used whenever required (for example, in the assignment axiom scheme below). As an example, consider the formula:

∀x (p(x, y)) → <z2 := f(z1); z1 := g(z2, z1); x := z1> p(x, y).

Denoting p(x, y) by φ, the conclusion of the implication is just φ[x/h(z)] according to the convention above; that is, the result of replacing all free occurrences of x in φ by h(z) after φ has been transformed into special form. We want the above formula to be considered a legal instance of the assignment axiom scheme below.

Axiom System S2

Axiom Schemes

all instances of valid first-order formulas;
all instances of valid formulas of PDL;
<x := t>φ ↔ φ[x/t];
φ ↔ φ̂, where φ̂ is φ in which some occurrence of a program α has been replaced by the program z := x; α′; x := z, for z not appearing in φ, and where α′ is α with all occurrences of x replaced by z.

Inference Rules

modus ponens:

  φ, φ → ψ
  --------
     ψ


generalization:

    φ             φ
  ------  and   ------
   [α]φ          ∀x φ

infinitary convergence:

  φ → [β; αⁿ]ψ,  n ∈ ω
  --------------------
     φ → [β; α*]ψ

Provability in Axiom System S2, denoted by ⊢_S2, is the usual concept for systems with infinitary rules of inference; that is, deriving a formula using the infinitary rule requires infinitely many premises to have been previously derived. Axiom System S2 consists of an axiom for assignment, facilities for propositional reasoning about programs and for first-order reasoning with no programs (but with programs possibly appearing in instantiated first-order formulas), and an infinitary rule for [α*]. The dual construct, <α*>, is taken care of by the "unfolding" validity of PDL:

<α*>φ ↔ (φ ∨ <α; α*>φ).

THEOREM 72. For any formula φ of DL,

⊨ φ  iff  ⊢_S2 φ.

11.2 Interpreted Reasoning

Proving properties of real programs very often involves reasoning on the interpreted level, where one is interested in A-validity for a particular structure A. A typical proof might use induction on the length of the computation to establish an invariant for partial correctness, or to exhibit a decreasing value in some well-founded set for termination. In each case, the problem is reduced to the problem of verifying some domain-dependent facts, sometimes called verification conditions. Mathematically speaking, this kind of activity is really an effective transformation of assertions about programs into ones about the underlying structure. For DL, this transformation can be guided by a direct induction on program structure using an axiom system that is complete relative to any given arithmetical structure A. The essential idea is to exploit the existence, for any given DL formula, of a first-order equivalent in A, as guaranteed by Theorem 60. In the axiom systems we construct, instead of dealing with the Π¹₁-hardness of the validity problem by an infinitary rule, we take all A-valid first-order formulas as additional axioms. Relative to this set of axioms, proofs are finite and effective.


For partial correctness assertions of the form φ → [α]ψ, with φ and ψ first-order and α containing first-order tests, it suffices to show that DL reduces to the first-order logic L_ωω, and there is no need for the natural numbers to be present. Thus, Axiom System S3 below works for finite structures too. Axiom System S4 is an arithmetically complete system for full DL that does make explicit use of the natural numbers.

It follows from Theorem 66 that for partial correctness formulas we cannot hope to obtain a completeness result similar to the one proved in Theorem 71 for termination formulas. A way around this difficulty is to consider only expressive structures. A structure A for the first-order vocabulary Σ is said to be expressive for a programming language K if for every α ∈ K and for every first-order formula φ, there exists a first-order formula ψ such that A ⊨ ψ ↔ [α]φ. Examples of structures that are expressive for most programming languages are finite structures and arithmetical structures.

Axiom System S3

Axiom Schemes

all instances of valid formulas of PDL;
<x := t>φ ↔ φ[x/t] for first-order φ.

Inference Rules

modus ponens:

  φ, φ → ψ
  --------
     ψ

generalization:

    φ
  ------
   [α]φ

Note that Axiom System S3 is really the axiom system for PDL from Section 4 with the addition of the assignment axiom. Given a DL formula φ and a structure A, denote by A ⊢_S3 φ provability of φ in the system obtained from Axiom System S3 by adding the following set of axioms:

all A-valid first-order sentences.


THEOREM 73. For every expressive structure A and for every formula of DL of the form φ → [α]ψ, where φ and ψ are first-order and α involves only first-order tests, we have

A ⊨ φ → [α]ψ  iff  A ⊢_S3 φ → [α]ψ.

Now we present an axiom system S4 for full DL. It is similar in spirit to S3 in that it is complete relative to the formulas valid in the structure under consideration. However, this system works for arithmetical structures only. It is not tailored to deal with other expressive structures, notably finite ones, since it requires the use of the natural numbers. The kind of completeness result stated here is thus termed arithmetical. As in Section 9.2, we state the results for the special structure N, omitting the technicalities needed to deal with general arithmetical structures. The main difference is that in N we can use variables n, m, etc., knowing that their values will be natural numbers. We can thus write n + 1, for example, assuming the standard interpretation. When working in an unspecified arithmetical structure, we would have to precede such usage with appropriate predicates guaranteeing that we are indeed talking about that part of the domain that is isomorphic to the natural numbers. For example, we would often have to use the first-order formula, call it nat(n), which is true precisely for the elements representing natural numbers, and which exists by the definition of an arithmetical structure.

Axiom System S4

Axiom Schemes

all instances of valid first-order formulas;
all instances of valid formulas of PDL;
<x := t>φ ↔ φ[x/t] for first-order φ.

Inference Rules

modus ponens:

  φ, φ → ψ
  --------
     ψ

generalization:

    φ             φ
  ------  and   ------
   [α]φ          ∀x φ


convergence:

  φ(n + 1) → <α>φ(n)
  ------------------
   φ(n) → <α*>φ(0)

for first-order φ and variable n not appearing in α.

REMARK 74. For general arithmetical structures, the +1 and 0 in the rule of convergence denote suitable first-order definitions.

As in Axiom System S3, denote by A ⊢_S4 φ provability of φ in the system obtained from Axiom System S4 by adding all A-valid first-order sentences as axioms.

THEOREM 75. For every formula φ of DL,

N ⊨ φ  iff  N ⊢_S4 φ.

The use of the natural numbers as a device for counting down to 0 in the convergence rule of Axiom System S4 can be relaxed. In fact, any well-founded set suitably expressible in any given arithmetical structure suffices. Also, it is not necessary to require that an execution of α causes the truth of the parameterized φ(n) in that rule to decrease exactly by 1; it suffices that the decrease is positive at each iteration.

In closing, we note that appropriately restricted versions of all axiom systems of this section are complete for DL(dreg). In particular, as pointed out in Section 2.6, the Hoare while-rule

  φ ∧ ψ → [α]φ
  --------------------------
  φ → [while ψ do α](φ ∧ ¬ψ)

results from combining the generalization rule with the induction and test axioms of PDL, when * is restricted to appear only in the context of a while statement; that is, only in the form (ψ?; α)*; (¬ψ)?.

12 EXPRESSIVENESS OF DL

The subject of study in this section is the relative expressive power of languages. We will be primarily interested in comparing, on the uninterpreted level, the expressive power of various versions of DL. That is, for programming languages P1 and P2 we will study whether DL(P1) ≤ DL(P2) holds. Recall from Section 9 that the latter relation means that for each formula φ in DL(P1), there is a formula ψ in DL(P2) such that A, u ⊨ φ ↔ ψ for all structures A and initial states u.

Studying the expressive power of logics rather than the computational power of programs allows us to compare, for example, deterministic and


nondeterministic programming languages. Also, we will see that the answer to the fundamental question "DL(P1) ≤ DL(P2)?" may depend crucially on the vocabulary over which we consider logics and programs. For this reason we always make clear in the theorems of this section our assumptions on the vocabulary.

THEOREM 76. Let Σ be a rich vocabulary. Then

(i) DL(stk) ≤ DL(array);

(ii) DL(stk) ≡ DL(array) iff P = PSPACE.

Moreover, the same holds for deterministic regular programs with an algebraic stack and deterministic regular programs with arrays.

THEOREM 77. Over a monadic vocabulary, nondeterministic regular programs with a Boolean stack have the same computational power as nondeterministic regular programs with an algebraic stack.

Now we investigate the role that nondeterminism plays in the expressive power of logics of programs. As we shall see, the general conclusion is that for a programming language of sufficient computational power, nondeterminism does not increase the expressive power of the logic. We start our discussion of the role of nondeterminism with the basic case of regular programs. Recall that DL and DDL denote the logics of nondeterministic and deterministic regular programs, respectively. We can now state the main result that separates the expressive power of deterministic and nondeterministic while programs.

THEOREM 78. For every vocabulary containing at least two unary function symbols or at least one function symbol of arity greater than one, DDL is strictly less expressive than DL; that is, DDL < DL.

It turns out that Theorem 78 cannot be extended to vocabularies containing just one unary function symbol without solving a well-known open problem in complexity theory.

THEOREM 79. For every rich mono-unary vocabulary, the statement "DDL is strictly less expressive than DL" is equivalent to LOGSPACE ≠ NLOGSPACE.

We now turn our attention to the discussion of the role nondeterminism plays in the expressive power of regular programs with a Boolean stack.
For a vocabulary containing at least two unary function symbols, nondeterminism increases the expressive power of DL over regular programs with a Boolean stack. For the rest of this section, we let the vocabulary Σ contain two unary function symbols.

THEOREM 80. For a vocabulary containing at least two unary function symbols or a function symbol of arity greater than two, DL(dbstk) < DL(bstk).


It turns out that for programming languages that use sufficiently strong data types, nondeterminism does not increase the expressive power of Dynamic Logic.

THEOREM 81. For every vocabulary,

(i) DL(dstk) ≡ DL(stk);

(ii) DL(darray) ≡ DL(array).

We will now discuss the role that the unbounded memory of programs plays in the expressive power of the corresponding logic. The result, however, depends on assumptions about the vocabulary Σ. Recall from Section 8.2 that an r.e. program α has bounded memory if the set CS(α) contains only finitely many distinct variables from V, and if in addition the nesting of function symbols in terms that occur in seqs of CS(α) is bounded. This restriction implies that such a program can be simulated in all interpretations by a device that uses a fixed finite number of registers, say x1, ..., xn, and all of whose elementary steps consist of either performing a test of the form

r(xi1, ..., xim)?, where r is an m-ary relation symbol of Σ, or executing a simple assignment of either of the following two forms:

xi := f(xi1, ..., xik)    or    xi := xj.

In general, however, such a device may need a very powerful control (that of a Turing machine) to decide which elementary step to take next. An example of a programming language with bounded memory is the class of regular programs with a Boolean stack. Indeed, the Boolean stack strengthens the control structure of a regular program without introducing extra registers for storing algebraic elements. It can be shown without much difficulty that regular programs with a Boolean stack have bounded memory. On the other hand, regular programs with an algebraic stack or with arrays are programming languages with unbounded memory. For monadic vocabularies, the class of nondeterministic regular programs with a Boolean stack is computationally equivalent to the class of nondeterministic regular programs with an algebraic stack. For deterministic programs, the situation is slightly different.

THEOREM 82.

(i) For every vocabulary containing a function symbol of arity greater than one, DL(dbstk) < DL(dstk) and DL(bstk) < DL(stk).

(ii) For all monadic vocabularies, DL(bstk) ≡ DL(stk).

(iii) For all mono-unary vocabularies, DL(dbstk) ≡ DL(dstk).

(iv) For all monadic vocabularies containing at least two function symbols, DL(dbstk) < DL(dstk).

Regular programs with a Boolean stack are situated between pure regular programs and regular programs with an algebraic stack. We start our discussion by comparing the expressive power of regular programs with and without a Boolean stack. The only known definite answer to this problem is given in the following result, which covers the case of deterministic programs only.

THEOREM 83.

(i) Let the vocabulary be rich and mono-unary. Then

DL(dreg) ≡ DL(dstk)  iff  LOGSPACE = P.

(ii) If the vocabulary contains at least one function symbol of arity greater than one or at least two unary function symbols, then DL(dreg) < DL(dbstk).

It is not known whether Theorem 83(ii) holds for nondeterministic programs, and neither is its statement known to be equivalent to any of the well-known open problems in complexity theory. In contrast, it follows from Theorems 83(i) and 82(iii) that for rich mono-unary vocabularies, DL(dreg) ≡ DL(dbstk) if and only if LOGSPACE = P. Hence, this problem cannot be solved without solving one of the major open problems in complexity theory.

The wildcard assignment statement x := ? discussed in Section 8.2 chooses an element of the domain of computation nondeterministically and assigns it to x. It is a device that represents unbounded nondeterminism, as opposed to the binary nondeterminism of the nondeterministic choice construct ∪. The programming language of regular programs augmented with wildcard assignment is not an acceptable programming language, since a wildcard assignment can produce values that are outside the substructure generated by the input. Our first result shows that wildcard assignment increases expressive power in quite a substantial way; it cannot be simulated even by r.e. programs.

THEOREM 84. Let the vocabulary contain two constants c1, c2, a binary predicate symbol p, the symbol = for equality, and no other function or predicate symbols. There is a formula of DL(wild) that is equivalent to no formula of DL(r.e.); thus DL(wild) ≰ DL(r.e.).


DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

It is not known whether any of the logics with unbounded memory are reducible to DL(wild). When both wildcard and array assignments are allowed, it is possible to define the finiteness of (the domain of) a structure, but not in the logics with either of the additions removed. Thus, having both memory and nondeterminism unbounded provides more power than having either of them bounded.

THEOREM 85. Let the vocabulary contain only the symbol of equality. There is a formula of DL(array+wild) equivalent to no formula of either DL(array) or DL(wild).

13 VARIANTS OF DL

In this section we consider some restrictions and extensions of DL. We are interested mainly in questions of comparative expressive power on the uninterpreted level. In arithmetical structures these questions usually become trivial, since it is difficult to go beyond the power of first-order arithmetic without allowing infinitely many distinct tests in programs (see Theorems 60 and 61). In regular DL this luxury is not present.

13.1 Algorithmic Logic

Algorithmic Logic (AL) is the predecessor of Dynamic Logic. The basic system was defined by [Salwicki, 1970] and generated an extensive amount of subsequent research carried out by a group of mathematicians working in Warsaw. Two surveys of the first few years of their work can be found in [Banachowski et al., 1977] and [Salwicki, 1977]. The original version of AL allowed deterministic while programs and formulas built from the constructs

∪αφ   ∩αφ   ⋃αφ

corresponding in our terminology to

⟨α⟩φ,   [α]φ,   ⋁_{n∈ω} ⟨αⁿ⟩φ,

respectively, where α is a deterministic while program and φ is a quantifier-free first-order formula. In [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b], AL was extended to allow nondeterministic while programs and the constructs ∇αφ, Δαφ, corresponding in our terminology to

⟨α⟩φ,   halt(α) ∧ [α]φ ∧ ⟨α⟩φ,

DYNAMIC LOGIC


respectively. The latter asserts that all traces of α are finite and terminate in a state satisfying φ.

A feature present in AL but not in DL is the set of "dynamic terms" in addition to dynamic formulas. For a first-order term t and a deterministic while program α, the meaning of the expression αt is the value of t after executing program α. If α does not halt, the meaning is undefined. Such terms can be systematically eliminated; for example, P(x, αt) is replaced by ∃z (⟨α⟩(z = t) ∧ P(x, z)).

The emphasis in the early research on AL was on obtaining infinitary completeness results, developing normal forms for programs, investigating recursive procedures with parameters, and axiomatizing certain aspects of programming using formulas of AL. As an example of the latter, the algorithmic formula

⟨while s ≠ ε do s := pop(s)⟩1

can be viewed as an axiom connected with the data structure stack: it asserts that the pop-loop always halts. One can then investigate the consequences of such axioms within AL, regarding them as properties of the corresponding data structures. Complete infinitary deductive systems for first-order and propositional versions are given in [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b]. The infinitary completeness results for AL are usually proved by the algebraic methods of [Rasiowa and Sikorski, 1963].

[Constable, 1977], [Constable and O'Donnell, 1978] and [Goldblatt, 1982] present logics similar to AL and DL for reasoning about deterministic while programs.

13.2 Well-Foundedness

As in Section 7 for PDL, we consider adding to DL assertions to the effect that programs can enter infinite computations. Here too, we shall be interested both in LDL and in RDL versions; i.e., those in which halt α and wf α, respectively, have been added inductively as new formulas for any program α. As mentioned there, the connection with the more common notation repeat α and loop α (from which the L and R in the names LDL and RDL derive) is by:

loop α  ⟺def  ¬halt α
repeat α  ⟺def  ¬wf α.

We now state some of the relevant results. The first concerns the addition of halt α:

THEOREM 86. LDL ≡ DL.

In contrast to this, we have:


THEOREM 87. LDL < RDL.

Turning to the validity problem for these extensions, clearly they cannot be any harder to decide than that of DL, which is Π¹₁-complete. However, the following result shows that detecting the absence of infinite computations of even simple uninterpreted programs is extremely hard.

THEOREM 88. The validity problems for formulas of the form φ → wf α and formulas of the form φ → halt α, for first-order φ and regular α, are both Π¹₁-complete. If α is constrained to have only first-order tests, then the φ → wf α case remains Π¹₁-complete, but the φ → halt α case is r.e.; that is, it is Σ⁰₁-complete.

We just mention here that the additions to Axiom System S4 of Section 11 that are used to obtain an arithmetically complete system for RDL are the axiom

[α*](φ → ⟨α⟩φ) → (φ → ¬wf α)

and the inference rule: from φ(n + 1) → [α]φ(n) and ¬φ(0), infer φ(n) → wf α, for first-order φ and n not occurring in α.

13.3 Probabilistic Programs

There has recently been wide interest in programs that employ probabilistic moves, such as coin tossing or random number draws, and whose behavior is described probabilistically (for example, a program is "correct" if it does what it is meant to do with probability 1). To give one well-known example, taken from [Miller, 1976] and [Rabin, 1980], there are fast probabilistic algorithms for checking primality of numbers but no known fast nonprobabilistic ones. Many synchronization problems, including digital contract signing, guaranteeing mutual exclusion, etc., are often solved by probabilistic means. This interest has prompted research into formal and informal methods for reasoning about probabilistic programs. It should be noted that such methods are also applicable for reasoning probabilistically about ordinary programs, for example, in average-case complexity analysis of a program, where inputs are regarded as coming from some set with a probability distribution.

[Kozen, 1981d] provided a formal semantics for probabilistic first-order while programs with a random assignment statement x :=?. Here the term "random" is quite appropriate (contrast with Section 8.2), as the statement essentially picks an element out of some fixed distribution over the domain D. This domain is assumed to be given with an appropriate set of measurable subsets. Programs are then interpreted as measurable functions on a certain measurable product space of copies of D.
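The measure-transformer view can be sketched concretely in a discrete setting: a state is a measure (here, a finite map from environments to rational masses), x :=? splits each state's mass over the domain, and a program maps measures to measures. The sketch below is only an illustration with hypothetical helper names; it uses the two-point domain {0, 1} in place of [0, 1] and unrolls a while loop to bounded depth. The measure of i = n comes out to 2⁻ⁿ, a discrete analogue of the geometric example later in this section.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def assign_random_x(mu):
    """x := ? over the two-point domain {0, 1}: as a measure
    transformer, each state's mass is split evenly over the two draws."""
    out = {}
    for (i, x), m in mu.items():
        for v in (0, 1):
            out[(i, v)] = out.get((i, v), Fraction(0)) + m * HALF
    return out

def restrict(mu, pred):
    """Restrict a measure to the states satisfying pred (a test)."""
    return {s: m for s, m in mu.items() if pred(*s)}

def measure_of(mu, pred):
    """The measure of a first-order condition in state mu."""
    return sum((m for s, m in mu.items() if pred(*s)), Fraction(0))

def program(depth=30):
    """i := 1; x := ?; while x = 1 do (x := ?; i := i + 1),
    unrolled to a bounded depth; returns the output measure."""
    mu = assign_random_x({(1, None): Fraction(1)})
    done = {}
    for _ in range(depth):
        for s, m in restrict(mu, lambda i, x: x == 0).items():
            done[s] = done.get(s, Fraction(0)) + m      # loop exits when x = 0
        mu = restrict(mu, lambda i, x: x == 1)          # loop body runs when x = 1
        mu = {(i + 1, x): m for (i, x), m in mu.items()}
        mu = assign_random_x(mu)
    return done

out = program()
print(measure_of(out, lambda i, x: i == 3))   # 1/8
```

Sequential composition of programs is literally function composition of the transformers, which is the essential content of interpreting programs as (measurable) functions on measures.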


In [Feldman and Harel, 1984] a probabilistic version of first-order Dynamic Logic, Pr(DL), was investigated on the interpreted level. Kozen's semantics is extended as described below to a semantics for formulas that are closed under Boolean connectives and quantification over reals and integers and that employ terms of the form Fr(φ) for first-order φ. In addition, if α is a while program with nondeterministic assignments and φ is a formula, then {α}φ is a new formula.

The semantics assumes a domain D, say the reals, with a measure space consisting of an appropriate family of measurable subsets of D. The states μ, ν, … are then taken to be the positive measures on this measure space. Terms are interpreted as functions from states to real numbers, with Fr(φ) in μ being the frequency (or simply, the measure) of φ in μ. Frequency is to positive measures as probability is to probability measures. The formula {α}φ is true in μ if φ is true in α(μ), the state (i.e., measure) that is the result of applying α to μ in Kozen's semantics. Thus {α}φ means "after α, φ" and is the construct analogous to ⟨α⟩φ of DL. For example, in Pr(DL) one can write

Fr(1) = 1 → {α}Fr(1) ≥ p

to mean, "α halts with probability at least p." The formula

Fr(1) = 1 → {i := 1; x :=?; while x > 1/2 do (x :=?; i := i + 1)}
   ∀n ((n ≥ 1 → Fr(i = n) = 2⁻ⁿ) ∧ (n < 1 → Fr(i = n) = 0))

is valid in all structures in which the distribution of the random variable used in x :=? is a uniform distribution on the real interval [0, 1]. An axiom system for Pr(DL) was proved in [Feldman and Harel, 1984] to be complete relative to an extension of first-order analysis with integer variables, and for discrete probabilities first-order analysis with integer variables was shown to suffice.

14 OTHER APPROACHES

Here we discuss briefly some topics closely related to Dynamic Logic.

14.1 Logic of Effective Definitions

The Logic of Effective Definitions (LED), introduced by [Tiuryn, 1981a], was intended to study notions of computability over abstract models and to provide a universal framework for the study of logics of programs over such models. It consists of first-order logic augmented with new atomic formulas of the form ζ = η, where ζ and η are effective definitional schemes (the latter notion is due to [Friedman, 1971]):


if φ₁ then t₁ else if φ₂ then t₂ else if φ₃ then t₃ else if …

where the φᵢ are quantifier-free formulas, the tᵢ are terms over a bounded set of variables, and the function i ↦ (φᵢ, tᵢ) is recursive. The formula ζ = η is defined to be true in a state if both ζ and η terminate and yield the same value, or neither terminates.

Model theory and infinitary completeness of LED are treated in [Tiuryn, 1981a]. Effective definitional schemes in the definition of LED can be replaced by any programming language K, giving rise to various logical formalisms. The following result, which relates LED to other logics discussed here, is proved in [Meyer and Tiuryn, 1981; Meyer and Tiuryn, 1984].

THEOREM 89. For every vocabulary L, LED ≡ DL(r.e.).
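The scheme displayed above can be sketched as a computable stream of (guard, term) pairs, evaluated left to right: the value is that of the first term whose guard holds, and failure of every guard models nontermination. The names below are hypothetical illustrations, not anything from the cited papers.

```python
from itertools import count, islice

def eds(pairs, env, fuel=10_000):
    """Evaluate 'if phi_1 then t_1 else if phi_2 then t_2 else ...':
    return the value of the first term whose guard holds. If no guard
    ever holds the scheme diverges, approximated here by running out
    of fuel and returning None."""
    for phi, t in islice(pairs(env), fuel):
        if phi(env):
            return t(env)
    return None  # stands for divergence

# The scheme 'if x = 0 then 0 else if x = 1 then 1 else ...':
# the map i |-> (phi_i, t_i) is recursive (a generator), and the
# scheme computes the identity on natural numbers.
def identity_scheme(env):
    for i in count():
        yield (lambda e, i=i: e["x"] == i, lambda e, i=i: i)

print(eds(identity_scheme, {"x": 7}))   # 7
```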

14.2 Temporal Logic

Temporal Logic (TL) is an alternative application of modal logic to program specification and verification. It was first proposed as a useful tool in program verification by [Pnueli, 1977] and has since been developed by many authors in various forms. This topic is surveyed in depth in [Emerson, 1990] and [Gabbay et al., 1994].

TL differs from DL chiefly in that it is endogenous; that is, programs are not explicit in the language. Every application has a single program associated with it, and the language may contain program-specific statements such as at L, meaning "execution is currently at location L in the program." There are two competing semantics, giving rise to two different theories called linear-time and branching-time TL. In the former, a model is a linear sequence of program states representing an execution sequence of a deterministic program or a possible execution sequence of a nondeterministic or concurrent program. In the latter, a model is a tree of program states representing the space of all possible traces of a nondeterministic or concurrent program. Depending on the application and the semantics, different syntactic constructs can be chosen. The relative advantages of linear-time and branching-time semantics are discussed in [Lamport, 1980; Emerson and Halpern, 1986; Emerson and Lei, 1987; Vardi, 1998a].


Modal constructs used in TL include

□φ    "φ holds in all future states"
◇φ    "φ holds in some future state"
○φ    "φ holds in the next state"
φ until ψ    "there exists some strictly future point t at which ψ will be satisfied, and all points strictly between the current state and t satisfy φ"

for linear-time logic, as well as constructs for expressing

"for all traces starting from the present state …"
"for some trace starting from the present state …"

for branching-time logic.

Temporal logic is useful in situations where programs are not normally supposed to halt, such as operating systems, and is particularly well suited to the study of concurrency. Many classical program verification methods such as the intermittent assertions method are treated quite elegantly in this framework. Temporal logic has been most successful in providing tools for proving properties of concurrent finite-state protocols, such as solutions to the dining philosophers and mutual exclusion problems, which are popular abstract versions of synchronization and resource management problems in distributed systems.

The induction principle of TL takes the form:

(35) φ ∧ □(φ → ○φ) → □φ.

Note the similarity to the PDL induction axiom (Axiom 17(viii)):

φ ∧ [α*](φ → [α]φ) → [α*]φ.

This is a classical program verification method known as inductive or invariant assertions.

The operators ○, ◇, and □ can all be defined in terms of until:

○φ ⟺ ¬(0 until ¬φ)
◇φ ⟺ φ ∨ (1 until φ)
□φ ⟺ φ ∧ ¬(1 until ¬φ),

but not vice versa. It has been shown in [Kamp, 1968] and [Gabbay et al., 1980] that the until operator is powerful enough to express anything that can be expressed in the first-order theory of (ω, <).
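The definitions of ○, ◇, and □ in terms of until can be checked mechanically by brute force over finite traces. The sketch below (helper names hypothetical) implements the strict until of the text; on finite traces we take the weak reading of ○ (true when there is no next state), which is exactly what ¬(0 until ¬φ) yields at the end of a trace.

```python
from itertools import product

def until(phi, psi, trace, i):
    """Strict until: some strictly future point j satisfies psi,
    and every point strictly between i and j satisfies phi."""
    return any(psi(trace, j) and all(phi(trace, k) for k in range(i + 1, j))
               for j in range(i + 1, len(trace)))

def next_(phi, trace, i):
    """Weak next: vacuously true at the end of a finite trace."""
    return i + 1 >= len(trace) or phi(trace, i + 1)

def eventually(phi, trace, i):
    return any(phi(trace, j) for j in range(i, len(trace)))

def always(phi, trace, i):
    return all(phi(trace, j) for j in range(i, len(trace)))

p = lambda trace, i: trace[i]          # an atomic proposition read off the trace
true = lambda trace, i: True
false = lambda trace, i: False
not_p = lambda trace, i: not trace[i]

# Check the three definitions from the text on all Boolean traces up to length 5.
for n in range(1, 6):
    for trace in product([0, 1], repeat=n):
        for i in range(n):
            assert next_(p, trace, i) == (not until(false, not_p, trace, i))
            assert eventually(p, trace, i) == (p(trace, i) or until(true, p, trace, i))
            assert always(p, trace, i) == (p(trace, i) and not until(true, not_p, trace, i))
print("all equivalences hold")
```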

14.3 Process Logic

Dynamic Logic and Temporal Logic embody markedly different approaches to reasoning about programs. This dichotomy has prompted researchers to search for an appropriate process logic that combines the best features of both. An appropriate candidate should combine the ability to reason


about programs compositionally with the ability to reason directly about the intermediate states encountered during the course of a computation. [Pratt, 1979c], [Parikh, 1978b], [Nishimura, 1980], and [Harel et al., 1982b] all suggested increasingly more powerful propositional-level formalisms in which the basic idea is to interpret formulas in traces rather than in states. In particular, [Harel et al., 1982b] present a system called Process Logic (PL), which is essentially a union of TL and test-free regular PDL. That paper proves that the satisfiability problem is decidable and gives a complete finitary axiomatization.

Syntactically, we have programs α, β, … and propositions φ, ψ, … as in PDL. We have atomic symbols of each type and compound expressions built up from the operators →, 0, ;, ∪, *, ? (applied to Boolean combinations of atomic formulas only), and [ ]. In addition we have the temporal operators first and until. The temporal operators are available for expressing and reasoning about trace properties, but programs are constructed compositionally as in PDL. Other operators are defined as in PDL (see Section 2.1) except for skip, which is handled specially.

Semantically, both programs and propositions are interpreted as sets of traces. We start with a Kripke frame K = (K, m_K) as in Section 2.2, where K is a set of states s, t, … and the function m_K interprets atomic formulas p as subsets of K and atomic programs a as binary relations on K. The temporal operators are defined as in TL. Trace models satisfy (most of) the PDL axioms. As in Section 14.2, define

halt  ⟺def  ○0
fin  ⟺def  ◇halt
inf  ⟺def  ¬fin,

which say that the trace is of length 0, of finite length, or of infinite length, respectively. Define two new operators [[ ]] and ⟨⟨ ⟩⟩:

[[α]]φ  ⟺def  fin → [α]φ
⟨⟨α⟩⟩φ  ⟺def  ¬[[α]]¬φ  ⟺  fin ∧ ⟨α⟩φ.

The operator * is the same as in PDL. It can be shown that the two PDL axioms

φ ∧ [α][α*]φ ↔ [α*]φ
φ ∧ [α*](φ → [α]φ) → [α*]φ

hold by establishing that

⋃_{n≥0} m_K(αⁿ) = m_K(α⁰) ∪ (m_K(α) ∘ ⋃_{n≥0} m_K(αⁿ))
              = m_K(α⁰) ∪ ((⋃_{n≥0} m_K(αⁿ)) ∘ m_K(α)).


As mentioned, the version of PL of [Harel et al., 1982b] is decidable (but, it seems, only in nonelementary time) and complete. It has also been shown that if we restrict the semantics to include only finite traces (not a necessary restriction for obtaining the results above), then PL is no more expressive than PDL. Translations of PL structures into PDL structures have also been investigated, making possible an elementary-time decision procedure for deterministic PL; see [Halpern, 1982; Halpern, 1983]. An extension of PL in which first and until are replaced by regular operators on formulas has been shown to be decidable but nonelementary in [Harel et al., 1982b]. This logic perhaps comes closer to the desired objective of a powerful decidable logic of traces with natural syntactic operators that is closed under attachment of regular programs to formulas.

14.4 The μ-Calculus

The μ-calculus was suggested as a formalism for reasoning about programs in [Scott and de Bakker, 1969] and was further developed in [Hitchcock and Park, 1972], [Park, 1976], and [de Bakker, 1980]. The heart of the approach is μ, the least fixpoint operator, which captures the notions of iteration and recursion. The calculus was originally defined as a first-order-level formalism, but propositional versions have become popular.

The μ operator binds relation variables. If φ(X) is a logical expression with a free relation variable X, then the expression μX.φ(X) represents the least X such that φ(X) = X, if such an X exists. For example, the reflexive transitive closure R* of a binary relation R is the least binary relation containing R and closed under reflexivity and transitivity; this would be expressed in the first-order μ-calculus as

(36) R* =def μX(x, y).(x = y ∨ ∃z (R(x, z) ∧ X(z, y))).

This should be read as, "the least binary relation X(x, y) such that either x = y, or x is related by R to some z such that z and y are already related by X." This captures the usual fixpoint formulation of reflexive transitive closure. The formula (36) can be regarded either as a recursive program computing R* or as an inductively defined assertion that is true of a pair (x, y) iff that pair is in the reflexive transitive closure of R.

The existence of a least fixpoint is not guaranteed except under certain restrictions. Indeed, the formula ¬X has no fixpoint, therefore μX.¬X does not exist. Typically, one restricts the application of the binding operator μX to formulas that are positive or syntactically monotone in X; that is, those formulas in which every free occurrence of X occurs in the scope of an even number of negations. This implies that the relation operator X ↦ φ(X) is (semantically) monotone, which by the Knaster–Tarski theorem ensures the existence of a least fixpoint.
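Formula (36) can be run directly as the recursive program it describes: Kleene iteration of the operator X ↦ {(x, y) : x = y ∨ ∃z (R(x, z) ∧ X(z, y))} starting from the empty relation. A minimal sketch over a finite domain (helper names hypothetical):

```python
def lfp(f, bottom=frozenset()):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ... to a fixpoint."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

def rtc(R, domain):
    """Reflexive transitive closure via the mu-formula (36):
    X := { (x, y) : x = y or exists z with R(x, z) and X(z, y) }."""
    def step(X):
        return frozenset((x, y) for x in domain for y in domain
                         if x == y or any((x, z) in R and (z, y) in X for z in domain))
    return lfp(step)

R = {(0, 1), (1, 2), (2, 0), (3, 3)}
print(sorted(rtc(R, range(5))))
```

Because the step operator is monotone, the iterates form an increasing chain of subsets of a finite set, so the loop terminates at the least fixpoint, exactly as Knaster–Tarski promises.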


The first-order μ-calculus can define all sets definable by first-order induction, and more. In particular, it can capture the input/output relation of any program built from any of the DL programming constructs we have discussed. Since the first-order μ-calculus also admits first-order quantification, it is easily seen to be as powerful as DL.

It was shown by [Park, 1976] that finiteness is not definable in the first-order μ-calculus with the monotonicity restriction, but well-foundedness is. Thus this version of the μ-calculus is independent of L_{ω₁^CK ω} (and hence of DL(r.e.)) in expressive power. Well-foundedness of a binary relation R can be written

∀x (μX(x).∀y (R(y, x) → X(y))).

A more severe syntactic restriction on the binding operator μX is to allow its application only to formulas that are syntactically continuous in X; that is, those formulas in which X does not occur free in the scope of any negation or any universal quantifier. It can be shown that this syntactic restriction implies semantic continuity, so the least fixpoint is the union of ⊥, φ(⊥), φ(φ(⊥)), … . As shown in [Park, 1976], this version is strictly weaker than L_{ω₁^CK ω}.

In [Pratt, 1981a] and [Kozen, 1982; Kozen, 1983], propositional versions of the μ-calculus were introduced. The latter version consists of propositional modal logic with a least fixpoint operator. It is the most powerful logic of its type, subsuming all known variants of PDL, the game logic of [Parikh, 1983], various forms of temporal logic (see Section 14.2), and other seemingly stronger forms of the μ-calculus ([Vardi and Wolper, 1986b]). In the following presentation we focus on this version, since it has gained fairly widespread acceptance; see [Kozen, 1984; Kozen and Parikh, 1983; Streett, 1985b; Streett and Emerson, 1984; Vardi and Wolper, 1986b; Walukiewicz, 1993; Walukiewicz, 1995; Walukiewicz, 2000; Stirling, 1992; Mader, 1997; Kaivola, 1997].

The language of the propositional μ-calculus, also called the modal μ-calculus, is syntactically simpler than PDL. It consists of the usual propositional constructs → and 0, atomic modalities [a], and the least fixpoint operator μ. A greatest fixpoint operator ν dual to μ can be defined:

νX.φ(X)  ⟺def  ¬μX.¬φ(¬X).

Variables are monadic, and the μ operator may be applied only to syntactically monotone formulas. As discussed above, this ensures monotonicity of the corresponding set operator. The language is interpreted over Kripke frames in which atomic propositions are interpreted as sets of states and atomic programs are interpreted as binary relations on states.

The propositional μ-calculus subsumes PDL. For example, the PDL formula ⟨a*⟩φ for atomic a can be written μX.(φ ∨ ⟨a⟩X). The formula


μX.⟨a⟩[a]X, which expresses the existence of a forced win for the first player in a two-player game, and the formula μX.[a]X, which expresses well-foundedness and is equivalent to wf a (see Section 7), are both inexpressible in PDL, as shown in [Streett, 1981; Kozen, 1981c]. [Niwiński, 1984] has shown that even with the addition of the halt construct, PDL is strictly less expressive than the μ-calculus.

The propositional μ-calculus satisfies a finite model theorem, as first shown in [Kozen, 1988]. Progressively better decidability results were obtained in [Kozen and Parikh, 1983; Vardi and Stockmeyer, 1985; Vardi, 1985b], culminating in a deterministic exponential-time algorithm of [Emerson and Jutla, 1988] based on an automata-theoretic lemma of [Safra, 1988]. Since the μ-calculus subsumes PDL, it is EXPTIME-complete.

In [Kozen, 1982; Kozen, 1983], an axiomatization of the propositional μ-calculus was proposed and conjectured to be complete. The axiomatization consists of the axioms and rules of propositional modal logic, plus the axiom

φ[X/μX.φ] → μX.φ

and the inference rule: from φ[X/ψ] → ψ, infer μX.φ → ψ. Completeness of this deductive system for a syntactically restricted subset of formulas was shown in [Kozen, 1982; Kozen, 1983]. Completeness for the full language was proved by [Walukiewicz, 1995; Walukiewicz, 2000]. This was quickly followed by simpler alternative proofs by [Ambler et al., 1995; Bonsangue and Kwiatkowska, 1995; Hartonas, 1998]. [Bradfield, 1996] showed that the alternating μ/ν hierarchy (least/greatest fixpoints) is strict.

An interesting open question is the complexity of model checking: does a given formula of the propositional μ-calculus hold in a given state of a given Kripke frame? Although some progress has been made (see [Bhat and Cleaveland, 1996; Cleaveland, 1996; Emerson and Lei, 1986; Sokolsky and Smolka, 1994; Stirling and Walker, 1989]), it is still unknown whether this problem has a polynomial-time algorithm.
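On an explicit finite frame, such μ-formulas can be evaluated by naive Kleene iteration on sets of states. This is only an illustration with hypothetical names, not the cited exponential-time automata-theoretic algorithm:

```python
def diamond(a, S):
    """<a>S: states with some a-successor in S."""
    return {s for (s, t) in a if t in S}

def box(a, S, states):
    """[a]S: states all of whose a-successors are in S."""
    return {s for s in states if all(t in S for (u, t) in a if u == s)}

def mu(f):
    """Least fixpoint of a monotone operator on sets of states."""
    X = set()
    while (Y := f(X)) != X:
        X = Y
    return X

states = {0, 1, 2, 3}
a = {(0, 1), (1, 2), (3, 3)}      # a-edges: chain 0 -> 1 -> 2 and loop 3 -> 3
phi = {2}

# mu X.(phi or <a>X)  =  <a*>phi : the states from which phi is reachable
print(sorted(mu(lambda X: phi | diamond(a, X))))

# mu X.[a]X : the well-founded states (no infinite a-path), i.e. wf a
print(sorted(mu(lambda X: box(a, X, states))))
```

State 3, which sits on the a-loop, is excluded from the second fixpoint, matching the reading of μX.[a]X as well-foundedness.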
The propositional μ-calculus has become a popular system for the specification and verification of properties of transition systems, where it has had some practical impact ([Steffen et al., 1996]). Several recent papers on model checking work in this context; see [Bhat and Cleaveland, 1996; Cleaveland, 1996; Emerson and Lei, 1986; Sokolsky and Smolka, 1994; Stirling and Walker, 1989]. A comprehensive introduction can be found in [Stirling, 1992].

14.5 Kleene Algebra

Kleene algebra (KA) is the algebra of regular expressions. It is named for the mathematician S. C. Kleene (1909–1994), who among his many other


achievements invented regular expressions and proved their equivalence to finite automata in [Kleene, 1956]. Kleene algebra has appeared in various guises and under many names in relational algebra [Ng, 1984; Ng and Tarski, 1977], semantics and logics of programs [Kozen, 1981b; Pratt, 1988], automata and formal language theory [Kuich, 1987; Kuich and Salomaa, 1986], and the design and analysis of algorithms [Aho et al., 1975; Tarjan, 1981; Mehlhorn, 1984; Iwano and Steiglitz, 1990; Kozen, 1991b]. As discussed in Section 14.6, Kleene algebra plays a prominent role in dynamic algebra as an algebraic model of program behavior.

Beginning with the monograph of [Conway, 1971], many authors have contributed over the years to the development of the algebraic theory; see [Backhouse, 1975; Krob, 1991; Kleene, 1956; Kuich and Salomaa, 1986; Sakarovitch, 1987; Kozen, 1990; Hopkins and Kozen, 1999; Bloom and Ésik, 1993]. See also [Kozen, 1996] for further references.

A Kleene algebra is an algebraic structure (K, +, ·, *, 0, 1) satisfying the axioms

α + (β + γ) = (α + β) + γ
α + β = β + α
α + 0 = α
α + α = α
α(βγ) = (αβ)γ
1α = α
α1 = α
α(β + γ) = αβ + αγ
(α + β)γ = αγ + βγ
0α = 0
α0 = 0
(37) 1 + αα* = α*
     1 + α*α = α*
(38) β + αγ ≤ γ → α*β ≤ γ
(39) β + γα ≤ γ → βα* ≤ γ,

where ≤ refers to the natural partial order on K:

α ≤ β  ⟺def  α + β = β.

In short, a KA is an idempotent semiring under +, ·, 0, 1 such that α*β is the least solution to β + αx ≤ x and βα* is the least solution to β + xα ≤ x. The axioms (37)–(39) say essentially that * behaves like the asterate operator on sets of strings or reflexive transitive closure on binary relations. This particular axiomatization is from [Kozen, 1991a; Kozen, 1994a], but there are other competing ones. The axioms (38) and (39) correspond to the reflexive transitive closure rule (RTC) of PDL (Section 2.5). Instead, we might postulate the equivalent

axioms

(40) αβ ≤ β → α*β ≤ β
(41) βα ≤ β → βα* ≤ β,

which correspond to the loop invariance rule (LI). The induction axiom (IND) is inexpressible in KA, since there is no negation.

A Kleene algebra is *-continuous if it satisfies the infinitary condition

(42) αβ*γ = sup_{n≥0} αβⁿγ,

where

β⁰  =def  1
βⁿ⁺¹  =def  ββⁿ

and where the supremum is with respect to the natural order ≤. We can think of (42) as a conjunction of the infinitely many axioms αβⁿγ ≤ αβ*γ, n ≥ 0, and the infinitary Horn formula

(⋀_{n≥0} αβⁿγ ≤ δ) → αβ*γ ≤ δ.
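These axioms can be exercised directly in the relational model, where + is union, · is relational composition, * is reflexive transitive closure, 0 is the empty relation, and 1 is the identity. A small sketch with hypothetical helper names:

```python
def compose(R, S):
    """Relational composition R;S."""
    return frozenset((x, z) for (x, y) in R for (y2, z) in S if y == y2)

def star(R, domain):
    """R*: least solution of X = 1 + R;X, found by iteration."""
    one = frozenset((x, x) for x in domain)
    X = one
    while True:
        Y = one | compose(R, X)
        if Y == X:
            return X
        X = Y

D = range(4)
one = frozenset((x, x) for x in D)
a = frozenset({(0, 1), (1, 2)})
b = frozenset({(2, 3)})

astar = star(a, D)

# axiom (37): 1 + a·a* = a*
assert one | compose(a, astar) == astar

# a*·b satisfies b + a·(a*·b) <= a*·b, the hypothesis pattern of axiom (38)
gamma = compose(astar, b)
assert b | compose(a, gamma) <= gamma   # <= on frozensets is subset
print("relational KA axioms check out")
```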

In the presence of the other axioms, the *-continuity condition (42) implies (38)–(41) and is strictly stronger, in the sense that there exist Kleene algebras that are not *-continuous [Kozen, 1990].

The fundamental motivating example of a Kleene algebra is the family of regular sets of strings over a finite alphabet, but other classes of structures share the same equational theory, notably the binary relations on a set. In fact, it is the latter interpretation that makes Kleene algebra a suitable choice for modeling programs in dynamic algebras. Other more unusual interpretations are the (min, +) algebra used in shortest path algorithms (see [Aho et al., 1975; Tarjan, 1981; Mehlhorn, 1984; Kozen, 1991b]) and KAs of convex polyhedra used in computational geometry, as described in [Iwano and Steiglitz, 1990].

Axiomatization of the equational theory of the regular sets is a central question going back to the original paper of [Kleene, 1956]. A completeness theorem for relational algebras was given in an extended language by [Ng, 1984; Ng and Tarski, 1977]. Axiomatization is a central focus of the monograph of [Conway, 1971], but the bulk of his treatment is infinitary. [Redko, 1964] proved that there is no finite equational axiomatization. Schematic equational axiomatizations for the algebra of regular sets, necessarily representing infinitely many equations, have been given by [Krob, 1991] and [Bloom and Ésik, 1993]. [Salomaa, 1966] gave two finitary complete axiomatizations that are sound for the regular sets but not sound in general over other standard interpretations, including relational interpretations. The axiomatization given above is a finitary universal Horn axiomatization that


is sound and complete for the equational theory of standard relational and language-theoretic models, including the regular sets [Kozen, 1991a; Kozen, 1994a]. Other work on completeness appears in [Krob, 1991; Boffa, 1990; Boffa, 1995; Archangelsky, 1992].

The literature contains a bewildering array of inequivalent definitions of Kleene algebras and related algebraic structures; see [Conway, 1971; Pratt, 1988; Pratt, 1990; Kozen, 1981b; Kozen, 1991a; Aho et al., 1975; Mehlhorn, 1984; Kuich, 1987; Kozen, 1994b]. As demonstrated in [Kozen, 1990], many of these are strongly related. One important property shared by most of them is closure under the formation of n × n matrices. This was proved for the axiomatization above in [Kozen, 1991a; Kozen, 1994a], but the idea essentially goes back to [Kleene, 1956; Conway, 1971; Backhouse, 1975]. This result gives rise to an algebraic treatment of finite automata in which the automata are represented by their transition matrices.

The equational theory of Kleene algebra is PSPACE-complete [Stockmeyer and Meyer, 1973]; thus it is apparently less complex than PDL, which is EXPTIME-complete (Theorem 21), although the strict separation of the two complexity classes is still open.

Kleene Algebra with Tests

From a practical standpoint, many simple program manipulations such as loop unwinding and basic safety analysis do not require the full power of PDL, but can be carried out in a purely equational subsystem using the axioms of Kleene algebra. However, tests are an essential ingredient, since they are needed to model conventional programming constructs such as conditionals and while loops and to handle assertions. This motivates the definition of the following variant of KA, introduced in [Kozen, 1996; Kozen, 1997b].

A Kleene algebra with tests (KAT) is a Kleene algebra with an embedded Boolean subalgebra. Formally, it is a two-sorted algebra

(K, B, +, ·, *, ¯, 0, 1)

such that

(K, +, ·, *, 0, 1) is a Kleene algebra,
(B, +, ·, ¯, 0, 1) is a Boolean algebra, and
B ⊆ K.

The unary negation operator ¯ is defined only on B. Elements of B are called tests and are written φ, ψ, …. Elements of K (including elements of B) are written α, β, …. In PDL, a test would be written φ?, but in KAT we dispense with the symbol ?.


This deceptively concise definition actually carries a lot of information. The operators +, ·, 0, 1 each play two roles: applied to arbitrary elements of K, they refer to nondeterministic choice, composition, fail, and skip, respectively; applied to tests, they take on the additional meaning of Boolean disjunction, conjunction, falsity, and truth, respectively. These two usages do not conflict (for example, sequential testing of two tests is the same as testing their conjunction), and their coexistence admits considerable economy of expression.

For applications in program verification, the standard interpretation would be a Kleene algebra of binary relations on a set and the Boolean algebra of subsets of the identity relation. One could also consider trace models, in which the Kleene elements are sets of traces (sequences of states) and the Boolean elements are sets of states (traces of length 0). As with KA, one can form the algebra of n × n matrices over a KAT (K, B); the Boolean elements of this structure are the diagonal matrices over B.

KAT can express conventional imperative programming constructs such as conditionals and while loops, as in PDL. It can perform elementary program manipulation such as loop unwinding, constant propagation, and basic safety analysis in a purely equational manner. The applicability of KAT and related equational systems in practical program verification has been explored in [Cohen, 1994a; Cohen, 1994b; Cohen, 1994c; Kozen, 1996; Kozen and Patron, 2000].

There is a language-theoretic model that plays the same role in KAT that the regular sets play in KA, namely the algebra of regular sets of guarded strings, and a corresponding completeness result was obtained by [Kozen and Smith, 1996]. Moreover, KAT is complete for the equational theory of relational models, as shown in [Kozen and Smith, 1996].
Although less expressive than PDL, KAT is also apparently less difficult to decide: it is PSPACE-complete, the same as KA, as shown in [Cohen et al., 1996].

In [Kozen, 1999a], it is shown that KAT subsumes propositional Hoare Logic in the following sense. The partial correctness assertion {φ}α{ψ} is encoded in KAT as the equation φα¬ψ = 0, or equivalently φα = φαψ. If a rule with premises {φ₁}α₁{ψ₁}, …, {φₙ}αₙ{ψₙ} and conclusion {φ}α{ψ} is derivable in propositional Hoare Logic, then its translation, the universal Horn formula

φ₁α₁¬ψ₁ = 0 ∧ ⋯ ∧ φₙαₙ¬ψₙ = 0 → φα¬ψ = 0,

is a theorem of KAT. For example, the while rule of Hoare logic (see Section 2.6) becomes

φγα¬φ = 0 → φ(γα)*¬γ¬φ = 0.


More generally, all relationally valid Horn formulas of the form

ρ₁ = 0 ∧ ⋯ ∧ ρₙ = 0 → σ = τ

are theorems of KAT [Kozen, 1999a].

Horn formulas are important from a practical standpoint. For example, commutativity conditions are used to model the idea that the execution of certain instructions does not affect the result of certain tests. In light of this, the complexity of the universal Horn theories of KA and KAT is of interest. There are both positive and negative results. It is shown in [Kozen, 1997c] that for a Horn formula E → φ over *-continuous Kleene algebras:

if E contains only commutativity conditions αβ = βα, the universal Horn theory is Π⁰₁-complete;

if E contains only monoid equations, the problem is Π⁰₂-complete;

for arbitrary finite sets of equations E, the problem is Π¹₁-complete.

On the other hand, commutativity assumptions of the form φα = αφ, where φ is a test, and assumptions of the form γ = 0 can be eliminated without loss of efficiency, as shown in [Cohen, 1994a; Kozen and Smith, 1996]. Note that assumptions of this form are all we need to encode Hoare Logic as described above. In typed Kleene algebra, introduced in [Kozen, 1998; Kozen, 1999b], elements have types s → t. This allows Kleene algebras of nonsquare matrices, among other applications. It is shown in [Kozen, 1999b] that Hoare Logic is subsumed by the type calculus of typed KA augmented with a typecast or coercion rule for tests. Thus Hoare-style reasoning with partial correctness assertions reduces to typechecking in a relatively simple type system.

14.6 Dynamic Algebra

Dynamic algebra provides an abstract algebraic framework that relates to PDL as Boolean algebra relates to propositional logic. A dynamic algebra is defined to be any two-sorted algebraic structure (K, B, ⋄), where B = (B, →, 0) is a Boolean algebra, K = (K, +, ·, *, 0, 1) is a Kleene algebra (see Section 14.5), and ⋄ : K × B → B is a scalar multiplication satisfying algebraic constraints corresponding to the dual forms of the PDL axioms (Axioms 17). For example, all dynamic algebras satisfy the equations

(α·β)⋄φ = α⋄(β⋄φ)
α⋄0 = 0
0⋄φ = 0
α⋄(φ ∨ ψ) = α⋄φ ∨ α⋄ψ,


which correspond to the PDL validities

⟨α;β⟩φ ↔ ⟨α⟩⟨β⟩φ
⟨α⟩0 ↔ 0
⟨0⟩φ ↔ 0
⟨α⟩(φ ∨ ψ) ↔ ⟨α⟩φ ∨ ⟨α⟩ψ,

respectively. The Boolean algebra B is an abstraction of the formulas of PDL and the Kleene algebra K is an abstraction of the programs. The interaction of scalar multiplication with iteration can be axiomatized in a finitary or infinitary way. One can postulate

(43) α*⋄φ ≤ φ ∨ α*⋄(¬φ ∧ α⋄φ),

corresponding to the diamond form of the PDL induction axiom (Axiom 17(viii)). Here φ ≤ ψ in B iff φ ∨ ψ = ψ. Alternatively, one can postulate the stronger axiom of *-continuity:

(44) α*⋄φ = supₙ (αⁿ⋄φ).

We can think of (44) as a conjunction of infinitely many axioms αⁿ⋄φ ≤ α*⋄φ, n ≥ 0, and the infinitary Horn formula

⋀_{n≥0} (αⁿ⋄φ ≤ ψ) → α*⋄φ ≤ ψ.

In the presence of the other axioms, (44) implies (43) [Kozen, 1980b], and is strictly stronger in the sense that there are dynamic algebras that are not *-continuous [Pratt, 1979a]. A standard Kripke frame K = (U, m_K) of PDL gives rise to a *-continuous dynamic algebra consisting of a Boolean algebra of subsets of U and a Kleene algebra of binary relations on U. Operators are interpreted as in PDL, including 0 as 0? (the empty program), 1 as 1? (the identity program), and α⋄φ as ⟨α⟩φ. Nonstandard Kripke frames (see Section 3.2) also give rise to dynamic algebras, but not necessarily *-continuous ones. A dynamic algebra is separable if any pair of distinct Kleene elements can be distinguished by some Boolean element; that is, if α ≠ β, then there exists φ ∈ B with α⋄φ ≠ β⋄φ. Research directions in this area include the following.

Representation theory. It is known that any separable dynamic algebra is isomorphic to some possibly nonstandard Kripke frame. Under certain conditions, "possibly nonstandard" can be replaced by "standard," but not in general, even for *-continuous algebras [Kozen, 1980b; Kozen, 1979c; Kozen, 1980a].


Algebraic methods in PDL. The small model property (Theorem 15) and completeness (Theorem 18) for PDL can be established by purely algebraic considerations [Pratt, 1980a].

Comparative study of alternative axiomatizations of *. For example, it is known that separable dynamic algebras can be distinguished from standard Kripke frames by a first-order formula, but even Lω₁ω cannot distinguish the latter from *-continuous separable dynamic algebras [Kozen, 1981b].

Equational theory of dynamic algebras. Many seemingly unrelated models of computation share the same equational theory, namely that of dynamic algebras [Pratt, 1979b; Pratt, 1979a].
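The standard Kripke-frame construction described above is easy to realize concretely. The Python sketch below (the frame, its programs, and the propositions are invented for illustration) builds the Boolean algebra of subsets of U and the Kleene algebra of binary relations on U, interprets ⋄ as the PDL diamond, and checks two of the defining equations together with *-continuity, which holds automatically here because the frame is finite.

```python
def compose(r, s):
    """Relational composition of two binary relations."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def star(r, states):
    """Reflexive-transitive closure over a finite state set."""
    result = {(a, a) for a in states}
    while True:
        new = compose(result, r) - result
        if not new:
            return result
        result |= new

def diamond(r, phi):
    """Scalar multiplication: states having an r-successor in phi."""
    return {a for (a, b) in r if b in phi}

states = set(range(6))
alpha = {(a, a + 1) for a in range(5)}          # program: move one state right
beta = {(a, a) for a in states if a % 2 == 0}   # program: loop on even states
phi, psi = {0, 3}, {5}                          # two propositions (subsets of U)

# alpha diamond (phi or psi) = (alpha diamond phi) or (alpha diamond psi)
assert diamond(alpha, phi | psi) == diamond(alpha, phi) | diamond(alpha, psi)

# (alpha;beta) diamond phi = alpha diamond (beta diamond phi)
assert diamond(compose(alpha, beta), phi) == diamond(alpha, diamond(beta, phi))

# *-continuity: alpha* diamond phi is the union (sup) of alpha^n diamond phi
lhs = diamond(star(alpha, states), phi)
rhs, power = set(), {(a, a) for a in states}  # power = alpha^0
for _ in range(len(states) + 1):
    rhs |= diamond(power, phi)
    power = compose(power, alpha)             # advance to alpha^(n+1)
assert lhs == rhs
```

In this model the sup in axiom (44) is a finite union, so the frame yields a *-continuous dynamic algebra, exactly as the representation results above require.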

In addition, many interesting questions arise from the algebraic viewpoint, and interesting connections with topology, classical algebra, and model theory have been made [Kozen, 1979b; Nemeti, 1980].

15 BIBLIOGRAPHICAL NOTES

Systematic program verification originated with the work of [Floyd, 1967] and [Hoare, 1969]. Hoare Logic was introduced in [Hoare, 1969]; see [Cousot, 1990; Apt, 1981; Apt and Olderog, 1991] for surveys. The digital abstraction, the view of computers as state transformers that operate by performing a sequence of discrete and instantaneous primitive steps, can be attributed to [Turing, 1936]. Finite-state transition systems were defined formally by [McCulloch and Pitts, 1943]. State-transition semantics is based on this idea and is quite prevalent in early work on program semantics and verification; see [Hennessy and Plotkin, 1979]. The relational-algebraic approach taken here, in which programs are interpreted as binary input/output relations, was introduced in the context of DL by [Pratt, 1976]. The notions of partial and total correctness were present in the early work of [Hoare, 1969]. Regular programs were introduced by [Fischer and Ladner, 1979] in the context of PDL. The concept of nondeterminism was introduced in the original paper of [Turing, 1936], although he did not develop the idea. Nondeterminism was further developed by [Rabin and Scott, 1959] in the context of finite automata. [Burstall, 1974] suggested using modal logic for reasoning about programs, but it was not until the work of [Pratt, 1976], prompted by a suggestion of R. Moore, that it was actually shown how to extend modal logic in a useful way by considering a separate modality for every program. The first research devoted to propositional reasoning about programs seems to be that of [Fischer and Ladner, 1977; Fischer and Ladner, 1979] on PDL. As


mentioned in the Preface, the general use of logical systems for reasoning about programs was suggested by [Engeler, 1967]. Other semantics besides Kripke semantics have been studied; see [Berman, 1979; Nishimura, 1979; Kozen, 1979b; Trnkova and Reiterman, 1980; Kozen, 1980b; Pratt, 1979b]. Modal logic has many applications and a vast literature; good introductions can be found in [Hughes and Cresswell, 1968; Chellas, 1980]. Alternative and iterative guarded commands were studied in [Gries, 1981]. Partial correctness assertions and the Hoare rules given in Section 2.6 were first formulated by [Hoare, 1969]. Regular expressions, on which the regular program operators are based, were introduced by [Kleene, 1956]. Their algebraic theory was further investigated by [Conway, 1971]. They were first applied in the context of DL by [Fischer and Ladner, 1977; Fischer and Ladner, 1979]. The axiomatization of PDL given in Axioms 17 was formulated by [Segerberg, 1977]. Tests and converse were investigated by various authors; see [Peterson, 1978; Berman, 1978; Berman and Paterson, 1981; Streett, 1981; Streett, 1982; Vardi, 1985b]. The continuity of the diamond operator in the presence of reverse is due to [Trnkova and Reiterman, 1980]. The filtration argument and the small model property for PDL are due to [Fischer and Ladner, 1977; Fischer and Ladner, 1979]. Nonstandard Kripke frames for PDL were studied by [Berman, 1979; Berman, 1982], [Parikh, 1978a], [Pratt, 1979a; Pratt, 1980a], and [Kozen, 1979c; Kozen, 1979b; Kozen, 1980a; Kozen, 1980b; Kozen, 1981b]. Completeness of the Segerberg axiomatization was shown independently by [Gabbay, 1977] and [Parikh, 1978a]. A short and easy-to-follow proof is given in [Kozen and Parikh, 1981]. Completeness is also treated in [Pratt, 1978; Pratt, 1980a; Berman, 1979; Nishimura, 1979; Kozen, 1981a].
The exponential-time lower bound for PDL was established by [Fischer and Ladner, 1977; Fischer and Ladner, 1979] by showing how PDL formulas can encode computations of linear-space-bounded alternating Turing machines. Deterministic exponential-time algorithms were first given in [Pratt, 1978; Pratt, 1979b; Pratt, 1980b]. Theorem 24, showing that the problem of deciding whether Γ ⊨ φ, where Γ is a fixed r.e. set of PDL formulas, is Π¹₁-complete, is due to [Meyer et al., 1981]. The computational difficulty of the validity problem for nonregular PDL and the borderline between the decidable and undecidable were discussed in [Harel et al., 1983]. The fact that any nonregular program adds expressive power to PDL, Theorem 25, first appeared explicitly in [Harel and Singerman, 1996]. Theorem 26 on the undecidability of context-free PDL was observed by [Ladner, 1977].


Theorems 27 and 28 are from [Harel et al., 1983]. An alternative proof of Theorem 28 using tiling is supplied in [Harel, 1985]; see [Harel et al., 2000]. The existence of a primitive recursive one-letter extension of PDL that is undecidable was shown already in [Harel et al., 1983], but undecidability for the particular case of a^{2^i}, Theorem 29, is from [Harel and Paterson, 1984]. Theorem 30 is from [Harel and Singerman, 1996]. As to decidable extensions, Theorem 31 was proved in [Koren and Pnueli, 1983]. The more general results of Section 6.2, namely Theorems 32, 33, and 34, are from [Harel and Raz, 1993], as is the notion of a simple-minded PDA. The decidability of emptiness for pushdown and stack automata on trees that is needed for the proofs of these is from [Harel and Raz, 1994]. A better bound on the complexity of the emptiness results can be found in [Peng and Iyer, 1995]. A sufficient condition for PDL with the addition of a program over a single-letter alphabet not to have the finite model property is given in [Harel and Singerman, 1996]. Completeness and exponential-time decidability for DPDL, Theorem 40 and the upper bound of Theorem 41, are proved in [Ben-Ari et al., 1982] and [Valiev, 1980]. The lower bound of Theorem 41 is from [Parikh, 1981]. Theorems 43 and 44 on SDPDL are from [Halpern and Reif, 1981; Halpern and Reif, 1983]. That tests add to the power of PDL is proved in [Berman and Paterson, 1981]. It is also known that the test-depth hierarchy is strict [Berman, 1978; Peterson, 1978] and that rich-test PDL is strictly more expressive than poor-test PDL [Peterson, 1978; Berman, 1978; Berman and Paterson, 1981]. These results also hold for SDPDL. The results on programs as automata (Theorems 45 and 46) appear in [Pratt, 1981b]. Alternative proofs are given in [Harel and Sherman, 1985]; see [Harel et al., 2000].
In recent years, the development of the automata-theoretic approach to logics of programs has prompted renewed inquiry into the complexity of automata on infinite objects, with considerable success. See [Courcoubetis and Yannakakis, 1988; Emerson, 1985; Emerson and Jutla, 1988; Emerson and Sistla, 1984; Manna and Pnueli, 1987; Muller et al., 1988; Pecuchet, 1986; Safra, 1988; Sistla et al., 1987; Streett, 1982; Vardi, 1985a; Vardi, 1985b; Vardi, 1987; Vardi and Stockmeyer, 1985; Vardi and Wolper, 1986b; Vardi and Wolper, 1986a; Arnold, 1997a; Arnold, 1997b]; and [Thomas, 1997]. Especially noteworthy in this area is the result of [Safra, 1988] involving the complexity of converting a nondeterministic automaton on infinite strings into an equivalent deterministic one. This result has already had a significant impact on the complexity of decision procedures for several logics of programs; see [Courcoubetis and Yannakakis, 1988; Emerson and Jutla, 1988; Emerson and Jutla, 1989]; and [Safra, 1988].


Intersection of programs was studied in [Harel et al., 1982a]. That the axioms for converse yield completeness for CPDL is proved in [Parikh, 1978a]. The complexity of PDL with converse and various forms of well-foundedness constructs is studied in [Vardi, 1985b]. Many authors have studied logics with a least-fixpoint operator, both on the propositional and first-order levels ([Scott and de Bakker, 1969; Hitchcock and Park, 1972; Park, 1976; Pratt, 1981a; Kozen, 1982; Kozen, 1983; Kozen, 1988; Kozen and Parikh, 1983; Niwinski, 1984; Streett, 1985a; Vardi and Stockmeyer, 1985]). The version of the propositional μ-calculus presented here was introduced in [Kozen, 1982; Kozen, 1983]. That the propositional μ-calculus is strictly more expressive than PDL with wf was shown in [Niwinski, 1984] and [Streett, 1985a]. That this logic is strictly more expressive than PDL with halt was shown in [Harel and Sherman, 1982]. That this logic is strictly more expressive than PDL was shown in [Streett, 1981]. The wf construct (actually its complement, repeat) is investigated in [Streett, 1981; Streett, 1982], in which Theorems 48 (which is actually due to Pratt) and 50–52 are proved. The halt construct (actually its complement, loop) was introduced in [Harel and Pratt, 1978], and Theorem 49 is from [Harel and Sherman, 1982]. Finite model properties for the logics LPDL, RPDL, CLPDL, CRPDL, and the propositional μ-calculus were established in [Streett, 1981; Streett, 1982] and [Kozen, 1988]. Decidability results were obtained in [Streett, 1981; Streett, 1982; Kozen and Parikh, 1983; Vardi and Stockmeyer, 1985]; and [Vardi, 1985b]. Deterministic exponential-time completeness was established in [Emerson and Jutla, 1988] and [Safra, 1988]. For the strongest variant, CRPDL, exponential-time decidability follows from [Vardi, 1998b]. Concurrent PDL is defined and studied in [Peleg, 1987b].
Additional versions of this logic, which employ various mechanisms for communication among the concurrent parts of a program, are considered in [Peleg, 1987c; Peleg, 1987a]. These papers contain many results concerning expressive power, decidability and undecidability for concurrent PDL with communication. Other work on PDL not described here includes work on nonstandard models, studied in [Berman, 1979; Berman, 1982] and [Parikh, 1981]; PDL with Boolean assignments, studied in [Abrahamson, 1980]; and restricted forms of the consequence problem, studied in [Parikh, 1981]. First-order DL was defined in [Harel et al., 1977], where it was also first named Dynamic Logic. That paper was carried out as a direct continuation of the original work of [Pratt, 1976]. Many variants of DL were defined in [Harel, 1979]. In particular, DL(bstk) is very close to the context-free Dynamic Logic investigated there. Uninterpreted reasoning in the form of program schematology has been a common activity ever since the work of [Ianov, 1960]. It was given considerable impetus by the work of [Luckham et al., 1970] and [Paterson and Hewitt, 1970]; see also [Greibach, 1975]. The study of the correctness of interpreted programs goes back to the work of Turing and von Neumann, but seems to have become a well-defined area of research following [Floyd, 1967], [Hoare, 1969] and [Manna, 1974]. Embedding logics of programs in Lω₁ω is based on observations of [Engeler, 1967]. Theorem 57 is from [Meyer and Parikh, 1981]. Theorem 60 is from [Harel, 1979] (see also [Harel, 1984] and [Harel and Kozen, 1984]); it is similar to the expressiveness result of [Cook, 1978]. Theorem 61 and Corollary 62 are from [Harel and Kozen, 1984]. Arithmetical structures were first defined by [Moschovakis, 1974] under the name acceptable structures. In the context of logics of programs, they were reintroduced and studied in [Harel, 1979]. The Π¹₁-completeness of DL was first proved by Meyer, and Theorem 63 appears in [Harel et al., 1977]. An alternative proof is given in [Harel, 1985]; see [Harel et al., 2000]. Theorem 65 is from [Meyer and Halpern, 1982]. That the fragment of DL considered in Theorem 66 is not r.e. was proved by [Pratt, 1976]. Theorem 67 follows from [Harel and Kozen, 1984]. The name "spectral complexity" was proposed by [Tiuryn, 1986], although the main ideas and many results concerning this notion were already present in [Tiuryn and Urzyczyn, 1983] (see [Tiuryn and Urzyczyn, 1988] for the full version). This notion is an instance of the so-called second-order spectrum of a formula. First-order spectra were investigated by [Scholz, 1952], from which originates the well-known Spectralproblem. The reader can find more about this problem and related results in the survey paper by [Borger, 1984]. The notion of a natural chain is from [Urzyczyn, 1983]. The results presented here are from [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1988].
A result similar to Theorem 69 in the area of finite model theory was obtained by [Sazonov, 1980] and independently by [Gurevich, 1983]. Higher-order stacks were introduced in [Engelfriet, 1983] to study complexity classes. Higher-order arrays and stacks in DL were considered by [Tiuryn, 1986], where a strict hierarchy within the class of elementary recursive sets was established. The main tool used in the proof of the strictness of this hierarchy is a generalization of Cook's auxiliary pushdown automata theorem for higher-order stacks, which is due to [Kowalczyk et al., 1987]. [Meyer and Halpern, 1982] showed completeness for termination assertions (Theorem 71). Infinitary completeness for DL (Theorem 72) is based upon a similar result for Algorithmic Logic (see Section 13.1) by [Mirkowska, 1971]. The proof sketch presented in [Harel et al., 2000] is an adaptation of Henkin's proof for Lω₁ω appearing in [Keisler, 1971]. The notion of relative completeness and Theorem 73 are due to [Cook, 1978]. The notion of arithmetical completeness and Theorem 75 are from [Harel, 1979].


The use of invariants to prove partial correctness and of well-founded sets to prove termination is due to [Floyd, 1967]. An excellent survey of such methods and the corresponding completeness results appears in [Apt, 1981]. Some contrasting negative results are contained in [Clarke, 1979], [Lipton, 1977], and [Wand, 1978]. Many of the results on relative expressiveness presented herein answer questions posed in [Harel, 1979]. Similar uninterpreted research, comparing the expressive power of classes of programs (but detached from any surrounding logic), has taken place quite extensively under the name comparative schematology ever since [Ianov, 1960]; see [Greibach, 1975] and [Manna, 1974]. Theorems 76, 79 and 83(i) result as an application of the so-called spectral theorem, which connects the expressive power of logics with complexity classes. This theorem was obtained by [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1984; Tiuryn and Urzyczyn, 1988]. A simplified framework for this approach and a statement of this theorem together with a proof is given in [Harel et al., 2000]. Theorem 78 appears in [Berman et al., 1982] and was proved independently in [Stolboushkin and Taitslin, 1983]. An alternative proof is given in [Tiuryn, 1989]. These results extend in a substantial way an earlier and much simpler result for the case of regular programs without equality in the vocabulary, which appears in [Halpern, 1981]. A simpler proof of the special case of the quantifier-free fragment of the logic of regular programs appears in [Meyer and Winklmann, 1982]. Theorem 79 is from [Tiuryn and Urzyczyn, 1984]. Theorem 80 is from [Stolboushkin, 1983]. The proof, as in the case of regular programs (see [Stolboushkin and Taitslin, 1983]), uses Adian's result from group theory ([Adian, 1979]). Results on the expressive power of DL with deterministic while programs and a Boolean stack can be found in [Stolboushkin, 1983; Kfoury, 1985].
Theorem 81 is from [Tiuryn and Urzyczyn, 1983; Tiuryn and Urzyczyn, 1988]. [Erimbetov, 1981; Tiuryn, 1981b; Tiuryn, 1984; Kfoury, 1983; Kfoury and Stolboushkin, 1997] contain results on the expressive power of DL over programming languages with bounded memory. [Erimbetov, 1981] shows that DL(dreg) < DL(dstk). The main proof technique is pebble games on finite trees. Theorem 83 is from [Urzyczyn, 1987]. There is a different proof of this result, using Adian structures, which appears in [Stolboushkin, 1989]. Theorem 77 is from [Urzyczyn, 1988], which also studies programs with Boolean arrays. Wildcard assignments were considered in [Harel et al., 1977] under the name nondeterministic assignments. Theorem 84 is from [Meyer and Winklmann, 1982]. Theorem 85 is from [Meyer and Parikh, 1981].


In our exposition of the comparison of the expressive power of logics, we have made the assumption that programs use only quantifier-free first-order tests. It follows from the results of [Urzyczyn, 1986] that allowing full first-order tests in many cases results in increased expressive power. [Urzyczyn, 1986] also proves that adding array assignments to nondeterministic r.e. programs increases the expressive power of the logic. This should be contrasted with the result of [Meyer and Tiuryn, 1981; Meyer and Tiuryn, 1984] to the effect that for deterministic r.e. programs, array assignments do not increase expressive power. [Makowsky, 1980] considers a weaker notion of equivalence between logics, common in investigations in abstract model theory, whereby models are extended with interpretations for additional predicate symbols. With this notion it is shown in [Makowsky, 1980] that most of the versions of logics of programs treated here become equivalent. Algorithmic Logic was introduced by [Salwicki, 1970]. [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b] extended AL to allow nondeterministic while programs and studied the associated modal operators. Complete infinitary deductive systems for propositional and first-order versions were given by [Mirkowska, 1980; Mirkowska, 1981a; Mirkowska, 1981b] using the algebraic methods of [Rasiowa and Sikorski, 1963]. Surveys of early work in AL can be found in [Banachowski et al., 1977; Salwicki, 1977]. [Constable, 1977; Constable and O'Donnell, 1978; Goldblatt, 1982] presented logics similar to AL and DL for reasoning about deterministic while programs. Nonstandard Dynamic Logic was introduced by [Nemeti, 1981] and [Andreka et al., 1982a; Andreka et al., 1982b] and studied in [Csirmaz, 1985]. See [Makowsky and Sain, 1986] for more information and further references.
The halt construct (actually its complement, loop) was introduced in [Harel and Pratt, 1978], and the wf construct (actually its complement, repeat) was investigated for PDL in [Streett, 1981; Streett, 1982]. Theorem 86 is from [Meyer and Winklmann, 1982], Theorem 87 is from [Harel and Peleg, 1985], Theorem 88 is from [Harel, 1984], and the axiomatizations of LDL and PDL are discussed in [Harel, 1979; Harel, 1984]. Dynamic algebra was introduced in [Kozen, 1980b] and [Pratt, 1979b] and studied by numerous authors; see [Kozen, 1979c; Kozen, 1979b; Kozen, 1980a; Kozen, 1981b; Pratt, 1979a; Pratt, 1980a; Pratt, 1988; Nemeti, 1980; Trnkova and Reiterman, 1980]. A survey of the main results appears in [Kozen, 1979a]. The PhD thesis [Ramshaw, 1981] contains an engaging introduction to the subject of probabilistic semantics and verification. [Kozen, 1981d] provided a formal semantics for probabilistic programs. The logic Pr(DL) was presented in [Feldman and Harel, 1984], along with a deductive system that is complete for Kozen's semantics relative to an extension of first-order analysis. Various propositional versions of probabilistic DL have been proposed


in [Reif, 1980; Makowsky and Tiomkin, 1980; Feldman, 1984; Parikh and Mahoney, 1983; Kozen, 1985]. The temporal approach to probabilistic verification has been studied in [Lehmann and Shelah, 1982; Hart et al., 1982; Courcoubetis and Yannakakis, 1988; Vardi, 1985a]. Interest in the subject of probabilistic verification has undergone a recent revival; see [Morgan et al., 1999; Segala and Lynch, 1994; Hansson and Jonsson, 1994; Jou and Smolka, 1990; Baier and Kwiatkowska, 1998; Huth and Kwiatkowska, 1997; Blute et al., 1997]. Concurrent DL is defined and studied in [Peleg, 1987b]. Additional versions of this logic, which employ various mechanisms for communication among the concurrent parts of a program, are also considered in [Peleg, 1987c; Peleg, 1987a].

David Harel, The Weizmann Institute of Science, Rehovot, Israel
Dexter Kozen, Cornell University, Ithaca, New York
Jerzy Tiuryn, The University of Warsaw, Warsaw, Poland

BIBLIOGRAPHY

[Abrahamson, 1980] K. Abrahamson. Decidability and expressiveness of logics of processes. PhD thesis, Univ. of Washington, 1980.
[Adian, 1979] S. I. Adian. The Burnside Problem and Identities in Groups. Springer-Verlag, 1979.
[Aho et al., 1975] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1975.
[Ambler et al., 1995] S. Ambler, M. Kwiatkowska, and N. Measor. Duality and the completeness of the modal μ-calculus. Theor. Comput. Sci., 151(1):3–27, November 1995.
[Andreka et al., 1982a] H. Andreka, I. Nemeti, and I. Sain. A complete logic for reasoning about programs via nonstandard model theory, part I. Theor. Comput. Sci., 17:193–212, 1982.
[Andreka et al., 1982b] H. Andreka, I. Nemeti, and I. Sain. A complete logic for reasoning about programs via nonstandard model theory, part II. Theor. Comput. Sci., 17:259–278, 1982.
[Apt and Olderog, 1991] K. R. Apt and E.-R. Olderog. Verification of Sequential and Concurrent Programs. Springer-Verlag, 1991.
[Apt and Plotkin, 1986] K. R. Apt and G. Plotkin. Countable nondeterminism and random assignment. J. Assoc. Comput. Mach., 33:724–767, 1986.
[Apt, 1981] K. R. Apt. Ten years of Hoare's logic: a survey, part I. ACM Trans. Programming Languages and Systems, 3:431–483, 1981.
[Archangelsky, 1992] K. V. Archangelsky. A new finite complete solvable quasiequational calculus for algebra of regular languages. Manuscript, Kiev State University, 1992.
[Arnold, 1997a] A. Arnold. An initial semantics for the μ-calculus on trees and Rabin's complementation lemma. Technical report, University of Bordeaux, 1997.
[Arnold, 1997b] A. Arnold. The μ-calculus on trees and Rabin's complementation theorem. Technical report, University of Bordeaux, 1997.


[Backhouse, 1975] R. C. Backhouse. Closure Algorithms and the Star-Height Problem of Regular Languages. PhD thesis, Imperial College, London, U.K., 1975.
[Backhouse, 1986] R. C. Backhouse. Program Construction and Verification. Prentice-Hall, 1986.
[Baier and Kwiatkowska, 1998] C. Baier and M. Kwiatkowska. On the verification of qualitative properties of probabilistic processes under fairness constraints. Information Processing Letters, 66(2):71–79, April 1998.
[Banachowski et al., 1977] L. Banachowski, A. Kreczmar, G. Mirkowska, H. Rasiowa, and A. Salwicki. An introduction to algorithmic logic: metamathematical investigations in the theory of programs. In Mazurkiewitz and Pawlak, editors, Math. Found. Comput. Sci., pages 7–99. Banach Center, Warsaw, 1977.
[Ben-Ari et al., 1982] M. Ben-Ari, J. Y. Halpern, and A. Pnueli. Deterministic propositional dynamic logic: finite models, complexity and completeness. J. Comput. Syst. Sci., 25:402–417, 1982.
[Berman and Paterson, 1981] F. Berman and M. Paterson. Propositional dynamic logic is weaker without tests. Theor. Comput. Sci., 16:321–328, 1981.
[Berman et al., 1982] P. Berman, J. Y. Halpern, and J. Tiuryn. On the power of nondeterminism in dynamic logic. In Nielsen and Schmidt, editors, Proc. 9th Colloq. Automata Lang. Prog., volume 140 of Lect. Notes in Comput. Sci., pages 48–60. Springer-Verlag, 1982.
[Berman, 1978] F. Berman. Expressiveness hierarchy for PDL with rich tests. Technical Report 78-11-01, Comput. Sci. Dept., Univ. of Washington, 1978.
[Berman, 1979] F. Berman. A completeness technique for D-axiomatizable semantics. In Proc. 11th Symp. Theory of Comput., pages 160–166. ACM, 1979.
[Berman, 1982] F. Berman. Semantics of looping programs in propositional dynamic logic. Math. Syst. Theory, 15:285–294, 1982.
[Bhat and Cleaveland, 1996] G. Bhat and R. Cleaveland. Efficient local model checking for fragments of the modal μ-calculus. In T. Margaria and B. Steffen, editors, Proc. Second Int.
Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lect. Notes in Comput. Sci., pages 107–112. Springer-Verlag, March 1996.
[Bloom and Esik, 1992] S. L. Bloom and Z. Esik. Program correctness and matricial iteration theories. In Proc. 7th Int. Conf. Mathematical Foundations of Programming Semantics, volume 598 of Lect. Notes in Comput. Sci., pages 457–476. Springer-Verlag, 1992.
[Bloom and Esik, 1993] S. L. Bloom and Z. Esik. Equational axioms for regular sets. Math. Struct. Comput. Sci., 3:1–24, 1993.
[Blute et al., 1997] R. Blute, J. Desharnais, A. Edalat, and P. Panangaden. Bisimulation for labeled Markov processes. In Proc. 12th Symp. Logic in Comput. Sci., pages 149–158. IEEE, 1997.
[Boffa, 1990] M. Boffa. Une remarque sur les systemes complets d'identites rationnelles. Informatique Theorique et Applications/Theoretical Informatics and Applications, 24(4):419–423, 1990.
[Boffa, 1995] M. Boffa. Une condition impliquant toutes les identites rationnelles. Informatique Theorique et Applications/Theoretical Informatics and Applications, 29(6):515–518, 1995.
[Bonsangue and Kwiatkowska, 1995] M. Bonsangue and M. Kwiatkowska. Reinterpreting the modal μ-calculus. In A. Ponse, M. de Rijke, and Y. Venema, editors, Modal Logic and Process Algebra, pages 65–83. CSLI Lecture Notes, August 1995.
[Borger, 1984] E. Borger. Spectralproblem and completeness of logical decision problems. In E. Borger, G. Hasenjaeger, and D. Rodding, editors, Logic and Machines: Decision Problems and Complexity, Proceedings, volume 171 of Lect. Notes in Comput. Sci., pages 333–356. Springer-Verlag, 1984.
[Bradfield, 1996] J. C. Bradfield. The modal μ-calculus alternation hierarchy is strict. In U. Montanari and V. Sassone, editors, Proc. CONCUR'96, volume 1119 of Lect. Notes in Comput. Sci., pages 233–246. Springer, 1996.
[Burstall, 1974] R. M. Burstall. Program proving as hand simulation with a little induction.
Information Processing, pages 308–312, 1974.


[Chandra et al., 1981] A. Chandra, D. Kozen, and L. Stockmeyer. Alternation. J. Assoc. Comput. Mach., 28(1):114–133, 1981.
[Chellas, 1980] B. F. Chellas. Modal Logic: An Introduction. Cambridge University Press, 1980.
[Clarke, 1979] E. M. Clarke. Programming language constructs for which it is impossible to obtain good Hoare axiom systems. J. Assoc. Comput. Mach., 26:129–147, 1979.
[Cleaveland, 1996] R. Cleaveland. Efficient model checking via the equational μ-calculus. In Proc. 11th Symp. Logic in Comput. Sci., pages 304–312. IEEE, July 1996.
[Cohen et al., 1996] E. Cohen, D. Kozen, and F. Smith. The complexity of Kleene algebra with tests. Technical Report 96-1598, Computer Science Department, Cornell University, July 1996.
[Cohen, 1994a] E. Cohen. Hypotheses in Kleene algebra. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, April 1994.
[Cohen, 1994b] E. Cohen. Lazy caching. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, 1994.
[Cohen, 1994c] E. Cohen. Using Kleene algebra to reason about concurrency control. Available as ftp://ftp.telcordia.com/pub/ernie/research/homepage.html, 1994.
[Constable and O'Donnell, 1978] R. L. Constable and M. O'Donnell. A Programming Logic. Winthrop, 1978.
[Constable, 1977] R. L. Constable. On the theory of programming logics. In Proc. 9th Symp. Theory of Comput., pages 269–285. ACM, May 1977.
[Conway, 1971] J. H. Conway. Regular Algebra and Finite Machines. Chapman and Hall, London, 1971.
[Cook, 1978] S. A. Cook. Soundness and completeness of an axiom system for program verification. SIAM J. Comput., 7:70–80, 1978.
[Courcoubetis and Yannakakis, 1988] C. Courcoubetis and M. Yannakakis. Verifying temporal properties of finite-state probabilistic programs. In Proc. 29th Symp. Foundations of Comput. Sci., pages 338–345. IEEE, October 1988.
[Cousot, 1990] P. Cousot. Methods and logics for proving programs. In J.
van Leeuwen, editor, Handbood of Theoretical Computer Science, volume B, pages 841{993. Elsevier, Amsterdam, 1990. [Csirmaz, 1985] L. Csirmaz. A completeness theorem for dynamic logic. Notre Dame J. Formal Logic, 26:51{60, 1985. [Davis et al., 1994] M. D. Davis, R. Sigal, and E. J. Weyuker. Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. Academic Press, 1994. [de Bakker, 1980] J. de Bakker. Mathematical Theory of Program Correctness. PrenticeHall, 1980. [Emerson and Halpern, 1985] E. A. Emerson and J. Y. Halpern. Decision procedures and expressiveness in the temporal logic of branching time. J. Comput. Syst. Sci., 30(1):1{24, 1985. [Emerson and Halpern, 1986] E. A. Emerson and J. Y. Halpern. \Sometimes" and \not never" revisited: on branching vs. linear time temporal logic. J. ACM, 33(1):151{178, 1986. [Emerson and Jutla, 1988] E. A. Emerson and C. Jutla. The complexity of tree automata and logics of programs. In Proc. 29th Symp. Foundations of Comput. Sci., pages 328{337. IEEE, October 1988. [Emerson and Jutla, 1989] E. A. Emerson and C. Jutla. On simultaneously determinizing and complementing !-automata. In Proc. 4th Symp. Logic in Comput. Sci. IEEE, June 1989. [Emerson and Lei, 1986] E. A. Emerson and C.-L. Lei. EÆcient model checking in fragments of the propositional -calculus. In Proc. 1st Symp. Logic in Comput. Sci., pages 267{278. IEEE, June 1986. [Emerson and Lei, 1987] E. A. Emerson and C. L. Lei. Modalities for model checking: branching time strikes back. Sci. Comput. Programming, 8:275{306, 1987. [Emerson and Sistla, 1984] E. A. Emerson and P. A. Sistla. Deciding full branching-time logic. Infor. and Control, 61:175{201, 1984.


DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

[Emerson, 1985] E. A. Emerson. Automata, tableaux, and temporal logics. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 79–88. Springer-Verlag, 1985.
[Emerson, 1990] E. A. Emerson. Temporal and modal logic. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B: formal models and semantics, pages 995–1072. Elsevier, 1990.
[Engeler, 1967] E. Engeler. Algorithmic properties of structures. Math. Syst. Theory, 1:183–195, 1967.
[Engelfriet, 1983] J. Engelfriet. Iterated pushdown automata and complexity classes. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, pages 365–373, Boston, Massachusetts, 1983.
[Erimbetov, 1981] M. M. Erimbetov. On the expressive power of programming logics. In Proc. Alma-Ata Conf. Research in Theoretical Programming, pages 49–68, 1981. In Russian.
[Feldman and Harel, 1984] Y. A. Feldman and D. Harel. A probabilistic dynamic logic. J. Comput. Syst. Sci., 28:193–215, 1984.
[Feldman, 1984] Y. A. Feldman. A decidable propositional dynamic logic with explicit probabilities. Infor. and Control, 63:11–38, 1984.
[Fischer and Ladner, 1977] M. J. Fischer and R. E. Ladner. Propositional modal logic of programs. In Proc. 9th Symp. Theory of Comput., pages 286–294. ACM, 1977.
[Fischer and Ladner, 1979] M. J. Fischer and R. E. Ladner. Propositional dynamic logic of regular programs. J. Comput. Syst. Sci., 18(2):194–211, 1979.
[Floyd, 1967] R. W. Floyd. Assigning meanings to programs. In Proc. Symp. Appl. Math., volume 19, pages 19–31. AMS, 1967.
[Friedman, 1971] H. Friedman. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. In Gandy and Yates, editors, Logic Colloq. 1969, pages 361–390. North-Holland, 1971.
[Gabbay et al., 1980] D. Gabbay, A. Pnueli, S. Shelah, and J. Stavi. On the temporal analysis of fairness. In Proc. 7th Symp. Princip. Prog. Lang., pages 163–173. ACM, 1980.
[Gabbay et al., 1994] D. Gabbay, I. Hodkinson, and M. Reynolds. Temporal Logic: Mathematical Foundations and Computational Aspects. Oxford University Press, 1994.
[Gabbay, 1977] D. Gabbay. Axiomatizations of logics of programs. Unpublished, 1977.
[Goldblatt, 1982] R. Goldblatt. Axiomatising the Logic of Computer Programming, volume 130 of Lect. Notes in Comput. Sci. Springer-Verlag, 1982.
[Goldblatt, 1987] R. Goldblatt. Logics of time and computation. Technical Report Lect. Notes 7, Center for the Study of Language and Information, Stanford Univ., 1987.
[Greibach, 1975] S. Greibach. Theory of Program Structures: Schemes, Semantics, Verification, volume 36 of Lecture Notes in Computer Science. Springer-Verlag, 1975.
[Gries, 1981] D. Gries. The Science of Programming. Springer-Verlag, 1981.
[Gurevich, 1983] Yu. Gurevich. Algebras of feasible functions. In 24th IEEE Annual Symposium on Foundations of Computer Science, pages 210–214, 1983.
[Halpern and Reif, 1981] J. Y. Halpern and J. H. Reif. The propositional dynamic logic of deterministic, well-structured programs. In Proc. 22nd Symp. Found. Comput. Sci., pages 322–334. IEEE, 1981.
[Halpern and Reif, 1983] J. Y. Halpern and J. H. Reif. The propositional dynamic logic of deterministic, well-structured programs. Theor. Comput. Sci., 27:127–165, 1983.
[Halpern, 1981] J. Y. Halpern. On the expressive power of dynamic logic II. Technical Report TM-204, MIT/LCS, 1981.
[Halpern, 1982] J. Y. Halpern. Deterministic process logic is elementary. In Proc. 23rd Symp. Found. Comput. Sci., pages 204–216. IEEE, 1982.
[Halpern, 1983] J. Y. Halpern. Deterministic process logic is elementary. Infor. and Control, 57(1):56–89, 1983.
[Hansson and Jonsson, 1994] H. Hansson and B. Jonsson. A logic for reasoning about time and probability. Formal Aspects of Computing, 6:512–535, 1994.
[Harel and Kozen, 1984] D. Harel and D. Kozen. A programming language for the inductive sets, and applications. Information and Control, 63(1–2):118–139, 1984.


[Harel and Paterson, 1984] D. Harel and M. S. Paterson. Undecidability of PDL with L = {a^(2^i) | i ≥ 0}. J. Comput. Syst. Sci., 29:359–365, 1984.
[Harel and Peleg, 1985] D. Harel and D. Peleg. More on looping vs. repeating in dynamic logic. Information Processing Letters, 20:87–90, 1985.
[Harel and Pratt, 1978] D. Harel and V. R. Pratt. Nondeterminism in logics of programs. In Proc. 5th Symp. Princip. Prog. Lang., pages 203–213. ACM, 1978.
[Harel and Raz, 1993] D. Harel and D. Raz. Deciding properties of nonregular programs. SIAM J. Comput., 22:857–874, 1993.
[Harel and Raz, 1994] D. Harel and D. Raz. Deciding emptiness for stack automata on infinite trees. Information and Computation, 113:278–299, 1994.
[Harel and Sherman, 1982] D. Harel and R. Sherman. Looping vs. repeating in dynamic logic. Infor. and Control, 55:175–192, 1982.
[Harel and Sherman, 1985] D. Harel and R. Sherman. Propositional dynamic logic of flowcharts. Infor. and Control, 64:119–135, 1985.
[Harel and Singerman, 1996] D. Harel and E. Singerman. More on nonregular PDL: Finite models and Fibonacci-like programs. Information and Computation, 128:109–118, 1996.
[Harel et al., 1977] D. Harel, A. R. Meyer, and V. R. Pratt. Computability and completeness in logics of programs. In Proc. 9th Symp. Theory of Comput., pages 261–268. ACM, 1977.
[Harel et al., 1982a] D. Harel, A. Pnueli, and M. Vardi. Two dimensional temporal logic and PDL with intersection. Unpublished, 1982.
[Harel et al., 1982b] D. Harel, D. Kozen, and R. Parikh. Process logic: Expressiveness, decidability, completeness. J. Comput. Syst. Sci., 25(2):144–170, 1982.
[Harel et al., 1983] D. Harel, A. Pnueli, and J. Stavi. Propositional dynamic logic of nonregular programs. J. Comput. Syst. Sci., 26:222–243, 1983.
[Harel et al., 2000] D. Harel, D. Kozen, and J. Tiuryn. Dynamic Logic. MIT Press, Cambridge, MA, 2000.
[Harel, 1979] D. Harel. First-Order Dynamic Logic, volume 68 of Lect. Notes in Comput. Sci. Springer-Verlag, 1979.
[Harel, 1984] D. Harel. Dynamic logic. In Gabbay and Guenthner, editors, Handbook of Philosophical Logic, volume II: Extensions of Classical Logic, pages 497–604. Reidel, 1984.
[Harel, 1985] D. Harel. Recurring dominoes: Making the highly undecidable highly understandable. Annals of Discrete Mathematics, 24:51–72, 1985.
[Harel, 1992] D. Harel. Algorithmics: The Spirit of Computing. Addison-Wesley, second edition, 1992.
[Hart et al., 1982] S. Hart, M. Sharir, and A. Pnueli. Termination of probabilistic concurrent programs. In Proc. 9th Symp. Princip. Prog. Lang., pages 1–6. ACM, 1982.
[Hartonas, 1998] C. Hartonas. Duality for modal µ-logics. Theor. Comput. Sci., 202(1–2):193–222, 1998.
[Hennessy and Plotkin, 1979] M. C. B. Hennessy and G. D. Plotkin. Full abstraction for a simple programming language. In Proc. Symp. Semantics of Algorithmic Languages, volume 74 of Lecture Notes in Computer Science, pages 108–120. Springer-Verlag, 1979.
[Hitchcock and Park, 1972] P. Hitchcock and D. Park. Induction rules and termination proofs. In M. Nivat, editor, Int. Colloq. Automata Lang. Prog., pages 225–251. North-Holland, 1972.
[Hoare, 1969] C. A. R. Hoare. An axiomatic basis for computer programming. Comm. Assoc. Comput. Mach., 12:576–580, 583, 1969.
[Hopkins and Kozen, 1999] M. Hopkins and D. Kozen. Parikh's theorem in commutative Kleene algebra. In Proc. Conf. Logic in Computer Science (LICS'99), pages 394–401. IEEE, July 1999.
[Hughes and Cresswell, 1968] G. E. Hughes and M. J. Cresswell. An Introduction to Modal Logic. Methuen, 1968.
[Huth and Kwiatkowska, 1997] M. Huth and M. Kwiatkowska. Quantitative analysis and model checking. In Proc. 12th Symp. Logic in Comput. Sci., pages 111–122. IEEE, 1997.


[Ianov, 1960] Y. I. Ianov. The logical schemes of algorithms. In Problems of Cybernetics, volume 1, pages 82–140. Pergamon Press, 1960.
[Iwano and Steiglitz, 1990] K. Iwano and K. Steiglitz. A semiring on convex polygons and zero-sum cycle problems. SIAM J. Comput., 19(5):883–901, 1990.
[Jou and Smolka, 1990] C. Jou and S. Smolka. Equivalences, congruences and complete axiomatizations for probabilistic processes. In Proc. CONCUR'90, volume 458 of Lecture Notes in Comput. Sci., pages 367–383. Springer-Verlag, 1990.
[Kaivola, 1997] R. Kaivola. Using Automata to Characterise Fixed Point Temporal Logics. PhD thesis, University of Edinburgh, April 1997. Report CST-135-97.
[Kamp, 1968] H. W. Kamp. Tense logics and the theory of linear order. PhD thesis, UCLA, 1968.
[Keisler, 1971] J. Keisler. Model Theory for Infinitary Logic. North-Holland, 1971.
[Kfoury and Stolboushkin, 1997] A. J. Kfoury and A. P. Stolboushkin. An infinite pebble game and applications. Information and Computation, 136:53–66, 1997.
[Kfoury, 1983] A. J. Kfoury. Definability by programs in first-order structures. Theoretical Computer Science, 25:1–66, 1983.
[Kfoury, 1985] A. J. Kfoury. Definability by deterministic and nondeterministic programs with applications to first-order dynamic logic. Infor. and Control, 65(2–3):98–121, 1985.
[Kleene, 1956] S. C. Kleene. Representation of events in nerve nets and finite automata. In C. E. Shannon and J. McCarthy, editors, Automata Studies, pages 3–41. Princeton University Press, Princeton, N.J., 1956.
[Knijnenburg, 1988] P. M. W. Knijnenburg. On axiomatizations for propositional logics of programs. Technical Report RUU-CS-88-34, Rijksuniversiteit Utrecht, November 1988.
[Koren and Pnueli, 1983] T. Koren and A. Pnueli. There exist decidable context-free propositional dynamic logics. In Proc. Symp. on Logics of Programs, volume 164 of Lecture Notes in Computer Science, pages 290–312. Springer-Verlag, 1983.
[Kowalczyk et al., 1987] W. Kowalczyk, D. Niwiński, and J. Tiuryn. A generalization of Cook's auxiliary-pushdown-automata theorem. Fundamenta Informaticae, XII:497–506, 1987.
[Kozen and Parikh, 1981] D. Kozen and R. Parikh. An elementary proof of the completeness of PDL. Theor. Comput. Sci., 14(1):113–118, 1981.
[Kozen and Parikh, 1983] D. Kozen and R. Parikh. A decision procedure for the propositional µ-calculus. In Clarke and Kozen, editors, Proc. Workshop on Logics of Programs, volume 164 of Lecture Notes in Computer Science, pages 313–325. Springer-Verlag, 1983.
[Kozen and Patron, 2000] D. Kozen and M.-C. Patron. Certification of compiler optimizations using Kleene algebra with tests. In J. Lloyd, V. Dahl, U. Furbach, M. Kerber, K.-K. Lau, C. Palamidessi, L. M. Pereira, Y. Sagiv, and P. J. Stuckey, editors, Proc. 1st Int. Conf. Computational Logic (CL2000), volume 1861 of Lecture Notes in Artificial Intelligence, pages 568–582, London, July 2000. Springer-Verlag.
[Kozen and Smith, 1996] D. Kozen and F. Smith. Kleene algebra with tests: Completeness and decidability. In D. van Dalen and M. Bezem, editors, Proc. 10th Int. Workshop Computer Science Logic (CSL'96), volume 1258 of Lecture Notes in Computer Science, pages 244–259, Utrecht, The Netherlands, September 1996. Springer-Verlag.
[Kozen and Tiuryn, 1990] D. Kozen and J. Tiuryn. Logics of programs. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, pages 789–840. North-Holland, Amsterdam, 1990.
[Kozen, 1979a] D. Kozen. Dynamic algebra. In E. Engeler, editor, Proc. Workshop on Logic of Programs, volume 125 of Lecture Notes in Computer Science, pages 102–144. Springer-Verlag, 1979. Chapter of "Propositional dynamic logics of programs: A survey" by Rohit Parikh.
[Kozen, 1979b] D. Kozen. On the duality of dynamic algebras and Kripke models. In E. Engeler, editor, Proc. Workshop on Logic of Programs, volume 125 of Lecture Notes in Computer Science, pages 1–11. Springer-Verlag, 1979.
[Kozen, 1979c] D. Kozen. On the representation of dynamic algebras. Technical Report RC7898, IBM Thomas J. Watson Research Center, October 1979.


[Kozen, 1980a] D. Kozen. On the representation of dynamic algebras II. Technical Report RC8290, IBM Thomas J. Watson Research Center, May 1980.
[Kozen, 1980b] D. Kozen. A representation theorem for models of *-free PDL. In Proc. 7th Colloq. Automata, Languages, and Programming, pages 351–362. EATCS, July 1980.
[Kozen, 1981a] D. Kozen. Logics of programs. Lecture notes, Aarhus University, Denmark, 1981.
[Kozen, 1981b] D. Kozen. On induction vs. *-continuity. In Kozen, editor, Proc. Workshop on Logic of Programs, volume 131 of Lecture Notes in Computer Science, pages 167–176, New York, 1981. Springer-Verlag.
[Kozen, 1981c] D. Kozen. On the expressiveness of µ. Manuscript, 1981.
[Kozen, 1981d] D. Kozen. Semantics of probabilistic programs. J. Comput. Syst. Sci., 22:328–350, 1981.
[Kozen, 1982] D. Kozen. Results on the propositional µ-calculus. In Proc. 9th Int. Colloq. Automata, Languages, and Programming, pages 348–359, Aarhus, Denmark, July 1982. EATCS.
[Kozen, 1983] D. Kozen. Results on the propositional µ-calculus. Theor. Comput. Sci., 27:333–354, 1983.
[Kozen, 1984] D. Kozen. A Ramsey theorem with infinitely many colors. In Lenstra, Lenstra, and van Emde Boas, editors, Dopo Le Parole, pages 71–72. University of Amsterdam, Amsterdam, May 1984.
[Kozen, 1985] D. Kozen. A probabilistic PDL. J. Comput. Syst. Sci., 30(2):162–178, April 1985.
[Kozen, 1988] D. Kozen. A finite model theorem for the propositional µ-calculus. Studia Logica, 47(3):233–241, 1988.
[Kozen, 1990] D. Kozen. On Kleene algebras and closed semirings. In Rovan, editor, Proc. Math. Found. Comput. Sci., volume 452 of Lecture Notes in Computer Science, pages 26–47, Banská Bystrica, Slovakia, 1990. Springer-Verlag.
[Kozen, 1991a] D. Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. In Proc. 6th Symp. Logic in Comput. Sci., pages 214–225, Amsterdam, July 1991. IEEE.
[Kozen, 1991b] D. Kozen. The Design and Analysis of Algorithms. Springer-Verlag, New York, 1991.
[Kozen, 1994a] D. Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. Infor. and Comput., 110(2):366–390, May 1994.
[Kozen, 1994b] D. Kozen. On action algebras. In J. van Eijck and A. Visser, editors, Logic and Information Flow, pages 78–88. MIT Press, 1994.
[Kozen, 1996] D. Kozen. Kleene algebra with tests and commutativity conditions. In T. Margaria and B. Steffen, editors, Proc. Second Int. Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lecture Notes in Computer Science, pages 14–33, Passau, Germany, March 1996. Springer-Verlag.
[Kozen, 1997a] D. Kozen. Automata and Computability. Springer-Verlag, New York, 1997.
[Kozen, 1997b] D. Kozen. Kleene algebra with tests. Transactions on Programming Languages and Systems, 19(3):427–443, May 1997.
[Kozen, 1997c] D. Kozen. On the complexity of reasoning in Kleene algebra. In Proc. 12th Symp. Logic in Comput. Sci., pages 195–202, Los Alamitos, Ca., June 1997. IEEE.
[Kozen, 1998] D. Kozen. Typed Kleene algebra. Technical Report 98-1669, Computer Science Department, Cornell University, March 1998.
[Kozen, 1999a] D. Kozen. On Hoare logic and Kleene algebra with tests. In Proc. Conf. Logic in Computer Science (LICS'99), pages 167–172. IEEE, July 1999.
[Kozen, 1999b] D. Kozen. On Hoare logic, Kleene algebra, and types. Technical Report 99-1760, Computer Science Department, Cornell University, July 1999. Abstract in: Abstracts of 11th Int. Congress Logic, Methodology and Philosophy of Science, ed. J. Cachro and K. Kijania-Placek, Kraków, Poland, August 1999, p. 15. To appear in: Proc. 11th Int. Congress Logic, Methodology and Philosophy of Science, ed. P. Gärdenfors, K. Kijania-Placek and J. Woleński, Kluwer.
[Krob, 1991] Daniel Krob. A complete system of B-rational identities. Theoretical Computer Science, 89(2):207–343, October 1991.
[Kuich and Salomaa, 1986] W. Kuich and A. Salomaa. Semirings, Automata, and Languages. Springer-Verlag, Berlin, 1986.
[Kuich, 1987] W. Kuich. The Kleene and Parikh theorem in complete semirings. In T. Ottmann, editor, Proc. 14th Colloq. Automata, Languages, and Programming, volume 267 of Lecture Notes in Computer Science, pages 212–225, New York, 1987. EATCS, Springer-Verlag.
[Ladner, 1977] R. E. Ladner. Unpublished, 1977.
[Lamport, 1980] L. Lamport. "Sometime" is sometimes "not never". Proc. 7th Symp. Princip. Prog. Lang., pages 174–185, 1980.
[Lehmann and Shelah, 1982] D. Lehmann and S. Shelah. Reasoning with time and chance. Infor. and Control, 53(3):165–198, 1982.
[Lewis and Papadimitriou, 1981] H. R. Lewis and C. H. Papadimitriou. Elements of the Theory of Computation. Prentice Hall, 1981.
[Lipton, 1977] R. J. Lipton. A necessary and sufficient condition for the existence of Hoare logics. In Proc. 18th Symp. Found. Comput. Sci., pages 1–6. IEEE, 1977.
[Luckham et al., 1970] D. C. Luckham, D. Park, and M. Paterson. On formalized computer programs. J. Comput. Syst. Sci., 4:220–249, 1970.
[Mader, 1997] A. Mader. Verification of Modal Properties Using Boolean Equation Systems. PhD thesis, Fakultät für Informatik, Technische Universität München, September 1997.
[Makowsky and Sain, 1986] J. A. Makowsky and I. Sain. On the equivalence of weak second-order and nonstandard time semantics for various program verification systems. In Proc. 1st Symp. Logic in Comput. Sci., pages 293–300. IEEE, 1986.
[Makowsky, 1980] J. A. Makowsky. Measuring the expressive power of dynamic logics: an application of abstract model theory. In Proc. 7th Int. Colloq. Automata Lang. Prog., volume 80 of Lect. Notes in Comput. Sci., pages 409–421. Springer-Verlag, 1980.
[Makowsky and Tiomkin, 1980] J. A. Makowsky and M. L. Tiomkin. Probabilistic propositional dynamic logic. Manuscript, 1980.
[Manna and Pnueli, 1981] Z. Manna and A. Pnueli. Verification of concurrent programs: temporal proof principles. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 200–252. Springer-Verlag, 1981.
[Manna and Pnueli, 1987] Z. Manna and A. Pnueli. Specification and verification of concurrent programs by ∀-automata. In Proc. 14th Symp. Principles of Programming Languages, pages 1–12. ACM, January 1987.
[Manna, 1974] Z. Manna. Mathematical Theory of Computation. McGraw-Hill, 1974.
[McCulloch and Pitts, 1943] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophysics, 5:115–143, 1943.
[Mehlhorn, 1984] K. Mehlhorn. Graph Algorithms and NP-Completeness, volume II of Data Structures and Algorithms. Springer-Verlag, 1984.
[Meyer and Halpern, 1982] A. R. Meyer and J. Y. Halpern. Axiomatic definitions of programming languages: a theoretical assessment. J. Assoc. Comput. Mach., 29:555–576, 1982.
[Meyer and Parikh, 1981] A. R. Meyer and R. Parikh. Definability in dynamic logic. J. Comput. Syst. Sci., 23:279–298, 1981.
[Meyer and Tiuryn, 1981] A. R. Meyer and J. Tiuryn. A note on equivalences among logics of programs. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 282–299. Springer-Verlag, 1981.
[Meyer and Tiuryn, 1984] A. R. Meyer and J. Tiuryn. Equivalences among logics of programs. Journal of Computer and Systems Science, 29:160–170, 1984.
[Meyer and Winklmann, 1982] A. R. Meyer and K. Winklmann. Expressing program looping in regular dynamic logic. Theor. Comput. Sci., 18:301–323, 1982.


[Meyer et al., 1981] A. R. Meyer, R. S. Streett, and G. Mirkowska. The deducibility problem in propositional dynamic logic. In E. Engeler, editor, Proc. Workshop Logic of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 12–22. Springer-Verlag, 1981.
[Miller, 1976] G. L. Miller. Riemann's hypothesis and tests for primality. J. Comput. Syst. Sci., 13:300–317, 1976.
[Mirkowska, 1971] G. Mirkowska. On formalized systems of algorithmic logic. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astron. Phys., 19:421–428, 1971.
[Mirkowska, 1980] G. Mirkowska. Algorithmic logic with nondeterministic programs. Fund. Informaticae, III:45–64, 1980.
[Mirkowska, 1981a] G. Mirkowska. PAL – propositional algorithmic logic. In E. Engeler, editor, Proc. Workshop Logic of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 23–101. Springer-Verlag, 1981.
[Mirkowska, 1981b] G. Mirkowska. PAL – propositional algorithmic logic. Fund. Informaticae, IV:675–760, 1981.
[Morgan et al., 1999] C. Morgan, A. McIver, and K. Seidel. Probabilistic predicate transformers. ACM Trans. Programming Languages and Systems, 8(1):1–30, 1999.
[Moschovakis, 1974] Y. N. Moschovakis. Elementary Induction on Abstract Structures. North-Holland, 1974.
[Muller et al., 1988] D. E. Muller, A. Saoudi, and P. E. Schupp. Weak alternating automata give a simple explanation of why most temporal and dynamic logics are decidable in exponential time. In Proc. 3rd Symp. Logic in Computer Science, pages 422–427. IEEE, July 1988.
[Németi, 1980] I. Németi. Every free algebra in the variety generated by the representable dynamic algebras is separable and representable. Manuscript, 1980.
[Németi, 1981] I. Németi. Nonstandard dynamic logic. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 311–348. Springer-Verlag, 1981.
[Ng and Tarski, 1977] K. C. Ng and A. Tarski. Relation algebras with transitive closure, abstract 742-02-09. Notices Amer. Math. Soc., 24:A29–A30, 1977.
[Ng, 1984] K. C. Ng. Relation Algebras with Transitive Closure. PhD thesis, University of California, Berkeley, 1984.
[Nishimura, 1979] H. Nishimura. Sequential method in propositional dynamic logic. Acta Informatica, 12:377–400, 1979.
[Nishimura, 1980] H. Nishimura. Descriptively complete process logic. Acta Informatica, 14:359–369, 1980.
[Niwiński, 1984] D. Niwiński. The propositional µ-calculus is more expressive than the propositional dynamic logic of looping. University of Warsaw, 1984.
[Parikh and Mahoney, 1983] R. Parikh and A. Mahoney. A theory of probabilistic programs. In E. Clarke and D. Kozen, editors, Proc. Workshop on Logics of Programs, volume 164 of Lect. Notes in Comput. Sci., pages 396–402. Springer-Verlag, 1983.
[Parikh, 1978a] R. Parikh. The completeness of propositional dynamic logic. In Proc. 7th Symp. on Math. Found. of Comput. Sci., volume 64 of Lect. Notes in Comput. Sci., pages 403–415. Springer-Verlag, 1978.
[Parikh, 1978b] R. Parikh. A decidability result for second order process logic. In Proc. 19th Symp. Found. Comput. Sci., pages 177–183. IEEE, 1978.
[Parikh, 1981] R. Parikh. Propositional dynamic logics of programs: a survey. In E. Engeler, editor, Proc. Workshop on Logics of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 102–144. Springer-Verlag, 1981.
[Parikh, 1983] R. Parikh. Propositional game logic. In Proc. 23rd IEEE Symp. Foundations of Computer Science, 1983.
[Park, 1976] D. Park. Finiteness is µ-ineffable. Theor. Comput. Sci., 3:173–181, 1976.
[Paterson and Hewitt, 1970] M. S. Paterson and C. E. Hewitt. Comparative schematology. In Record Project MAC Conf. on Concurrent Systems and Parallel Computation, pages 119–128. ACM, 1970.
[Pécuchet, 1986] J. P. Pécuchet. On the complementation of Büchi automata. Theor. Comput. Sci., 47:95–98, 1986.


[Peleg, 1987a] D. Peleg. Communication in concurrent dynamic logic. J. Comput. Sys. Sci., 35:23–58, 1987.
[Peleg, 1987b] D. Peleg. Concurrent dynamic logic. J. Assoc. Comput. Mach., 34(2):450–479, 1987.
[Peleg, 1987c] D. Peleg. Concurrent program schemes and their logics. Theor. Comput. Sci., 55:1–45, 1987.
[Peng and Iyer, 1995] W. Peng and S. Purushothaman Iyer. A new type of pushdown tree automata on infinite trees. Int. J. of Found. of Comput. Sci., 6(2):169–186, 1995.
[Peterson, 1978] G. L. Peterson. The power of tests in propositional dynamic logic. Technical Report 47, Comput. Sci. Dept., Univ. of Rochester, 1978.
[Pnueli, 1977] A. Pnueli. The temporal logic of programs. In Proc. 18th Symp. Found. Comput. Sci., pages 46–57. IEEE, 1977.
[Pratt, 1976] V. R. Pratt. Semantical considerations on Floyd-Hoare logic. In Proc. 17th Symp. Found. Comput. Sci., pages 109–121. IEEE, 1976.
[Pratt, 1978] V. R. Pratt. A practical decision method for propositional dynamic logic. In Proc. 10th Symp. Theory of Comput., pages 326–337. ACM, 1978.
[Pratt, 1979a] V. R. Pratt. Dynamic algebras: examples, constructions, applications. Technical Report TM-138, MIT/LCS, July 1979.
[Pratt, 1979b] V. R. Pratt. Models of program logics. In Proc. 20th Symp. Found. Comput. Sci., pages 115–122. IEEE, 1979.
[Pratt, 1979c] V. R. Pratt. Process logic. In Proc. 6th Symp. Princip. Prog. Lang., pages 93–100. ACM, 1979.
[Pratt, 1980a] V. R. Pratt. Dynamic algebras and the nature of induction. In Proc. 12th Symp. Theory of Comput., pages 22–28. ACM, 1980.
[Pratt, 1980b] V. R. Pratt. A near-optimal method for reasoning about actions. J. Comput. Syst. Sci., 20(2):231–254, 1980.
[Pratt, 1981a] V. R. Pratt. A decidable µ-calculus: preliminary report. In Proc. 22nd Symp. Found. Comput. Sci., pages 421–427. IEEE, 1981.
[Pratt, 1981b] V. R. Pratt. Using graphs to understand PDL. In D. Kozen, editor, Proc. Workshop on Logics of Programs, volume 131 of Lect. Notes in Comput. Sci., pages 387–396. Springer-Verlag, 1981.
[Pratt, 1988] V. Pratt. Dynamic algebras as a well-behaved fragment of relation algebras. In D. Pigozzi, editor, Proc. Conf. on Algebra and Computer Science, volume 425 of Lecture Notes in Computer Science, pages 77–110, Ames, Iowa, June 1988. Springer-Verlag.
[Pratt, 1990] V. Pratt. Action logic and pure induction. In J. van Eijck, editor, Proc. Logics in AI: European Workshop JELIA '90, volume 478 of Lecture Notes in Computer Science, pages 97–120, New York, September 1990. Springer-Verlag.
[Rabin and Scott, 1959] M. O. Rabin and D. S. Scott. Finite automata and their decision problems. IBM J. Res. Develop., 3(2):115–125, 1959.
[Rabin, 1980] M. O. Rabin. Probabilistic algorithms for testing primality. J. Number Theory, 12:128–138, 1980.
[Ramshaw, 1981] L. H. Ramshaw. Formalizing the analysis of algorithms. PhD thesis, Stanford Univ., 1981.
[Rasiowa and Sikorski, 1963] H. Rasiowa and R. Sikorski. Mathematics of Metamathematics. Polish Scientific Publishers, PWN, 1963.
[Redko, 1964] V. N. Redko. On defining relations for the algebra of regular events. Ukrain. Mat. Z., 16:120–126, 1964. In Russian.
[Reif, 1980] J. Reif. Logics for probabilistic programming. In Proc. 12th Symp. Theory of Comput., pages 8–13. ACM, 1980.
[Rogers, 1967] H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.
[Safra, 1988] S. Safra. On the complexity of ω-automata. In Proc. 29th Symp. Foundations of Comput. Sci., pages 319–327. IEEE, October 1988.


[Sakarovitch, 1987] J. Sakarovitch. Kleene's theorem revisited: A formal path from Kleene to Chomsky. In A. Kelemenová and J. Keleman, editors, Trends, Techniques, and Problems in Theoretical Computer Science, volume 281 of Lecture Notes in Computer Science, pages 39–50, New York, 1987. Springer-Verlag.
[Salomaa, 1966] A. Salomaa. Two complete axiom systems for the algebra of regular events. J. Assoc. Comput. Mach., 13(1):158–169, January 1966.
[Salwicki, 1970] A. Salwicki. Formalized algorithmic languages. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astron. Phys., 18:227–232, 1970.
[Salwicki, 1977] A. Salwicki. Algorithmic logic: a tool for investigations of programs. In Butts and Hintikka, editors, Logic Foundations of Mathematics and Computability Theory, pages 281–295. Reidel, 1977.
[Sazonov, 1980] V. Y. Sazonov. Polynomial computability and recursivity in finite domains. Elektronische Informationsverarbeitung und Kybernetik, 16:319–323, 1980.
[Scott and de Bakker, 1969] D. S. Scott and J. W. de Bakker. A theory of programs. IBM Vienna, 1969.
[Segala and Lynch, 1994] R. Segala and N. Lynch. Probabilistic simulations for probabilistic processes. In Proc. CONCUR'94, volume 836 of Lecture Notes in Comput. Sci., pages 481–496. Springer-Verlag, 1994.
[Segerberg, 1977] K. Segerberg. A completeness theorem in the modal logic of programs (preliminary report). Not. Amer. Math. Soc., 24(6):A-552, 1977.
[Shoenfield, 1967] J. R. Shoenfield. Mathematical Logic. Addison-Wesley, 1967.
[Scholz, 1952] H. Scholz. Ein ungelöstes Problem in der symbolischen Logik. The Journal of Symbolic Logic, 17:160, 1952.
[Sistla and Clarke, 1982] A. P. Sistla and E. M. Clarke. The complexity of propositional linear temporal logics. In Proc. 14th Symp. Theory of Comput., pages 159–168. ACM, 1982.
[Sistla et al., 1987] A. P. Sistla, M. Y. Vardi, and P. Wolper. The complementation problem for Büchi automata with application to temporal logic. Theor. Comput. Sci., 49:217–237, 1987.
[Sokolsky and Smolka, 1994] O. Sokolsky and S. Smolka. Incremental model checking in the modal µ-calculus. In D. Dill, editor, Proc. Conf. Computer Aided Verification, volume 818 of Lect. Notes in Comput. Sci., pages 352–363. Springer, June 1994.
[Steffen et al., 1996] B. Steffen, T. Margaria, A. Classen, V. Braun, R. Nisius, and M. Reitenspiess. A constraint-oriented service environment. In T. Margaria and B. Steffen, editors, Proc. Second Int. Workshop Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96), volume 1055 of Lect. Notes in Comput. Sci., pages 418–421. Springer, March 1996.
[Stirling and Walker, 1989] C. Stirling and D. Walker. Local model checking in the modal µ-calculus. In Proc. Int. Joint Conf. Theory and Practice of Software Develop. (TAPSOFT'89), volume 352 of Lect. Notes in Comput. Sci., pages 369–383. Springer, March 1989.
[Stirling, 1992] C. Stirling. Modal and temporal logics. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, pages 477–563. Clarendon Press, 1992.
[Stockmeyer and Meyer, 1973] L. J. Stockmeyer and A. R. Meyer. Word problems requiring exponential time. In Proc. 5th Symp. Theory of Computing, pages 1–9, New York, 1973. ACM.
[Stolboushkin and Taitslin, 1983] A. P. Stolboushkin and M. A. Taitslin. Deterministic dynamic logic is strictly weaker than dynamic logic. Infor. and Control, 57:48–55, 1983.
[Stolboushkin, 1983] A. P. Stolboushkin. Regular dynamic logic is not interpretable in deterministic context-free dynamic logic. Information and Computation, 59:94–107, 1983.
[Stolboushkin, 1989] A. P. Stolboushkin. Some complexity bounds for dynamic logic. In Proc. 4th Symp. Logic in Comput. Sci., pages 324–332. IEEE, June 1989.
[Streett and Emerson, 1984] R. Streett and E. A. Emerson. The propositional µ-calculus is elementary. In Proc. 11th Int. Colloq. on Automata Languages and Programming, pages 465–472. Springer, 1984. Lect. Notes in Comput. Sci. 172.

216

DAVID HAREL, DEXTER KOZEN, AND JERZY TIURYN

[Streett, 1981] R. S. Streett. Propositional dynamic logic of looping and converse. In Proc. 13th Symp. Theory of Comput., pages 375{381. ACM, 1981. [Streett, 1982] R. S. Streett. Propositional dynamic logic of looping and converse is elementarily decidable. Infor. and Control, 54:121{141, 1982. [Streett, 1985a] R. S. Streett. Fixpoints and program looping: reductions from the propositional -calculus into propositional dynamic logics of looping. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 359{372. Springer-Verlag, 1985. [Streett, 1985b] R. Streett. Fixpoints and program looping: reductions from the propositional -calculus into propositional dynamic logics of looping. In Parikh, editor, Proc. Workshop on Logics of Programs 1985, pages 359{372. Springer, 1985. Lect. Notes in Comput. Sci. 193. [Tarjan, 1981] R. E. Tarjan. A uni ed approach to path problems. J. Assoc. Comput. Mach., pages 577{593, 1981. [Thiele, 1966] H. Thiele. Wissenschaftstheoretische untersuchungen in algorithmischen sprachen. In Theorie der Graphschemata-Kalkale Veb Deutscher Verlag der Wissenschaften. Berlin, 1966. [Thomas, 1997] W. Thomas. Languages, automata, and logic. Technical Report 9607, Christian-Albrechts-Universitat Kiel, May 1997. [Tiuryn and Urzyczyn, 1983] J. Tiuryn and P. Urzyczyn. Some relationships between logics of programs and complexity theory. In Proc. 24th Symp. Found. Comput. Sci., pages 180{184. IEEE, 1983. [Tiuryn and Urzyczyn, 1984] J. Tiuryn and P. Urzyczyn. Remarks on comparing expressive power of logics of programs. In Chytil and Koubek, editors, Proc. Math. Found. Comput. Sci., volume 176 of Lect. Notes in Comput. Sci., pages 535{543. Springer-Verlag, 1984. [Tiuryn and Urzyczyn, 1988] J. Tiuryn and P. Urzyczyn. Some relationships between logics of programs and complexity theory. Theor. Comput. Sci., 60:83{108, 1988. [Tiuryn, 1981a] J. Tiuryn. A survey of the logic of eective de nitions. In E. 
Engeler, editor, Proc. Workshop on Logics of Programs, volume 125 of Lect. Notes in Comput. Sci., pages 198{245. Springer-Verlag, 1981. [Tiuryn, 1981b] J. Tiuryn. Unbounded program memory adds to the expressive power of rst-order programming logics. In Proc. 22nd Symp. Found. Comput. Sci., pages 335{339. IEEE, 1981. [Tiuryn, 1984] J. Tiuryn. Unbounded program memory adds to the expressive power of rst-order programming logics. Infor. and Control, 60:12{35, 1984. [Tiuryn, 1986] J. Tiuryn. Higher-order arrays and stacks in programming: an application of complexity theory to logics of programs. In Gruska and Rovan, editors, Proc. Math. Found. Comput. Sci., volume 233 of Lect. Notes in Comput. Sci., pages 177{198. Springer-Verlag, 1986. [Tiuryn, 1989] J. Tiuryn. A simpli ed proof of DDL < DL. Information and Computation, 81:1{12, 1989. [Trnkova and Reiterman, 1980] V. Trnkova and J. Reiterman. Dynamic algebras which are not Kripke structures. In Proc. 9th Symp. on Math. Found. Comput. Sci., pages 528{538, 1980. [Turing, 1936] A. M. Turing. On computable numbers with an application to the Entscheidungsproblem. Proc. London Math. Soc., 42:230{265, 1936. Erratum: Ibid., 43 (1937), pp. 544{546. [Urzyczyn, 1983] P. Urzyczyn. A necessary and suÆcient condition in order that a Herbrand interpretation be expressive relative to recursive programs. Information and Control, 56:212{219, 1983. [Urzyczyn, 1986] P. Urzyczyn. \During" cannot be expressed by \after". Journal of Computer and System Sciences, 32:97{104, 1986. [Urzyczyn, 1987] P. Urzyczyn. Deterministic context-free dynamic logic is more expressive than deterministic dynamic logic of regular programs. Fundamenta Informaticae, 10:123{142, 1987. [Urzyczyn, 1988] P. Urzyczyn. Logics of programs with Boolean memory. Fundamenta Informaticae, XI:21{40, 1988.

DYNAMIC LOGIC

217

[Valiev, 1980] M. K. Valiev. Decision complexity of variants of propositional dynamic logic. In Proc. 9th Symp. Math. Found. Comput. Sci., volume 88 of Lect. Notes in Comput. Sci., pages 656{664. Springer-Verlag, 1980. [van Emde Boas, 1978] P. van Emde Boas. The connection between modal logic and algorithmic logics. In Symp. on Math. Found. of Comp. Sci., pages 1{15, 1978. [Vardi and Stockmeyer, 1985] M. Y. Vardi and L. Stockmeyer. Improved upper and lower bounds for modal logics of programs: preliminary report. In Proc. 17th Symp. Theory of Comput., pages 240{251. ACM, May 1985. [Vardi and Wolper, 1986a] M. Y. Vardi and P. Wolper. An automata-theoretic approach to automatic program veri cation. In Proc. 1st Symp. Logic in Computer Science, pages 332{344. IEEE, June 1986. [Vardi and Wolper, 1986b] M. Y. Vardi and P. Wolper. Automata-theoretic techniques for modal logics of programs. J. Comput. Syst. Sci., 32:183{221, 1986. [Vardi, 1985a] M. Y. Vardi. Automatic veri cation of probabilistic concurrent nitestate programs. In Proc. 26th Symp. Found. Comput. Sci., pages 327{338. IEEE, October 1985. [Vardi, 1985b] M. Y. Vardi. The taming of the converse: reasoning about two-way computations. In R. Parikh, editor, Proc. Workshop on Logics of Programs, volume 193 of Lect. Notes in Comput. Sci., pages 413{424. Springer-Verlag, 1985. [Vardi, 1987] M. Y. Vardi. Veri cation of concurrent programs: the automata-theoretic framework. In Proc. 2nd Symp. Logic in Comput. Sci., pages 167{176. IEEE, June 1987. [Vardi, 1998a] M. Vardi. Linear vs. branching time: a complexity-theoretic perspective. In Proc. 13th Symp. Logic in Comput. Sci., pages 394{405. IEEE, 1998. [Vardi, 1998b] M. Y. Vardi. Reasoning about the past with two-way automata. In Proc. 25th Int. Colloq. Automata Lang. Prog., volume 1443 of Lect. Notes in Comput. Sci., pages 628{641. Springer-Verlag, July 1998. [Walukiewicz, 1993] I. Walukiewicz. Completeness result for the propositional calculus. In Proc. 8th IEEE Symp. 
Logic in Comput. Sci., June 1993. [Walukiewicz, 1995] I. Walukiewicz. Completeness of Kozen's axiomatisation of the propositional -calculus. In Proc. 10th Symp. Logic in Comput. Sci., pages 14{24. IEEE, June 1995. [Walukiewicz, 2000] I. Walukiewicz. Completeness of Kozen's axiomatisation of the propositional -calculus. Infor. and Comput., 157(1{2):142{182, February{March 2000. [Wand, 1978] M. Wand. A new incompleteness result for Hoare's system. J. Assoc. Comput. Mach., 25:168{175, 1978. [Wolper, 1981] P. Wolper. Temporal logic can be more expressive. In Proc. 22nd Symp. Foundations of Computer Science, pages 340{348. IEEE, 1981. [Wolper, 1983] P. Wolper. Temporal logic can be more expressive. Infor. and Control, 56:72{99, 1983.

HENRY PRAKKEN & GERARD VREESWIJK

LOGICS FOR DEFEASIBLE ARGUMENTATION

1 INTRODUCTION

Logic is the science that deals with the formal principles and criteria of validity of patterns of inference. This chapter surveys logics for a particular group of patterns of inference, namely those where arguments for and against a certain claim are produced and evaluated, to test the tenability of the claim. Such reasoning processes are usually analysed under the common term `defeasible argumentation'. We shall illustrate this form of reasoning with a dispute between two persons, A and B. They disagree on whether it is morally acceptable for a newspaper to publish a certain piece of information concerning a politician's private life.[1] Let us assume that the two parties have reached agreement on the following points. (1) The piece of information I concerns the health of person P; (2) P does not agree with publication of I; (3) Information concerning a person's health is information concerning that person's private life.

A now states the moral principle that (4) Information concerning a person's private life may not be published if that person does not agree with publication. And A says: `So the newspapers may not publish I' (Fig. 1). Although B accepts principle (4) and is therefore now committed to (1-4), B still refuses to accept the conclusion that the newspapers may not publish I. B motivates his refusal by replying that: (5) P is a cabinet minister; (6) I is about a disease that might affect P's political functioning; (7) Information about things that might affect a cabinet minister's political functioning has public significance. Furthermore, B maintains that there is also the moral principle that (8) Newspapers may publish any information that has public significance.

[1] Adapted from [Sartor, 1994].


[Figure 1. A's argument: premises (1)-(3) yield the intermediate conclusions that I concerns the private life of P and that P does not permit publication of I; together with principle (4) these support the final conclusion that the newspapers may not publish I.]

B concludes by saying that therefore the newspapers may write about P's disease (Fig. 2). A agrees with (5-7) and even accepts (8) as a moral principle, but A does not give up his initial claim. (It is assumed that A and B are both male.) Instead he tries to defend it by arguing that he has the stronger argument: he does so by arguing that in this case (9) The likelihood that the disease mentioned in I affects P's functioning is small, and (10) If the likelihood that the disease mentioned in I affects P's functioning is small, then principle (4) has priority over principle (8). Thus it can be derived that the principle used in A's first argument is stronger than the principle used by B (Fig. 3), which makes A's first argument stronger than B's, so that it follows after all that the newspapers should be silent about P's disease. Let us examine the various stages of this dispute in some detail. Intuitively, it seems obvious that the accepted basis for discussion after A has stated (4) and B has accepted it, viz. (1,2,3,4), warrants the conclusion that the piece of information I may not be published. However, after B's counterargument and A's acceptance of its premises (5-8) things have changed.


[Figure 2. B's argument: premises (5)-(7) yield the intermediate conclusions that I is about a disease that might affect a cabinet minister's political functioning and that I has public significance; together with principle (8) these support the final conclusion that the newspapers may publish I.]

At this stage the joint basis for discussion is (1-8), which gives rise to two conflicting arguments. Moreover, (1-8) does not yield reasons to prefer one argument over the other: so at this point A's conclusion has ceased to be warranted. But then A's second argument, which states a preference between the two conflicting moral principles, tips the balance in favour of his first argument: so after the basis for discussion has been extended to (1-10), we must again accept A's moral claim as warranted. This chapter is about logical systems that formalise this kind of reasoning. We shall call them `logics for defeasible argumentation', or `argumentation systems'. As the example shows, these systems lack one feature of `standard', deductive logic (say, first-order predicate logic, FOL). The notion of `warrant' that we used in explaining the example is clearly not the same as first-order logical consequence, which has the property of monotonicity: in FOL any conclusion that can be drawn from a given set of premises remains valid if we add new premises to this set. So according to FOL, if A's claim is implied by (1-4), it is surely also implied by (1-8). From the point of view of FOL it is pointless for B to accept (1-4) and yet state a counterargument; B should instead have refused to accept one of the premises, for instance (4).


[Figure 3. A's priority argument: premises (9) and (10) support the conclusion that principle (4) has priority over principle (8).]

Does this mean that our informal account of the example is misleading, that it conceals a subtle change in the interpretation of, say, (4) as the dispute progresses? This is not so easy to answer in general. Although in some cases it might indeed be best to analyse an argument move like B's as a reinterpretation of a premise, in other cases this is different. In actual reasoning, rules are not always neatly labelled with an exhaustive list of possible exceptions; rather, people are often forced to apply `rules of thumb' or `default rules' in the absence of evidence to the contrary, and it seems natural to analyse an argument like B's as an attempt to provide such evidence to the contrary. When the example is thus analysed, the force of the conclusions drawn in it can only be captured by a consequence notion that is nonmonotonic: although A's claim is warranted on the basis of (1-4), it is not warranted on the basis of (1-8). Such nonmonotonic consequence notions have been studied over the last twenty years in an area of artificial intelligence called `nonmonotonic reasoning' (recently the term `defeasible reasoning' has also become popular), and logics for defeasible argumentation are largely a result of this development. Some might say that the lack of the property of monotonicity disqualifies these notions from being notions of logical consequence: isn't the very idea of calling an inference `logical' that it is (given the premises) beyond any doubt? We are not so sure. Our view on logic is that it studies criteria of warrant, that is, criteria that determine the degree to which it is reasonable to accept logical conclusions, even though some of these conclusions are established non-deductively: sometimes it is reasonable to accept the conclusion of an argument even though the argument is not strong enough to establish its conclusion with absolute certainty.
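The shifting status of A's claim can be made concrete in a small sketch. This is a deliberately crude encoding of our own, purely to illustrate warrant relative to a basis; the function name and the representation of arguments as sets of statement numbers (1)-(10) are assumptions of the illustration, not part of any system discussed in this chapter.

```python
# Toy encoding of the dispute: A's claim is warranted on a basis if A's
# argument (1)-(4) is constructible there, and either B's counterargument
# (5)-(8) is not, or A's priority argument (9)-(10) breaks the tie.

def warranted(basis):
    pro = {1, 2, 3, 4} <= basis        # A's argument (Figure 1)
    con = {5, 6, 7, 8} <= basis        # B's counterargument (Figure 2)
    priority = {9, 10} <= basis        # A's priority argument (Figure 3)
    return pro and (not con or priority)
```

Extending the basis from (1-4) to (1-8) retracts the conclusion, and extending it further to (1-10) restores it: exactly the nonmonotonic pattern described above.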
Several ways to formalise nonmonotonic, or defeasible, reasoning have been studied. This chapter is not meant to survey all of them but only discusses the argument-based approach, which defines notions like argument, counterargument, attack and defeat, and defines consequence notions in terms of the interaction of arguments for and against certain conclusions. This approach was initiated by the philosopher John Pollock [1987], based


on his earlier work in epistemology, e.g. [1974], and the computer scientist Ronald Loui [1987]. As we shall see, argumentation systems are able to incorporate the traditional, monotonic notions of logical consequence as a special case, for instance, in their definition of what an argument is. The field of defeasible argumentation is relatively young, and researchers disagree on many issues, while the formal meta-theory is still in its early stages. Yet we think that the field has sufficiently matured to devote a handbook survey to it.[2] We aim to show that there are also many similarities and connections between the various systems, and that many differences are variations on a few basic notions, or are caused by different focus or different levels of abstraction. Moreover, we shall show that some recent developments pave the way for a more elaborate meta-theory of defeasible argumentation. Although when discussing individual systems we aim to be as formal as possible, when comparing them we shall mostly use conceptual or quasi-formal terms. We shall also report on some formal results on this comparison, but it is not our aim to present new technical results; this we regard as a task for further research in the field. The structure of this chapter is as follows. In Section 2 we give an overview of the main approaches in nonmonotonic reasoning, and argue why the study of this kind of reasoning is relevant not only for artificial intelligence but also for philosophy. In Section 3 we give a brief conceptual sketch of logics for defeasible argumentation, and we argue that it is not obvious that they need a model-theoretic semantics. In Section 4 we become formal, studying how semantic consequence notions for argumentation systems can be defined given a set of arguments ordered by a defeat relation. This discussion is still abstract, leaving the structure of arguments and the origin of the defeat relation largely unspecified.
In Section 5 we become more concrete, discussing particular logics for defeasible argumentation. Then in Section 6 we discuss one way in which argumentation systems can be formulated, viz. in the form of rules for dispute. We end this chapter in Section 7 with some concluding remarks, and with a list of the main open issues in the field.

2 NONMONOTONIC LOGICS: OVERVIEW AND PHILOSOPHICAL RELEVANCE

Before discussing argumentation systems, we place them in the context of the study of nonmonotonic reasoning, and discuss why this study deserves a place in philosophical logic.

[2] For a survey of this topic from a computer science perspective, see [Chesñevar et al., 1999].


2.1 Research in nonmonotonic reasoning

Although this chapter is not about nonmonotonic logics in general, it is still useful to give a brief impression of this field, to put systems for defeasible argumentation in context. Several styles of nonmonotonic logics exist. Most of them take as the basic `nonstandard' unit the notion of a default, or defeasible conditional or rule: this is a conditional that can be qualified with phrases like `typically', `normally' or `unless shown otherwise' (the two principles in our example may be regarded as defaults). Defaults do not guarantee that their consequent holds whenever their antecedent holds; instead they allow us in such cases to defeasibly derive their consequent, i.e., if nothing is known about exceptional circumstances. Most nonmonotonic logics aim to formalise this phenomenon of `default reasoning', but they do so in different ways. Firstly, they differ in whether the above qualifications are regarded as extra conditions in the antecedent of a default, as aspects of the use of a default, or as inherent in the meaning of a defeasible conditional operator. In addition, within each of these views on defaults, nonmonotonic logics differ in the technical means by which they formalise it. Let us briefly review the main approaches. (More detailed overviews can be found in e.g. [Brewka, 1991] and [Gabbay et al., 1994].)

Preferential entailment

Preferential entailment, e.g. [Shoham, 1988], is a model-theoretic approach based on standard first-order logic, which weakens the standard notion of entailment. The idea is that instead of checking all models of the premises to see if the conclusion holds, only some of the models are checked, viz. those in which as few exceptions to the defaults hold as possible. This technique is usually combined with the `extra condition' view on defaults, by adding a special `normality condition' to their antecedent, as in (1)

∀x (Bird(x) ∧ ¬ab1(x) → Canfly(x))

Informally, this reads as `Birds can fly, unless they are abnormal with respect to flying'. Let us now also assume that Tweety is a bird: (2) Bird(Tweety). We want to infer from (1) and (2) that Canfly(Tweety), since there is no reason to believe that ab1(Tweety). This inference is formalised by only looking at those models of (1,2) where the extensions of the ab_i predicates are minimal (with respect to set inclusion). Thus, since on the basis of (1) and (2) nothing is known about whether Tweety is an abnormal bird, there are both FOL-models of these premises where ab1(Tweety) is satisfied and FOL-models where this is not satisfied. The idea is then that we can


disregard the models satisfying ab1(Tweety), and only look at the models satisfying ¬ab1(Tweety); clearly, in all those models Canfly(Tweety) holds. The defeasibility of this inference can be shown by adding ab1(Tweety) to the premises. Then all models of the premises satisfy ab1(Tweety), and the preferred models are now those in which the extension of ab1 is {Tweety}. Some of those models satisfy Canfly(Tweety) but others satisfy ¬Canfly(Tweety), so we can no longer draw the conclusion Canfly(Tweety). A variant of this approach is Poole's [1988] `abductive framework for default reasoning'. Poole also represents defaults with normality conditions, but he does not define a new semantics. Instead, he recommends a new way of using first-order logic, viz. for constructing `extensions' of a theory. Essentially, extensions can be formed by adding as many normality statements to a theory as is consistently possible. The standard first-order models of a theory extension correspond to the preferred models of the original theory.

Intensional semantics for defaults

There are also intensional approaches to the semantics of defaults, e.g. [Delgrande, 1988; Asher & Morreau, 1990]. The idea is to interpret defaults in a possible-worlds semantics, and to evaluate their truth in a model by focusing on a subset of the set of possible worlds within a model. This is similar to the focusing on certain models of a theory in preferential entailment. On the other hand, intensional semantics capture the defeasibility of defaults not with extra normality conditions, but in the meaning of the conditional operator. This development draws its inspiration from the similarity semantics for counterfactuals in conditional logics, e.g. [Lewis, 1973]. In these logics a counterfactual conditional φ ⇒ ψ is true just in case ψ is true in a subset of the possible worlds in which φ is true, viz. in the possible worlds which resemble the actual world as much as possible, given that in them φ holds. Now the idea is to define, in a similar way, a possible-worlds semantics for defeasible conditionals. A defeasible conditional φ ⇒ ψ is roughly interpreted as `in all most normal worlds in which φ holds, ψ holds as well'. Obviously, if read in this way, then modus ponens is not valid for such conditionals, since even if φ holds in the actual world, the actual world need not be a normal world. This is different for counterfactual conditionals, where the actual world is always among the worlds most similar to itself. This difference means that intensional defeasible logics need a component that is absent in counterfactual logics, and which is similar to the selection of the `most normal' models in preferential entailment: in order to derive default conclusions from defeasible conditionals, the actual world is assumed to be as normal as possible given the premises. It is this assumption that makes the resulting conclusions defeasible: it validates modus ponens for those


defaults for which there is no evidence of exceptions.

Consistency and non-provability statements

Yet another approach is to make possible the expression of consistency or non-provability statements. This is, for instance, the idea behind Reiter's [1980] default logic, which extends first-order logic with constructs that technically play the role of inference rules, but that express domain-specific generalisations instead of logical inference principles. In default logic, the Tweety default can be written as follows:

Bird(x) : Canfly(x) / Canfly(x)

The middle part of this `default' can be used to express consistency statements. Informally the default reads as `If it is provable that Tweety is a bird, and it is not provable that Tweety cannot fly, then we may infer that Tweety can fly'. To see how this works, assume that in addition to this default we have a first-order theory

W = {Bird(Tweety), ∀x (Penguin(x) → ¬Canfly(x))}

Then (informally) since Canfly(Tweety) is consistent with what is known, we can apply the default to Tweety and defeasibly derive Canfly(Tweety) from W. That this inference is indeed defeasible becomes apparent if Penguin(Tweety) is also added to W: then ¬Canfly(Tweety) is classically entailed by what is known and the consistency check for applying the default fails, for which reason Canfly(Tweety) cannot be derived from W ∪ {Penguin(Tweety)}. This example seems straightforward, but the formal definition of default-logical consequence is tricky: in this approach, what is provable is determined by what is not provable, so the problem is how to avoid a circular definition. In default logic (as in related logics) this is solved by giving the definition a fixed-point appearance; see below in Section 5.4. Similar equilibrium-like definitions for argumentation systems will be discussed throughout this chapter.

Inconsistency handling

It has also been proposed to formalise defeasible reasoning as strategies for dealing with inconsistent information, e.g. by Brewka [1989]. In this approach defaults are formalised with ordinary material implications and without normality conditions, and their defeasible nature is captured in how they are used by the consistency handling strategies. In particular, in case of inconsistency, alternative consistent subsets (subtheories) of the premises give rise to alternative default conclusions, after which a choice can be made for the subtheory containing the exceptional rule.


In our birds example this works out as follows.

(1) bird → canfly
(2) penguin → ¬canfly
(3) bird
(4) penguin

The set {(1), (3)} is a subtheory supporting the conclusion canfly, while {(2), (4)} is a subtheory supporting the opposite. The exceptional nature of (2) over (1) can be captured by preferring the latter subtheory.

Systems for defeasible argumentation

Argumentation systems are yet another way to formalise nonmonotonic reasoning, viz. as the construction and comparison of arguments for and against certain conclusions. In these systems the basic notion is not that of a defeasible conditional but that of a defeasible argument. The idea is that the construction of arguments is monotonic, i.e., an argument stays an argument if more premises are added. Nonmonotonicity, or defeasibility, is not explained in terms of the interpretation of a defeasible conditional, but in terms of the interactions between conflicting arguments: in argumentation systems nonmonotonicity arises from the fact that new premises may give rise to stronger counterarguments, which defeat the original argument. So in the case of Tweety we may construct one argument that Tweety flies because it is a bird, and another argument that Tweety does not fly because it is a penguin, and then we may prefer the latter argument because it is about a specific class of birds, and is therefore an exception to the general rule. Argumentation systems can be combined with each of the above-discussed views on defaults. The `normality condition' view can be formalised by regarding an argument as a standard derivation from a set of premises augmented with normality statements. Thus a counterargument is an attack on such a normality statement. A variant of this method can be applied to the use of consistency and non-provability expressions. The `pragmatic' view on defaults (as in inconsistency handling) can be formalised by regarding arguments as standard derivations from consistent subsets of the premises. Here a counterargument attacks a premise of an argument. Finally, the `semantic' view on defaults could be formalised by allowing the construction of arguments with inference rules (such as modus ponens) that are invalid in the semantics. In that case a counterargument attacks the use of such an inference rule.
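The argument-and-defeat view of the Tweety example can be sketched in a few lines of Python. This is a toy encoding of our own, not any particular system discussed in this chapter; the rule names r1 and r2, the `-' prefix for negated conclusions, and the explicit preference relation are assumptions made for the illustration.

```python
# Toy sketch: defaults as named rules, arguments as applicable rules with
# their conclusions, and defeat decided by an explicit specificity
# preference between rules.

rules = [
    ("r1", ["bird"], "canfly"),       # birds fly (general default)
    ("r2", ["penguin"], "-canfly"),   # penguins do not fly (exception)
]
prefer = {("r2", "r1")}               # r2 is more specific, hence preferred

def arguments(rules, facts):
    """An argument is an applicable rule paired with its conclusion."""
    return [(n, c) for n, ants, c in rules if all(a in facts for a in ants)]

def justified(rules, facts, prefer):
    """Keep a conclusion only if no preferred argument for the opposite
    conclusion exists."""
    args = arguments(rules, facts)
    def negates(c1, c2):
        return c1 == "-" + c2 or c2 == "-" + c1
    return {c for n, c in args
            if not any(negates(c, c2) and (n2, n) in prefer
                       for n2, c2 in args)}
```

With facts {"bird", "penguin"} only "-canfly" is justified, because the penguin argument defeats the bird argument; with the fact "penguin" removed, "canfly" is restored. Adding a premise thus retracts a conclusion without invalidating the original argument as an argument.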
It is important to note, however, that argumentation systems have wider scope than just reasoning with defaults. Firstly, argumentation systems can be applied to any form of reasoning with contradictory information, whether the contradictions have to do with rules and exceptions or not. For instance, the contradictions may arise from reasoning with several sources


of information, or they may be caused by disagreement about beliefs or about moral, ethical or political claims. Moreover, it is important that several argumentation systems allow the construction and attack of arguments that are traditionally called `ampliative', such as inductive, analogical and abductive arguments; these reasoning forms fall outside the scope of most other nonmonotonic logics. Most argumentation systems have been developed in artificial intelligence research on nonmonotonic reasoning, although Pollock's work, which was the first logical formalisation of defeasible argumentation, was initially applied to the philosophy of knowledge and justification (epistemology). The first artificial intelligence paper on argumentation systems was [Loui, 1987]. One domain in which argumentation systems have become popular is legal reasoning [Loui et al., 1993; Prakken, 1993; Sartor, 1994; Gordon, 1995; Loui & Norman, 1995; Prakken & Sartor, 1996; Freeman & Farley, 1996; Prakken & Sartor, 1997a; Prakken, 1997; Gordon & Karacapilidis, 1997]. This is not surprising, since legal reasoning often takes place in an adversarial context, where notions like argument, counterargument, rebuttal and defeat are very common. However, argumentation systems have also been applied to such domains as medical reasoning [Das et al., 1996], negotiation [Parsons et al., 1998] and risk assessment in oil exploration [Clark, 1990].

2.2 Nonmonotonic reasoning: artificial intelligence or logic?

Usually, nonmonotonic logics are studied as a branch of artificial intelligence. However, it is more than justified to regard these logics as also part of philosophical logic. In fact, several issues in nonmonotonic logic have come up earlier in philosophy. For instance, in the context of moral reasoning, Ross [1930] has studied the notion of prima facie obligations. According to Ross an act is prima facie obligatory if it has a characteristic that makes the act (by virtue of an underlying moral principle) tend to be a `duty proper'. Fulfilling a promise is a prima facie duty because it is the fulfilment of a promise, i.e., because of the moral principle that one should do what one has promised to do. But the act may also have other characteristics which make the act tend to be forbidden. For instance, if John has promised a friend to visit him for a cup of tea, and then John's mother suddenly falls ill, then he also has a prima facie duty to do his mother's shopping, based, say, on the principle that we ought to help our parents when they need it. To find out what one's duty proper is, one should `consider all things', i.e., compare all prima facie duties that can be based on any aspect of the factual circumstances and find which one is `more incumbent' than any conflicting one. If we qualify the all-things-considered clause as `consider all things that you know', then the reasoning involved is clearly nonmonotonic: if we are first only told that John has promised his friend to visit him, then we conclude that John's duty proper is to visit his friend. But if we next also


hear that John's mother has become ill, we conclude instead that John's duty proper is to help his mother. The term `defeasibility' was first introduced not in logic but in legal philosophy, viz. by Hart [1949] (see the historical discussion in [Loui, 1995]). Hart observed that legal concepts are defeasible in the sense that the conditions under which a fact situation classifies as an instance of a legal concept (such as `contract') are only ordinarily, or presumptively, sufficient. If a party in a law suit succeeds in proving these conditions, this does not have the effect that the case is settled; instead, legal procedure is such that the burden of proof shifts to the opponent, whose turn it then is to prove additional facts which, despite the facts proven by the proponent, nevertheless prevent the claim from being granted (for instance, insanity of one of the contracting parties). Hart's discussion of this phenomenon stays within legal-procedural terms, but it is obvious that it provides a challenge for standard logic: an explanation is needed of how proving new facts, without rejecting what was proven by the other party, can reverse the outcome of a case. Toulmin [1958], who criticised the logicians of his day for neglecting many features of ordinary reasoning, was aware of the implications of this phenomenon for logic. In his well-known pictorial scheme for arguments he leaves room for rebuttals of an argument. He also urges logicians to take the procedural aspect (in the legal sense) of argumentation seriously. In particular, Toulmin argues that (outside mathematics) an argument is valid if it can stand against criticism in a properly conducted dispute, and that the task of logicians is to find criteria for when a dispute has been conducted properly. The notion of burden of proof, and its role in dialectical inquiry, has also been studied by Rescher [1977], in the context of epistemology.
Among other things, Rescher claims that a dialectical model of scienti c reasoning can explain the rational force of inductive arguments: they must be accepted if they cannot be successfully challenged in a properly conducted scienti c dispute. Rescher thereby assumes that the standards for constructing inductive arguments are somehow given by generally accepted practices of scienti c reasoning; he only focuses on the dialectical interaction between con icting inductive arguments. Another philosopher who has studied defeasible reasoning is John Pollock. Although his work, to be presented below, is also well-known in the eld of arti cial intelligence, it was initially a contribution to epistemology, with, like Rescher, much attention for induction as a form of defeasible reasoning. As this overview shows, a logical study of nonmonotonic, or defeasible reasoning fully deserves a place in philosophical logic. Let us now turn to the discussion of logics for defeasible argumentation.


HENRY PRAKKEN & GERARD VREESWIJK

3 SYSTEMS FOR DEFEASIBLE ARGUMENTATION: A CONCEPTUAL SKETCH

In this section we give a conceptual sketch of the general ideas behind logics for defeasible argumentation. These systems contain the following five elements (although sometimes implicitly): an underlying logical language, definitions of an argument, of conflicts between arguments and of defeat among arguments and, finally, a definition of the status of arguments, which can be used to define a notion of defeasible logical consequence.

Argumentation systems are built around an underlying logical language and an associated notion of logical consequence, defining the notion of an argument. As noted above, the idea is that this consequence notion is monotonic: new premises cannot invalidate arguments as arguments but only give rise to counterarguments. Some argumentation systems assume a particular logic, while other systems leave the underlying logic partly or wholly unspecified; thus these systems can be instantiated with various alternative logics, which makes them frameworks rather than systems.

The notion of an argument corresponds to a proof (or the existence of a proof) in the underlying logic. As for the layout of arguments, three basic formats can be distinguished in the literature on argumentation systems, all familiar from the logic literature. Sometimes arguments are defined as a tree of inferences grounded in the premises, and sometimes as a sequence of such inferences, i.e., as a deduction. Finally, some systems simply define an argument as a premises-conclusion pair, leaving implicit that the underlying logic validates a proof of the conclusion from the premises. One argumentation system, viz. Dung [1995], leaves the internal structure of an argument completely unspecified. Dung treats the notion of an argument as a primitive, and exclusively focuses on the ways arguments interact. Thus Dung's framework is of the most abstract kind.

[Figure 4 here: two inference trees from premises p, q, r, s to conclusion t. On the left the conclusion t is attacked by an argument for ¬t; on the right the inference rule, written in flat text as p, q, r, s/t, is denied by ¬⌈p, q, r, s/t⌉.]

Figure 4. Rebutting attack (left) vs. undercutting attack (right).

The notions of an underlying logic and an argument still fit with the standard picture of what a logical system is. The remaining three elements are what make an argumentation system a framework for defeasible argumentation.


[Figure 5 here: an argument for ¬p attacking an argument involving p, once directed at p itself (direct attack) and once directed at p as a subconclusion of a larger argument for q (indirect attack).]

Figure 5. Direct attack (left) vs. indirect attack (right).

The first is the notion of a conflict between arguments (also used are the terms `attack' and `counterargument'). In the literature, three types of conflict are discussed. The first type is when arguments have contradictory conclusions, as in `Tweety flies, because it is a bird' and `Tweety does not

fly because it is a penguin' (cf. the left part of Fig. 4). Clearly, this form of attack, which is often called rebutting an argument, is symmetric. The other two types of conflict are not symmetric. One is where one argument makes a non-provability assumption (as in default logic) and another argument proves what was assumed unprovable by the first. For example, an argument `Tweety flies because it is a bird, and it is not provable that Tweety is a penguin' is attacked by any argument with conclusion `Tweety is a penguin'. We shall call this assumption attack. The final type of conflict (first discussed by Pollock [1970]) is when one argument challenges, not a proposition, but a rule of inference of another argument (cf. the right part of Fig. 4). After Pollock, this is usually called undercutting an inference. Obviously, a rule of inference can only be undercut if it is not deductive. Non-deductive rules of inference occur in argumentation systems that allow inductive, abductive or analogical arguments. To consider an example, the inductive argument `Raven 101 is black since the observed ravens raven1 . . . raven100 were black' is undercut by an argument `I saw raven102, which was white'. In order to formalise this type of conflict, the rule of inference that is to be undercut (in Fig. 4: the rule that is enclosed in the dotted box, in flat text written as p, q, r, s/t) must be expressed in the object language, as ⌈p, q, r, s/t⌉, and denied: ¬⌈p, q, r, s/t⌉.3 Note that all these senses of attack have a direct and an indirect version; indirect attack is directed against a subconclusion or a substep of an argument, as illustrated by Figure 5 for indirect rebutting.
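The three forms of conflict just described can be given a minimal computational sketch. The following Python fragment is our own illustration, not part of any particular system discussed in this chapter: the Argument class, the `~` prefix for negation, and the treatment of rule names as object-level propositions (a stand-in for the ceiling-bracket conversion) are all simplifying assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    conclusion: str
    assumptions: list = field(default_factory=list)  # propositions assumed unprovable
    rules: list = field(default_factory=list)        # names of non-deductive rules used

def negate(p):
    """Toggle a `~` prefix: negate('flies') == '~flies', and back."""
    return p[1:] if p.startswith("~") else "~" + p

def rebuts(a, b):
    """Rebutting attack: contradictory conclusions. Note that it is symmetric."""
    return a.conclusion == negate(b.conclusion)

def assumption_attacks(a, b):
    """Assumption attack: a proves what b assumed to be unprovable."""
    return a.conclusion in b.assumptions

def undercuts(a, b):
    """Undercutting attack: a denies a (named) non-deductive rule used by b."""
    return a.conclusion in [negate(r) for r in b.rules]

# Rebutting: `Tweety flies, because it is a bird' vs. `Tweety does not fly ...'
bird = Argument("flies")
penguin = Argument("~flies")

# Assumption attack: the default argument assumes `penguin' unprovable.
bird_default = Argument("flies", assumptions=["penguin"])
is_penguin = Argument("penguin")

# Undercutting: the inductive rule behind `raven 101 is black' is denied.
induction = Argument("black(raven101)", rules=["ravens-are-black"])
white_raven = Argument("~ravens-are-black")
```

With this encoding, rebutting is symmetric while the other two attacks are not, matching the discussion above.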
3 Ceiling brackets around a meta-level formula denote a conversion of that formula to the object language, provided that the object language is expressive enough to enable such a conversion.

The notion of conflicting, or attacking, arguments does not embody any form of evaluation; evaluating conflicting pairs of arguments, or in other words, determining whether an attack is successful, is another element of argumentation systems. It has the form of a binary relation between arguments, standing for `attacking and not weaker' (in a weak form) or `attacking and stronger' (in a strong form). The terminology varies: some terms that have been used are `defeat' [Prakken & Sartor, 1997b], `attack' [Dung, 1995; Bondarenko et al., 1997] and `interference' [Loui, 1998]. Other systems do not explicitly name this notion but leave it implicit in the definitions. In this chapter we shall use `defeat' for the weak notion and `strict defeat' for the strong, asymmetric notion. Note that the several forms of attack, rebutting vs. assumption vs. undercutting and direct vs. indirect, have their counterparts for defeat.

Argumentation systems vary in their grounds for the evaluation of arguments. In artificial intelligence the specificity principle, which prefers arguments based on the most specific defaults, is regarded by many as very important, but several researchers, e.g. Vreeswijk [1989], Pollock [1995] and Prakken & Sartor [1996], have argued that specificity is not a general principle of common-sense reasoning but just one of the many standards that might or might not be used. Moreover, some have claimed that general, domain-independent principles of defeat do not exist or are very weak, and that information from the semantics of the domain will be the most important way of deciding among competing arguments [Konolige, 1988; Vreeswijk, 1989]. For these reasons several argumentation systems are parametrised by user-provided criteria. Some, e.g. Prakken & Sartor, even argue that the evaluation criteria are debatable, just as the rest of the domain theory is, and that argumentation systems should therefore allow for defeasible arguments on these criteria. (Our example in the introduction contains such an argument, viz. A's use of a priority rule (10) based on the expected consequences of certain events.
This argument might, for instance, be attacked by an argument that in the case of important officials even a small likelihood that the disease affects the official's functioning justifies publication, or by an argument that the negative consequences of publication for the official are small.)

The notion of defeat is a binary relation on the set of arguments. It is important to note that this relation does not yet tell us with what arguments a dispute can be won; it only tells us something about the relative strength of two individual conflicting arguments. The ultimate status of an argument depends on the interaction between all available arguments: it may very well be that argument B defeats argument A, but that B is in turn defeated by a third argument C; in that case C `reinstates' A (see Figure 6).4

4 While in figures 4 and 5 the arrows stood for attack relations, from now on they will depict defeat relations.

Suppose, for instance, that the argument A that Tweety flies because it is a bird is regarded as being defeated by the argument B that


Tweety does not fly because it is a penguin (for instance, because conflicting arguments are compared with respect to specificity). And suppose that B is in turn defeated by an argument C, attacking B's intermediate conclusion that Tweety is a penguin. C might, for instance, say that the penguin observation was done with faulty instruments. In that case C reinstates argument A.

Figure 6. Argument C reinstates argument A.

Therefore, what is also needed is a definition of the status of arguments on the basis of all the ways in which they interact. Besides reinstatement, this definition must also capture the principle that an argument cannot be justified unless all its subarguments are justified (called the `compositionality principle' by Vreeswijk [1997]). There is a close relation between these two notions, since reinstatement often proceeds by indirect attack, i.e., attacking a subargument of the attacking argument. (Cf. Fig. 5 on page 231.) It is this definition of the status of arguments that produces the output of an argumentation system: it typically divides arguments into at least two classes: arguments with which a dispute can be `won' and arguments with which a dispute should be `lost'. Sometimes a third, intermediate category is also distinguished, of arguments that leave the dispute undecided. The terminology varies here also: terms that have been used are justified vs. defensible vs. defeated (or overruled), defeated vs. undefeated, in force vs. not in force, preferred vs. not preferred, etcetera. Unless indicated otherwise, this chapter shall use the terms `justified', `defensible' and `overruled' arguments.

These notions can be defined both in a `declarative' and in a `procedural' form. The declarative form, usually with fixed-point definitions, just declares certain sets of arguments as acceptable (given a set of premises and evaluation criteria), without defining a procedure for testing whether an argument is a member of this set; the procedural form amounts to defining just such a procedure. Thus the declarative form of an argumentation system can be regarded as its (argumentation-theoretic) semantics, and the procedural form as its proof theory. Note that it is very well possible that, while an argumentation system has an argumentation-theoretic semantics, at the same time its underlying logic for constructing arguments has a model-theoretic semantics in the usual sense, for instance, the semantics of standard first-order logic, or a possible-worlds semantics of some modal logic. In fact, this point is not universally accepted, and therefore we devote a separate subsection to it.


Semantics: model-theoretic or not?

A much-discussed issue is whether logics for nonmonotonic reasoning should have a model-theoretic semantics or not. In the early days of this field it was usual to criticise several systems (such as default logic) for the lack of a model-theoretic semantics. However, when such semantics were provided, this was not always felt to be a major step forward, unlike when, for instance, possible-worlds semantics for modal logic was introduced. In addition, several researchers argued that nonmonotonic reasoning needs a different kind of semantics than a model theory, viz. an argumentation-theoretic semantics. This is not the place to decide the discussion; instead we confine ourselves to presenting some main arguments that have been put forward for this view.

Traditionally, model theory has been used in logic to define the meaning of logical languages. Formulas of such languages were regarded as telling us something about reality (however defined). Model-theoretic semantics defines the meaning of logical symbols by defining what the world looks like if an expression with these symbols is true, and it defines logical consequence, entailment, by looking at what else must be true if the premises are true. For defaults this means that their semantics should be in terms of how the world normally, or typically, looks when defaults are true; logical consequence should, in this approach, be determined by looking at the most normal worlds, models or situations that satisfy the premises. However, others, e.g. Pollock [1991, p. 40], Vreeswijk [1993a, pp. 88-9] and Loui [1998], have argued that the meaning of defaults should not be found in a correspondence with reality, but in their role in dialectical inquiry. That a relation between premises and conclusion is defeasible means that a certain burden of proof is induced. In this approach, the central notions of defeasible reasoning are notions like attack, rebuttal and defeat among arguments, and these notions are not `propositional', for which reason their meaning is not naturally captured in terms of correspondence between a proposition and the world. This approach instead defines `argumentation-theoretic' semantics for such notions. The basic idea of such a semantics is to capture sets of arguments that are as large as possible, and adequately defend themselves against attacks on their members.

It should be noted that this approach does not deny the usefulness of model theory but only wants to define its proper place. Model theory should not be applied to things for which it is not suitable, but should be reserved for the initial components of an argumentation system, the notions of a logical language and a consequence relation defining what an argument is. It should also be noted, however, that some have proposed argumentation systems as proof theories for model-theoretic semantics of preferential entailment (in particular Geffner & Pearl [1992]). In our opinion, one criterion for success of such model-theoretic semantics of argumentation systems


is whether natural criteria for model preference can be defined. For certain restricted cases this seems possible, but whether this approach is extendable to more general argumentation systems, for instance, those allowing inductive, analogical or abductive arguments, remains to be investigated.

4 GENERAL FEATURES OF ARGUMENT-BASED SEMANTICS

Let us now, before looking at some systems in detail, become more formal about some of the notions that these systems have in common. We shall focus in particular on the semantics of argumentation systems, i.e., on the conditions that sets of justified arguments should satisfy. In line with the discussion at the end of Section 3, we can say that argumentation systems are not concerned with truth of propositions, but with justification of accepting a proposition as true. In particular, one is justified in accepting a proposition as true if there is an argument for the proposition that one is justified in accepting.

Let us concentrate on the task of defining the notion of a justified argument. Which properties should such a definition have? Let us assume as background a set of arguments, with a binary relation of `defeat' defined over it. Recall that we read `A defeats B' in the weak sense of `A conflicts with B and is not weaker than B'; so in some cases it may happen that A defeats B and B defeats A. For the moment we leave the internal structure of an argument unspecified, as well as the precise definition of defeat.5 Then a simple definition of the status of an argument is the following.

DEFINITION 1. Arguments are either justified or not justified.

1. An argument is justified if all arguments defeating it (if any) are not justified.
2. An argument is not justified if it is defeated by an argument that is justified.

This definition works well in simple cases, in which it is clear which arguments should emerge victorious, as in the following example.

EXAMPLE 2. Consider three arguments A, B and C such that B defeats A and C defeats B:

[Diagram: C defeats B, and B defeats A.]

5 This style of discussion is inspired by Dung [1995]; see further Subsection 5.1 below.


A concrete version of this example is

A = `Tweety flies because it is a bird'
B = `Tweety does not fly because it is a penguin'
C = `The observation that Tweety is a penguin is unreliable'

C is justified since it is not defeated by any other argument. This makes B not justified, since B is defeated by C. This in turn makes A justified: although A is defeated by B, A is reinstated by C, since C makes B not justified.

In other cases, however, Definition 1 is circular or ambiguous. Especially when arguments of equal strength interfere with each other, it is not clear which argument should remain undefeated.

EXAMPLE 3. (Even cycle.) Consider the arguments A and B such that A defeats B and B defeats A.

[Diagram: A and B defeat each other.]

A concrete example is

A = `Nixon was a pacifist because he was a quaker'
B = `Nixon was not a pacifist because he was a republican'

Can we regard A as justified? Yes, we can, if B is not justified. Can we regard B as not justified? Yes, we can, if A is justified. So, if we regard A as justified and B as not justified, Definition 1 is satisfied. However, it is obvious that by a completely symmetrical line of reasoning we can also regard B as justified and A as not justified. So there are two possible `status assignments' to A and B that satisfy Definition 1: one in which A is justified at the expense of B, and one in which B is justified at the expense of A. Yet intuitively, we are not justified in accepting either of them.

In the literature, two approaches to the solution of this problem can be found. The first approach consists of changing Definition 1 in such a way that there is always precisely one possible way to assign a status to arguments, and which is such that with `undecided conflicts' as in our example both of the conflicting arguments receive the status `not justified'. The second approach instead regards the existence of multiple status assignments not as a problem but as a feature: it allows for multiple assignments and defines an argument as `genuinely' justified if and only if it receives this status in all possible assignments. The following two subsections discuss the details of both approaches. First, however, another problem with Definition 1 must be explained, having to do with self-defeating arguments.

EXAMPLE 4. (Self-defeat.) Consider an argument L, such that L defeats L. Suppose L is not justified. Then all arguments defeating L are not


justified, so by clause 1 of Definition 1 L is justified. Contradiction. Suppose now L is justified. Then L is defeated by a justified argument, so by clause 2 of Definition 1 L is not justified. Contradiction.

[Diagram: L defeats itself.] Figure 7. A self-defeating argument.

Thus, Definition 1 implies that there are no self-defeating arguments. Yet the notion of self-defeating arguments seems intuitively plausible, as is illustrated by the following example.

EXAMPLE 5. (The Liar.) An elementary self-defeating argument can be fabricated on the basis of the so-called paradox of the Liar. There are many versions of this paradox. The one we use here runs as follows: Dutch people can be divided into two classes: people who always tell the truth, and people who always lie. Hendrik is a Dutch monk, and of Dutch monks we know that they tend to be consistent truth-tellers. Therefore, it is reasonable to assume that Hendrik is a consistent truth-teller. However, Hendrik says he is a liar. Is Hendrik a truth-teller or a liar? The Liar paradox is a paradox because either answer leads to a contradiction.

1. Suppose that Hendrik tells the truth. Then what Hendrik says must be true. So, Hendrik is a liar. Contradiction.
2. Suppose that Hendrik lies. Then what Hendrik says must be false. So, Hendrik is not a liar. Because Dutch people are either consistent truth-tellers or consistent liars, it follows that Hendrik always tells the truth. Contradiction.

From this paradox, a self-defeating argument L can be made out of (1):


[Diagram of argument L:
Dutch monks tend to be consistent truth-tellers; Hendrik is a Dutch monk
=> Hendrik is a consistent truth-teller
Hendrik says: "I lie"
=> Hendrik lies
=> Hendrik is not a consistent truth-teller]

If the argument for "Hendrik is not a consistent truth-teller" is as strong as its subargument for "Hendrik is a consistent truth-teller," then L defeats one of its own sub-arguments, and thus is a self-defeating argument.

In conclusion, it seems that Definition 1 needs another revision, to leave room for the existence of self-defeating arguments. Below we shall not discuss this in general terms since, perhaps surprisingly, generally applicable solutions to this problem are hard to find in the literature. Instead we shall discuss for each particular system how it deals with self-defeat.

4.1 The unique-status-assignment approach

The idea to enforce unique status assignments basically comes in two variants. The first defines status assignments in terms of some fixed-point operator, and the second involves a recursive definition of a justified argument, by introducing the notion of a subargument of an argument. We first discuss the fixed-point approach.

Fixed-point definitions

This approach, followed by e.g. Pollock [1987; 1992], Simari & Loui [1992] and Prakken & Sartor [1997b], can best be explained with the notion of `reinstatement' (see above, Section 3). The key observation is that an argument that is defeated by another argument can only be justified if it is reinstated by a third argument, viz. by a justified argument that defeats its defeater. This idea is captured by Dung's [1995] notion of acceptability.

DEFINITION 6. An argument A is acceptable with respect to a set S of arguments iff each argument defeating A is defeated by an argument in S.

The arguments in S can be seen as the arguments capable of reinstating A in case A is defeated.


However, the notion of acceptability is not sufficient. Consider in Example 3 the set S = {A}. It is easy to see that A is acceptable with respect to S, since all arguments defeating A (viz. B) are defeated by an argument in S, viz. A itself. Clearly, we do not want that an argument can reinstate itself, and this is the reason why a fixed-point operator must be used. Consider the following operator from [Dung, 1995], which for each set of arguments returns the set of all arguments that are acceptable to it.

DEFINITION 7. (Dung's [1995] grounded semantics.) Let Args be a set of arguments ordered by a binary relation of defeat,6 and let S ⊆ Args. Then the operator F is defined as follows:

F(S) = {A ∈ Args | A is acceptable with respect to S}

Dung proves that the operator F has a least fixed point. (The basic idea is that if an argument is acceptable with respect to S, it is also acceptable with respect to any superset of S, so that F is monotonic.) Self-reinstatement can then be avoided by defining the set of justified arguments as that least fixed point. Note that in Example 3 the sets {A} and {B} are fixed points of F but not its least fixed point, which is the empty set. In general we have that if no argument is undefeated, then F(∅) = ∅. These observations allow the following definition of a justified argument.

DEFINITION 8. An argument is justified iff it is a member of the least fixed point of F.

It is possible to reformulate Definition 7 in various ways, which are either equivalent to, or approximations of, the least fixed point of F. To start with, Dung shows that it can be approximated from below, and when each argument has at most finitely many defeaters even be obtained, by iterative application of F to the empty set.

PROPOSITION 9. Consider the following sequence of arguments.

F0 = ∅
Fi+1 = {A ∈ Args | A is acceptable with respect to Fi}

The following observations hold [Dung, 1995]:

1. All arguments in ∪i≥0 Fi are justified.
2. If each argument is defeated by at most a finite number of arguments, then an argument is justified iff it is in ∪i≥0 Fi.

6 As remarked above, Dung uses the term `attack' instead of `defeat'.
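The iterative construction of Proposition 9 is directly implementable. The following Python sketch is our own illustration: it assumes a finite framework (as Proposition 9(2) requires) and encodes it as a set of argument names plus a set of (defeater, defeated) pairs, which is a choice of ours rather than anything prescribed by Dung.

```python
def acceptable(arg, S, defeats):
    # arg is acceptable w.r.t. S iff every defeater of arg
    # is itself defeated by some argument in S (Definition 6).
    defeaters = {b for (b, a) in defeats if a == arg}
    return all(any((c, b) in defeats for c in S) for b in defeaters)

def F(S, args, defeats):
    # The operator of Definition 7.
    return {a for a in args if acceptable(a, S, defeats)}

def grounded(args, defeats):
    # Least fixed point of F, obtained by iterating from the empty set
    # (Proposition 9); terminates because F is monotonic and args is finite.
    S = set()
    while True:
        S_next = F(S, args, defeats)
        if S_next == S:
            return S
        S = S_next
```

On Example 2 (B defeats A, C defeats B) the iteration yields {C}, then {A, C}, which is the fixed point; on the even cycle of Example 3 it returns the empty set, as observed above.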


In the iterative construction, first all arguments that are not defeated by any argument are added, and at each further application of F all arguments that are reinstated by arguments already in the set are added. This is achieved through the notion of acceptability. To see this, suppose we apply F for the ith time: then for any argument A, if all arguments that defeat A are themselves defeated by an argument in Fi-1, then A is in Fi. It is instructive to see how this works in Example 2. We have that

F1 = F(∅) = {C}
F2 = F(F1) = {A, C}
F3 = F(F2) = F2

Dung [1995] also shows that F is equivalent to double application of a simpler operator G, i.e. F = G ∘ G. The operator G returns for each set of arguments all arguments that are not defeated by any argument in that set.

DEFINITION 10. Let Args be a set of arguments ordered by a binary relation of defeat. Then the operator G is defined as follows:

G(S) = {A ∈ Args | A is not defeated by any argument in S}

The G operator is in turn very similar to the one used by Pollock [1987; 1992]. To see this, we reformulate G in Pollock's style, by considering the sequence obtained by iterative application of G to the empty set, and defining an argument A to be justified if and only if at some point (or "level") m in the sequence A remains in Gn for all n ≥ m.

DEFINITION 11. (Levels in justification.)

All arguments are in at level 0.
An argument is in at level n+1 iff it is not defeated by any argument in at level n.
An argument is justified iff there is an m such that for every n ≥ m, the argument is in at level n.

As shown by Dung [1995], this definition stands to Definition 10 as the construction of Proposition 9 stands to Definition 7. Dung also remarks that Definition 11 is equivalent to Pollock's [1987; 1992] definition, but as we shall see below, this is not completely accurate. In Example 2, Definition 11 works out as follows.

level   in
0       A, B, C
1       C
2       A, C
3       A, C
...
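Definition 11 can likewise be turned into a short program. The sketch below is our own illustration, using the same (defeater, defeated) encoding of a finite framework as before: it computes the sequence of `in' sets until it repeats (which must happen for a finite framework) and then declares justified exactly those arguments that are in at every level of the repeating cycle.

```python
def justified_by_levels(args, defeats):
    seen = []                 # the sequence of 'in' sets, starting at level 0
    level = set(args)         # level 0: all arguments are in
    while level not in seen:
        seen.append(level)
        # Level n+1: arguments not defeated by any argument in at level n.
        level = {a for a in args
                 if not any((b, a) in defeats for b in level)}
    cycle = seen[seen.index(level):]   # the part of the sequence that repeats
    justified = set(args)
    for s in cycle:                    # in at level n for all n >= m
        justified &= s
    return justified
```

For Example 2 this reproduces the table above, with A and C justified; for Example 3 the levels alternate between {A, B} and the empty set, so nothing is justified.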


C is in at all levels, while A becomes in at level 2 and stays in at all subsequent levels. And in Example 3 both A and B are in at all even levels and out at all odd levels.

level   in
0       A, B
1
2       A, B
3
4       A, B
...

The following example, with an infinite chain of defeat relations, gives another illustration of Definitions 7 and 11.

EXAMPLE 12. (Infinite defeat chain.) Consider an infinite chain of arguments A1, ..., An, ... such that A1 is defeated by A2, A2 is defeated by A3, and so on.

[Diagram: the infinite chain A1, A2, A3, A4, A5, ..., in which each Ai+1 defeats Ai.]

Since no argument in this chain is undefeated, F(∅) = ∅, so the least fixed point of F is empty. Note that this example has two other fixed points, which also satisfy Definition 1, viz. the set of all Ai where i is odd, and the set of all Ai where i is even.

Defensible arguments

A final peculiarity of the definitions is that they allow a distinction between two types of arguments that are not justified. Consider first again Example 2 and observe that, although B defeats A, A is still justified since it is reinstated by C. Consider next the following extension of Example 3.

EXAMPLE 13. (Zombie arguments.) Consider three arguments A, B and C such that A defeats B, B defeats A, and B defeats C.

[Diagram: A and B defeat each other, and B also defeats C.]

A concrete example is


A = `Dixon is no pacifist because he is a republican'
B = `Dixon is a pacifist because he is a quaker, and he has no gun because he is a pacifist'
C = `Dixon has a gun because he lives in Chicago'

According to Definitions 8 and 11, none of the three arguments is justified. For A and B this is because their relation is the same as in Example 3, and for C because it is defeated by B. Here a crucial distinction between the two examples becomes apparent: unlike in Example 2, B is, although not justified, not defeated by any justified argument, and therefore B retains the potential to prevent C from becoming justified: there is no justified argument that reinstates C by defeating B. Makinson & Schlechta [1991] call arguments like B `zombie arguments':7 B is not `alive' (i.e., not justified), but it is not fully dead either; it has an intermediate status, in which it can still influence the status of other arguments. Following Prakken & Sartor [1997b], we shall call this intermediate status `defensible'. In the unique-status-assignment approach it can be defined as follows.

DEFINITION 14. (Overruled and defensible arguments.)

An argument is overruled iff it is not justified, and defeated by a justified argument.
An argument is defensible iff it is not justified and not overruled.
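Definition 14 combines with the least-fixed-point definition of justified arguments into a three-way partition. The following sketch, again with our own illustrative encoding of a framework as argument names plus (defeater, defeated) pairs, classifies each argument as justified, overruled or defensible.

```python
def grounded(args, defeats):
    # Least fixed point of Dung's operator F, by iteration (Proposition 9).
    S = set()
    while True:
        S_next = {a for a in args
                  if all(any((c, b) in defeats for c in S)
                         for (b, t) in defeats if t == a)}
        if S_next == S:
            return S
        S = S_next

def status(args, defeats):
    # The partition induced by Definitions 8 and 14.
    justified = grounded(args, defeats)
    overruled = {a for a in args - justified
                 if any((j, a) in defeats for j in justified)}
    defensible = args - justified - overruled
    return justified, overruled, defensible
```

In Example 13 all three arguments come out defensible: nothing is justified, so nothing is overruled, and the zombie argument B keeps C out of the justified set. In Example 2, by contrast, B is overruled because the justified argument C defeats it.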

Self-defeating arguments

Finally, we must come back to the problem of self-defeating arguments. How does Definition 7 deal with them? Consider the following extension of Example 4.

EXAMPLE 15. Consider two arguments A and B such that A defeats A and A defeats B.

[Diagram: A defeats itself and also defeats B.]

Intuitively, we want B to be justified, since the only argument defeating it is self-defeating. However, we have that F(∅) = ∅, so neither A nor B is justified. Moreover, they are both defensible, since they are not defeated by any justified argument. How can Definitions 7 and 11 be modified to obtain the intuitive result that A is overruled and B is justified? Here is where Pollock's deviation from the latter definition becomes relevant. His version is as follows.

7 Actually, they talk about `zombie paths', since their article is about inheritance systems.


DEFINITION 16. (Pollock, [1992])

An argument is in at level 0 iff it is not self-defeating.
An argument is in at level n+1 iff it is in at level 0 and it is not defeated by any argument in at level n.
An argument is justified iff there is an m such that for every n ≥ m, the argument is in at level n.

The additions `iff it is not self-defeating' in the first condition and `iff it is in at level 0' in the second make the difference: they render all self-defeating arguments out at every level, and incapable of preventing other arguments from being in. Another solution is provided by Prakken & Sartor [1997b] and Vreeswijk [1997], who distinguish a special `empty' argument, which is not defeated by any other argument and which by definition defeats any self-defeating argument. Other solutions are possible, but we shall not pursue them here.

Recursive definitions

Sometimes a second approach to the enforcement of unique status assignments is employed, e.g. by Prakken [1993] and Nute [1994]. The idea is to make explicit that arguments are usually constructed step-by-step, proceeding from intermediate to final conclusions (as in Example 13, where B has an intermediate conclusion `Dixon is a pacifist' and a final conclusion `Dixon has no gun'). This approach results in an explicitly recursive definition of justified arguments, reflecting the basic intuition that an argument cannot be justified if not all its subarguments are justified. At first sight, this recursive style is very natural, particularly for implementing the definition in a computer program. However, the approach is not as straightforward as it seems, as the following discussion aims to show.

To formalise the recursive approach, we must make a first assumption on the structure of arguments, viz. that they have subarguments (which are `proper' iff they are not identical to the entire argument). Justified arguments are then defined as follows. (We already add how self-defeating arguments can be dealt with, so that our discussion can be confined to the issue of avoiding multiple status assignments. Note that the explicit notion of a subargument makes it possible to regard an argument as self-defeating if it defeats one of its subarguments, as in Example 5.)

DEFINITION 17. (Recursively justified arguments.) An argument A is justified iff

1. A is not self-defeating; and
2. All proper subarguments of A are justified; and


HENRY PRAKKEN & GERARD VREESWIJK

3. All arguments defeating A are self-defeating, or have at least one proper subargument that is not justified.

How does this definition avoid multiple status assignments in Example 3? The `trick' is that for an argument to be justified, clause (3) requires that it have no (non-self-defeating) defeaters of which all proper subarguments are justified. This is different in Definition 1, which leaves room for such defeaters, and instead requires that these themselves are not justified; thus this definition implies in Example 3 that A is justified if and only if B is not justified, inducing two status assignments. With Definition 17, on the other hand, A is prevented from being justified by the existence of a (non-self-defeating) defeater with justified subarguments, viz. B (and likewise for B). The reader might wonder whether this solution is not too drastic, since it would seem to give up the property of reinstatement. For instance, when applied to Example 2, Definition 17 says that argument A is not justified, since it is defeated by B, which is not self-defeating. That B is in turn defeated by C is irrelevant, even though C is justified. However, here it is important that Definition 17 allows us to distinguish between two kinds of reinstatement. Intuitively, the reason why C defeats B in Example 2 is that it defeats B's proper subargument that Tweety is a penguin. And if the subarguments in the example are made explicit as follows, Definition 17 yields the intuitive result. (As for notation, for any pair of arguments X and X′, the latter is a proper subargument of the first.)

EXAMPLE 18. Consider four arguments A, B, B′ and C such that B defeats A and C defeats B′.

[Diagram: B defeats A; C defeats B′; B′ is a proper subargument of B.]

According to Definition 17, A and C are justified and B and B′ are not justified. Note that B is not justified by Clause 2. So C reinstates A not by directly defeating B but by defeating B's subargument B′. The crucial difference between Examples 2 and 3 is that in the latter example the defeat relation is of a different kind, in that A and B are in conflict on their final conclusions (respectively, that Nixon is, or is not, a pacifist). The only way to reinstate, say, the argument A that Nixon was a

LOGICS FOR DEFEASIBLE ARGUMENTATION


pacifist is by finding a defeater of B's proper subargument that Nixon was a republican (while making the subargument relations explicit). So the only case in which Definition 17 does not capture reinstatement is when all relevant defeat relations concern the final conclusions of the arguments involved. This might even be regarded as a virtue of the definition, as is illustrated by the following modification of Example 2 (taken from [Nute, 1994]).

EXAMPLE 19. Consider three arguments A, B and C such that B defeats A and C defeats B. Read the arguments as follows.

A = `Tweety flies because it is a bird'
B = `Tweety does not fly because it is a penguin'
C = `Tweety might fly because it is a genetically altered penguin'

Note that, unlike in Example 2, these three arguments are in conflict on the same issue, viz. on whether Tweety can fly. According to Definitions 7 and 11 both A and C are justified; in particular, A is justified since it is reinstated by C. However, according to Definition 17 only C is justified, since A has a non-self-defeating defeater, viz. B. The latter outcome might be regarded as the intuitively correct one, since we still accept that Tweety is a penguin, which blocks the `birds fly' default, and C allows us at most to conclude that Tweety might fly. So does this example show that Definitions 7 and 11 must be modified? We think not, since it is possible to represent the arguments in such a way that these definitions give the intuitive outcome. However, this solution requires a particular logical language, for which reason its discussion must be postponed (see Section 5.2, p. 269). Nevertheless, we can at least conclude that while the indirect form of reinstatement (by defeating a subargument) clearly seems a basic principle of argumentation, Example 19 shows that with direct reinstatement this is not so clear. Unfortunately, Definition 17 is not yet fully adequate, as can be shown with the following extension of Example 3.
It is a version of Example 13 with the subarguments made explicit.

EXAMPLE 20. (Zombie arguments 2.) Consider the arguments A′, A, B and C such that A′ and B defeat each other and A defeats C.

[Diagram: A′ and B defeat each other; A defeats C; A′ is a proper subargument of A.]


A concrete example is

A′ = `Dixon is a pacifist because he is a quaker'
B = `Dixon is no pacifist because he is a republican'
A = `Dixon has no gun because he is a pacifist'
C = `Dixon has a gun because he lives in Chicago'

According to Definition 17, C is justified since its only defeater, A, has a proper subargument that is not justified, viz. A′. Yet, as we explained above with Example 13, intuitively A should retain its capacity to prevent C from being justified, since the defeater of its subargument is not justified. There is an obvious way to repair Definition 17: it must be made explicitly `three-valued' by changing the phrase `not justified' in Clause 3 into `overruled',8 where the latter term is defined as follows.

DEFINITION 21. (Defensible and overruled arguments 2.)

An argument is overruled iff it is not justified and either it is self-defeating, or it or one of its proper subarguments is defeated by a justified argument.

An argument is defensible iff it is not justified and not overruled.

This results in the following definition of justified arguments.

DEFINITION 22. (Recursively justified arguments, revised.) An argument A is justified iff

1. A is not self-defeating; and

2. All proper subarguments of A are justified; and

3. All arguments defeating A are self-defeating, or have at least one proper subargument that is overruled.

In Example 20 this has the following result. Note first that none of the arguments are self-defeating. Then to determine whether C is justified, we must determine the status of A. A defeats C, so C is only justified if A is overruled. Since A is not defeated, A can only be overruled if its proper subargument A′ is overruled. No proper subargument of A′ is defeated, but A′ is defeated by B. So if B is justified, A′ is overruled. Is B justified? No, since it is defeated by A′, and A′ is not self-defeating and has no overruled proper subarguments. But then A′ is not overruled, which means that C is not justified. In fact, all arguments in the example are defensible, as can easily be verified.

8 Makinson & Schlechta [1991] criticise this possibility and recommend the approach with multiple status assignments.
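For a finite framework in which the mutual recursion of Definitions 21 and 22 terminates (as it does here), the two definitions can be transcribed almost literally. A minimal Python sketch of Example 20, with hypothetical names; the prime in A′ is written `A'`:

```python
# Example 20 (zombie arguments): A' and B defeat each other, A defeats C,
# and A' is the only proper subargument of A.
SUBARGS = {"A'": [], "A": ["A'"], "B": [], "C": []}
DEFEATERS = {"A'": ["B"], "A": [], "B": ["A'"], "C": ["A"]}
SELF_DEFEATING = set()

def justified(a):
    # Definition 22: not self-defeating, all proper subarguments justified, and
    # every defeater is self-defeating or has an overruled proper subargument.
    if a in SELF_DEFEATING:
        return False
    if not all(justified(s) for s in SUBARGS[a]):
        return False
    return all(d in SELF_DEFEATING or any(overruled(s) for s in SUBARGS[d])
               for d in DEFEATERS[a])

def overruled(a):
    # Definition 21: not justified, and self-defeating or defeated
    # (itself or a proper subargument) by a justified argument.
    if justified(a):
        return False
    if a in SELF_DEFEATING:
        return True
    return any(justified(d) for t in [a] + SUBARGS[a] for d in DEFEATERS[t])

def defensible(a):
    return not justified(a) and not overruled(a)

print({a: defensible(a) for a in SUBARGS})  # every argument comes out defensible
```

Running this reproduces the verification above: no argument is justified or overruled, so all four are defensible.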


Comparing fixed-point and recursive definitions

Comparing the fixed-point and recursive definitions, we have seen that in the main example where their outcomes differ (Example 19), the intuitions seem to favour the outcome of the recursive definitions (but see below, p. 269). We have also seen that the recursive definition, if made `three-valued', can deal with zombie arguments just as well as the fixed-point definitions. So must we favour the recursive form? The answer is negative, since it also has a problem: Definitions 17 and 22 do not always enforce a unique status assignment. Consider the following example.

EXAMPLE 23. (Crossover defeat.)9 Consider four arguments A′, A, B′, B such that A defeats B′ while B defeats A′.

[Diagram: A defeats B′ and B defeats A′; A′ is a proper subargument of A, and B′ of B.]

Definition 17 allows for two status assignments, viz. one in which only A′ and A are justified, and one in which only B′ and B are justified. In addition, Definition 22 also allows for the status assignment which makes all arguments defensible. Clearly, the latter status assignment is the intuitively intended one. However, without fixed-point constructions it seems hard to enforce it as the unique one. Note, finally, that in our discussion of the non-recursive approach we implicitly assumed that when a proper subargument of an argument is defeated, the argument itself is thereby also defeated (see e.g. Example 2). In fact, any particular argumentation system that has no explicitly recursive definition of justified arguments should satisfy this assumption. By contrast, systems that have a recursive definition can leave defeat of an argument independent from defeat of its proper subarguments. Furthermore, if a system has no recursive definition of justified arguments, but still distinguishes arguments and subarguments for other reasons (as e.g. [Simari & Loui, 1992] and [Prakken & Sartor, 1997b]), then a proof is required that Clause 2 of Definition 17 holds. Further illustration of this point must be postponed to the discussion of concrete systems in Section 5.

9 The name `crossover' is taken from Hunter [1993].


Unique status assignments: evaluation

Evaluating the unique-status-assignment approach, we have seen that it can be formalised in an elegant way if fixed-point definitions are used, while the perhaps more natural attempt with a recursive definition has some problems. However, regardless of its precise formalisation, this approach has inherent problems with certain types of examples, such as the following.

EXAMPLE 24. (Floating arguments.) Consider the arguments A, B, C and D such that A defeats B, B defeats A, A defeats C, B defeats C and C defeats D.

[Diagram: A and B defeat each other; both A and B defeat C; C defeats D.]

Since no argument is undefeated, Definition 8 tells us that all of them are defensible. However, it might be argued that for C and D this should be otherwise: since C is defeated by both A and B, C should be overruled. The reason is that as far as the status of C is concerned, there is no need to resolve the conflict between A and B: the status of C `floats' on that of A and B. And if C should be overruled, then D should be justified, since C is its only defeater. A variant of this example is the following piece of default reasoning. To analyse this example, we must again make an assumption on the structure of arguments, viz. that they have a conclusion.

EXAMPLE 25. (Floating conclusions.)10 Consider the arguments A′, A, B′ and B such that A′ and B′ defeat each other and A and B have the same conclusion.

[Diagram: A′ and B′ defeat each other; A′ is a proper subargument of A, and B′ of B.]

An intuitive reading is

10 The term `floating conclusions' was coined by Makinson & Schlechta [1991].


A′ = Brygt Rykkje is Dutch because he was born in Holland
B′ = Brygt Rykkje is Norwegian because he has a Norwegian name
A = Brygt Rykkje likes ice skating because he is Dutch
B = Brygt Rykkje likes ice skating because he is Norwegian

The point is that whichever way the conflict between A′ and B′ is decided, we always end up with an argument for the conclusion that Brygt Rykkje likes ice skating, so it seems that it is justified to accept this conclusion as true, even though it is not supported by a justified argument. In other words, the status of this conclusion floats on the status of the arguments A and B. While the unique-assignment approach is inherently unable to capture

floating arguments and conclusions, there is a way to capture them, viz. by working with multiple status assignments. To this approach we now turn.

4.2 The multiple-status-assignments approach

A second way to deal with competing arguments of equal strength is to let them induce two alternative status assignments, in both of which one is justified at the expense of the other. Note that both these assignments will satisfy Definition 1. In this approach, an argument is `genuinely' justified iff it receives this status in all status assignments. To prevent terminological confusion, we now slightly reformulate the notion of a status assignment.

DEFINITION 26. A status assignment to a set X of arguments ordered by a binary defeat relation is an assignment to each argument of either the status `in' or the status `out' (but not both), satisfying the following conditions:

1. An argument is in if all arguments defeating it (if any) are out.

2. An argument is out if it is defeated by an argument that is in.

Note that the conditions (1) and (2) are just the conditions of Definition 1. In Example 3 there are precisely two possible status assignments:

[Diagram: two status assignments, one with A in and B out, the other with B in and A out.]

Recall that an argumentation system is supposed to define when it is justified to accept an argument. What can we say in the case of A and B? Since both of them are `in' in one status assignment but `out' in the other, we must conclude that neither of them is justified. This is captured by redefining the notion of a justified argument as follows:


DEFINITION 27. Given a set X of arguments and a relation of defeat on X, an argument is justified iff it is `in' in all status assignments to X.

However, this is not all; just as in the unique-status-assignment approach, it is possible to distinguish between two different categories of arguments that are not justified. Some of those arguments are in no extension, but others are in at least some extensions. The first category can be called the overruled, and the latter category the defensible arguments.

DEFINITION 28. Given a set X of arguments and a relation of defeat on X:

An argument is overruled iff it is `out' in all status assignments to X;

An argument is defensible iff it is `in' in some and `out' in some status assignments to X.
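For a finite set of arguments, Definitions 26-28 can be checked by enumerating all possible in/out labellings. A minimal Python sketch (function names are our own), applied to Example 3:

```python
from itertools import product

def status_assignments(args, defeat):
    # Definition 26: an argument is 'in' iff all of its defeaters are 'out'
    # (equivalently, 'out' iff some defeater is 'in').
    defeaters = {a: [b for (b, c) in defeat if c == a] for a in args}
    result = []
    for labels in product(('in', 'out'), repeat=len(args)):
        s = dict(zip(args, labels))
        if all((s[a] == 'in') == all(s[b] == 'out' for b in defeaters[a])
               for a in args):
            result.append(s)
    return result

def status(a, assignments):
    # Definitions 27 and 28.
    labels = {s[a] for s in assignments}
    if labels == {'in'}:
        return 'justified'
    if labels == {'out'}:
        return 'overruled'
    return 'defensible'

# Example 3 (Nixon diamond): A and B defeat each other.
S = status_assignments(['A', 'B'], [('A', 'B'), ('B', 'A')])
print(len(S))                          # 2
print(status('A', S), status('B', S))  # defensible defensible
```

The even loop yields exactly the two assignments pictured above, so both arguments come out defensible.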

It is easy to see that the unique-assignment and multiple-assignments approaches are not equivalent. Consider again Example 24. Arguments A and B form an even loop; thus, according to the multiple-assignments approach, either A or B can be assigned `in', but not both. So the above defeat relation induces two status assignments:

[Diagrams: two status assignments, one with A in and B out, the other with B in and A out; in both, C is out and D is in.]

While in the unique-assignment approach all arguments are defensible, we now have that D is justified and C is overruled. Multiple status assignments also make it possible to capture floating conclusions. This can be done by defining the status of formulas as follows.

DEFINITION 29. (The status of conclusions.)

φ is a justified conclusion iff every status assignment assigns `in' to an argument with conclusion φ;

φ is a defensible conclusion iff φ is not justified, and a conclusion of a defensible argument.

φ is an overruled conclusion iff φ is not justified or defensible, and a conclusion of an overruled argument.
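Definition 29 can be layered on the same enumeration idea. A self-contained sketch of Example 25 (all names hypothetical; the pairs ("A'", 'B') and ("B'", 'A') in the defeat relation encode the assumption that defeating a proper subargument also defeats the argument built on it):

```python
from itertools import product

# Example 25: A' and B' defeat each other; A is built on A', B on B',
# and A and B share the conclusion 'likes ice skating'.
ARGS = ["A'", "B'", 'A', 'B']
DEFEAT = [("A'", "B'"), ("B'", "A'"), ("A'", 'B'), ("B'", 'A')]
CONCLUSION = {"A'": 'Dutch', "B'": 'Norwegian',
              'A': 'likes ice skating', 'B': 'likes ice skating'}

def status_assignments():
    # Definition 26, by brute force.
    defeaters = {a: [b for (b, c) in DEFEAT if c == a] for a in ARGS}
    return [s for labels in product(('in', 'out'), repeat=len(ARGS))
            for s in [dict(zip(ARGS, labels))]
            if all((s[a] == 'in') == all(s[b] == 'out' for b in defeaters[a])
                   for a in ARGS)]

def conclusion_justified(phi):
    # Definition 29, first clause: every status assignment makes some
    # argument with conclusion phi 'in'.
    return all(any(s[a] == 'in' and CONCLUSION[a] == phi for a in ARGS)
               for s in status_assignments())

print(conclusion_justified('likes ice skating'))  # True: the floating conclusion
print(conclusion_justified('Dutch'))              # False
```

Although neither A nor B is justified as an argument, the conclusion they share is `in' in every status assignment, which is exactly what the first clause of Definition 29 captures.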


Changing the first clause into `φ is a justified conclusion iff φ is the conclusion of a justified argument' would express a stronger notion, not recognising

floating conclusions as justified. There is reason to distinguish several variants of the multiple-status-assignments approach. Consider the following example, with an `odd loop' of defeat relations.

EXAMPLE 30. (Odd loop.) Let A, B and C be three arguments, represented in a triangle, such that A defeats C, B defeats A, and C defeats B.

[Diagram: A defeats C; B defeats A; C defeats B.]

In this situation, Definition 27 has some problems, since this example has no status assignments.

1. Assume that A is `in'. Then, since A defeats C, C is `out'. Since C is `out', B is `in', but then, since B defeats A, A is `out'. Contradiction.

2. Assume next that A is `out'. Then, since A is the only defeater of C, C is `in'. Then, since C defeats B, B is `out'. But then, since B is the only defeater of A, A is `in'. Contradiction.

Note that a self-defeating argument is a special case of Example 30, viz. the case where B and C are identical to A. This means that sets of arguments containing a self-defeating argument might have no status assignment. To deal with the problem of odd defeat cycles, several alternatives to Definition 26 have been studied in the literature. They will be discussed in Section 5, in particular in 5.1 and 5.2.
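The two-case argument above can be replayed mechanically: enumerating all eight labellings of the odd loop confirms that none of them satisfies Definition 26. A sketch (names hypothetical):

```python
from itertools import product

ARGS = ['A', 'B', 'C']
DEFEATERS = {'C': ['A'], 'A': ['B'], 'B': ['C']}  # Example 30: A->C, B->A, C->B

def is_status_assignment(s):
    # Definition 26: an argument is 'in' iff all of its defeaters are 'out'.
    return all((s[a] == 'in') == all(s[d] == 'out' for d in DEFEATERS[a])
               for a in ARGS)

candidates = [dict(zip(ARGS, labels))
              for labels in product(('in', 'out'), repeat=len(ARGS))]
print([s for s in candidates if is_status_assignment(s)])  # []  (no assignment exists)
```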

4.3 Comparing the two approaches

How do the unique- and multiple-assignment approaches compare to each other? It is sometimes said that their difference reflects a difference between a `sceptical' and a `credulous' attitude towards drawing defeasible conclusions: when faced with an unresolvable conflict between two arguments, a sceptic would refrain from drawing any conclusion, while a credulous reasoner would choose one conclusion at random (or both alternatively) and further explore its consequences. The sceptical approach is often defended by saying that since in an unresolvable conflict no argument is stronger than the other, neither of them can be accepted as justified, while the credulous approach


has sometimes been defended by saying that practical circumstances often require a person to act, whether or not s/he has conclusive reasons to decide which act to perform. In our opinion this interpretation of the two approaches is incorrect. When deciding what to accept as a justified belief, what is important is not whether one or more possible status assignments are considered, but how the arguments are evaluated given these assignments. And this evaluation is captured by the qualifications `justified' and `defensible', which thus capture the distinction between `sceptical' and `credulous' reasoning. And since, as we have seen, the distinction between justified and defensible arguments can be made in both the unique-assignment and the multiple-assignments approach, these approaches are independent of the distinction `sceptical' vs. `credulous' reasoning. Although both approaches can capture the notion of a defensible argument, they do so with one important difference. The multiple-assignments approach is more convenient for identifying sets of arguments that are compatible with each other. The reason is that while with unique assignments the defensible arguments are defensible on an individual basis, with multiple assignments they are defensible because they belong to a set of arguments that are `in' and thus can be defended simultaneously. Even if two defensible arguments do not defeat each other, they might be incompatible in the sense that no status assignment makes them both `in', as in the following example.

EXAMPLE 31. A and B defeat each other, B defeats C, C defeats D.

[Diagram: A and B defeat each other; B defeats C; C defeats D.]

This example has two status assignments, viz. {A, C} and {B, D}. Accordingly, all four arguments are defensible. Note that, although A and D do not defeat each other, A is in iff D is out. So A and D are in some sense incompatible. In the unique-assignment approach this notion of incompatibility seems harder to capture. As we have seen, the unique-assignment approach has no inherent difficulty in recognising `zombie arguments'; this problem only occurs if this approach uses a recursive two-valued definition of the status of arguments. As for their outcomes, the approaches mainly differ in their treatment of

floating arguments and conclusions. With respect to these examples, the question easily arises whether one approach is the right one. However, we prefer a different attitude: instead of speaking about the `right' or `wrong' definition, we prefer to speak of `senses' in which an argument or conclusion


can be justified. For instance, the sense in which the conclusion that Brygt Rykkje likes ice skating in Example 25 is justified is different from the sense in which, for instance, the conclusion that Tweety flies in Example 2 is justified: only in the second case is the conclusion supported by a justified argument. And the status of D in Example 24 is not quite the same as the status of, for instance, A in Example 2. Although both arguments need the help of other arguments to be justified, the argument helping A is itself justified, while the arguments helping D are merely defensible. In the concluding section we come back to this point, and generalise it to other differences between the various systems.

4.4 General properties of consequence notions

We conclude this section with a much-discussed issue, viz. whether any nonmonotonic consequence notion, although lacking the property of monotonicity, should still satisfy other criteria. Many argue that this is the case, and much research has been devoted to formulating such criteria and designing systems that satisfy them; see e.g. [Gabbay, 1985; Makinson, 1989; Kraus et al., 1990]. We, however, do not follow this approach, since we think that it is hard to find any criterion that should really hold for any argumentation system, or nonmonotonic consequence notion, for that matter. We shall illustrate this with the condition that is perhaps most often defended, called cumulativity. In terms of argumentation systems this principle says that if a formula φ is justified on the basis of a set of premises T, then any formula ψ is justified on the basis of T ∪ {φ} if and only if ψ is also justified on the basis of T. We shall in particular give counterexamples to the `if' part of the biconditional, which is often called cautious monotony. This condition in fact says that adding justified conclusions to the premises cannot make other justified conclusions unjustified. At first sight, this principle would seem uncontroversial. However, we shall now (quasi-formally) discuss reasonably behaving argumentation systems, with plausible criteria for defeat, and show by example that they do not satisfy cautious monotony and are therefore not cumulative. These examples illustrate two points. First, they illustrate Makinson & Schlechta's [1991] remark that systems that do not satisfy cumulativity assign facts a special status. Second, since the examples are quite natural, they illustrate that argumentation systems should assign facts a special status and therefore should not be cumulative. Below, the → symbols stand for unspecified reasoning steps in an argument, and the formulas stand for the conclusion drawn in such a step.

EXAMPLE 32.
Consider two (schematic) arguments

A: p → q → r → ¬q → s
B: → ¬s


Suppose we have a system in which self-defeating arguments have no capacity to prevent other arguments from being justified. Assume also that A is self-defeating, since a subconclusion, ¬q, is based on a subargument for a conclusion q. Assume, finally, that the system makes A's subargument for r justified (since it has no non-self-defeating counterarguments). Then B is justified. However, if r is now added to the `facts', the following argument can be constructed:

A′: r → ¬q → s

This argument is not self-defeating, and therefore it might have the capacity to prevent B from being justified.

EXAMPLE 33. Consider next the following arguments. A is a two-step argument p → q → r; B is a three-step argument s → t → u → ¬r. And assume that conflicting arguments are compared on their length (the shorter, the better). Then A strictly defeats B, so A is justified. Assume, however, also that B's subargument s → t → u is justified, since it has no counterarguments, and assume that u is added to the facts. Then we have a new argument for ¬r, viz.

B′: u → ¬r

which is shorter than A and therefore strictly defeats A.

Yet another type of example uses numerical assessments of arguments.

EXAMPLE 34. Consider the arguments A: p → q → r and B: s → ¬r. Assume that in A the strength of the derivation of q from p is 0.7 and that the strength of the derivation of r from q is 0.85, while in B the strength of the derivation of ¬r from s is 0.8. Consider now an argumentation system where arguments are compared with respect to their weakest links. Then B strictly defeats A, since B's weakest link is 0.8 while A's weakest link is 0.7. However, assume once more that A's subargument for q is justified because it has no counterargument, and then assume that q is added as a fact. Then a new argument A′: q → r can be constructed, with 0.85 as its weakest link, so that it strictly defeats B.
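Example 34's failure of cautious monotony can be replayed numerically: the weakest-link comparison is just a comparison of minima. A sketch under the example's assumptions (the function name is our own):

```python
def weakest_link(strengths):
    # An argument is as strong as its weakest inference step.
    return min(strengths)

A = [0.7, 0.85]   # p -> q has strength 0.7, q -> r has strength 0.85
B = [0.8]         # s -> not-r has strength 0.8
print(weakest_link(B) > weakest_link(A))  # True: B strictly defeats A

# Once q (justified and uncontested) is added to the facts, the final step of A
# stands on its own as A': q -> r.
A_prime = [0.85]
print(weakest_link(A_prime) > weakest_link(B))  # True: A' now strictly defeats B
```

Adding the justified conclusion q thus reverses the defeat relation, which is exactly the violation of cautious monotony the example describes.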
The point of these examples is that reasonable argumentation systems with plausible criteria for defeat are conceivable which do not satisfy cumulativity, so that cumulativity cannot be required as a minimum requirement for justified belief. Vreeswijk [1993a, pp. 82-8] has shown that other


properties of nonmonotonic consequence relations also turn out to be counterintuitive in a number of realistic logical scenarios.

5 SOME ARGUMENTATION SYSTEMS

Let us, after our general discussions, now turn to individual argumentation systems and frameworks. We shall present them according to the conceptual sketch of Section 3, and also evaluate them in the light of Section 4.

5.1 The abstract approach of Bondarenko, Dung, Kowalski and Toni

Introductory remarks

We first discuss an abstract approach to nonmonotonic logic developed in several articles by Bondarenko, Dung, Toni and Kowalski (below called the `BDKT approach'). Historically, this work came after the development by others of a number of argumentation systems (to be discussed below). The major innovation of the BDKT approach is that it provides a framework and vocabulary for investigating the general features of these other systems, and also of nonmonotonic logics that are not argument-based. The latest and most comprehensive account of the BDKT approach is Bondarenko et al. [1997]. In this account, the basic notion is that of a set of `assumptions'. In their approach the premises come in two kinds: `ordinary' premises, comprising a theory, and assumptions, which are formulas (of whatever form) that are designated (on whatever ground) as having default status. Inspired by Poole [1988], Bondarenko et al. [1997] regard nonmonotonic reasoning as adding sets of assumptions to theories formulated in an underlying monotonic logic, provided that the contrary of the assumptions cannot be shown. What in their view makes the theory argumentation-theoretic is that this provision is formalised in terms of sets of assumptions attacking each other. In other words, according to Bondarenko et al. [1997] an argument is a set of assumptions. This approach has proven especially successful in capturing existing nonmonotonic logics. Another version of the BDKT approach, presented by Dung [1995], completely abstracts from both the internal structure of an argument and the origin of the set of arguments; all that is assumed is the existence of a set of arguments, ordered by a binary relation of `defeat'.11 This more abstract point of view seems more in line with the aims of this chapter, and therefore we shall below mainly discuss Dung's version of the BDKT approach. As remarked above, it inspired much of our discussion in Section 4. The

11 BDKT use the term `attack', but to maintain uniformity we shall use `defeat'.


assumption-based version of Bondarenko et al. [1997] will be briefly outlined at the end of this subsection.

Basic notions

As just remarked, Dung's [1995] primitive notion is a set of arguments ordered by a binary relation of defeat. Dung then defines various notions of so-called argument extensions, which are intended to capture various types of defeasible consequence. These notions are declarative, just declaring sets of arguments as having a certain status. Finally, Dung shows that many existing nonmonotonic logics can be reformulated as instances of the abstract framework. Dung's basic formal notions are as follows.

DEFINITION 35. An argumentation framework (AF) is a pair (Args, defeat), where Args is a set of arguments, and defeat a binary relation on Args.

An AF is finitary iff each argument in Args is defeated by at most a finite number of arguments in Args.

A set of arguments is conflict-free iff no argument in the set is defeated by an argument in the set.

One might think of the set Args as all arguments that can be constructed in a given logic from a given set of premises (although this is not always the case; see the discussions below of `partial computation'). Unless stated otherwise, we shall below implicitly assume an arbitrary but fixed AF. Dung interprets defeat, like us, in the weak sense of `conflicting and not being weaker'. Thus in Dung's approach two arguments can defeat each other. Dung does not explicitly use the stronger (and asymmetric) notion of strict defeat, but we shall sometimes use it below. A central notion of Dung's framework is acceptability, already defined above in Definition 6. We repeat it here. It captures how an argument that cannot defend itself can be protected from attacks by a set of arguments.

DEFINITION 36. An argument A is acceptable with respect to a set S of arguments iff each argument defeating A is defeated by an argument in S.

As remarked above, the arguments in S can be seen as the arguments capable of reinstating A in case A is defeated. To illustrate acceptability, consider again Example 2, which in terms of Dung has an AF (called `TT' for `Tweety Triangle') with Args = {A, B, C} and defeat = {(B, A), (C, B)} (B strictly defeats A and C strictly defeats B). A is acceptable with respect to {C}, {A, C}, {B, C} and {A, B, C}, but not with respect to ∅ and {B}. Another central notion is that of an admissible set.


DEFINITION 37. A conflict-free set of arguments S is admissible iff each argument in S is acceptable with respect to S.

Intuitively, an admissible set represents an admissible, or defendable, point of view. In Example 2 the sets ∅, {C} and {A, C} are admissible, but all other subsets of {A, B, C} are not admissible.

Argument extensions

In terms of the notions of acceptability and admissibility several notions of `argument extensions' can be defined, which are what we above called `status assignments'. The following notion of a stable extension is equivalent to Definition 26 above.

DEFINITION 38. A conflict-free set S is a stable extension iff every argument that is not in S is defeated by some argument in S.

In Example 2, TT has only one stable extension, viz. {A, C}. Consider next an AF called ND (the Nixon Diamond), corresponding to Example 3, with Args = {A, B} and defeat = {(A, B), (B, A)}. ND has two stable extensions, {A} and {B}. Since a stable extension is conflict-free, it reflects in some sense a coherent point of view. It is also a maximal point of view, in the sense that every possible argument is either accepted or rejected. In fact, stable semantics is the most `aggressive' type of semantics, since a stable extension defeats every argument not belonging to it, whether or not that argument is hostile to the extension. This feature is the reason why not all AFs have stable extensions, as Example 30 has shown. To give such examples also a multiple-assignment semantics, Dung defines the notion of a preferred extension.

DEFINITION 39. A conflict-free set is a preferred extension iff it is a maximal (with respect to set inclusion) admissible set.

Let us go back to Definition 26 of a status assignment and define a partial status assignment in the same way as a status assignment, but without the condition that it assigns a status to all arguments. Then it is easy to verify that preferred extensions correspond to maximal partial status assignments. Dung shows that every AF has a preferred extension. Moreover, he shows that stable extensions are preferred extensions, so in the Nixon Diamond and the Tweety Triangle the two semantics coincide. However, not all preferred extensions are stable: in Example 30 the empty set is a (unique) preferred extension, which is not stable.
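On a finite AF, Definitions 36-39 can be checked by brute force over subsets. A Python sketch (function names hypothetical), applied to the Tweety Triangle, the Nixon Diamond, and the odd loop of Example 30:

```python
from itertools import combinations

def semantics(args, defeat):
    # Brute-force check of Definitions 36-39 on a finite AF.
    defeats = set(defeat)
    def conflict_free(S):
        return not any((a, b) in defeats for a in S for b in S)
    def acceptable(a, S):  # Definition 36
        return all(any((c, b) in defeats for c in S)
                   for b in args if (b, a) in defeats)
    subsets = [frozenset(c) for r in range(len(args) + 1)
               for c in combinations(args, r)]
    admissible = [S for S in subsets
                  if conflict_free(S) and all(acceptable(a, S) for a in S)]
    preferred = [S for S in admissible          # Definition 39: maximal admissible
                 if not any(S < T for T in admissible)]
    stable = [S for S in subsets                # Definition 38
              if conflict_free(S) and
              all(any((a, b) in defeats for a in S) for b in args if b not in S)]
    return stable, preferred

TT = semantics('ABC', [('B', 'A'), ('C', 'B')])              # Tweety Triangle
ND = semantics('AB', [('A', 'B'), ('B', 'A')])               # Nixon Diamond
OL = semantics('ABC', [('A', 'C'), ('B', 'A'), ('C', 'B')])  # odd loop (Ex. 30)
print(TT)  # one stable extension {A, C}, which is also the only preferred one
print(ND)  # two stable and two preferred extensions: {A} and {B}
print(OL)  # no stable extension; the empty set is the only preferred extension
```

The odd loop confirms the point just made: it has no stable extension at all, while the empty set is its unique preferred extension.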
Preferred semantics leaves all arguments in an odd defeat cycle out of the extension, so none of them is defeated by an argument in the extension. Preferred and stable semantics are an instance of the multiple-status-assignments approach of Section 4.2: in cases of an irresolvable conflict as


in the Nixon diamond, two incompatible extensions are obtained. Dung also explores the unique-status-assignment approach, with his notion of a grounded extension, already presented above as Definition 7. To build a bridge between the various semantics, Dung also defines `complete semantics'.

DEFINITION 40. An admissible set S of arguments is a complete extension iff each argument that is acceptable with respect to S belongs to S.

This definition implies that a set of arguments is a complete extension iff it is a fixed point of the operator F defined in Definition 7. According to Dung, a complete extension captures the beliefs of a rational person who believes everything s/he can defend.

Self-defeating arguments

How do Dung's various semantics deal with self-defeating arguments? It turns out that all semantics have some problems. For stable semantics they are the most serious, since an AF with a self-defeating argument might have no stable extensions. For preferred semantics this problem does not arise, since preferred extensions are guaranteed to exist. However, this semantics still has a problem, since self-defeating arguments can prevent other arguments from being justified. This can be illustrated with Example 15 (an AF with two arguments A and B such that A defeats A and A defeats B). The set {B} is not admissible, so the only preferred extension is the empty set. Yet intuitively it seems that instead {B} should be the only preferred extension, since B's only defeater is self-defeating. It is easy to see that the same holds for complete semantics. In Section 4.1 we already saw that this example causes the same problems for grounded semantics, but that for finitary AFs Pollock [1987] provides a solution. Both Dung [1995] and Bondarenko et al. [1997] recognise the problem of self-defeating arguments, and suggest that the solutions developed in the logic-programming context by Kakas et al. [1994] could be generalised to deal with it.
Dung also acknowledges Pollock's [1995] approach, to be discussed in Subsection 5.2.

Formal results

Both Dung [1995] and Bondarenko et al. [1997] establish a number of results on the existence of extensions and the relation between the various semantics. We now summarise some of them.

1. Every stable extension is preferred, but not vice versa.
2. Every preferred extension is a complete extension, but not vice versa.
3. The grounded extension is the least (with respect to set inclusion) complete extension.

LOGICS FOR DEFEASIBLE ARGUMENTATION


4. The grounded extension is contained in the intersection of all preferred extensions (Example 24 is a counterexample against `equal to').
5. If an AF contains no infinite chains A1, ..., An, ... such that each Ai+1 defeats Ai, then the AF has exactly one complete extension, which is grounded, preferred and stable. (Note that the even loop of Example 3 and the odd loop of Example 30 form such an infinite chain.)
6. Every AF has at least one preferred extension.
7. Every AF has exactly one grounded extension.

Finally, Dung [1995] and Bondarenko et al. [1997] identify several conditions under which preferred and stable semantics coincide.

Assumption-based formulation of the framework

As mentioned above, Bondarenko et al. [1997] have developed a different version of the BDKT approach. This version is less abstract than the one of Dung [1995], in that it embodies a particular view on the structure of arguments. Arguments are seen as sets of assumptions that can be added to a theory in order to (monotonically) derive conclusions that cannot be derived from the theory alone. Accordingly, Bondarenko et al. [1997] define a more concrete version of Dung's [1995] argumentation frameworks as follows:

DEFINITION 41. Let L be a formal language and ⊢ a monotonic logic defined over L. An assumption-based framework with respect to (L, ⊢) is a tuple ⟨T, Ab, ‾ ⟩ where

- T, Ab ⊆ L;
- ‾ is a mapping from Ab into L, where ᾱ denotes the contrary of α.

The notion of defeat is now defined for sets of assumptions (below we leave the assumption-based framework implicit).

DEFINITION 42. A set of assumptions A defeats an assumption α iff T ∪ A ⊢ ᾱ; and A defeats a set of assumptions Δ iff A defeats some assumption α ∈ Δ.

The notions of argument extensions are then defined in terms of sets of assumptions. For instance,

DEFINITION 43. A set of assumptions Δ is stable iff

- Δ is closed, i.e., Δ = {α ∈ Ab | T ∪ Δ ⊢ α};
- Δ does not defeat itself;

- Δ defeats each assumption α ∉ Δ.

A stable extension is a set Th(T ∪ Δ) for some stable set of assumptions Δ. As remarked above, Bondarenko et al.'s [1997] main aim is to reformulate existing nonmonotonic logics in their general framework. Accordingly, what an assumption is, and what its contrary is, is determined by the choice of nonmonotonic logic to be reformulated. For instance, in applications of preferential entailment where abnormality predicates abi are to be minimised (see Section 2.1), the assumptions will include expressions of the form ¬abi(c), where the contrary of ¬abi(c) is abi(c). And in default logic (see also Section 2.1), an assumption is of the form Mφ for any `middle part' φ of a default, where the contrary of Mφ is ¬φ; moreover, each default ψ : φ / χ is added to the rules defining ⊢ as a monotonic inference rule ψ, Mφ / χ.

Procedure

The developers of the BDKT approach have also studied procedural forms for the various semantics. Dung et al. [1996; 1997] propose two abstract proof procedures for computing admissibility (Definition 37), where the second proof procedure is a computationally more efficient refinement of the first. Both procedures are based upon a proof procedure originally intended for computing stable semantics in logic programming. And they are both formulated as logic programs that are derived from a formal specification. The derivation guarantees the correctness of the proof procedures. Further, Dung et al. [1997] show that both proof procedures are complete. Here, the first procedure is discussed. It is defined in the form of a meta-level logic program, of which the top-level clause defines admissibility. This concept is captured in a predicate adm:

(1) adm(Δ0, Δ)

← [Δ0 ⊆ Δ and Δ is admissible]

Δ and Δ0 are sets of assumptions, where `Δ is admissible' is a low-level concept that is defined with the help of auxiliary clauses. In this manner, (1) provides a specification for the proof procedure. Similarly, a top-level predicate defends is defined:

defends(D, Δ) ← [D defeats Δ′, for every Δ′ that defeats Δ]

The proof procedure that Dung et al. propose can be understood in procedural terms as repeatedly adding defences to the initially given set of assumptions Δ0 until no further defences need to be added. More precisely, given a current set of assumptions Δ, initialised as Δ0, the proof procedure repeatedly

LOGICS FOR DEFEASIBLE ARGUMENTATION

261

1. finds a set of assumptions D such that defends(D, Δ);
2. replaces Δ by Δ ∪ D,

until D ⊆ Δ, in which case it returns Δ. Step (1) is non-deterministic, since there might be more than one set of assumptions D defending the current Δ. The proof procedure potentially needs to explore a search tree of alternatives to find a branch which terminates with a self-defending set. The logic-programming formulation of the proof procedure is:

adm(Δ, Δ) ← defends(Δ, Δ)
adm(Δ, Δ′) ← defends(D, Δ), adm(Δ ∪ D, Δ′)
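In imperative terms, the two-step loop behind these clauses can be sketched as follows. This is a toy Python rendering of our own: it treats assumptions as atoms, lifts defeat to sets existentially, explores only one branch of the non-deterministic search, and enumerates candidate defences smallest-first.

```python
# One branch of the 'add defences until nothing new is needed' loop.
from itertools import combinations

def set_defeats(D, S, defeats):
    # a set D defeats a set S iff some member of D defeats some member of S
    return any((d, s) in defeats for d in D for s in S)

def defenders(delta, universe, defeats):
    """Yield candidate sets D with defends(D, delta): D defeats every
    defeater of delta (here: every singleton {b} that defeats delta)."""
    attackers = [b for b in universe if set_defeats({b}, delta, defeats)]
    for r in range(len(universe) + 1):
        for D in map(set, combinations(universe, r)):
            if all(set_defeats(D, {b}, defeats) for b in attackers):
                yield D

def admissible_extension(delta0, universe, defeats):
    """Grow delta0 by adding defences until the chosen defence is already
    contained in the current set, then return that set. Assumes a defence
    exists on this branch (otherwise next() raises StopIteration)."""
    delta = set(delta0)
    while True:
        D = next(defenders(delta, universe, defeats))  # smallest candidate first
        if D <= delta:
            return delta
        delta |= D
```

For the reinstatement chain where B defeats A and C defeats B, starting from {A} the loop adds the defence {C} and then terminates, returning the admissible set {A, C}.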

The procedural characterisation of the proof procedure is obtained by applying SLD resolution to the above clauses with a left-to-right selection rule, with an initial query of the form adm(Δ0, Δ), with Δ0 as input and Δ as output. The procedure is proved correct with respect to the admissibility semantics, but it is shown to be incorrect for stable semantics in general. According to Dung et al., this is due to the above-mentioned `epistemic aggressiveness' of stable semantics, viz. the fact that a stable extension defeats every argument not belonging to it. Dung et al. remark that, besides being counterintuitive, this property is also computationally very expensive, because it necessitates a search through the entire space of arguments to determine, for every argument, whether or not it is defeated. Subsequent evaluation by Dung et al. of the proof procedure has suggested that it is the semantics, rather than the proof procedure, which was at fault, and that preferred semantics provides an improvement. This insight is also formulated by Dung [1995]. Finally, it should be noted that recently Kakas & Toni [1999] have developed proof procedures in dialectical style (see Section 6 below) for the various semantics of Bondarenko et al. [1997] and for the acceptability semantics of Kakas et al. [1994].

Evaluation

As remarked above, the abstract BDKT approach was a major innovation in the study of defeasible argumentation, in that it provided an elegant general framework for investigating the various argumentation systems. Moreover, the framework also applies to other nonmonotonic logics, since Dung and Bondarenko et al. extensively show how many of these logics can be translated into argumentation systems. Thus it becomes very easy to formulate alternative semantics for nonmonotonic logics. For instance, default logic, which was shown by Dung [1995] to have a stable semantics, can very easily


be given an alternative semantics in which extensions are guaranteed to exist, like preferred or grounded semantics. Moreover, the proof theories that have been or will be developed for the various argument-based semantics immediately apply to the systems that are an instance of these semantics. Because of these features, the BDKT framework is also very useful as guidance in the development of new systems; for instance, Prakken & Sartor have used it in developing the system of Subsection 5.7 below. On the other hand, the level of abstractness of the BDKT approach (especially in Dung's version) also leaves much to the developers of particular systems. In particular, they have to define the internal structure of an argument, the ways in which arguments can conflict, and the origin of the defeat relation. Moreover, it seems that at some points the BDKT approach needs to be refined or extended. We already mentioned the treatment of self-defeating arguments, and Prakken & Sartor [1997b] have extended the BDKT framework to let it cope with reasoning about priorities (see Subsection 5.7 below).

5.2 Pollock

John Pollock was one of the initiators of the argument-based approach to the formalisation of defeasible reasoning. Originally he developed his theory as a contribution to philosophy, in particular epistemology. Later he turned to artificial intelligence, developing a computer program called OSCAR, which implements his theory. Since the program falls outside the scope of this handbook, we shall only discuss the logical aspects of Pollock's system; for the architecture of the computer program the reader is referred to e.g. Pollock [1995]. The latter also discusses other topics, such as practical reasoning, planning and reasoning about action.

Reasons, arguments, conflict and defeat

In Pollock's system, the underlying logical language is standard first-order logic, but the notion of an argument has some nonstandard features. What still conforms to accounts of deductive logic is that arguments are sequences of propositions linked by inference rules (or better, by instantiated inference schemes). However, Pollock's formalism begins to deviate when we look at the kinds of inference schemes that can be used to build arguments. Let us first concentrate on linear arguments; these are formed by combining so-called reasons. Technically, reasons connect a set of propositions with a proposition. Reasons come in two kinds, conclusive and prima facie reasons. Conclusive reasons still adhere to the common standard, since they are reasons that logically entail their conclusions. In other words, a conclusive reason is any valid first-order inference scheme (which means that Pollock's system includes first-order logic). Thus, examples of conclusive reasons are


{p, q} is a conclusive reason for p ∧ q
{∀x Px} is a conclusive reason for Pa

Prima facie reasons, by contrast, have no counterpart in deductive logic; they only create a presumption in favour of their conclusion, which can be defeated by other reasons, depending on the strengths of the conflicting reasons. Based on his work in epistemology, Pollock distinguishes several kinds of prima facie reasons: for instance, principles of perception, such as12

⌈x appears to me as Y⌉ is a prima facie reason for believing ⌈x is Y⌉.

(For the objectification operator ⌈ ⌉ see page 231 and page 265.) Another source of prima facie reasons is the statistical syllogism, which says that:

If r > 0.5, then ⌈x is an F and prob(G/F) = r⌉ is a prima facie reason of strength r for believing ⌈x is a G⌉.

Here prob(G/F) stands for the conditional probability of G given F. Prima facie reasons can also be based on principles of induction, for example,

⌈X is a set of m F's and n members of X have the property G⌉ (where n/m > 0.5) is a prima facie reason of strength n/m for believing ⌈all F's have the property G⌉.

Actually, Pollock adds to these definitions the condition that F is projectible with respect to G. This condition, introduced by Goodman [1954], is meant to prevent certain `unfounded' probabilistic or inductive inferences. For instance, the first observed person from Lanikai, who is a genius, does not permit the prediction that the next observed Lanikaian will be a genius. That is, the predicate `intelligence' is not projectible with respect to `birthplace'. Projectibility is of major concern in probabilistic reasoning. To give a simple example of a linear argument, assume the following set of `input' facts INPUT = {A(a), prob(B/A) = 0.8, prob(C/B) = 0.7}. The following argument uses reasons based on the statistical syllogism, and the first of the above-displayed conclusive reasons.

12 When a reason for a proposition is a singleton set, we drop the brackets.


1. ⟨A(a), 1⟩ (A(a) is in INPUT)
2. ⟨⌈prob(B/A) = 0.8⌉, 1⟩ (⌈prob(B/A) = 0.8⌉ is in INPUT)
3. ⟨A(a) ∧ ⌈prob(B/A) = 0.8⌉, 1⟩ (1, 2 and {p, q} is a conclusive reason for p ∧ q)
4. ⟨B(a), 0.8⟩ (3 and the statistical syllogism)
5. ⟨⌈prob(C/B) = 0.7⌉, 1⟩ (⌈prob(C/B) = 0.7⌉ is in INPUT)
6. ⟨B(a) ∧ ⌈prob(C/B) = 0.7⌉, 0.8⟩ (4, 5 and {p, q} is a conclusive reason for p ∧ q)
7. ⟨C(a), 0.7⟩ (6 and the statistical syllogism)

So each line of a linear argument is a pair, consisting of a proposition and a numerical value that indicates the strength, or degree of justification, of the proposition. The strength 1 at lines 1, 2 and 5 indicates that the conclusions of these lines are put forward as absolute facts, originating from the epistemic base `INPUT'. At line 4, the weakest-link principle is applied, with the result that the strength of the argument line is the minimum of the strength of the reason for B(a) (0.8) and that of the argument line 3 from which B(a) is derived with this reason (1). At lines 6 and 7 the weakest-link principle is applied again. Besides linear arguments, Pollock also studies suppositional arguments. In suppositional reasoning, we `suppose' something that we have not inferred from the input, draw conclusions from the supposition, and then `discharge' the supposition to obtain a related conclusion that no longer depends on the supposition. In Pollock's system, suppositional arguments can be constructed with inference rules familiar from natural deduction. Accordingly, the propositions in an argument have sets of propositions attached to them, which are the suppositions under which the proposition can be derived from earlier elements in the sequence. The following definition (based on [Pollock, 1995]) summarises this informal account of argument formation.

DEFINITION 44. In OSCAR, an argument based on INPUT is a finite sequence σ1, ..., σn, where each σi is a line of argument.
A line of argument σi is a triple ⟨Xi, pi, δi⟩, where Xi, a set of propositions, is the set of suppositions at line i, pi is a proposition, and δi is the degree of justification of pi at line i. A line of argument is obtained from earlier lines of argument according to one of the following rules of argument formation.

Input. If p is in INPUT and σ is an argument, then for any X it holds that σ, ⟨X, p, 1⟩ is an argument.

Reason. If σ is an argument, ⟨X1, p1, δ1⟩, ..., ⟨Xn, pn, δn⟩ are members of σ, and {p1, ..., pn} is a reason of strength δ for q, and for each i, Xi ⊆ X, then σ, ⟨X, q, min{δ1, ..., δn, δ}⟩ is an argument.

Supposition. If σ is an argument, X a set of propositions and p ∈ X, then σ, ⟨X, p, 1⟩ is also an argument.


Conditionalisation. If σ is an argument and some line of σ is ⟨X ∪ {p}, q, δ⟩, then σ, ⟨X, (p ⊃ q), δ⟩ is also an argument.

Dilemma. If σ is an argument, and some line of σ is ⟨X, p ∨ q, δ⟩, and some line of σ is ⟨X ∪ {p}, r, δ′⟩, and some line of σ is ⟨X ∪ {q}, r, δ″⟩, then σ, ⟨X, r, min{δ, δ′, δ″}⟩ is also an argument.

Pollock [1995] notes that other inference rules could be added as well. It is the use of prima facie reasons that makes arguments defeasible, since these reasons can be defeated by other reasons. This can take place in two ways: by rebutting defeaters, which are at least as strong reasons with the opposite conclusion, and by undercutting defeaters, which are at least as strong reasons whose conclusion denies the connection that the undercut reason states between its premises and its conclusion. A typical example of rebutting defeat is when an argument using the reason `Birds fly' is defeated by an argument using the reason `Penguins don't fly'. Pollock's favourite example of an undercutting defeater is when an object looks red because it is illuminated by a red light: knowing this undercuts the reason for believing that this object is red, but it does not give a reason for believing that the object is not red. Before we can explain how Pollock formally defines the relation of defeat among arguments, some extra notation must be introduced. In the definition of defeat among arguments, Pollock uses what may be called an objectification operator, ⌈ ⌉. (This operator was also used in Fig. 4 on page 230 and in the prima facie reasons on page 263.) With this operator, expressions in the meta-language are transformed into expressions in the object language. For example, the meta-level rule

{p, q} is a conclusive reason for p

may be transformed into the object-level expression

⌈{p, q} is a conclusive reason for p⌉.

If the object language is rich enough, then the latter expression is present in the object language, in the form (p ∧ q) ⊃ p.
Evidently, a large fraction of the meta-expressions cannot be conveyed to the object language, because the object language lacks sufficient expressibility. This is the case, for example, if corresponding connectives are missing in the object language. Pollock formally defines the relation of defeat among arguments as follows.

Defeat among arguments. An argument defeats another argument if and only if:

1. the defeated argument's last line is ⟨X, q, δ⟩ and is obtained by the argument formation rule Reason from some earlier lines ⟨X1, p1, δ1⟩, ..., ⟨Xn, pn, δn⟩, where {p1, ..., pn} is a prima facie reason for q; and


2. the defeating argument's last line is ⟨Y, r, ε⟩ where Y ⊆ X and either:

(a) r is ¬q and ε ≥ δ; or
(b) r is ¬⌈{p1, ..., pn} >> q⌉ and ε ≥ δ.

(1) determines the weak spot of the defeated argument, while (2) determines whether that weak spot is (2a) a conclusion (in this case q), or (2b) a reason (in this case {p1, ..., pn} >> q). For Pollock, (2a) is a case of rebutting defeat, and (2b) is a case of undercutting defeat: if an argument undercuts the last reason of another, it blocks the derivation of q, without supporting ¬q as an alternative conclusion. The formula ⌈{p1, ..., pn} >> q⌉ stands for the translation of `{p1, ..., pn} is a prima facie reason for q' into the object language. Pollock leaves the notion of conflicting arguments implicit in this definition of defeat. Note also that a defeater of an argument always defeats the last step of that argument; Pollock treats `subargument defeat' by a recursive definition of a justified argument, i.e., in the manner explained above in Section 4.1.

Suppositional reasoning

As noted above, the argument formation rules Supposition, Conditionalisation and Dilemma can be used to form suppositional arguments. OSCAR is one of the very few nonmonotonic logics that allow for suppositional reasoning. Pollock finds it necessary to introduce suppositional reasoning because, in his opinion, this type of reasoning is ubiquitous not only in deductive, but also in defeasible reasoning. Pollock mentions, among other things, the reasoning form `reasoning by cases', which is notoriously hard for many nonmonotonic logics. An example is `presumably, birds fly; presumably, bats fly; Tweety is a bird or a bat; so, presumably, Tweety flies'. In Pollock's system, this argument can be formalised as follows.

EXAMPLE 45. Consider the following reasons.

(1) Bird(x) is a prima facie reason of strength δ for Flies(x)
(2) Bat(x) is a prima facie reason of strength δ′ for Flies(x)

And consider INPUT = {Bird(t) ∨ Bat(t)}. The conclusion Flies(t) can be defeasibly derived as follows.

1. ⟨∅, Bird(t) ∨ Bat(t), 1⟩ (Bird(t) ∨ Bat(t) is in INPUT)
2. ⟨{Bird(t)}, Bird(t), 1⟩ (Supposition)
3. ⟨{Bird(t)}, Flies(t), δ⟩ (2 and prima facie reason (1))
4. ⟨{Bat(t)}, Bat(t), 1⟩ (Supposition)
5. ⟨{Bat(t)}, Flies(t), δ′⟩ (4 and prima facie reason (2))
6. ⟨∅, Flies(t), min{δ, δ′}⟩ (3, 5 and Dilemma)
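The bookkeeping of suppositions and weakest-link minima in such a derivation can be transcribed mechanically. The Python fragment below is our own toy rendering of Example 45; the strengths 0.9 and 0.8 are arbitrary stand-ins for δ and δ′.

```python
# Toy transcription (ours) of Example 45: lines are
# (suppositions, proposition, strength) triples; the Reason and
# Dilemma rules take minima, as in Definition 44.
def reason(line, conclusion, strength):
    supps, _, s = line
    return (supps, conclusion, min(s, strength))

def dilemma(disj_line, branch1, branch2):
    supps, _, s = disj_line          # suppositions of the disjunction line
    _, concl, s1 = branch1
    _, _, s2 = branch2
    return (supps, concl, min(s, s1, s2))

d1, d2 = 0.9, 0.8                    # hypothetical strengths of reasons (1) and (2)
l1 = (frozenset(), "Bird(t) v Bat(t)", 1)     # Input
l2 = (frozenset({"Bird(t)"}), "Bird(t)", 1)   # Supposition
l3 = reason(l2, "Flies(t)", d1)               # prima facie reason (1)
l4 = (frozenset({"Bat(t)"}), "Bat(t)", 1)     # Supposition
l5 = reason(l4, "Flies(t)", d2)               # prima facie reason (2)
l6 = dilemma(l1, l3, l5)                      # discharges both suppositions
```

Here l6 comes out as (∅, "Flies(t)", 0.8): the conclusion holds under no supposition, with the weakest-link strength min{1, δ, δ′}.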


At line 1, the proposition Bird(t) ∨ Bat(t) is put forward as an absolute fact. At line (2), the proposition Bird(t) is temporarily supposed to be true. From this assumption, at the following line the conclusion Flies(t) is defeasibly derived with the first prima facie reason. Line (4) is an alternative continuation of line 1. At line (4), Bat(t) is supposed to be true, and at line (5) it is used to again defeasibly derive Flies(t), this time from the second prima facie reason. Finally, at line (6) the Dilemma rule is applied to (3) and (5), discharging the assumptions in the alternative suppositional arguments, and concluding Flies(t) under no assumption. According to Pollock, another virtue of his system is that it validates the defeasible derivation of a material implication from a prima facie reason. Consider again the `birds fly' reason (1), and assume that INPUT is empty.

1. ⟨{Bird(t)}, Bird(t), 1⟩ (Supposition)
2. ⟨{Bird(t)}, Flies(t), δ⟩ (1 and prima facie reason (1))
3. ⟨∅, Bird(t) ⊃ Flies(t), δ⟩ (2 and Conditionalisation)

Pollock regards the validity of these inferences as desirable. On the other hand, Vreeswijk has argued that suppositional defeasible reasoning, in the way Pollock proposes it, sometimes enables incorrect inferences. Vreeswijk's argument is based on the idea that the strength of a conclusion obtained by means of conditionalisation is incomparable to the reason strength of the implication occurring in that conclusion. For a discussion of this problem the reader is further referred to Vreeswijk [1993a, pp. 184-7]. Having seen how Pollock defines the notions of arguments, conflicting arguments, and defeat among arguments, we now turn to what was the main topic of Section 4 and the main concern of Dung [1995]: defining the status of arguments.

The status of arguments

Over the years, Pollock has more than once changed his definition of the status of arguments. One change is that while earlier versions (e.g.
Pollock, 1987) dealt with (successful) attack on a subargument in an implicit way via the definition of defeat, the latest version makes this part of the status definition, by explicitly requiring that all subarguments of an `undefeated' argument are also undefeated (cf. Section 4.1). Another change is in the form of the status definition. Earlier, Pollock took the unique-status-assignment approach, in particular the fixed-point variant of Definition 16, which, as shown by Dung [1995], (almost) corresponds to the grounded semantics of Definition 7. However, his most recent work is in terms of multiple status assignments, and very similar to the preferred semantics of Definition 39. Pollock thus combines the recursive style of Definition 17 with the multiple-status-assignments approach. We present the most recent definition, that of [Pollock, 1995]. To maintain uniformity in our terminology, we


state it in terms of arguments instead of, as Pollock does, in terms of an `inference graph'. To maintain the link with inference graphs, we make the definition relative to a closed set of arguments, i.e., a set of arguments containing all subarguments of all its elements. Since we deviate from Pollock's inference graphs, we must be careful in defining the notion of subarguments. Sometimes a later line of an argument depends on only some of its earlier lines. For instance, in Example 45 line (5) only depends on (4). In fact, the entire argument (1-6) has three independent, or parallel, subarguments, viz. a linear subargument (1), and two suppositional subarguments (2,3) and (4,5). Pollock's inference graphs nicely capture such dependencies, since their nodes are argument lines and their links are inferences. However, with our sequential format of an argument this is different, for which reason we cannot define a subargument as being just any subsequence of an argument. Instead, subarguments are only those subsequences of an argument that can be transformed into an inference tree.

DEFINITION 46 (subarguments). An argument A is a subargument of an argument B iff A is a subsequence of B and there exists a tree T of argument lines such that

1. T contains all and only lines from A; and
2. T's root is A's last element; and
3. l is a child of l′ iff l′ was inferred from a set of lines one of which was l.

A proper subargument of A is any subargument of A unequal to A. Now we can give Pollock's [1995] definition of a status assignment.

DEFINITION 47. An assignment of `defeated' and `undefeated' to a closed set S of arguments is a partial defeat status assignment iff it satisfies the following conditions.

1. All arguments in S with only lines obtained by the Input argument formation rule are assigned `undefeated';
2. A ∈ S is assigned `undefeated' iff:
(a) All proper subarguments of A are assigned `undefeated'; and
(b) All arguments in S defeating A are assigned `defeated'.
3.
A ∈ S is assigned `defeated' iff:
(a) One of A's proper subarguments is assigned `defeated'; or
(b) A is defeated by an argument in S that is assigned `undefeated'.


A defeat status assignment is a maximal (with respect to set inclusion) partial defeat status assignment. Observe that the conditions (2a) and (3a) on the subarguments of A make the weakest-link principle hold by definition. The similarity of defeat status assignments to Dung's preferred extensions of Definition 39 shows itself as follows: the conditions (2b) and (3b) on the defeaters of A are the analogues of Dung's notion of acceptability, which make a defeat status assignment an admissible set; and the fact that a defeat status assignment is a maximal partial assignment induces the similarity with preferred extensions. It is easy to verify that when two arguments defeat each other (Example 3), an input has more than one status assignment. Since Pollock wants to define a sceptical consequence notion, he therefore has to consider the intersection of all assignments. Pollock does so in a variant of Definitions 27 and 28.

DEFINITION 48. (The status of arguments.) Let S be a closed set of arguments based on INPUT. Then, relative to S, an argument is undefeated iff every status assignment to S assigns `undefeated' to it; it is defeated outright iff no status assignment to S assigns `undefeated' to it; otherwise it is provisionally defeated.

In our terms, `undefeated' is `justified', `defeated outright' is `overruled', and `provisionally defeated' is `defensible'.

Direct vs. indirect reinstatement

It is now time to come back to the discussion in Section 4.1 on reinstatement. Example 19 showed that there is reason to invalidate the direct version of this principle, viz. when the conflicts are about the same issue. We remarked that the explicitly recursive Definition 17 of justified arguments indeed invalidates direct reinstatement while preserving its indirect version. However, we also promised to explain that both versions of reinstatement can be retained if Example 19 is represented in a particular way. In fact, Pollock (personal communication) would represent the example as follows:

(1) Being a bird is a prima facie reason for being able to fly
(2a) Being a penguin is an undercutting reason for (1)
(2b) Being a penguin is a defeasible reason for not being able to fly
(3) Being a genetically altered penguin is an undercutting reason for (2b)
(4) Tweety is a genetically altered penguin

It is easy to verify that Definitions 47 and 48, which validate both direct and indirect reinstatement, yield the intuitive outcome, viz. that it is neither justified that Tweety can fly, nor that it cannot fly. A similar representation


is possible in systems that allow for abnormality or exception clauses, e.g. in [Geffner & Pearl, 1992; Bondarenko et al., 1997; Prakken & Sartor, 1997b].

Self-defeating arguments

Pollock has paid much attention to the problem of self-defeating arguments. In Pollock's system, an argument defeats itself iff one of its lines defeats another of its lines. Above in Section 4.1 we already discussed Pollock's treatment of self-defeating arguments within the unique-status-assignment approach. However, he later came to regard this treatment as incorrect, and he now thinks that the problem can only be solved in the multiple-assignment approach (personal communication). Let us now see how Pollock's Definitions 47 and 48 deal with the problem. Two cases must be distinguished. Consider first two defeasible arguments A and B rebutting each other. Then A and B are `parallel' subarguments of a deductive argument A + B for any proposition. Then (if no other arguments interfere with A or B) there are two status assignments, one in which A is assigned `undefeated' and B assigned `defeated', and one the other way around. Now A + B is in both of these assignments assigned `defeated', since in both assignments one of its proper subarguments is assigned `defeated'. Thus the self-defeating argument A + B turns out to be defeated outright, which seems intuitively plausible. A different case is the following, with the following reasons

(1) p is a prima facie reason of strength 0.8 for q
(2) q is a prima facie reason of strength 0.8 for r
(3) r is a conclusive reason for ⌈¬(p >> q)⌉

and with INPUT = {p}. The following (linear) argument can be constructed.

1. ⟨p, 1⟩ (p is in INPUT)
2. ⟨q, 0.8⟩ (1 and prima facie reason (1))
3. ⟨r, 0.8⟩ (2 and prima facie reason (2))
4. ⟨¬(p >> q), 0.8⟩ (3 and conclusive reason (3))

Let us call this argument A4, and its proper subarguments ending in lines 1, 2 and 3 A1, A2 and A3, respectively. Observe first that, according to Pollock's definition of self-defeat, A4 is self-defeating. Further, according to Pollock's earlier approach with Definition 16, A4, being self-defeating, is overruled, or `defeated', while A1, A2 and A3 are justified, or `undefeated'. Pollock now regards this outcome as incorrect: since A4 is a deductive consequence of A3, A3 should also be `defeated'. This result is obtained with Definitions 47 and 48. Firstly, A1 is clearly undefeated. Consider next A2. This argument is undercut by A4, so if A4 is assigned `undefeated', then A2 must be assigned `defeated'. But then A4


must also be assigned `defeated', since one of its proper subarguments is assigned `defeated'. Contradiction. If, on the other hand, A4 is assigned `defeated', then A2 and so A3 must be assigned `undefeated'. But then A4 must be assigned `undefeated'. Contradiction. In conclusion, no partial status assignment will assign a status to A4 and, consequently, no status assignment will assign a status to A2 or A3 either. And since this implies that no status assignment assigns the status `undefeated' to any of these arguments, they are by Definition 48 all defeated outright.

Two remarks about this outcome can be made. Firstly, it might be doubted whether A2 should indeed be defeated outright, i.e., overruled. It is not self-defeating, its only defeater is self-defeating, and this defeater is not a deductive consequence of A2's conclusion. Other systems, e.g. those of Vreeswijk (Section 5.5) and Prakken & Sartor (Section 5.7), regard A2 as justified. In these systems Pollock's intuition about A3 is formalised by regarding A3 as self-defeating, because its conclusion deductively, not just defeasibly, implies a conclusion incompatible with itself. This makes it possible to regard A3 as overruled but A2 as justified. Furthermore, even if Pollock's outcome is accepted, the situation is not quite the same as with the previous example. Consider another defeasible argument B which rebuts and is rebutted by A3. Then no assignment assigns a status to B either, for which reason B is also defeated outright. Yet this shows that the `defeated outright' status of A2 is not the same as the `defeated outright' status of an argument that has an undefeated defeater: apparently, A2 is still capable of preventing other arguments from being undefeated. In fact, the same holds for arguments involved in an odd defeat cycle (as in Example 30). In conclusion, Pollock's definitions leave room for a fourth status of arguments, which might be called `seemingly defeated'. This status holds for arguments that according to Definition 48 are defeated outright but still have the power to prevent other arguments from being ultimately undefeated. The four statuses can be partially ordered as follows: `undefeated' is better than `provisionally defeated' and than `seemingly defeated', which both in turn are better than `defeated outright'. This observation applies not only to Pollock's definition, but to all approaches based on partial status assignments, like Dung's [1995] preferred semantics. However, this is not yet all: even if the notion of seeming defeat is made explicit, there still is an issue concerning floating arguments (cf. Example 24). To see this, consider the following extension of Example 30 (formulated in terms of [Dung, 1995]).

EXAMPLE 49. Let A, B and C be three arguments, represented in a triangle, such that A defeats C, B defeats A, and C defeats B. Furthermore, let D and E be arguments such that all of A, B and C defeat D, and D defeats E.
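Ignoring subargument structure (conditions (2a) and (3a)), the maximal partial assignments of Definition 47 and the statuses of Definition 48 can be computed by brute force for abstract examples like this one. The sketch below is our own encoding for atomic arguments; it confirms that in Example 49 every argument comes out defeated outright, while a simple two-argument defeat cycle yields provisional defeat.

```python
# Toy computation (ours) of Definitions 47-48 for atomic arguments.
from itertools import product

def legal(asg, args, defeats):
    # asg is a partial assignment: a dict from some arguments to
    # 'undefeated'/'defeated'; arguments not in the dict are unassigned.
    for a in args:
        defs = [b for b in args if (b, a) in defeats]
        all_out = all(asg.get(b) == "defeated" for b in defs)
        some_in = any(asg.get(b) == "undefeated" for b in defs)
        if all_out and asg.get(a) != "undefeated":
            return False          # label 'undefeated' is forced
        if some_in and asg.get(a) != "defeated":
            return False          # label 'defeated' is forced
        if not all_out and not some_in and asg.get(a) is not None:
            return False          # neither label is licensed
    return True

def status(args, defeats):
    candidates = [
        {a: l for a, l in zip(args, labels) if l is not None}
        for labels in product(("undefeated", "defeated", None), repeat=len(args))
    ]
    assignments = [p for p in candidates if legal(p, args, defeats)]
    # defeat status assignments: maximal legal partial assignments
    maximal = [p for p in assignments
               if not any(set(p.items()) < set(q.items()) for q in assignments)]
    return {a: ("undefeated" if all(m.get(a) == "undefeated" for m in maximal)
                else "defeated outright"
                if not any(m.get(a) == "undefeated" for m in maximal)
                else "provisionally defeated")
            for a in args}
```

For Example 49 the only legal partial assignment is the empty one, so all of A, B, C, D and E are defeated outright (and, in the terms introduced above, seemingly defeated).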


HENRY PRAKKEN & GERARD VREESWIJK

Figure 8. Partial ordering of defeat statuses: `ultimately undefeated' is better than `provisionally defeated' and `seemingly defeated', each of which is better than `defeated outright'.

(Figure: the defeat graph of Example 49, with A, B and C in a triangle, D defeated by all three, and D defeating E.)

The difference between Example 24 and this example is that the even defeat loop between two arguments is replaced by an odd defeat loop between three arguments. One view on the new example is that this difference is inessential and that, for the same reasons that in Example 24 the argument D is justified, here the argument E is ultimately undefeated: although E is strictly defeated by D, it is reinstated by all of A, B and C, since all these arguments strictly defeat D. On this account Definitions 47 and 48 are flawed, since they render all five arguments defeated outright (and, in our terms, seemingly defeated). However, an alternative view is that odd defeat loops are of an essentially different kind than even defeat loops, so that our analysis of Example 24 does not apply here, and the outcome in Pollock's system reflects a flaw in the available input information rather than in the system.

Ideal and resource-bounded reasoning

We shall now see that Definition 48 is not yet all that Pollock has to say on the status of arguments. In the previous section we saw that the BDKT approach leaves the origin of the set of `input' arguments unspecified. At


this point Pollock develops some interesting ideas. At first sight it might be thought that the set S of the just-given definitions is just the set of all arguments that can be constructed with the argument formation rules of Definition 44. However, this is only one of the possibilities that Pollock considers, in which Definition 48 captures so-called ideal warrant.

DEFINITION 50. (Ideal warrant.) Let S be the set of all arguments based on INPUT. Then an argument A is ideally warranted relative to INPUT iff A is undefeated relative to S.

Pollock wants to respect that in actual reasoning the construction of arguments takes time, and that reasoners do not have an infinite amount of time available. Therefore, he also considers two other definitions, both of which have a computational flavour. To capture an actual reasoning process, Pollock makes them relative to a sequence S of closed finite sets S0, ..., Si, ... of arguments. Let us call this an argumentation sequence. Such a sequence contains all arguments constructed by a reasoner, in the order in which they are produced. It (and any of its elements) is based on INPUT if all its arguments are based on INPUT.13 Now the first `computational' status definition determines what a reasoner must believe at any given time.

DEFINITION 51. (Justification.) Let S be an argumentation sequence based on INPUT, and Si an element of S. Then an argument A is justified relative to INPUT at stage i iff A is undefeated relative to Si.

In this definition the set Si contains just those arguments that have actually been constructed by a reasoner. Thus this definition captures the current status of a belief; it may be that further reasoning (without adding new premises) changes the status of a conclusion. This cannot happen for the other `computational' consequence notion defined by Pollock, called warrant. Intuitively, an argument A is warranted iff eventually in an argumentation sequence a stage is reached where A remains justified at every subsequent stage.
To define this, the notion of a `maximal' argumentation sequence is needed, i.e., a sequence that cannot be extended. Thus it contains all arguments that a reasoner with unlimited resources would construct (in a particular order).

DEFINITION 52. (Warrant.) Let S be a maximal argumentation sequence S0, ..., Si, ... based on INPUT. Then an argument A is warranted (relative to INPUT) iff there is an i such that for all j > i, A is undefeated relative to Sj.

The difference between warrant and ideal warrant is subtle: it has to do with the fact that, while in determining warrant every set Sj ⊇ Si that

13 Note that we again translate Pollock's inference graphs into (structured) sets of arguments.


is considered is finite, in determining ideal warrant the set of all possible arguments has to be considered, and this set can be infinite.

EXAMPLE 53. (Warrant does not entail ideal warrant.) Suppose A1, A2, A3, ... are arguments such that every Ai is defeated by its successor Ai+1. Further, suppose that the arguments are produced in the order A2, A1, A4, A3, A6, A5, A8, ... Then:

Stage  Produced                       Justified
1      A2                             A2
2      A2, A1                         A2
3      A2, A1, A4                     A2, A4
4      A2, A1, A4, A3                 A2, A4
5      A2, A1, A4, A3, A6             A2, A4, A6
6      A2, A1, A4, A3, A6, A5         A2, A4, A6
7      A2, A1, A4, A3, A6, A5, A8     A2, A4, A6, A8
...

From stage 1 onwards, A2 is justified and stays justified. Thus, A2 is warranted. At the same time, however, A2 is not ideally warranted, because there exist two status assignments for all the Ai's: one in which all and only the odd-numbered arguments are `in', and one in which all and only the odd-numbered arguments are `out'. Hence, according to ideal warrant, every argument is only provisionally defeated. In particular, A2 is provisionally defeated. A remarkable aspect of this example is that, eventually, every argument will be produced, but without reaching the right result for A2.

EXAMPLE 54. (Ideal warrant does not imply warrant.) Suppose that A, B1, B2, B3, ... and C1, C2, C3, ... are arguments such that A is defeated by every Bi, and every Bi is defeated by Ci. Further, suppose that the arguments are produced in the order A, B1, C1, B2, C2, B3, C3, ... Then:

Stage  Produced                       Justified
1      A                              A
2      A, B1                          B1
3      A, B1, C1                      A, C1
4      A, B1, C1, B2                  C1, B2
5      A, B1, C1, B2, C2              A, C1, C2
6      A, B1, C1, B2, C2, B3          C1, C2, B3
...

Thus, in this sequence, A is provisionally defeated. However, according to the definition of ideal warrant, every Bi is defeated by Ci, so that A remains undefeated.
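The table of Example 53 can be reproduced with a small simulation. In the sketch below (our own encoding), arguments of the chain are represented by their indices, and statuses relative to a finite stage Si are computed as a fixpoint: an argument is undefeated iff every defeater produced so far is itself defeated.

```python
# Example 53: each Ai is defeated only by Ai+1; production order A2, A1, A4, ...
defeats = {(i + 1, i) for i in range(1, 9)}
order = [2, 1, 4, 3, 6, 5, 8, 7]

def statuses(present):
    """Unique status assignment on a finite chain: undefeated iff every
    defeater that has been produced is itself defeated."""
    status = {}
    changed = True
    while changed:
        changed = False
        for x in present:
            if x in status:
                continue
            atts = [a for (a, b) in defeats if b == x and a in present]
            if all(status.get(a) == "defeated" for a in atts):
                status[x] = "undefeated"
                changed = True
            elif any(status.get(a) == "undefeated" for a in atts):
                status[x] = "defeated"
                changed = True
    return status

table, present = [], []
for stage, arg in enumerate(order, start=1):
    present.append(arg)
    justified = sorted(x for x, s in statuses(present).items() if s == "undefeated")
    table.append((stage, justified))

for row in table:
    print(row)   # (1, [2]), (2, [2]), (3, [2, 4]), ... as in the table above
```

A2 stays justified from stage 1 onwards, illustrating warrant; yet, as the text notes, over the infinite set of all Ai it is only provisionally defeated.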
Although the notion of warrant is computationally inspired, as Pollock observes there is no automated procedure that can determine of any warranted argument that it is warranted: even if in fact a warranted argument stays undefeated after some finite number n of computations, a reasoner cannot know at stage n whether it has reached a point where the argument stays undefeated, or whether further computation will change its status.

Pollock's reasoning architecture

We now discuss Pollock's reasoning architecture for computing the ideally warranted propositions, i.e. the propositions that are the conclusion of an ideally warranted argument. (According to Pollock, ideal warrant is what every reasoner should ultimately strive for.) In deductive logic such an architecture would be called a `proof theory', but Pollock rejects this term. The reason is that one condition normally required of proof theories, viz. that the set of theorems is recursively enumerable, cannot in general be satisfied for a defeasible reasoner. Pollock assumes that a reasoner reasons by constantly updating its beliefs, where an update is an elementary transition from one set of propositions to the next. According to this view, a reasoner would be adequate if the resulting sequence is a recursively enumerable approximation of ideal warrant. However, this is impossible. Ideal warrant contains all theorems of predicate logic, and it is known that the theorems of predicate logic form a set that is not recursive. And since in defeasible reasoning some conclusions depend on the failure to derive other conclusions, the set of defeasible conclusions is not recursively enumerable. Therefore, Pollock suggests an alternative criterion of adequacy. A reasoner is called defeasibly adequate if the resulting sequence is a defeasibly enumerable approximation of ideal warrant.

DEFINITION 55. A set A is defeasibly enumerable if there is a sequence of sets {Ai}∞i=1 such that for all x:

1. If x ∈ A, then there is an N such that x ∈ Ai for all i > N.
2. If x ∉ A, then there is an M such that x ∉ Ai for all i > M.
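Definition 55 can be illustrated with a toy sequence (our own invented example, not Pollock's): let A be the set of even numbers, approximated by sets Ai that temporarily contain the spurious odd number i and retract it at the next stage. The sequence is not monotone, yet every element's membership eventually stabilises.

```python
def A_i(i):
    # evens up to i, plus the spurious odd element i (retracted at stage i + 1)
    return {x for x in range(i + 1) if x % 2 == 0} | ({i} if i % 2 else set())

def eventually_in(x, horizon=200):
    # condition 1 of Definition 55, with N = x
    return all(x in A_i(i) for i in range(x + 1, horizon))

def eventually_out(x, horizon=200):
    # condition 2 of Definition 55, with M = x
    return all(x not in A_i(i) for i in range(x + 1, horizon))

checks = [eventually_in(x) if x % 2 == 0 else eventually_out(x) for x in range(10)]
print(checks)   # all True: the sequence defeasibly enumerates the even numbers
```

Note that the approximation proceeds from below and above at once: each odd i does enter some Ai before being removed for good, which is exactly what distinguishes defeasible from recursive enumerability.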

If A is recursively enumerable, then a reasoner who updates his beliefs in Pollock's way can approach A `from below': the reasoner can construct sets that are all supersets of the preceding set and subsets of A. However, when A is only defeasibly enumerable, a reasoner can only approach A from below and above simultaneously, in the sense that the sets Ai the reasoner constructs may contain elements not contained in A. Every such element must eventually be taken out of the Ai's, but there need not be any point at which they have all been removed. To ensure defeasible adequacy, Pollock introduces the following three operations:

1. The reasoner must adopt beliefs in response to constructing arguments, provided no counterarguments have already been adopted for any step in the argument. If a defeasible inference occurs, a check must be made whether a counterargument for it has not already been adopted as a belief.

2. The reasoner must keep track of the bases upon which its beliefs are held. When a new belief is adopted that is a defeater for a previous inference step, then the reasoner must retract that inference step and all beliefs inferred from it.

3. The reasoner must keep track of defeated inferences, and when a defeater is itself retracted (2), this should reinstate the defeated inference.

To achieve the functions just described, Pollock introduces a so-called

flag-based reasoner. A flag-based reasoner consists of an inference engine that produces all arguments eventually, and a component computing the defeat status of arguments:

LOOP
BEGIN
    make-an-inference
    recompute-defeat-statuses
END
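This loop can be sketched concretely as follows (a minimal illustration of the idea, not Pollock's implementation; the inference engine is assumed to simply yield ready-made abstract arguments, and statuses are recomputed with a grounded-style fixpoint):

```python
def recompute_defeat_statuses(produced, defeats):
    """Undefeated iff all produced defeaters are defeated; arguments left
    unlabelled by the fixpoint count as provisionally defeated."""
    status = {}
    changed = True
    while changed:
        changed = False
        for x in produced:
            if x in status:
                continue
            atts = [a for (a, b) in defeats if b == x and a in produced]
            if all(status.get(a) == "defeated" for a in atts):
                status[x] = "undefeated"
                changed = True
            elif any(status.get(a) == "undefeated" for a in atts):
                status[x] = "defeated"
                changed = True
    return status

def flag_based_reasoner(inference_engine, defeats):
    produced = []
    for argument in inference_engine:        # make-an-inference
        produced.append(argument)
        yield recompute_defeat_statuses(produced, defeats)

# B defeats A; arguments are produced in the order A, B.
runs = list(flag_based_reasoner(iter(["A", "B"]), {("B", "A")}))
print(runs)   # A is first undefeated, then flips to defeated once B arrives
```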

The procedure recompute-defeat-statuses determines which arguments are defeated outright, undefeated and provisionally defeated at each iteration of the loop. That is, at each iteration it determines justification. Pollock then identifies certain conditions under which a flag-based reasoner is defeasibly adequate. For these conditions, the reader is referred to [Pollock, 1995, ch. 4].

Evaluation

Pollock's theory of defeasible reasoning is based on more than thirty years of research in logic and epistemology. This large time span perhaps explains the richness of his theory. It includes both linear and suppositional arguments, and deductive as well as non-deductive (mainly statistical and inductive) arguments, with a corresponding distinction between two types of conflicts between arguments. Pollock's definition of the status of arguments takes the multiple-status-assignments approach, being related to Dung's preferred semantics. This semantics can deal with certain types of

floating statuses and conclusions, but we have seen that certain other types are still ignored. In fact, this seems one of the main unsolved problems in argument-based semantics. An interesting aspect of Pollock's work is his study of the resource-bounded nature of practical reasoning, with the idea of partial computation embodied in the notions of warrant and especially


justification. And for artificial intelligence it is interesting that Pollock has implemented his system as a computer program. Since Pollock focuses on epistemological issues, his system is not immediately applicable to some specific features of practical (including legal) reasoning. For instance, the use of probabilistic notions seems to make it difficult to give an account of reasoning with and about priority relations between arguments (see below in Subsection 5.7). Moreover, it would be interesting to know what Pollock would regard as suitable reasons for normative reasoning. It would also be interesting to study how, for instance, analogical and abductive arguments can be analysed in Pollock's system as giving rise to prima facie reasons.

5.3 Inheritance systems

A forerunner of argumentation systems is work on so-called inheritance systems, especially that of Horty et al., e.g. [1990], which we shall briefly discuss. Inheritance systems determine whether an object of a certain kind has a certain property. Their language is very restricted. The network is a directed graph. Its initial nodes represent individuals and its other nodes stand for classes of individuals. There are two kinds of links, → and ↛, depending on whether something does or does not belong to a certain class. Links from an individual to a class express class membership, and links between two classes express class inclusion. A path through the graph is an inheritance path iff its only negative link is the last one. Thus the following are examples of inheritance paths.

P1: Tweety → Penguin → Bird → Canfly
P2: Tweety → Penguin ↛ Canfly

Another basic notion is that of an assertion, which is of the form x → y or x ↛ y, where y is a class. Such an assertion is enabled by an inheritance path if the path starts with x and ends with the same link to y as the assertion. Above, an assertion enabled by P1 is Tweety → Canfly, and an assertion enabled by P2 is Tweety ↛ Canfly. As the example shows, two paths can be conflicting. They are compared on specificity, which is read off from the syntactic structure of the net, resulting in relations of neutralisation and preemption between paths. The assignment of a status to a path (whether it is permitted) is similar to the recursive variant of the unique-status-assignment approach of Definition 17. This means that the system has problems with zombie paths and floating conclusions (as observed by Makinson & Schlechta [1991]). Although Horty et al. present their system as a special-purpose formalism, it clearly has all the elements of an argumentation system. An inheritance path corresponds to an argument, and an assertion enabled by a path to a conclusion of an argument. Their notion of conflicting paths


corresponds to rebutting attack. Furthermore, neutralisation and preemption correspond to defeat, while a permitted path is the same as a justified argument. Because of the restricted language and the rather complex definition of when an inheritance path is permitted, we shall not present the full system. However, Horty et al. should be credited for anticipating many distinctions and discussions in the field of defeasible argumentation. In particular, their work is a rich source of benchmark examples. We shall discuss one of them.

EXAMPLE 56. Consider four arguments A, B, C and D such that B strictly defeats A, D strictly defeats C, A and D defeat each other, and B and C defeat each other.

(Figure: the defeat graph among A, B, C and D.)

Here is a natural-language version (due to Horty, personal communication), in which the defeat relations are based on specificity considerations.

A = Larry is rich because he is a public defender, public defenders are lawyers, and lawyers are rich;
B = Larry is not rich because he is a public defender, and public defenders are not rich;
C = Larry is rich because he lives in Brentwood, and people who live in Brentwood are rich;
D = Larry is not rich because he rents in Brentwood, and people who rent in Brentwood are not rich.

If we apply the various semantics of the BDKT approach to this example, we see that since no argument is undefeated, none of them is in the grounded extension. Moreover, there are preferred extensions in which Larry is rich, and preferred extensions in which Larry is not rich. Yet it might be argued that since both arguments that Larry is rich are strictly defeated by an argument that Larry is not rich, the sceptical conclusion should be that Larry is not rich. This is the outcome obtained by Horty et al. [1990]. We note that if this example is represented in the way Pollock proposes for Example 19 (see page 269 above), this outcome can also be obtained in the BDKT approach.
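The claims about Example 56 can be verified by brute force. The sketch below (our own encoding of the example as an abstract framework) enumerates the admissible sets, takes the maximal ones as preferred extensions, and checks that no argument is unattacked, so that the grounded extension is empty.

```python
from itertools import combinations

# Example 56: B strictly defeats A, D strictly defeats C,
# A and D defeat each other, B and C defeat each other.
args = {"A", "B", "C", "D"}
defeats = {("B", "A"), ("D", "C"), ("A", "D"), ("D", "A"),
           ("B", "C"), ("C", "B")}

def conflict_free(S):
    return not any((a, b) in defeats for a in S for b in S)

def defends(S, x):
    # every defeater of x is itself defeated by some member of S
    return all(any((d, a) in defeats for d in S)
               for (a, b) in defeats if b == x)

def admissible(S):
    return conflict_free(S) and all(defends(S, x) for x in S)

subsets = [set(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
unattacked = {x for x in args if not any(b == x for (_, b) in defeats)}

print(sorted(sorted(S) for S in preferred))   # [['A', 'C'], ['B', 'D']]
print(unattacked)                             # empty: the grounded extension is empty
```

The two preferred extensions correspond to `Larry is rich' ({A, C}) and `Larry is not rich' ({B, D}); the sceptical outcome defended by Horty et al. singles out {B, D}.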


5.4 Lin and Shoham

Before the BDKT approach, an earlier attempt to provide a unifying framework for nonmonotonic logics was made by Lin & Shoham [1989]. They show how any logic, whether monotonic or not, can be reformulated as a system for constructing arguments. However, in contrast with the other theories in this section, they are not concerned with comparing incompatible arguments, and so their framework cannot be used as a theory of defeat among arguments. The basic elements of Lin & Shoham's abstract framework are an unspecified logical language, only assumed to contain a negation symbol, and an also unspecified set of inference rules defined over the assumed language. Arguments can be constructed by chaining inference rules into trees. Inference rules are either monotonic or nonmonotonic. For instance,

Penguin(a) → Bird(a)
Penguin(a), ¬ab(penguin(a)) → ¬Fly(a)

are monotonic rules, and

True ⇒ ¬ab(penguin(a))
True ⇒ ¬ab(bird(a))

are nonmonotonic rules. Note that these inference rules are, as in default logic, domain specific. In fact, Lin & Shoham do not distinguish between general and domain-dependent inference rules, as is shown by their reconstruction of default logic, to be discussed below. Although the lack of a notion of defeat is a severe limitation, in capturing nonmonotonic consequence Lin & Shoham introduce a notion which is very relevant for defeasible argumentation, viz. that of an argument structure.

DEFINITION 57. (Argument structures.) A set T of arguments is an argument structure if T satisfies the following conditions:

1. The set of `base facts' (which roughly are the premises) is in T;
2. Of every argument in T, all its subarguments are in T;
3. The set of conclusions of arguments in T is deductively closed and consistent.

Note that the notion of a `closed' set of arguments that we used above in Pollock's Definition 47 satisfies the first two but not the third of these conditions. Note also that, although argument structures are closed under monotonic rules, they are not closed under defeasible rules. Lin & Shoham then reformulate existing nonmonotonic logics in terms of monotonic and nonmonotonic inference rules, and show how the alternative


sets of conclusions of these logics can be captured in terms of argument structures with certain completeness properties. Bondarenko et al. [1997] remark that structures with these properties are very similar to their stable extensions. The claim that existing nonmonotonic logics can be captured by an argument system is an important one, and Lin & Shoham were among the first to make it. The remainder of this section is therefore devoted to showing with an example how Lin & Shoham accomplish this, viz. for default logic [Reiter, 1980].

In default logic (see also Subsection 2.1), a default theory is a pair Δ = (W, D), where W is a set of first-order formulas, and D a set of defaults. Each default is of the form A : B1, ..., Bn / C, where A, the Bi and C are first-order formulas. Informally, a default reads as `If A is known, and B1, ..., Bn are consistent with what is known, then C may be inferred'. An extension of a default theory is any set of formulas E satisfying the following conditions: E = ∪∞i=0 Ei, where

E0 = W;
Ei+1 = Th(Ei) ∪ {C | A : B1, ..., Bn / C ∈ D, where A ∈ Ei and ¬B1, ..., ¬Bn ∉ E}

We now discuss the correspondence between default logic and argument systems by providing a global outline of the translation and proof. Lin & Shoham perform the translation as follows. Let Δ = (W, D) be a closed default theory. Define R(Δ) to be the set of the following rules:

1. True is a base fact.
2. If A ∈ W, then A is a base fact of R(Δ).
3. If A1, ..., An and B are first-order sentences and B is a consequence of A1, ..., An in first-order logic, then A1, ..., An → B is a monotonic rule.
4. If A is a first-order sentence, then ¬A → ab(A) is a monotonic rule.
5. If A : B1, ..., Bn / C is a default in D, then A, ¬ab(B1), ..., ¬ab(Bn) → C is a monotonic rule.
6. If B is a first-order sentence, then True ⇒ ¬ab(B) is a nonmonotonic rule.

Lin & Shoham proceed by introducing the concept of DL-complete argument structures.
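The extension equations of default logic admit a direct, if naive, check in the propositional case. The sketch below is our simplification, not part of the translation: literals are strings with `-` for negation, full first-order Th-closure is omitted, and a guessed E is tested against Reiter's fixpoint for the standard bird/flies default.

```python
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def is_extension(W, defaults, E_guess):
    """defaults: list of (prerequisite, justifications, consequent).
    Iterate E_{i+1} = E_i plus consequents of applicable defaults, where a
    default is applicable if its prerequisite is derived and no justification
    is contradicted by the guessed extension; then compare with the guess."""
    E_i = set(W)
    while True:
        E_next = set(E_i)
        for pre, justs, concl in defaults:
            if pre in E_i and all(neg(j) not in E_guess for j in justs):
                E_next.add(concl)
        if E_next == E_i:
            return E_i == set(E_guess)
        E_i = E_next

W = {"bird"}
D = [("bird", ["flies"], "flies")]       # bird : flies / flies
print(is_extension(W, D, {"bird", "flies"}))   # True
print(is_extension(W, D, {"bird"}))            # False: not a fixpoint
```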


DEFINITION 58. An argument structure T of R(Δ) is said to be DL-complete if for any first-order sentence A, either ab(A) or ¬ab(A) is in W(T).

Thus, a DL-complete argument structure is explicit about the abnormality of every first-order sentence. For DL-complete argument structures, the following lemma is established.

LEMMA 59. If T is a DL-complete argument structure of R(Δ), then for any first-order sentence A, ab(A) ∈ W(T) iff ¬A ∈ W(T).

On the basis of this result, Lin & Shoham are able to establish the following correspondence between default logic and argument systems.

THEOREM 60. Let E be a consistent set of first-order sentences. E is an extension of Δ iff there is a DL-complete argument structure T of R(Δ) such that E is the restriction of W(T) to the set of first-order sentences.

This theorem is proven by constructing extensions for given argument structures and vice versa. If E is an extension of Δ, Lin & Shoham define T as the set of arguments with all nodes in E′, where

E′ = E ∪ {ab(B) | ¬B ∈ E} ∪ {¬ab(B) | ¬B ∉ E}

and prove that W(T) = E′. Conversely, for a DL-complete argument structure T of R(Δ), Lin & Shoham prove that the first-order restriction E of W(T) is a default extension of Δ. This is proven by induction on the definition of an extension.

Two features of the translation are worth noticing. First, default logic makes a distinction between meta-logical default rules and first-order logic, while argument systems do not. Second, the notion of groundedness of default extensions corresponds to that of an argument in argument systems, and the notion of fixed points in default logic corresponds to that of DL-completeness of argument structures. Lin & Shoham further show that, for normal default theories, the translation can be performed without second-order predicates, such as ab. This result, however, falls beyond the scope of this chapter.

5.5 Vreeswijk's Abstract Argumentation Systems

Like the BDKT approach and Lin & Shoham [1989], Vreeswijk [1993a; 1997] also aims to provide an abstract framework for defeasible argumentation. His framework builds on that of Lin & Shoham, but contains the main elements that are missing in their system, namely, notions of conflict and defeat between arguments. Like Lin & Shoham, Vreeswijk assumes an unspecified logical language L, only assumed to contain the symbol ⊥, denoting `falsum' or `contradiction', and an unspecified set of monotonic and


nonmonotonic inference rules (which Vreeswijk calls `strict' and `defeasible'). This also makes his system an abstract framework rather than a particular system. A point in which Vreeswijk's work differs from Lin & Shoham is that Vreeswijk's inference rules are not domain specific but general logical principles.

DEFINITION 61. (Rule of inference.) Let L be a language.

1. A strict rule of inference is a formula of the form φ1, ..., φn → φ, where φ1, ..., φn is a finite, possibly empty, sequence in L and φ is a member of L.
2. A defeasible rule of inference is a formula of the form φ1, ..., φn ⇒ φ, where φ1, ..., φn is a finite, possibly empty, sequence in L and φ is a member of L.

A rule of inference is a strict or a defeasible rule of inference. Another aspect taken from Lin & Shoham is that in Vreeswijk's framework, arguments can also be formed by chaining inference rules into trees.

DEFINITION 62. (Argument.) Let R be a set of rules. An argument has premises, a conclusion, sentences (or propositions), assumptions, subarguments, top arguments, a length, and a size. These are abbreviated by corresponding prefixes. An argument σ is

1. A member φ of L; in that case,

prem(φ) = {φ}, conc(φ) = φ, sent(φ) = {φ}, asm(φ) = ∅, sub(φ) = {φ}, top(φ) = {φ}, length(φ) = 1, and size(φ) = 1;

or

2. A formula of the form σ1, ..., σn → φ, where σ1, ..., σn is a finite, possibly empty, sequence of arguments such that conc(σ1) = φ1, ..., conc(σn) = φn for some rule φ1, ..., φn → φ in R, and φ ∉ sent(σ1) ∪ ... ∪ sent(σn); in that case,

prem(σ) = prem(σ1) ∪ ... ∪ prem(σn),
conc(σ) = φ,
sent(σ) = sent(σ1) ∪ ... ∪ sent(σn) ∪ {φ},
asm(σ) = asm(σ1) ∪ ... ∪ asm(σn),
sub(σ) = sub(σ1) ∪ ... ∪ sub(σn) ∪ {σ},
top(σ) = {τ1, ..., τn → φ | τ1 ∈ top(σ1), ..., τn ∈ top(σn)} ∪ {σ},
length(σ) = max{length(σ1), ..., length(σn)} + 1, and
size(σ) = size(σ1) + ... + size(σn) + 1;


or

3. A formula of the form σ1, ..., σn ⇒ φ, where σ1, ..., σn is a finite, possibly empty, sequence of arguments such that conc(σ1) = φ1, ..., conc(σn) = φn for some rule φ1, ..., φn ⇒ φ in R, and φ ∉ sent(σ1) ∪ ... ∪ sent(σn); for assumptions we have

asm(σ) = asm(σ1) ∪ ... ∪ asm(σn) ∪ {φ};

premises, conclusions, and other attributes are defined as in (2).

Arguments of type (1) are atomic arguments; arguments of type (2) and (3) are composite arguments. Thus, atomic arguments are language elements. An argument σ is said to be in contradiction if conc(σ) = ⊥. An argument is defeasible if it contains at least one defeasible rule of inference; otherwise it is strict. Unlike Lin & Shoham, Vreeswijk assumes an ordering on arguments, indicating their difference in strength (on which more below). As for conflicts between arguments, a difference from all other systems of this section (except [Verheij, 1996]; see below in Subsection 5.10) is that a counterargument is in fact a set of arguments: Vreeswijk defines a set of arguments Σ to be incompatible with an argument σ iff the conclusions of Σ ∪ {σ} give rise to a strict argument for ⊥. Sets of arguments are needed because the language in Vreeswijk's framework is unspecified and therefore lacks the expressive power to `recognise' inconsistency. The consequence of this lack of expressiveness is that a set of arguments σ1, ..., σn that is incompatible with σ cannot be joined into one argument that contradicts, or is inconsistent with, σ. Therefore, it is necessary to take sets of arguments into account.

Vreeswijk has no explicit notion of undercutting attacks; he claims that this notion is implicitly captured by his notion of incompatibility, viz. as arguments for the denial of a defeasible conditional used by another argument. This requires some extra assumptions on the language of an abstract argumentation system, viz. that it is closed under negation (¬), conjunction (∧), material implication (⊃), and defeasible implication (>). For the latter connective Vreeswijk defines the following defeasible inference rule:

φ, φ > ψ ⇒ ψ

With these extra language elements, it is possible to express rules of inference (which are meta-linguistic notions) in the object language. Meta-level rules using → (strict rule of inference) and ⇒ (defeasible rule of inference) are then represented by the corresponding object-language implication symbols ⊃ and >. Under this condition, Vreeswijk claims to be able to define rebutting and undercutting attackers in a formal fashion. For example, let σ and τ be arguments in Vreeswijk's system with conclusions φ and ψ, respectively. Let φ1, ..., φn ⇒ φ be the top rule of σ.


Rebutting attack. If ψ = ¬φ, then Vreeswijk calls τ a rebutting attacker of σ. Thus, the conclusion of a rebutting attacker contradicts the conclusion of the argument it attacks.

Undercutting attack. If ψ = ¬(φ1 ∧ ... ∧ φn > φ), i.e. if ψ is the negation of the last rule of σ stated in the object language, then τ is said to be an undercutting attacker of σ. Thus, the conclusion of an undercutting attacker contradicts the last inference of the argument it attacks.

Vreeswijk's notion of defeat rests on two basic concepts, viz. the above-defined notion of incompatibility and the notion of undermining. An argument is said to undermine a set of arguments if it dominates at least one element of that set. Formally, a set of arguments Σ is undermined by an argument σ if τ < σ for some τ ∈ Σ. If a set of arguments is undermined by another argument, it cannot uphold or maintain all of its members in case of a conflict. Vreeswijk then defines the notion of a defeater as follows.

DEFINITION 63. (Defeater.) Let P be a base set, and let σ be an argument. A set of arguments Σ is a defeater of σ if it is incompatible with σ and not undermined by it; in this case σ is said to be defeated by Σ, and Σ defeats σ. Σ is a minimal defeater of σ if none of its proper subsets defeats σ.

As for the assessment of arguments, Vreeswijk's declarative definition (which he says is about `warrant') is similar to Pollock's definition of a defeat status assignment: both definitions have an explicit recursive structure, and both lead to multiple status assignments in case of irresolvable conflicts. However, Vreeswijk's status assignments cannot be partial, for which reason Vreeswijk's definition is closer to stable semantics than to preferred semantics.

DEFINITION 64. (Defeasible entailment.) Let P be a base set. A relation |∼ between P and arguments based on P is a defeasible entailment relation if, for every argument σ based on P, we have P |∼ σ (σ is in force on the basis of P) if and only if

1. The set P contains σ; or
2. For some arguments σ1, ..., σn we have P |∼ σ1, ..., σn and σ1, ..., σn → σ; or
3. For some arguments σ1, ..., σn we have P |∼ σ1, ..., σn and σ1, ..., σn ⇒ σ, and every set of arguments Σ such that P |∼ Σ does not defeat σ.

In the Nixon Diamond of Example 3 this results in `the Quaker argument is in force iff the Republican argument is not in force'. To deal with such


circularities Vreeswijk defines, for every |∼ satisfying the above definition, an extension

Σ(∞) = {σ | P |∼ σ}

On the basis of Definition 64 it can be proven that Σ(∞) is stable, i.e., it can be proven that σ ∉ Σ(∞) iff Σ′ defeats σ for some Σ′ ⊆ Σ(∞). With equally strong conflicting arguments, as in the Nixon Diamond, this results in multiple stable extensions (cf. Definition 38). Just as in Dung's stable semantics, in Vreeswijk's system examples with odd defeat loops might have no extensions. However, an exception holds for the special case of self-defeating arguments, since Definition 63 implies that every argument of which the conclusion strictly implies ⊥ is defeated by the empty set.

Argumentation sequences

Vreeswijk extensively studies various other characterisations of defeasible argumentation. Among other things, he develops the notion of an `argumentation sequence'. An argumentation sequence can be regarded as a sequence Σ1 → Σ2 → ... → Σn → ... of Lin & Shoham's [1989] argument structures, but without the condition that these structures are closed under deduction. Each following structure is constructed by applying an inference rule to the arguments in the preceding structure. An important addition to Lin & Shoham's notion is that a newly constructed argument is only appended to the sequence if it survives all counterattacks from the argument structure developed thus far. Thus the notion of an argumentation sequence embodies, like Pollock's notion of `justification', the idea of partial computation, i.e., of assessing arguments relative to the inferences made so far. Vreeswijk's argumentation sequences also resemble BDKT's procedure for computing admissible semantics. The difference is that BDKT adopt arguments that are defended (admissible semantics), while Vreeswijk's argumentation sequences adopt arguments that are not defeated (stable semantics). Vreeswijk also develops a procedural version of his framework in dialectical style. It will be discussed below in Section 6.

Plausible reasoning

Vreeswijk further discusses a distinction between two kinds of nonmonotonic reasoning, `defeasible' and `plausible' reasoning. According to him, the above definition of defeasible entailment captures defeasible reasoning, which is unsound (i.e., defeasible) reasoning from firm premises, as in `typically birds fly, Tweety is a bird, so presumably Tweety flies'. Plausible
reasoning, by contrast, is sound (i.e., deductive) reasoning from uncertain premises, as in `all birds fly (we think), Tweety is a bird, so Tweety flies (we think)' [Rescher, 1976]. The difference is that in the first case a default proposition is accepted categorically, while in the second case a categorical proposition is accepted by default. In fact, Vreeswijk would regard reasoning with ordered premises, as studied in many nonmonotonic logics, not as defeasible but as plausible reasoning. One element of this distinction is that for defeasible reasoning the ordering on arguments is not part of the input theory, reflecting priority relations between, or degrees of belief in, premises, but a general ordering of types of arguments, such as `deductive arguments prevail over inductive arguments' and `statistical inductive arguments prevail over generic inductive arguments'. Accordingly, Vreeswijk assumes that the ordering on arguments is the same for all sets of premises (although it is relative to a set of inference rules). Vreeswijk formalises plausible reasoning independently of defeasible reasoning, with the possibility of defining input orderings on the premises, and he then combines the two formal treatments. To our knowledge, Vreeswijk's framework is unique in treating these two types of reasoning in one formalism as distinct forms of reasoning; usually the two forms are regarded as alternative ways to look at the same kind of reasoning. Evaluating Vreeswijk's framework, we can say that it pays little attention to the details of comparing arguments and that, like Pollock but in contrast to BDKT, it formalises only one type of defeasible consequence, but that it is philosophically well-motivated, and quite detailed with respect to the structure of arguments and the process of argumentation.

5.6 Simari & Loui

Simari & Loui [1992] present a declarative system for defeasible argumentation that combines ideas of Pollock [1987] on the interaction of arguments with ideas of Poole [1985] on specificity and ideas of Loui [1987] on defaults as two-place meta-linguistic rules. Simari & Loui divide the premises into a set of contingent first-order formulas KC, a set of necessary first-order formulas KN, and a set of one-directional default rules Δ, e.g.

KC = {P(a)}
KN = {∀x. P(x) → B(x)}
Δ = {B(x) > F(x), P(x) > ¬F(x)}.

Note that Simari & Loui's default rules are not three-place like Reiter's defaults, but two-place. The set of ground instances of Δ, i.e., of defeasible rules without variables, is denoted by Δ↓. The notion of argument that Simari & Loui maintain is somewhat uncommon:


DEFINITION 65. (Arguments.) Given a context K = KN ∪ KC and a set of defeasible rules Δ, we say that a subset T of Δ↓ is an argument for h ∈ SentC(L) in the context K, denoted by ⟨T, h⟩K, if and only if
1. K ∪ T |∼ h;
2. K ∪ T |∼ ⊥ does not hold;
3. there is no T′ ⊂ T such that K ∪ T′ |∼ h.
An argument ⟨T, h1⟩K is a subargument of an argument ⟨S, h2⟩K iff T ⊆ S. That K ∪ T |∼ h means that h is derivable from K ∪ T with first-order inferences applied to first-order formulas and modus ponens applied to defaults. Thus, an argument T is a set of ground instances of defeasible rules containing sufficient rules to infer h (1), containing no rules irrelevant for inferring h (3), and not making it possible to infer ⊥ (2). This notion of argument is somewhat uncommon because it does not refer to a tree or chain of inference rules. Instead, Definition 65 merely demands that an argument is an unordered collection of rules that together imply a certain conclusion. Simari & Loui define conflict between arguments as follows. An argument ⟨T, h1⟩K counterargues an argument ⟨S, h2⟩K iff the latter has a subargument ⟨S′, h⟩K such that ⟨T, h1⟩K disagrees with ⟨S′, h⟩K, i.e., K ∪ {h1, h} ⊢ ⊥. Arguments are compared with Poole's [1985] definition of specificity: an argument A defeats an argument B iff A disagrees with a subargument B′ of B and A is more specific than B′. Note that this allows for subargument defeat; this is necessary since Simari & Loui's definition of the status of arguments is not explicitly recursive. In fact, they use Pollock's theory of level-n arguments. Since they exclude self-defeating arguments by definition, they can use the version of Definition 11. An important component of Simari & Loui's system is an operator that, of all the conclusions that can be argued, returns those supported by level-k arguments. Simari & Loui prove that the conclusions returned both at level k and at level k + 1 are justified.
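Definition 65 amounts to three executable checks. Below is a toy paraphrase of ours (not Simari & Loui's calculus): rules are plain strings, and `derives` and `consistent` are assumed callables standing in for K ∪ T |∼ h and the consistency test.

```python
from itertools import combinations

def is_argument(t, h, derives, consistent):
    """T argues for h iff (1) K u T derives h, (2) K u T is consistent,
    and (3) no proper subset of T already derives h (minimality)."""
    if not derives(t, h) or not consistent(t):
        return False
    return not any(derives(set(sub), h)
                   for size in range(len(t))
                   for sub in combinations(sorted(t), size))

# Toy derivability for the bird example: only the rule 'B>F' yields F.
derives = lambda t, h: h == 'F' and 'B>F' in t
consistent = lambda t: not {'B>F', 'P>-F'} <= t  # F and -F together clash
```

With these stand-ins, `{'B>F'}` qualifies as an argument for F, while adding the conflicting rule `'P>-F'` fails the consistency and minimality clauses.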
The main theorem of the paper states that the set of justified conclusions is uniquely determined, and that repeated application of the operator will bring us to that set. A strong point of Simari & Loui's approach is that it combines the ideas of specificity (Poole) and level-n arguments (Pollock) into one system. Another strong point of the paper is that it presents a convenient calculus of arguments that possesses elegant mathematical properties. Finally, Simari & Loui sketch an interesting architecture for implementation, which has a dialectical form (see below, Section 6, and, for a full description, [Simari et al., 1994; Garcia et al., 1998]). However, the system also has some limitations. Most of them are addressed by Prakken & Sartor [1996; 1997b], to be discussed next.
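The fixpoint flavour of this theorem can be illustrated on an abstract defeat graph. The sketch below is our generic "iterate until nothing changes" computation in the grounded style, not the paper's own operator; it shows the reinstatement effect that such iteration produces.

```python
def justified_arguments(args, defeats):
    """Iterate: an argument is in the next set when each of its defeaters
    is itself defeated by the current set; stop at the first fixpoint."""
    current = set()
    while True:
        nxt = {a for a in args
               if all(any((c, b) in defeats for c in current)
                      for b in args if (b, a) in defeats)}
        if nxt == current:
            return current
        current = nxt

# Reinstatement chain: a defeats b, b defeats c, so a reinstates c.
chain = justified_arguments({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})
```

Starting from the empty set, the iteration first admits the undefeated `a`, then `c` once `b` is defeated, and then stabilises.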


5.7 Prakken & Sartor

Inspired by legal reasoning, Prakken & Sartor [1996; 1997b] have developed an argumentation system that combines the language (but not the rest) of default logic with the grounded semantics of the BDKT approach.14 Actually, Prakken & Sartor originally used the language of extended logic programming, but Prakken [1997] generalised the system to default logic's language. Below we present the latter version. The main contributions to defeasible argumentation are a study of the relation between rebutting and assumption attack, and a formalisation of argumentation about the criteria for defeat. The use of default logic's language and grounded semantics makes Prakken & Sartor's system rather similar to Simari & Loui's. However, as just noted, they extend and revise it in a number of respects, to be indicated in more detail below. As for the logical language, the premises are divided into factual knowledge F, a set of first-order formulas subdivided into the necessary facts Fn and the contingent facts Fc, and defeasible knowledge Δ, consisting of Reiter-defaults. The set F is assumed to be consistent. Prakken & Sartor write defaults as follows:

d: φ1 ∧ … ∧ φj ∧ ∼φk ∧ … ∧ ∼φn ⇒ ψ

where d, a term, is the informal name of the default, and each φi and ψ is a first-order formula. The part ∼φk ∧ … ∧ ∼φn corresponds to the middle part of a Reiter-default. The symbol ∼ can be informally read as `not provable that'. For each ∼φi in a default, ¬φi is called an assumption of the default. The language is defined such that defaults cannot be nested, nor combined with other formulas. Arguments are, as in [Simari & Loui, 1992], chains of defaults `glued' together by first-order reasoning. More precisely, consider the set R consisting of all valid first-order inference rules plus the following rule of defeasible modus ponens (DMP):

d: φ0 ∧ … ∧ φj ∧ ∼φk ∧ … ∧ ∼φm ⇒ φn;   φ0 ∧ … ∧ φj
─────────────────────────────────────────────────
φn

where all φi are first-order formulas. Note that DMP ignores a default's assumptions; the idea is that if such an assumption is untenable, this will be reflected by a successful attack on the argument using the default. An argument is defined as follows.

DEFINITION 66. (Arguments.) Let Γ be any default theory (Fc ∪ Fn ∪ Δ). An argument based on Γ is a sequence of distinct first-order formulas and/or ground instances of defaults [φ1, …, φn] such that for all φi:
- φi ∈ Γ; or
- there exists an inference rule ψ1, …, ψm / φi in R such that ψ1, …, ψm ∈ {φ1, …, φi−1}.
For any argument A:
- φ ∈ A is a conclusion of A iff φ is a first-order formula;
- φ ∈ A is an assumption of A iff φ is an assumption of a default in A;
- A is strict iff A does not contain any default; A is defeasible otherwise.
The set of conclusions of an argument A is denoted by CONC(A) and the set of its assumptions by ASS(A).

14. A forerunner of this system was presented in [Prakken, 1993].

Note that, unlike in Simari & Loui, arguments are not assumed to be consistent. Here is an example of an argument:

A = [a, r1: a ∧ ∼b ⇒ c, c, a ∧ c, r2: a ∧ c ⇒ d, d, d ∨ e]

CONC(A) = {a, c, a ∧ c, d, d ∨ e} and ASS(A) = {¬b}. The presence of assumptions in a rule gives rise to two kinds of conflict between arguments, conclusion-to-conclusion attack and conclusion-to-assumption attack.

DEFINITION 67. (Attack.) Let A and B be two arguments. A attacks B iff
1. CONC(A) ∪ CONC(B) ∪ Fn ⊢ ⊥; or
2. CONC(A) ∪ Fn ⊢ ¬φ for some φ ∈ ASS(B).

Prakken & Sartor's notion of defeat among arguments is built up from two other notions, `rebutting' and `undercutting' an argument. An argument A rebuts an argument B iff A conclusion-to-conclusion attacks B and either A is strict and B is defeasible, or A's default rules involved in the conflict have no lower priority than B's defaults involved in the conflict. Identifying the involved defaults and applying the priorities to them requires some subtleties, for which the reader is referred to Prakken & Sartor [1996; 1997b] and Prakken [1997]. The source of the priorities will be discussed below. An argument A undercuts an argument B precisely in the case of the second kind of conflict (attack on an assumption). Note that it is not necessary that the default(s) responsible for the attack on the assumption have no lower priority than the default containing the assumption.
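Definition 67's two clauses can be made concrete at the propositional level. In this sketch of ours, literals are strings with '-' for negation, so full first-order derivability collapses to literal membership; the argument encoding (dicts with 'conc' and 'ass') is a hypothetical simplification.

```python
def neg(l):
    """Negate a literal written with a '-' prefix."""
    return l[1:] if l.startswith('-') else '-' + l

def attacks(a, b, fn=frozenset()):
    """Definition 67, literal-level: (1) conclusion-to-conclusion attack if
    the pooled conclusions plus Fn contain a clash; (2) conclusion-to-
    assumption attack if A (with Fn) concludes the negation of one of B's
    assumptions."""
    pool = a['conc'] | b['conc'] | fn
    clash = any(neg(l) in pool for l in pool)
    on_assumption = any(neg(phi) in (a['conc'] | fn) for phi in b['ass'])
    return clash or on_assumption

# B uses a rule with a weak-negation premise, so it carries assumption -b.
B = {'conc': {'a', 'c'}, 'ass': {'-b'}}
A = {'conc': {'b'}, 'ass': set()}    # concludes b: attacks B's assumption
C = {'conc': {'-c'}, 'ass': set()}   # concludes -c: clashes with B's c
```

Note the asymmetry: A attacks B via B's assumption, while B does not attack A at all.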
Note also that Prakken & Sartor's undercutters capture a different situation than Pollock's: their undercutters attack an explicit non-provability assumption of another argument (in Section 3 called `assumption attack'), while Pollock's undercutters deny the relation between premises and conclusion in a non-deductive argument. Prakken & Sartor's notion of defeat also differs from that of Pollock [1995]. An inessential difference is that their notion allows for `subargument defeat';
this is necessary since their definition of the status of arguments is not explicitly recursive (cf. Subsection 4.1). More importantly, Prakken & Sartor regard undercutting defeat as prior to rebutting defeat.

DEFINITION 68. (Defeat.) An argument A defeats an argument B iff A = [] and B attacks itself, or else if
- A undercuts B; or
- A rebuts B and B does not undercut A.

As mentioned above in Subsection 4.1, the empty argument serves to adequately deal with self-defeating arguments. By definition the empty argument is not defeated by any other argument. The rationale for the precedence of undercutters over rebutters is explained by the following example.

EXAMPLE 69. Consider
r1: ∼¬(Brutus is innocent) ⇒ Brutus is innocent
r2: φ ⇒ ¬(Brutus is innocent)
Assume that for some reason r2 has no priority over r1 and consider the arguments [r1] and [..., r2].15 Then, although [r1] rebuts [..., r2], [r1] does not defeat [..., r2], since [..., r2] undercuts [r1]. So [..., r2] strictly defeats [r1]. Why should this be so? According to Prakken & Sartor, the crux is to regard the assumption of a rule as one of its conditions (albeit of a special kind) for application. Then the only way to accept both rules is to believe that Brutus is not innocent: in that case the condition of r1 is not satisfied. By contrast, if it is believed that Brutus is innocent, then r2 has to be rejected, in the sense that its conditions are believed but its consequent is not (`believing an assumption' here means not believing its negation). Note that this line of reasoning does not naturally apply to undercutters Pollock-style, which might explain why in Pollock [1995] rebutting and undercutting defeaters stand on equal footing. Finally, we come to Prakken & Sartor's definition of the status of arguments. As remarked above, they use the grounded semantics of Definition 7. However, they change it in one important respect.
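The precedence of undercutting over rebutting in Definition 68 is easy to state as code. Our sketch below omits the empty-argument clause and hand-encodes Example 69's attack relations (only [r1] rebuts, only [..., r2] undercuts):

```python
def defeats(a, b, undercuts, rebuts):
    """Definition 68 without the empty-argument clause: undercutting always
    succeeds, while rebutting succeeds only against a non-undercutter."""
    return undercuts(a, b) or (rebuts(a, b) and not undercuts(b, a))

UNDERCUTS = {('r2', 'r1')}   # [..., r2] undercuts [r1]
REBUTS = {('r1', 'r2')}      # [r1] rebuts [..., r2]
under = lambda x, y: (x, y) in UNDERCUTS
rebut = lambda x, y: (x, y) in REBUTS

r2_wins = defeats('r2', 'r1', under, rebut)  # True: the undercut goes through
r1_wins = defeats('r1', 'r2', under, rebut)  # False: blocked by the undercut
```

So [..., r2] strictly defeats [r1], exactly the outcome argued for in Example 69.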
This has to do with the origin of the default priorities with which conflicting arguments are compared. In artificial intelligence research the question where these priorities can be found is usually not treated as a matter of common-sense reasoning. Either a fixed ordering is simply assumed, or use is made of a specificity ordering, read off from the syntax or semantics of an input theory. However, Prakken & Sartor want to capture the fact that in many domains of common-sense reasoning, like the law or bureaucracies, priority issues are part of the domain theory. This even holds for specificity: although checking which argument is more specific may be a logical matter, deciding to prefer the most specific argument is an extra-logical decision. Besides varying from domain to domain, the priority sources can also be incomplete or inconsistent, in the same way as `ordinary' domain information can be. In other words, reasoning about priorities is defeasible reasoning. (This is why our example of the introduction contains a priority argument, viz. A's use of (9) and (10).) For these reasons, Prakken & Sartor want the status of arguments not only to depend on the priorities, but also to determine them. Accordingly, priority conclusions can be defeasibly derived within their system in the same way as conclusions like `Tweety flies'.16 To formalise this, Prakken & Sartor need a few technicalities. First, the first-order part of the language is extended with a special two-place predicate ≺. That x ≺ y means that y has priority over x. The variables x and y can be instantiated with default names. This new predicate symbol should denote a strict partial order on the set of defaults, which is assumed by the metatheory of the system. For this reason, the set Fn must contain the axioms of a strict partial order:

transitivity: ∀x, y, z. x ≺ y ∧ y ≺ z → x ≺ z
asymmetry: ∀x, y. x ≺ y → ¬(y ≺ x)

For simplicity, some restrictions on the syntactic form of priority expressions are assumed. Fc may not contain any priority expressions, while in the defaults priority expressions may only occur in the consequent, and only in the form of conjunctions of literals (a literal is an atomic formula or a negated atomic formula). This excludes, for instance, disjunctive priority expressions. Next, the rebut and defeat relations must be made relative to an ordering relation that might vary during the reasoning process.

15. We abbreviate arguments by omitting their conclusions and only giving the names of their defaults. Furthermore, we leave implicit that r2's antecedent φ is derived by a subargument of possibly several steps.
DEFINITION 70. For any set S of arguments:
- <S = {r < r′ | r ≺ r′ is a conclusion of some A ∈ S};
- A (strictly) S-defeats B iff, assuming the ordering <S on Δ, A (strictly) defeats B.
The idea is that when it must be determined whether an argument is acceptable with respect to a set S of arguments, the relevant defeat relations are verified relative to the priority conclusions drawn by the arguments in S.

16. For some non-argument-based nonmonotonic logics that deal with this phenomenon, see Grosof [1993], Brewka [1994a; 1996], Prakken [1995] and Hage [1997]; see also Gordon's [1995] use of [Geffner & Pearl, 1992].
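Definition 70 simply reads the ordering off the priority conclusions of the arguments currently in S. A minimal sketch of ours, under an assumed encoding in which each argument lists its priority conclusions as (rule, rule) pairs:

```python
def s_ordering(s):
    """<_S: every priority conclusion r < r' drawn by some argument in S."""
    return {pair for arg in s for pair in arg.get('prec', set())}

def s_defeats(a, b, s, defeats_under):
    """A S-defeats B iff A defeats B under the ordering <_S."""
    return defeats_under(a, b, s_ordering(s))

# One argument in S concludes r0 < r3; the derived ordering prefers r3.
S = [{'prec': {('r0', 'r3')}}, {'prec': set()}]
```

The point of the indirection is that `defeats_under` is evaluated against an ordering that changes as S grows, so the defeat relation itself is defeasible.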


DEFINITION 71. An argument A is acceptable with respect to a set S of arguments iff all arguments S-defeating A are strictly S-defeated by some argument in S.
Note that this definition also replaces the second occurrence of `defeat' in Definition 6 with strict defeat. This is because otherwise it cannot be proven that no two justified arguments are in conflict with each other. Prakken & Sartor then apply the construction of Proposition 9 with Definition 71. They prove that the resulting set of justified arguments is unique and conflict-free and that, when S is this set, the ordering <S is a strict partial order. They also prove that if an argument is justified, all its subarguments are justified. We illustrate the system with the following example.

EXAMPLE 72. Consider an input theory with empty Fc, with Fn containing the above axioms for ≺, and with Δ containing the following defaults.

r0: ⇒ a
r1: a ⇒ b
r2: b ⇒ c
r3: ⇒ ¬a
r4: ⇒ r0 ≺ r3
r5: ⇒ r3 ≺ r0
r6: ⇒ r5 ≺ r4

The set of justified arguments is constructed as follows (for simplicity we ignore combinations of the listed arguments).

F0 = ∅
