PROCEEDINGS OF THE THIRD SCANDINAVIAN LOGIC SYMPOSIUM
Edited by
Stig KANGER, Professor of Philosophy, University of Uppsala, Sweden
1975
NORTH-HOLLAND PUBLISHING COMPANY, AMSTERDAM / OXFORD
AMERICAN ELSEVIER PUBLISHING COMPANY, INC., NEW YORK
© NORTH-HOLLAND PUBLISHING COMPANY, 1975. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
Library of Congress Catalog Card Number 74-80113
North-Holland ISBN: 0 7204 2200 0 / 0 7204 2283 3
American Elsevier ISBN: 0 444 10679 0
Published by:
NorthHolland Publishing Company Amsterdam NorthHolland Publishing Company, Ltd. Oxford
Sole distributors for the U.S.A. and Canada: American Elsevier Publishing Company, Inc. 52 Vanderbilt Avenue New York, N.Y. 10017
PRINTED IN EAST GERMANY
PREFACE

The Third Scandinavian Logic Symposium was held at the University of Uppsala, Sweden, April 9-11, 1973. About 70 persons attended the meeting and 18 papers were presented. The symposium was organized by a committee consisting of J. E. Fenstad, J. Hintikka, B. Mayoh and S. Kanger. It was sponsored by the Division of Logic, Methodology and Philosophy of Science of the International Union of History and Philosophy of Science, by the Swedish Natural Science Research Council, and by Fornanderska Fonden.

Uppsala, January 1974
Stig Kanger
QUANTIFIERS, GAMES AND INDUCTIVE DEFINITIONS Peter ACZEL University of Manchester, Manchester, Great Britain
0. Introduction

We give a game-theoretic interpretation of infinite strings of quantifiers (Qx_0 Qx_1 Qx_2 ...), where Q is any non-trivial monotone quantifier. These are shown to be closely related to certain inductive definitions. In the last section we give a generalisation of Moschovakis's characterisation, using his game quantifier, of the positive first-order inductive relations (see [5]). Finally we relate our work to an earlier construction in descriptive set theory, Kolmogorov's R-operator. The results in this paper were first announced in an unpublished paper 'Stage comparison theorems and game playing with inductive definitions' and a supplement 'The classical R-operator and games'.
0.1. NOTATION. If R is a set, R(u) means u ∈ R. ^nA denotes the set of n-tuples of elements of A. ⟨⟩ is the only element of ^0A. ^(ω)A = ∪_n ^nA. ^ωA is the set of all infinite sequences of elements of A. x, y, z, ... range over A; s, t range over ^(ω)A, and α, β range over ^ωA. Let α|n = ⟨α(0), ..., α(n-1)⟩ ∈ ^nA for α ∈ ^ωA and n ∈ ω.
1. Quantifiers
By a quantifier on the set A we mean a set Q of subsets of A. We sometimes prefer the notation Qx R(x) to R ∈ Q. Q is monotone if X ⊇ Y ∈ Q ⇒ X ∈ Q, and Q is non-trivial if Q ≠ ∅ and ∅ ∉ Q. Throughout this paper,
all quantifiers will be understood to be monotone and non-trivial, unless otherwise indicated. The usual quantifiers are ∃ = {X ⊆ A | X ≠ ∅} and ∀ = {A}. All quantifiers have certain properties in common with ∃ and ∀. In particular,

1.1. PROPERTY. If S is a sentence and R ⊆ A, then

Qz (S ∧ R(z)) ⇔ S ∧ Qz R(z),
Qz (S ∨ R(z)) ⇔ S ∨ Qz R(z).

The aim of this paper is to investigate a natural interpretation of infinite strings of quantifiers and relate it to some earlier ideas. Certain infinite strings of quantifiers have already been interpreted. For example

(∃x_0 ∃x_1 ...), (∀x_0 ∀x_1 ...), (∀x_0 ∃x_1 ∀x_2 ∃x_3 ...).

We shall give an interpretation to (Qx_0 Qx_1 ...) for any quantifier Q. This will generalise the three examples Q = ∃, ∀ and ∀∃, where ∀∃ is the quantifier on A × A given by

∀∃xy R(x, y) ⇔ ∀x ∃y R(x, y).
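For a finite set A these notions can be made concrete. The following sketch (the representation and all function names are ours, not the paper's) stores a quantifier as a set of subsets of A and evaluates Qx R(x) as membership of the truth set of R in Q, so that Property 1.1 can be checked by brute force.

```python
from itertools import combinations

def all_subsets(A):
    """Every subset of the finite set A, as frozensets."""
    return [frozenset(c) for r in range(len(A) + 1)
            for c in combinations(sorted(A), r)]

def exists_q(A):
    """The quantifier ∃ = {X ⊆ A : X ≠ ∅}."""
    return {X for X in all_subsets(A) if X}

def forall_q(A):
    """The quantifier ∀ = {A}."""
    return {frozenset(A)}

def holds(Q, A, R):
    """Qx R(x) ⇔ {x ∈ A : R(x)} ∈ Q."""
    return frozenset(x for x in A if R(x)) in Q
```

With these definitions, the first half of Property 1.1 reads `holds(Q, A, lambda x: S and R(x)) == (S and holds(Q, A, R))`, and can be tested exhaustively for any monotone non-trivial Q on a small A.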
More elaborate strings of quantifiers may be considered without extra effort except for notational complexity (see § 5). Using these strings, we can define a quantifier Q* on ^ωA by

Q*α R(α) ⇔ (Qx_0 Qx_1 ...) R(x_0, x_1, ...).

Also we can define quantifiers Q^∨ and Q^∧ on ^(ω)A by

Q^∨s R(s) ⇔ (Qx_0 Qx_1 ...) ∨_n R(x_0, ..., x_{n-1}),
Q^∧s R(s) ⇔ (Qx_0 Qx_1 ...) ∧_n R(x_0, ..., x_{n-1}).

For example, ∃* and ∀* are the usual quantifiers on ^ωA, while ∃^∨ and ∀^∧ are the usual quantifiers on ^(ω)A. More interestingly, ∀^∨ is the Souslin quantifier 𝒮:

𝒮s R(s) ⇔ ∀α ∨_n R(α|n),

and ∃^∧ is the classical 𝒜-quantifier:

𝒜s R(s) ⇔ ∃α ∧_n R(α|n).
Recently, Moschovakis has used the game quantifier ⅁ = (∀∃)^∨.

The dual Q̆ of a quantifier Q is given by

Q̆x R(x) ⇔ ¬Qx ¬R(x).

∃ and ∀ are duals. So are 𝒮 and 𝒜. Also the dual of ⅁ is (∃∀)^∧. One of the results of this paper will be that Q̆^∧ is always the dual of Q^∨.

2. Games
The key to our interpretation of (Qx_0 Qx_1 ...) is the following triviality.

2.1. Qz R(z) ⇔ (∃X ∈ Q)(∀z ∈ X) R(z).

Now

(∃X_0 ∈ Q)(∀z_0 ∈ X_0)(∃X_1 ∈ Q)(∀z_1 ∈ X_1) ...

has a natural interpretation in terms of games G(Q, R) for R ⊆ ^ωA between two players ∃ and ∀ who make alternate moves, starting with ∃, who always picks elements of Q, while ∀ always responds by picking an element from the last set chosen by ∃. As Q is non-trivial, each player can always move. A play of the game produces an infinite sequence X_0, z_0, X_1, z_1, ..., X_n, z_n, ... such that z_n ∈ X_n ∈ Q for all n. Such a play is a win for ∃ if R(z_0, z_1, ...); otherwise it is a win for ∀.

2.2. MAIN DEFINITION. (Qx_0 Qx_1 ...) R(x_0, x_1, ...) if player ∃ has a winning strategy in the game G(Q, R).

First notice that Q*, Q^∧ and Q^∨ as defined in § 1 are all non-trivial monotone quantifiers. Also, the special cases when Q is ∃, ∀ or ∀∃ agree with the earlier interpretations.
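When A is finite and the payoff is open (a play is a win for ∃ as soon as some finite prefix lands in a designated set of sequences), the Main Definition can be evaluated by backward recursion, since such a game is decided within a bounded number of rounds. A minimal sketch, with all names ours and the bounded-decision assumption made explicit:

```python
def exists_wins(Q, good, s=(), depth=4):
    """Does ∃ have a winning strategy in G(Q, R), where R holds of a play
    iff some finite prefix of it lies in `good`?  Assumption: the open
    payoff is decided within `depth` rounds, so the recursion is exact.
    Q is a collection of frozensets (subsets of a finite set A)."""
    if any(s[:n] in good for n in range(len(s) + 1)):
        return True                       # a winning prefix was reached
    if len(s) == depth:
        return False                      # open payoff missed: ∀ wins
    # ∃ chooses some X ∈ Q; ∀ answers with an arbitrary z ∈ X.
    return any(all(exists_wins(Q, good, s + (z,), depth) for z in X)
               for X in Q)
```

With Q = ∃ this reduces to ordinary reachability of `good`; with Q = ∀ it demands that every play hit `good`.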
Our first result appears obvious from the notation, but does require a proof.
2.3. THEOREM. Qz (Qz_0 Qz_1 ...) R(z, z_0, z_1, ...) ⇔ (Qz_0 Qz_1 ...) R(z_0, z_1, ...).

PROOF. Suppose that the right-hand side is true. Then player ∃ has a winning strategy σ in G(Q, R). Let X ∈ Q be the first move of ∃ following this strategy. Then to each move z ∈ X of player ∀, the strategy σ determines a strategy σ_z for ∃ in the game G(Q, R_z), where R_z(z_0, z_1, ...) ⇔ R(z, z_0, z_1, ...). Clearly, σ_z is a winning strategy, so that

(∃X ∈ Q)(∀z ∈ X)(Qz_0 Qz_1 ...) R_z(z_0, z_1, ...).

Hence by 2.1, the left-hand side is true. Conversely, suppose that the left-hand side is true. Then by 2.1, there is an X ∈ Q such that for each z ∈ X, ∃ has a winning strategy σ_z in the game G(Q, R_z). Now ∃ has the following winning strategy for G(Q, R): start by choosing X, and then, if ∀ chooses z ∈ X, follow the strategy σ_z. Hence the right-hand side holds.

If s, t are two sequences, let st denote the result of concatenating them. ⟨⟩ denotes the empty sequence.

2.4. COROLLARY.
(i) R(⟨⟩) ∨ Qz Q^∨s R(⟨z⟩s) ⇔ Q^∨s R(s),
(ii) R(⟨⟩) ∧ Qz Q^∧s R(⟨z⟩s) ⇔ Q^∧s R(s).
PROOF. (i) Immediate from 2.3 and Property 1.1.
(ii) As in (i), except use '∧' instead of '∨'.

If R ⊆ ^(ω)A, let

∨R = {α ∈ ^ωA | ∨_n R(α|n)},
∧R = {α ∈ ^ωA | ∧_n R(α|n)}.

Then

Q^∨s R(s) ⇔ Q*α ∨R(α),
Q^∧s R(s) ⇔ Q*α ∧R(α).

The games G(Q, ∨R) and G(Q, ∧R) are open and closed, respectively, in the terminology of Gale-Stewart [2], and hence, by a result of that paper, they are determined, i.e., one of the players has a winning strategy. Hence

2.5. THEOREM. ¬Q^∨s R(s) ⇔ W_∀(Q, ∨R) and ¬Q^∧s R(s) ⇔ W_∀(Q, ∧R),

where W_∃(Q, R) and W_∀(Q, R) mean that ∃ and ∀, respectively, have a winning strategy in the game G(Q, R).
3. Fans

If S ⊆ ^(ω)A and s ∈ ^(ω)A, let S_s = {z ∈ A | s⟨z⟩ ∈ S}. For more insight into the games G(Q, R) we need:

3.1. DEFINITION. S ⊆ ^(ω)A is a Q-fan if
(i) ⟨⟩ ∈ S,
(ii) s ∈ S ⇒ S_s ∈ Q.

We shall see that a Q-fan is essentially a strategy for ∃ in G(Q, R), while a Q̆-fan is a strategy for ∀ in the same game.
3.2. THEOREM.
(i) W_∃(Q, R) ⇔ (∃S)[S is a Q-fan & ∧S ⊆ R],
(ii) W_∀(Q, R) ⇔ (∃S)[S is a Q̆-fan & ∧S ⊆ ¬R],
where ¬R = ^ωA - R.

PROOF. Given a strategy σ for either player, let S be the set of sequences ⟨z_0, ..., z_{n-1}⟩ of possible moves of player ∀ when this strategy is being followed. If σ is a winning strategy for ∃, then clearly ∧S ⊆ R, and if it is a winning strategy for ∀, then ∧S ⊆ ¬R. Hence, in order to prove the implications ⇒ in (i) and (ii), it is sufficient to show that S is a Q-fan in (i) and a Q̆-fan in (ii).

So first let σ be a strategy for ∃. Trivially, ⟨⟩ ∈ S. If s ∈ S, then S_s is the set of possible moves of ∀ after ∃ has made his move according to his strategy, given the sequence s of moves of ∀. Hence S_s itself must be the set chosen by ∃, so that S_s ∈ Q, and hence S is a Q-fan.

Now let σ be a strategy for ∀. Again, ⟨⟩ ∈ S. Given s ∈ S, suppose that S_s ∉ Q̆. Then ¬S_s = A - S_s ∈ Q and hence is a possible move for ∃ after the sequence s of moves of ∀. If ∃ makes this move, the strategy σ determines an element z ∈ ¬S_s as ∀'s next move. So s⟨z⟩ ∉ S, but this contradicts the fact that s⟨z⟩ is a sequence of possible moves of ∀ following the strategy σ. Hence we must have S_s ∈ Q̆, so that S is a Q̆-fan.

For the converse implications, let S be a Q-fan with ∧S ⊆ R. Consider the following strategy for ∃: if ∀ has played a sequence of moves s ∈ S, then ∃ should play S_s as next move. Any response z ∈ S_s of ∀ still gives a sequence s⟨z⟩ ∈ S. If ∃ follows this strategy, then the moves of ∀ will form a sequence in ∧S ⊆ R. Hence it is a winning strategy for ∃. Thus (i) ⇐ is proved.

Finally, let S be a Q̆-fan with ∧S ⊆ ¬R. Consider the following strategy for ∀: if ∀ has played a sequence s ∈ S of moves, and X ∈ Q is the last move of ∃, then, as S_s ∈ Q̆, ¬S_s ∉ Q, so that X ⊄ ¬S_s, i.e., X ∩ S_s ≠ ∅. ∀'s strategy is to choose an element of X ∩ S_s for his next move. In following this strategy, ∀'s moves will form a sequence in ∧S ⊆ ¬R. Hence the strategy is a winning strategy for ∀, proving (ii) ⇐.
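On a finite, depth-truncated tree the two fan conditions of Definition 3.1 can be checked directly. The following sketch (representation and names ours) stores S as a set of tuples and Q as a collection of frozensets:

```python
def residue(S, s):
    """S_s = {z : s⟨z⟩ ∈ S}, the set of one-step extensions of s in S."""
    return frozenset(t[-1] for t in S
                     if len(t) == len(s) + 1 and t[:-1] == s)

def is_q_fan(S, Q, depth):
    """Conditions (i) and (ii) of Definition 3.1, checked on nodes of
    length < depth (the truncation level of the finite tree S)."""
    if () not in S:
        return False                     # (i): ⟨⟩ ∈ S
    return all(residue(S, s) in Q        # (ii): s ∈ S ⇒ S_s ∈ Q
               for s in S if len(s) < depth)
```

For Q = ∀ this says that S is the full binary (or |A|-ary) tree up to the truncation depth, matching the intuition that a ∀-fan leaves ∀ a completely free choice at every move.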
We have the following immediate corollaries.

3.3. COROLLARY. W_∀(Q, R) ⇔ W_∃(Q̆, ¬R).

3.4. THEOREM. Q^∨ is the dual of Q̆^∧.

This follows from 2.5 and 3.3.

3.5. THEOREM.

Q^∨s R(s) ⇔ (∃S)[S is a Q-fan & ∧S ⊆ ∨R],
Q^∧s R(s) ⇔ (∃S ⊆ R)[S is a Q-fan].

The last equivalence holds because

∧S ⊆ ∧R ⇔ S ⊆ R
when S is a Q-fan.

4. Inductive definitions

Moschovakis [5] has shown that the game quantifier ⅁ = (∀∃)^∨ can be expressed in terms of an inductive definition. We shall do the same for arbitrary Q^∨. An operator Φ on A, mapping subsets of A to subsets of A, is monotone if X ⊆ Y ⊆ A ⇒ Φ(X) ⊆ Φ(Y). The set inductively defined by such a Φ is ∩{S | Φ(S) ⊆ S}. More generally, Φ^∞(X) = ∩{S | X ∪ Φ(S) ⊆ S} if X ⊆ A.

4.1. THEOREM. If Φ is the operator on ^(ω)A given by Φ(S) = {s ∈ ^(ω)A | S_s ∈ Q} for S ⊆ ^(ω)A, then

Q^∨s R(ts) ⇔ t ∈ Φ^∞(R).

PROOF. Let X be the set of t ∈ ^(ω)A satisfying the left-hand side. We must show that X = Φ^∞(R). By 2.4 (i), R ∪ Φ(X) ⊆ X. Hence Φ^∞(R) ⊆ X. To show that X ⊆ Φ^∞(R), let t ∈ X. Then there is a Q-fan S such that ∧S ⊆ ∨{s | R(ts)}. Suppose that t ∉ Φ^∞(R). We shall obtain a contradiction by defining a sequence z_0, z_1, ... in ∧S such that t⟨z_0, z_1, ..., z_{n-1}⟩ ∉ Φ^∞(R) for all n.
Let

S_0 = {t⟨z⟩ | z ∈ S_{⟨⟩}}.

Then, as S_{⟨⟩} ∈ Q, t ∈ Φ(S_0), so that, as t ∉ Φ^∞(R), we must have S_0 ⊄ Φ^∞(R). Hence there is z_0 ∈ S_{⟨⟩} such that t⟨z_0⟩ ∉ Φ^∞(R). Let

S_1 = {t⟨z_0, z⟩ | z ∈ S_{⟨z_0⟩}}.

Then, as S_{⟨z_0⟩} ∈ Q, t⟨z_0⟩ ∈ Φ(S_1), so that S_1 ⊄ Φ^∞(R). Repeating, we may define z_0, z_1, ... such that z_n ∈ S_{⟨z_0, ..., z_{n-1}⟩} and t⟨z_0, ..., z_{n-1}⟩ ∉ Φ^∞(R) for all n. Hence

(z_0, z_1, ...) ∈ ∧S ⊆ ∨{s | R(ts)},

so that R(t⟨z_0, ..., z_{n-1}⟩) for some n, contradicting t⟨z_0, ..., z_{n-1}⟩ ∉ Φ^∞(R).
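When A is finite, the set Φ^∞(X) = ∩{S | X ∪ Φ(S) ⊆ S} defined at the start of this section can be computed by iterating S ↦ X ∪ Φ(S) from below until it stabilises; the iteration is increasing because Φ is monotone. A minimal sketch, names ours:

```python
def phi_infinity(phi, X):
    """Least S with X ∪ phi(S) ⊆ S, for a monotone phi on a finite
    universe.  The increasing chain S = X, S = X ∪ phi(S), ... reaches
    the least fixed point after finitely many steps."""
    S = frozenset(X)
    while True:
        T = S | frozenset(phi(S))
        if T == S:
            return S          # X ∪ phi(S) ⊆ S, and S is least such
        S = T
```

For instance, taking phi(S) to be the set of successors of S under a relation yields the closure of X under that relation.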
4.2. COROLLARY. If Φ is as in 4.1, then

Q^∨s R(s) ⇔ ⟨⟩ ∈ Φ^∞(R).

5. Generalisations
Given a sequence Q = Q_0, Q_1, ... of quantifiers on A, the infinite string (Q_0 z_0 Q_1 z_1 ...) may be interpreted using games G(Q, R) which are like the earlier games G(Q, R), except that player ∃ must choose X_n ∈ Q_n for his (n+1)st move. All the results will generalise, and we may define quantifiers Q*, Q^∨ and Q^∧ as before. The notion of a Q-fan is defined in the obvious way; i.e., if S is a Q-fan and s ∈ S is a sequence of length n, then S_s ∈ Q_n.

Another generalisation is perhaps more interesting. Let Q = {Q_a}_{a ∈ A} be a family of quantifiers on A. Then (Q_a z_0 Q_{z_0} z_1 Q_{z_1} z_2 ...) may be interpreted using games G_a(Q, R) which are like the games G(Q, R), except that ∃ starts by choosing X_0 ∈ Q_a, and if z_n is the (n+1)st move of player ∀, then ∃ must choose X_{n+1} ∈ Q_{z_n} for his next move. Again, all the results generalise, and quantifiers a-Q*, a-Q^∨ and a-Q^∧ may be defined for each a ∈ A. Also the notion of an (a-Q)-fan S may be defined. This time, S_{⟨⟩} ∈ Q_a, and if s⟨z⟩ ∈ S, then S_{s⟨z⟩} ∈ Q_z.
If Φ(X) = {x ∈ A | X ∈ Q_x}, then Φ is a monotone operator on A such that Φ(∅) = ∅ and Φ(A) = A. As in the proof of 4.1, we can prove

5.1. THEOREM. If U ⊆ A, then

a ∈ Φ^∞(U) ⇔ a ∈ U ∨ (Q_a z_0 Q_{z_0} z_1 ...) ∨_n (z_n ∈ U).
More generally, we may characterise arbitrary inductive definitions on A in terms of these strings of quantifiers. Let Φ be any monotone operator on A and define Q = {Q_a}_{a ∈ A} by

Q_a = {X ⊆ A | a ∈ Φ(X)}.

If U ⊆ A, let U′ = U ∪ Φ(∅).

5.2. THEOREM.

a ∈ Φ^∞(U) ⇔ a ∈ U′ ∨ (Q_a z_0 Q_{z_0} z_1 ...) ∨_n (z_n ∈ U′).
6. Applications

The ordinary quantifiers are often used to specify interesting classes of relations on a set (e.g. the Σ^0_n and Π^0_n relations on ω). Other quantifiers may also be useful in this way. In order to use the quantifiers constructed earlier, we shall assume given a coding of ^(ω)A in A; i.e., an injective mapping ^(ω)A → A that assigns ⟨s⟩ ∈ A to each s ∈ ^(ω)A. Using this coding, we shall assume that Q^∧ and Q^∨ are quantifiers on A defined by

Q^∧x R(x) ⇔ (Qx_0 Qx_1 ...) ∧_n R(⟨x_0, ..., x_{n-1}⟩),
Q^∨x R(x) ⇔ (Qx_0 Qx_1 ...) ∨_n R(⟨x_0, ..., x_{n-1}⟩).
Also, if Q_1, ..., Q_n are quantifiers on A, then Q_1 ... Q_n is the quantifier on A defined by

(Q_1 ... Q_n)x R(x) ⇔ Q_1 x_1 ... Q_n x_n R(⟨x_1, ..., x_n⟩).
Before quantifiers can be used to specify classes of relations on A, we need a basic class of relations to start with. We assume that ℛ is a class of relations on A satisfying the following conditions.

(1) ℛ is closed under the boolean operations.
(2) If R ∈ ℛ and

S(x_1, ..., x_n) ⇔ R(x_{π(1)}, ..., x_{π(m)})

for some π: {1, ..., m} → {1, ..., n}, then S ∈ ℛ.
(3) The coding λs⟨s⟩ is an ℛ-acceptable coding. To spell out what we mean by this, let us call a function f: ^nA → A an ℛ-function if, whenever R ∈ ℛ, then so is S, where

S(x_1, ..., x_n, y_1, ..., y_m) ⇔ R(f(x_1, ..., x_n), y_1, ..., y_m).

Then λs⟨s⟩ is ℛ-acceptable if
(3a) For each n, λs⟨s⟩ ↾ ^nA is an ℛ-function.
(3b) For each n, there is an ℛ-function λx(x)_n such that ⟨x_0, ..., x_{m-1}⟩_n = x_n if n < m.
(3c) There is an ℛ-function * such that ⟨s⟩ * ⟨t⟩ = ⟨st⟩ for s, t ∈ ^(ω)A.
(3c) There is an 9function * such that (s) * ( t ) = ( s e t ) for s, t E ('"'A. (3d) If S E m f l A is in B? and f : "'+IA + A is an Wfunction, then S' E 9,where
These are certainly not the most elegant set of conditions, but they are sufficient for our purposes. The examples we have in mind are

(A) ℛ is the set of recursive relations on A = ω, in which case the ℛ-functions are just the recursive functions.
(B) ℛ is the set of elementary (i.e., first-order definable) relations on an acceptable structure 𝔄 = ⟨A, R_1, ..., R_k⟩ in the sense of Moschovakis [6], in which case the ℛ-functions are just those functions whose graph is elementary.

6.1. DEFINITION. If Q is a quantifier on A and R ⊆ ^nA, then R is Q-explicit if

R(x_1, ..., x_n) ⇔ Qy S(y, x_1, ..., x_n)

for some S ∈ ℛ.
6.2. EXAMPLES. (A) If ℛ is the set of recursive relations on ω, then ∃-explicit = r.e. = Σ^0_1, ∀-explicit = Π^0_1, ∀∃-explicit = Π^0_2, ∃∀-explicit = Σ^0_2, etc. Recall that 𝒜 = ∃^∧ and ⅁ = (∀∃)^∨. Then 𝒜-explicit = Σ^1_1 and ⅁-explicit = 𝒮-explicit = Π^1_1.

(B) If ℛ is the set of elementary relations on an acceptable structure 𝔄, then the main result of [5] is

6.3. THEOREM (Moschovakis). R is positive first-order inductive iff R is ⅁-explicit.

Our aim here is to generalise this result.

6.4. DEFINITION. (i) Φ: P(A) → P(A) is a Q-operator if there is an R ∈ ℛ and some ℛ-function f such that

x ∈ Φ(X) ⇔ Qx_1 ... Qx_n [R(x, x_1, ..., x_n) ∨ X(f(x, x_1, ..., x_n))].

(ii) A relation R is Q-inductive if there is a Q-operator Φ and an ℛ-function f such that

R(x_1, ..., x_n) ⇔ f(x_1, ..., x_n) ∈ Φ^∞.

Moschovakis's canonical form for positive first-order inductive definitions implies that they are essentially just the ∀∃-operators. Hence we get

6.5. THEOREM. R is positive first-order inductive iff R is ∀∃-inductive.

We can now state our generalisation of 6.3.
6.6. THEOREM. For any quantifier Q,

R is Q-inductive ⇔ R is Q^∨-explicit.

PROOF. Let R be Q-inductive. Then

R(x_1, ..., x_n) ⇔ g(x_1, ..., x_n) ∈ Φ^∞,

where g is an ℛ-function and

x ∈ Φ(X) ⇔ Qy_1 ... Qy_m (S(x, y_1, ..., y_m) ∨ X(f(x, y_1, ..., y_m))),

where S ∈ ℛ and f is an ℛ-function. Then, as in the proof of 4.1, we can show that membership in Φ^∞ is a Q^∨-explicit condition, so that R is Q^∨-explicit.
Conversely, suppose that

R(x_1, ..., x_n) ⇔ Q^∨y S(y, x_1, ..., x_n)

for some S ∈ ℛ, and let Ψ be the operator obtained, by means of the coding, from the operator Φ of 4.1. Then Ψ may easily be seen to be a Q-operator and

⟨x, x_1, ..., x_n⟩ ∈ Ψ^∞ ⇔ x ∈ Φ^∞({x′ | S(x′, x_1, ..., x_n)}).

Hence R(x_1, ..., x_n) ⇔ h(x_1, ..., x_n) ∈ Ψ^∞ for some ℛ-function h, so that R is Q-inductive.

In [6], Moschovakis has generalised his work on positive first-order induction to what he calls positive ℒ(Q)-induction. Essentially, ℒ(Q) is the first-order language for 𝔄 augmented with quantifier symbols Qx and Q̆x, so that if φ is a formula, so are Qxφ and Q̆xφ. By putting formulae of ℒ(Q) in prenex form using 1.1, the positive ℒ(Q)-inductive definitions may be seen to be the ∀∃QQ̆-operators. If Q subsumes ∃, i.e., there is an ℛ-function f such that
∃x R(x) ⇔ Qx R(f(x)),

then the ∀∃QQ̆-operators are just the QQ̆-operators. Hence we have the result

6.7. THEOREM. If Q subsumes ∃, then R is positive ℒ(Q)-inductive iff R is (QQ̆)^∨-explicit.
An example of such a Q is 𝒜, because ∃x R(x) ⇔ 𝒜x R((x)_0).

Finally, we wish to relate the ideas in this paper to some earlier work in descriptive set theory. Let Q be a quantifier on ω. If F = F(0), F(1), ... is a sequence of subsets of a set A, let

Γ_Q(F) = {a ∈ A | Qn (a ∈ F(n))}.

Operators of the form Γ_Q are called positive analytic (see [3] for details). In the 1920's, Kolmogorov introduced an operator R on positive analytic operators which had the property that RΓ_∃ = Γ_𝒜. In fact, it turns out that RΓ_Q = Γ_{Q^∧} for all Q. Our terminology 'Q-fan' has been taken from the terminology used for the same idea in the definition of R. An operator R* similar to R was later introduced by Ljapunov. This time R*Γ_Q = Γ_{(Q̆Q)^∨} for all Q. In particular, R*Γ_∃ = Γ_⅁, so that the
game quantifier is not a stranger to descriptive set theory. For further results relevant to the R-operator, see [4] as well as [3]. For further results on Q-inductive relations on ω, see [1].
References

[1] P. Aczel, Representability in some systems of second order arithmetic, Israel J. Math. 8 (1970) 309-328.
[2] D. Gale and F. M. Stewart, Infinite games of perfect information, Ann. Math. Studies 28 (1953) 245-266.
[3] P. G. Hinman, Hierarchies of effective descriptive set theory, Trans. Am. Math. Soc. 142 (1969) 111-140.
[4] P. G. Hinman, The finite levels of the hierarchy of effective R-sets, to appear.
[5] Y. N. Moschovakis, The game quantifier, Proc. Am. Math. Soc. 31 (1971) 245-250.
[6] Y. N. Moschovakis, Elementary induction on abstract structures (North-Holland, Amsterdam, 1974).
SOME CONNECTIONS BETWEEN ELEMENTARY AND MODAL LOGIC

Kit FINE
University of Edinburgh, Edinburgh, Scotland
0. Introduction
A common way of proving completeness in modal logic is to look at the canonical frame. This paper shows that the method is applicable to any complete logic whose axioms express a ΣΔ-elementary condition, or to any logic complete for a Δ-elementary class of frames. We also prove two mild converses to this result¹. The first is that any finitely axiomatized logic has axioms expressing an elementary condition if it is complete for a certain class of natural subframes of the canonical frame. The second result is obtained from the first by dropping 'finitely axiomatized' and weakening 'elementary' to 'Δ-elementary'.

Classical logic is used in the formulation and proof of these results. The proofs are not hard, but they do show that there may be a fruitful and non-superficial contact between modal and elementary logic. Hopefully, more work along these lines can be carried out.

§1 outlines some basic notions and results of modal logic. For simplicity, this is taken to be mono-modal. However, the results can be readily extended to multi-modal logics and, in particular, to tense logic.

¹ After writing this paper, I discovered that A. H. Lachlan had already proved the first of these 'mild converses' in [5]. His proof uses Craig's interpolation theorem, whereas mine uses the algebraic characterization of elementary classes. R. I. Goldblatt [4] independently hit upon this latter proof at about the same time as I did. He also has a counterexample to the converse of this result. It is similar to the one in §4. I should like to thank Steve Thomason for the references above and for some helpful comments on the paper.
§2 proves the first of the above results and a related result as well; §3 proves the second of the above results; and finally, §4 constructs counterexamples to some plausible looking converse results.
1. Preliminaries
The reader is assumed to be familiar with elementary logic. [1], for example, contains all of the relevant information. We shall briefly review some of the basic notions and results of modal logic, including its translation into elementary logic. Proofs are straightforward or standard and are therefore omitted. Further details may be found in [7] or [9].

1.1. Syntax

The language L_α, α an ordinal, is the set {p_ξ : ξ < α} of distinct sentence-letters. Usually only countable languages are considered. In [2], there are languages with only finitely many sentence-letters. In this paper, we will consider languages of any cardinality. The (modal) formulas of L_α are constructed from L_α with the help of truth-functional connectives (→ and ⊥) and the modal connective □ (necessity). ◇ (possibility) abbreviates ¬□¬. All formulas and derivative notions will be relative to a fixed language L_α, unless otherwise stated.

A (modal) theory Δ is a set of modal formulas such that
(i) all tautologous formulas ∈ Δ;
(ii) all formulas □(A → B) → (□A → □B) ∈ Δ;
(iii) A, A → B ∈ Δ ⇒ B ∈ Δ (Modus Ponens).

A theory Δ is normal if
(iv) A ∈ Δ ⇒ □A ∈ Δ;
and substitutive if
(v) A ∈ Δ ⇒ A(p/B) ∈ Δ.
A theory Δ is consistent if not both A and ¬A ∈ Δ, complete if A or ¬A ∈ Δ, and maximally consistent if both consistent and complete. For Λ a set of formulas, a Λ-theory is simply a theory containing Λ. A modal logic L is a normal substitutive theory. A logic is finitely axiomatizable if it is the smallest logic to contain some finite set of formulas.

Suppose that A is a formula of L_α with sentence-letters p_{ξ_0}, ..., p_{ξ_n}. Then we may let A* be the result of substituting p_i for p_{ξ_i}, i = 0, 1, ..., n, or be A itself if A contains no sentence-letters. To each logic L on L_ω there corresponds a logic L_α on L_α, where L_α = {formulas A of L_α : A* ∈ L}. By substitution ((v) above), L_α is the only logic on L_α to conservatively extend L, α ≥ ω. For this reason we shall often restrict our attention to logics on L_ω.
1.2. Semantics

A model 𝔄 for L_α is a triple (X, R, v), where X (worlds) is a non-empty set, R (accessibility) ⊆ X × X, and v (valuation): L_α → P(X). A frame is a pair (X, R), where X is non-empty and R ⊆ X × X. Thus a frame is essentially a model on L_0. Given a model 𝔄 = (X, R, v), the domain of 𝔄, Dom(𝔄), is X, and the frame of 𝔄, the frame upon which it is based, is (X, R). Usually, we shall let the models 𝔄 and 𝔅 be (X, R, v) and (Y, S, w),
respectively. A point is a pair (𝔄, a), where 𝔄 is a model and a is a member of Dom(𝔄). The truth-definition for modal logic holds between points and formulas. The non-obvious clause is:

(𝔄, a) ⊨ □A iff (𝔄, b) ⊨ A for each b such that a R b.

For Δ a set of formulas, (𝔄, a) ⊨ Δ, Δ is true at (𝔄, a), if (𝔄, a) ⊨ A for each member A of Δ. A model 𝔄 verifies A (Δ) if (𝔄, a) ⊨ A (Δ) for each point (𝔄, a). A frame 𝔉 is a frame for A (Δ), or validates A (Δ), if each model based upon 𝔉 verifies A (Δ). Fr(Λ) is the class of frames for Λ. A logic L is complete for a class of frames K (alternatively, K determines L) if A ∈ L iff each frame in K validates A. A logic L is complete if it is complete for Fr(L).
1.3. Results

We shall require some results on canonical models, generated submodels, and p-morphisms. Each normal theory T determines a model 𝔄_T = (X_T, R_T, v_T), called the canonical model for T, where

X_T = {a : a is a maximally consistent T-theory};
R_T = {(a, b) ∈ X_T × X_T : {A : □A ∈ a} ⊆ b};
v_T = {(ξ, Y) ∈ L_α × P(X_T) : Y = {a ∈ X_T : p_ξ ∈ a}}.

The canonical frame 𝔉_T is (X_T, R_T).
LEMMA 1 (Lindenbaum). Any consistent T-theory is included in a member of X_T, for T a normal theory.

LEMMA 2 (Truth-Lemma). (𝔄_T, a) ⊨ A ⇔ A ∈ a, for T a normal theory and a in X_T.
Given a model 𝔄 = (X, R, v) and Y ⊆ X, let 𝔄 ↾ Y, the restriction of 𝔄 to Y, be (Y, R ↾ Y, v ↾ Y), where the internal restrictions have their obvious sense. 𝔅 = (Y, S, w) is a generated submodel of 𝔄 if 𝔅 = 𝔄 ↾ Y and b ∈ Y whenever a ∈ Y and a R b. The notion of generated subframe is similar.
LEMMA 3 (On Generated Submodels). Suppose 𝔅 is a generated submodel of 𝔄. Then (𝔅, a) ⊨ A iff (𝔄, a) ⊨ A for each a in Dom(𝔅).

LEMMA 4. 𝔉 is a frame for L if, for each a in 𝔉, some generated subframe of 𝔉 contains a and is a frame for L.

f is a p-morphism from frame 𝔊 = (Y, S) onto frame 𝔉 = (X, R) if
(i) f is a function from Y into X;
(ii) f is onto;
(iii) a S b ⇒ f(a) R f(b) for a, b ∈ Y;
(iv) f(a) R f(b) ⇒ a S c and f(c) = f(b) for some c in Y.

f is a p-morphism from 𝔅 = (Y, S, w) onto 𝔄 = (X, R, v) if, in addition,
(v) v(ξ) = {f(a) : a ∈ w(ξ)}.
LEMMA 5. Suppose that f is a p-morphism from 𝔅 onto 𝔄. Then (𝔅, a) ⊨ A iff (𝔄, f(a)) ⊨ A, for a in Dom(𝔅).

LEMMA 6. Suppose that there is a p-morphism f from 𝔊 onto 𝔉. Then 𝔉 is a frame for logic L if 𝔊 is.
There is a standard translation from a modal into an elementary language. For this purpose, we may regard the sentenceletters of L, as monadic predicate letters. The elementary formulas of L, are then constructed from L, with the help of variables x,,, x i , ..., truthfunctional connectives, quantifiers, the identity symbol = ,and the binary predicate letter (for accessibility). Given a modal formula A of L, and a term t, we let A’(t) be an elementary formula of L,, where (0 PXt) = P$, 5 .c a ; (ii) I’= I; (iii) (A
+ B)’
( t ) = (A’(t) + B’(t));
(iv) (CIA)’ (t) = (Vy) (t & y to occur in t.
P
A’(y)), where y is the first variable not
The models % of modal logic are also, of course, models for elementary logic. The standard notions of elementary logic will be used. In particular, will be used in an unrigorous, but obvious, way as a name for the element a. The context will always make it clear whether notions of modal or elementary logic are intended. The semantic significance of the translation is given by: THEOREM 1. For any point (%, a) and modal formula A ,
(a, a) k A z g
% I= A‘@).
2. From elementary to canonical

We shall sometimes think of a modal logic in terms of the class of its frames. Thus we say a logic L is elementary (Δ-elementary, ΣΔ-elementary) if Fr(L) is elementary (Δ-elementary, ΣΔ-elementary), i.e. if its
axioms express the appropriate first-order property. Many of the standard logics are known to be elementary. A logic L is canonical if, for each ordinal α, 𝔉_{L_α} is a frame for L. From Lemmas 1 and 2, it follows that every canonical logic is complete. The first result of this section is:

THEOREM 2. Any complete ΣΔ-elementary logic is canonical.

This result shows that the success of the canonical frame proofs in [7] is no accident. For all of the logics considered are ΣΔ-elementary (indeed, elementary). So the logics, if complete, are also canonical. The proof proceeds through a series of lemmas.

Recall that a set Δ of elementary formulas in the free variable x_0 is satisfied in the model 𝔄 if there is an a in X = Dom(𝔄) such that φ(ā) is true in 𝔄 for each φ(x_0) in Δ; that Δ is finitely satisfied in 𝔄 if each finite subset of Δ is satisfied in 𝔄; and that 𝔄 is α-saturated, α a cardinal, if any set of formulas Δ in the free variable x_0 and with fewer than α constants ā, a ∈ X, is satisfied in 𝔄 whenever Δ is finitely satisfied in 𝔄.

LEMMA 7. Any model 𝔄 on L_α has an ω-saturated elementary extension 𝔅.

The proof of this result and its generalisations is well-known from classical model theory. For example, 𝔅 can be obtained from 𝔄 as an ultralimit in the sense of Kochen [6]. In the sequel we shall only use the fact that 𝔅 is 2-saturated.

There are two senses of saturation that are relevant for modal logic. We say that a set of modal formulas Δ is satisfied in 𝔄 if there is an a in X = Dom(𝔄) such that (𝔄, a) ⊨ A for each A in Δ; and that Δ is finitely satisfied in 𝔄 if any finite subset of Δ is satisfied in 𝔄. Then 𝔄 is modally saturated_1 if a set of formulas Δ is satisfied in 𝔄 whenever it is finitely satisfied in 𝔄. Given the set of modal formulas Δ, let

◇Δ = {◇(A_0 ∧ ... ∧ A_n) : A_0, ..., A_n ∈ Δ}.

Similarly, let

□Δ = {□(A_0 ∧ ... ∧ A_n) : A_0, ..., A_n ∈ Δ}.

Then 𝔄 is modally saturated_2 if, whenever ◇Δ is true at a point (𝔄, a), then, for some b in X, a R b and Δ is true at (𝔄, b). 𝔄 is modally saturated if it is modally saturated_1 and modally saturated_2.
LEMMA 8. Any 2-saturated model 𝔄 is also modally saturated.

PROOF. Suppose that 𝔄 is 2-saturated. To show that 𝔄 is modally saturated_1, let Δ be any set of modal formulas that is finitely satisfied in 𝔄. By Theorem 1, {A′(x_0) : A ∈ Δ} is finitely satisfied in 𝔄. Since 𝔄 is 1-saturated, {A′(x_0) : A ∈ Δ} is satisfied in 𝔄. So by Theorem 1 again, Δ is satisfied in 𝔄.

To show that 𝔄 is modally saturated_2, let Δ be any set of modal formulas such that ◇Δ is true at the point (𝔄, a). By Theorem 1, each formula

(∃x_0)(ā R x_0 ∧ A_0′(x_0) ∧ ... ∧ A_n′(x_0)), A_0, ..., A_n ∈ Δ,

is true in 𝔄. So {ā R x_0} ∪ {A′(x_0) : A ∈ Δ} is finitely satisfied in 𝔄. Since 𝔄 is 2-saturated, there is a b in Dom(𝔄) such that {ā R b̄} ∪ {A′(b̄) : A ∈ Δ} is true in 𝔄. So by Theorem 1 again, a R b and Δ is true at (𝔄, b).

For each model 𝔄, let the (modal) theory of 𝔄 be {A : (𝔄, a) ⊨ A for each a in X}. Clearly, the theory of 𝔄 is normal.
LEMMA 9. There is a p-morphism f from any modally saturated model 𝔅 = (Y, S, w) onto the canonical model 𝔄_T for the theory T of 𝔅.

PROOF. For each a in Y = Dom(𝔅), let f(a) = {A : (𝔅, a) ⊨ A}. We verify that f satisfies the five conditions for being a p-morphism.

(i) f is a function from Y into X_T = Dom(𝔄_T). It is easy to show that f(a) is a theory. f(a) is maximally consistent, since ¬A ∈ f(a) iff A ∉ f(a), by the clause for negation. Finally, f(a) ⊇ T, since T is the theory of 𝔅.

(ii) f is a function onto X_T. Let Δ be any member of X_T. Then Δ is finitely satisfied in 𝔅. For suppose otherwise, so that there are formulas A_0, ..., A_n in Δ such that (𝔅, a) ⊨ ¬(A_0 ∧ ... ∧ A_n) for each a in Y. Since T is the theory of 𝔅, ¬(A_0 ∧ ... ∧ A_n) ∈ T. So ¬(A_0 ∧ ... ∧ A_n) ∈ Δ by Δ a theory containing T, and ¬A_i ∈ Δ, for some i ≤ n, by Δ complete. But then A_i ∉ Δ by Δ consistent, contrary to the original supposition.
𝔅 is modally saturated_1, and so (𝔅, a) ⊨ Δ for some a in Y. But Δ is maximally consistent, and so f(a) = Δ.

(iii) a S b ⇒ f(a) R_T f(b) for each a, b in Y. Suppose a S b and □A ∈ f(a). Then (𝔅, a) ⊨ □A. So (𝔅, b) ⊨ A and A ∈ f(b).

(iv) f(a) R_T f(b) ⇒ (∃c)(a S c and f(c) = f(b)). Suppose f(a) R_T f(b) = Δ. Then ◇Δ ⊆ f(a) and (𝔅, a) ⊨ ◇Δ. By 𝔅 modally saturated_2, there is a c in Y such that a S c and (𝔅, c) ⊨ Δ. But Δ is maximally consistent, and so f(c) = Δ = f(b).

(v) v_T(ξ) = {f(a) : a ∈ w(ξ)}. v_T(ξ) = {b ∈ X_T : p_ξ ∈ b} (by Lemma 2) = {f(a) : (𝔅, a) ⊨ p_ξ} (by (ii) and the definition of f) = {f(a) : a ∈ w(ξ)}.

We are now in a position to put all the lemmas together. Suppose that L is a ΣΔ-elementary and complete logic (defined on the language L_ω). Let L_α be the corresponding logic on L_α, α an ordinal. Then we must show that 𝔉_{L_α} is a frame for L. Let {A_i : i ∈ I} enumerate all of the L_α-consistent formulas. Since L is complete, there is, for each i in I, a model 𝔄_i = (X_i, R_i, v_i) and an element a_i of X_i such that (𝔄_i, a_i) ⊨ A_i and 𝔉_i = (X_i, R_i) is a frame for L. Clearly, we can suppose that the sets {X_i : i ∈ I} are pairwise disjoint. Now let 𝔄 = (X, R, v) = ∪𝔄_i, i.e., X = ∪X_i, R = ∪R_i and v(ξ) = ∪v_i(ξ). Then by Lemma 4, 𝔉 = (X, R) is a frame for L, and the theory of 𝔄 is L_α. By Lemma 7, 𝔄 has an ω-saturated elementary extension 𝔅 = (Y, S, w). By L ΣΔ-elementary, 𝔊 = (Y, S) is also a frame for L; and by Theorem 1, the theory of 𝔅 is also L_α. Now 𝔅 is modally saturated by Lemma 8. So by Lemma 9, there is a p-morphism f from 𝔅 onto 𝔄_{L_α}. But then by Lemma 6, 𝔉_{L_α} is a frame for L, and the proof is complete.

A logic L is quasi-elementary (quasi-Δ-elementary, quasi-ΣΔ-elementary) if some elementary (Δ-elementary, ΣΔ-elementary) class of frames determines the same logic as Fr(L). Clearly, every elementary (Δ-elementary, ΣΔ-elementary) logic is quasi-elementary (quasi-Δ-elementary, quasi-ΣΔ-elementary).
Later we shall give an example of a logic that is quasi-elementary but not elementary (or even ΣΔ-elementary). We shall now prove:
SOME CONNECTIONS BETWEEN ELEMENTARY
THEOREM 3. Any quasi-Δ-elementary and complete logic L is canonical.
PROOF. Suppose that L is a quasi-Δ-elementary and complete logic, so that L is complete with respect to some Δ-elementary class K of frames. We wish to show that 𝔉_{L_α} is a frame for L. Let Δ_0 be an arbitrary member of X_{L_α}, i.e., a consistent and complete L_α-theory. Then by Lemma 4, it suffices to show that some generated subframe of 𝔉_{L_α} contains Δ_0 and is a frame for L. Let {A_i : i ∈ I} enumerate the formulas of Δ_0. Then for each i in I there is a model 𝔄_i = (X_i, R_i, v_i) and an element a_i of X_i such that (𝔄_i, a_i) ⊨ A_i and 𝔉_i = (X_i, R_i) ∈ K. Let J_i = {j ∈ I : (𝔄_j, a_j) ⊨ A_i}, i ∈ I, and F_0 = {J_i : i ∈ I}. Then F_0 has the finite intersection property and so can be extended to an ultrafilter F. Now let 𝔄 = (X, R, v) = Π𝔄_i/F. By K Δ-elementary, 𝔉 = (X, R) ∈ K; and by Theorem 1 and Łoś's theorem, the theory of 𝔄 contains L_α. By Lemma 7, 𝔄 has an ω-saturated elementary extension 𝔅 = (Y, S, w). By K Δ-elementary, 𝔊 = (Y, S) ∈ K; and by Theorem 1 again, the theory of 𝔅 contains L_α. Now 𝔅 is modally saturated by Lemma 8; and so by Lemma 9, there is a p-morphism f from 𝔅 onto a submodel 𝔄′ of 𝔄_{L_α}.
By Lemma 6, the frame 𝔉′ of 𝔄′ is a frame for L. Let f_0 be the function on I such that f_0(i) = a_i, i ∈ I. Then by Łoś's theorem and Theorem 1, f(f_0/F) = Δ_0. Finally, 𝔉′ is a generated subframe of 𝔉_{L_α}. For suppose that f(a) = Δ and Δ R_{L_α} Γ, for a in Y. Then ◇Γ is finitely satisfied at (𝔅, a). So by 𝔅 modally saturated, there is a b in Y such that a S b and (𝔅, b) ⊨ Γ. But Γ is maximally consistent and so f(b) = Γ.

3. From natural to elementary
A canonical model 𝔄 = (X, R, v) has the following two properties: (i) for any distinct a, b in X, there is a formula A such that (𝔄, a) ⊨ A but (𝔄, b) ⊨ ¬A; (ii) if not a R b, then there is a formula A such that (𝔄, a) ⊨ □A and (𝔄, b) ⊨ ¬A. We shall call a model with the first property differentiated, a model with the second property tight, and a model with both properties natural. This follows the terminology of [2]. Thomason [11] calls an analogue of
natural models 'refined'. Segerberg [9] calls differentiated models 'distinguishable' and uses 'natural' for the canonical logics of §2. A modal logic is natural if any natural model (on any language) that verifies the logic is based upon a frame for the logic. Any natural logic is canonical. Indeed, a model is natural if it is isomorphic to a submodel of a canonical model that satisfies Lemma 2, i.e., (𝔄, a) ⊨ A iff A ∈ a.
We know from §2 that any elementary (and complete) logic is canonical. In this section we prove a mild converse to this result:
THEOREM 4. Any finitely axiomatizable natural logic is elementary.
PROOF. The proof uses Kochen's [6] characterization of Δ-elementary classes. We could use Shelah's stronger characterization [10], but this hardly simplifies the proof.
LEMMA 10. A class of models K is Δ-elementary if (i) it is closed under isomorphism; (ii) it is closed under the formation of ultraproducts; (iii) it is closed under the formation of ultralimits; (iv) its complement (in the class of similar models) is also closed under the formation of ultraproducts; (v) its complement is closed under the formation of ultralimits.
Suppose that L is a natural logic (on the language L_α). Then we need to show that K = Fr(L) satisfies the conditions (i)–(v). (i) is trivial. (The scrupulous reader can derive it from Lemma 6.) (ii) and (iii) call for the following lemma:
LEMMA 11. Suppose that each 𝔄_i, i ∈ I, is a model on L_α, that (I, F) is an ultrafilter pair, and that 𝔄 is the ultraproduct Π𝔄_i/F. Then there are models 𝔅_i on a language L_μ, μ ≥ α, such that each 𝔄_i is the restriction of 𝔅_i to L_α, i ∈ I, and 𝔅 = Π𝔅_i/F is natural.
PROOF. For each a in Y = Dom(𝔄), select an f in a and let p_f and q_f be fresh and distinct sentence-letters. Choose μ > α so that each p_f and q_f can be identified with a distinct p_ξ, α ≤ ξ < μ. Suppose 𝔄_i = (X_i, R_i, v_i), i ∈ I. Then we let 𝔅_i = (X_i, R_i, w_i), where:

w_i(ξ) = v_i(ξ)                     for ξ < α;
w_i(ξ) = {f(i)}                     for p_ξ = p_f;
w_i(ξ) = {b ∈ X_i : f(i) R_i b}     for p_ξ = q_f.
Clearly, each 𝔄_i is the restriction of 𝔅_i to L_α. So it remains to show that 𝔅 is natural.
First we show that 𝔅 is differentiated. Suppose a and b are distinct members of Y. Let f and g be the selected members of a and b, respectively. Since a ≠ b, F_{a,b} = {i ∈ I : f(i) ≠ g(i)} ∈ F. Recall that w_i(ξ) = {f(i)} for p_ξ = p_f. So (𝔅_i, f(i)) ⊨ p_ξ for each i in I and (𝔅_i, g(i)) ⊨ ¬p_ξ for each i in F_{a,b}. But then by Łoś's theorem and Theorem 1, (𝔅, f/F) ⊨ p_ξ and (𝔅, g/F) ⊨ ¬p_ξ; so a and b are differentiated in 𝔅. The proof that 𝔅 is tight is similar, using the letters q_f.
LEMMA 12. Suppose that {𝔄_n : n ∈ ω} is an elementary chain of models and that 𝔄 = ∪𝔄_n. Then 𝔄 is natural if each 𝔄_n is natural, n ∈ ω.
PROOF. 𝔄 is differentiated. For suppose that a and b are distinct members of Dom(𝔄). Then for some n in ω, a, b ∈ Dom(𝔄_n). Since 𝔄_n is differentiated, there is a formula A of L_α such that (𝔄_n, a) ⊨ A and (𝔄_n, b) ⊨ ¬A. So by Theorem 1 and the union of chains lemma applied to {𝔄_m ↾ L_α : n ≤ m < ω}, (𝔄, a) ⊨ A and (𝔄, b) ⊨ ¬A. The proof that 𝔄 is tight is similar.
We can now establish (iii). Suppose that 𝔉 ∈ Fr(L) and that {(I_n, F_n) : n ∈ ω} is a sequence of ultrafilter pairs. Let 𝔉_0 = 𝔉, 𝔉_{n+1} = 𝔉_n^{I_n}/F_n,
and let 𝔉^ω be the ultralimit of 𝔉 with respect to the ultrafilter sequence {(I_n, F_n) : n ∈ ω}. We wish to show that 𝔉^ω ∈ Fr(L). By repeated applications of Lemma 11, there is a set of natural models {𝔄_n : n ∈ ω} such that 𝔉_n is the frame of 𝔄_n, n ∈ ω. It follows that there is an elementary chain of natural models {𝔅_n : n ∈ ω} such that 𝔅_n is isomorphic to 𝔄_n, n ∈ ω, and the frame of 𝔅 = ∪𝔅_n is isomorphic to 𝔉^ω. By Lemma 12, 𝔅 is natural. 𝔅_0 verifies L since 𝔉_0 is a frame for L; and so 𝔅 verifies L by {𝔅_n : n ∈ ω} an elementary chain and Theorem 1. Therefore, by L natural, 𝔉^ω, which is isomorphic to the frame of 𝔅, is a frame for L.
To establish (iv), suppose that 𝔉_i ∉ Fr(L), i ∈ I, and that 𝔉 = Π𝔉_i/F for some ultrafilter pair (I, F). Since L is finitely axiomatizable, we can suppose it is the smallest logic to contain some formula A. So, for each i ∈ I, there is a model 𝔄_i based upon 𝔉_i and an element a_i ∈ Dom(𝔄_i) such that (𝔄_i, a_i) ⊨ ¬A. Let 𝔄 = Π𝔄_i/F and f be such that f(i) = a_i for each i ∈ I. Then by Łoś's theorem and Theorem 1, (𝔄, f/F) ⊨ ¬A and 𝔉 is not a frame for L.
Finally, we establish (v). Suppose 𝔉 ∉ Fr(L) and let 𝔉^ω be the ultralimit of 𝔉 with respect to the ultrafilter sequence {(I_n, F_n) : n ∈ ω}. Then there is a formula A of L, a model 𝔄 with frame 𝔉, and a point a in 𝔄 such that (𝔄, a) ⊨ ¬A. Let 𝔄^ω be the ultralimit of 𝔄 with respect to the ultrafilter sequence {(I_n, F_n) : n ∈ ω}, where ~ is the equivalence relation on ∪Dom(𝔄_n) that generates the domain of 𝔄^ω. Then by Theorem 1 and 𝔄^ω elementarily equivalent to 𝔄, (𝔄^ω, a/~) ⊨ ¬A. But 𝔉^ω is the frame of 𝔄^ω, and so 𝔉^ω ∉ Fr(L). This completes the proof of Theorem 4.
It is worth noting that the proofs of (iv) and (v) did not use the fact that A was a modal formula. Any second-order universal closure of an elementary formula would do instead. The proof of Theorem 4 only used the fact that L was finitely axiomatized to establish condition (v). But K is Δ-elementary if it satisfies conditions (i)–(iv) of Lemma 10. It immediately follows that:
THEOREM 5. Any natural logic is Δ-elementary.
4. Some counterexamples
The following implications follow from the last two sections:

Natural ⇒ Δ-elementary and complete ⇒ ΣΔ-elementary and complete ⇒ canonical.
It would be nice if the first or last of these implications could be reversed. Unfortunately, such results are too good to be true.
Let us begin by constructing an elementary and complete logic L that is not natural. L is the smallest logic to contain □p → p, □p → □□p, and □◇p → ◇□p. Thus L is the logic S4.1, introduced by McKinsey in [8]. It is easy to verify that Fr(L) is the class of frames satisfying the conditions:
(∀x)(x R x);
(∀x, y, z)(x R y & y R z → x R z);
(∀x)(∃y)(x R y & (∀z)(y R z → y = z)).
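On a finite frame the three conditions can be decided by exhaustive search. A sketch (ours, with an ad-hoc encoding of a frame as a set of points and a set of pairs):

```python
def is_s41_frame(X, R):
    """Decide the three first-order frame conditions above on a finite
    frame (X, R): reflexivity, transitivity, and the condition that
    every point sees a point whose only successor is itself."""
    reflexive = all((x, x) in R for x in X)
    transitive = all((x, z) in R
                     for (x, y1) in R for (y2, z) in R if y1 == y2)
    final = all(any((x, y) in R and all(z == y for z in X if (y, z) in R)
                    for y in X)
                for x in X)
    return reflexive and transitive and final

# A finite chain 0 <= 1 <= 2 <= 3 satisfies all three conditions;
# the infinite chain (omega, <=) used below fails the third, since it
# has no point whose only successor is itself.
chain = set(range(4))
leq = {(a, b) for a in chain for b in chain if a <= b}
```

A reflexive two-point cluster (each point seeing both) is transitive and reflexive but fails the third condition, so the checker rejects it.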
So (1) L is elementary. Now let 𝔄 be the model (X, R, v) on L_ω such that X = ω, R = ≤, and v(n) = {m ∈ ω : m ≥ n}. The diagram for 𝔄 is:

0 → 1 → 2 → ...
p_0   p_0 p_1   p_0 p_1 p_2
We can establish the following facts about 𝔄.
(a) 𝔄 is differentiated.
PROOF. Suppose k ≠ l, say k < l. Then (𝔄, l) ⊨ p_l and (𝔄, k) ⊨ ¬p_l.
(b) 𝔄 is tight.
PROOF. Suppose not l R k. Then k < l, and so (𝔄, l) ⊨ □p_l but (𝔄, k) ⊨ ¬p_l.
(c) (𝔄, l) ⊨ A ⇔ (𝔄, n) ⊨ A, for n ≥ l ≥ 0 and A a formula of L_l.
PROOF. By induction on the construction of A.
(d) 𝔄 verifies L.
PROOF. (X, R) is reflexive and transitive, and so 𝔄 verifies each substitution-instance of □p → p and □p → □□p. Also, 𝔄 verifies any substitution-instance □◇A → ◇□A of □◇p → ◇□p. For suppose A is a formula of L_l, and let k be an arbitrary member of X. Set n = max(k, l). Now either (𝔄, n) ⊨ A or (𝔄, n) ⊨ ¬A. So by (c) above, (𝔄, m) ⊨ A for all m ≥ n or (𝔄, m) ⊨ ¬A for all m ≥ n. But then (𝔄, n) ⊨ □A ∨ □¬A, and so (𝔄, k) ⊨ □◇A → ◇□A.
By (a), (b) and (d), 𝔄 is a natural model that verifies L. But (X, R) = (ω, ≤) is not a frame for L. Therefore (2) L is not natural. It is easy to show: (3) L is complete; and (1), (2) and (3) complete the counterexample. It is worth noting that, by Theorem 2, (1) and (3) imply that L is canonical. A direct proof of this and, consequently, of (3) is given in [7].
Let us now construct a canonical logic L that is not ΣΔ-elementary. L is the smallest logic to contain

◇□p → (◇□(p ∧ q) ∨ ◇□(p ∧ ¬q)).
Let 𝔄 = (X, R, v) be the canonical model for L_α, α an ordinal. Then we can establish the following facts about 𝔄:
(a) For a ∈ X, if ◇□Δ ⊆ a then ◇□(Δ ∪ {A}) ⊆ a or ◇□(Δ ∪ {¬A}) ⊆ a.
PROOF. Suppose ◇□Δ ⊆ a but ◇□(Δ ∪ {A}) ⊄ a. Then for some formulas B_1, ..., B_n ∈ Δ, ◇□(B_1 ∧ ... ∧ B_n ∧ A) ∉ a. Take any formulas C_1, ..., C_m ∈ Δ. Then ◇□(B_1 ∧ ... ∧ B_n ∧ C_1 ∧ ... ∧ C_m ∧ A) ∉ a. But ◇□(B_1 ∧ ... ∧ B_n ∧ C_1 ∧ ... ∧ C_m) ∈ a, since ◇□Δ ⊆ a, and so, by a ⊇ L_α, ◇□(B_1 ∧ ... ∧ B_n ∧ C_1 ∧ ... ∧ C_m ∧ ¬A) ∈ a. Therefore ◇□(C_1 ∧ ... ∧ C_m ∧ ¬A) ∈ a and ◇□(Δ ∪ {¬A}) ⊆ a.
(b) Suppose {Δ_i : i ∈ I} is a ⊆-chain of sets of formulas and a ∈ X. Then ◇□Δ_i ⊆ a for each i ∈ I ⇒ ◇□(∪Δ_i) ⊆ a.
PROOF. Straightforward.
(c) The frame (X, R) of 𝔄 satisfies the condition:

(*) (∀x, y)(x R y → (∃z)(x R z & (∀u, v)(z R u & z R v → u = v & y R u))).

PROOF. Suppose a R b for a, b in X. Let Δ = {A : □A ∈ b}. We can suppose b R f for some f (otherwise, put z = b). But then Δ is consistent, and so ◇□Δ ⊆ a. From (a) and (b) above and Zorn's lemma, there is a maximally consistent Γ ⊇ Δ such that ◇□Γ ⊆ a. So, for some c in X, a R c and □Γ ⊆ c. Now suppose c R d and c R e. Then Γ ⊆ d, e, and so, by Γ maximally consistent, d = e = Γ. Also Δ ⊆ Γ ⊆ d, and so b R d.
Any frame satisfying condition (*) is a frame for L; and so by (c) above:
(1) L is canonical.
To show that L is not ΣΔ-elementary, pick a non-principal ultrafilter F on ω. Let 𝔉 = (X, R), where

X = {F} ∪ F ∪ ω;  R = {(a, b) ∈ X² : b ∈ a}.

Thus, in the frame 𝔉, F is related to its members, which are in turn related to their members.
(a) 𝔉 is a frame for L.
PROOF. Let 𝔄 = (X, R, v) be any model defined upon 𝔉, and suppose that (𝔄, a) ⊨ ◇□p_0 for a in X. If a ≠ F, then (𝔄, a) ⊨ ◇□(p_0 ∧ p_1). So suppose a = F. Then for some M in F, (𝔄, M) ⊨ □p_0; and so v(0) ∩ ω ∈ F. Now either ω ∩ v(1) ∈ F or ω − v(1) ∈ F. Assume that ω ∩ v(1) ∈ F (the other case is similar). Then M′ = ω ∩ v(0) ∩ v(1) ∈ F. Hence (𝔄, M′) ⊨ □(p_0 ∧ p_1) and (𝔄, F) ⊨ ◇□(p_0 ∧ p_1).
(b) In any elementary submodel 𝔊 = (Y, R|Y) of 𝔉: (i) F ∈ Y; (ii) F ∩ Y is nonempty; (iii) {n ∈ ω ∩ Y : M R n and M′ R n} is infinite for any M, M′ in Y ∩ F.
PROOF. (i) holds since F is the unique object to satisfy (∃y, z)(x R y ∧ y R z). (ii) holds since (∃y)(F R y) is true in 𝔉. (iii) holds since, for each natural number n > 0, a first-order sentence asserting the existence of at least n common R-successors is true in 𝔉.
(c) No countable elementary submodel 𝔊 of 𝔉 is a frame for L.
PROOF. Suppose that 𝔊 = (Y, R|Y) is a countable elementary submodel of 𝔉, with {M_ξ : ξ < α} an enumeration of Y ∩ F. By b(ii), we can assume that 0 < α ≤ ω. Let a_0, a_1, ... and b_0, b_1, ... be two sequences in M_0 ∩ Y such that M_n R a_n, b_n and a_n and b_n are distinct and not members of {a_0, b_0, ..., a_{n−1}, b_{n−1}}. Such sequences exist by b(iii). Let 𝔅 be a model (Y, R|Y, v) such that v(0) = M_0 ∩ Y and v(1) = {a_0, a_1, ...}. F ∈ Y by b(i); F R M_0; (𝔅, M_0) ⊨ □p_0; and so (𝔅, F) ⊨ ◇□p_0. Now take any M in Y such that F R M. Then M = M_n for some n < α. (𝔅, M_n) ⊭ □(p_0 ∧ p_1), since M_n R b_n and b_n ∉ v(1); and (𝔅, M_n) ⊭ □(p_0 ∧ ¬p_1), since M_n R a_n and a_n ∈ v(1). Hence

(𝔅, F) ⊭ ◇□p_0 → (◇□(p_0 ∧ p_1) ∨ ◇□(p_0 ∧ ¬p_1))
and 𝔊 is not a frame for L. By the Skolem–Löwenheim theorem, 𝔉 has a countable elementary submodel 𝔊. So, by (a) and (c) above, (2) L is not ΣΔ-elementary. This completes the counterexample. The proof that L is canonical also shows that L is quasi-elementary. Consequently, quasi-elementary does not imply ΣΔ-elementary.
Some questions remain. Is every natural logic the union of finitely axiomatized natural logics? Say that a logic L is α-canonical if 𝔉_{L_ξ} is a frame for L whenever ξ < α. [2] presents an ω-canonical logic that is not ω⁺-canonical. But can an ω⁺-canonical logic fail to be α-canonical for some cardinal α > ω? Does being canonical or being ΣΔ-elementary imply being quasi-Δ-elementary? The last two questions are connected. For suppose every ω⁺-canonical logic were quasi-Δ-elementary. Then a modification of the proof of Theorem 3 would show that every ω⁺-canonical logic was canonical.
References
[1] J. L. Bell and A. B. Slomson, Models and ultraproducts, an introduction (North-Holland, Amsterdam, 1969).
[2] K. Fine, Logics containing K4, Part I, J. Symbolic Logic, to appear.
[3] K. Fine, Compactness in modal logic, to appear.
[4] R. I. Goldblatt, First-order definability in modal logic, unpublished.
[5] A. H. Lachlan, A note on Thomason's refined structures for tense logic, Theoria, to appear.
[6] S. Kochen, Ultraproducts in the theory of models, Ann. of Math. 74, pp. 221-261.
[7] E. J. Lemmon and D. Scott, Intensional logic, preliminary draft of initial chapters by E. J. Lemmon, dittoed, Stanford University (1966).
[8] J. C. C. McKinsey, On the syntactical construction of modal logic, J. Symbolic Logic 10 (1945) 83-96.
[9] K. Segerberg, An essay in classical modal logic (Uppsala, 1971).
[10] S. Shelah, Every two elementarily equivalent models have isomorphic ultrapowers, Israel J. Math. 10 (1971) 224-233.
[11] S. Thomason, Semantic analysis of tense logics, J. Symbolic Logic 37 (1972) 150-158.
FILTRATIONS AND THE FINITE FRAME PROPERTY IN BOOLEAN SEMANTICS

Bengt HANSSON and Peter GÄRDENFORS
University of Lund, Lund, Sweden
In modal logic, it is often interesting to know whether a certain logic has the so-called finite model property (abbreviated fmp), because it then immediately follows that it is decidable, provided it is finitely axiomatizable. Lemmon and Scott used so-called filtrations [4] to prove that many well-known modal logics have the fmp. The method is presented by Segerberg [6, 7]. For a logic to have the fmp means to be characterized by a class of finite models or, equivalently, that each non-theorem is rejected by some finite model for the logic in question. (We assume familiarity with the basic concepts of modal semantics, in particular with the concepts of a frame and a model, the latter being a frame with an added valuation. Our terminology is explained in detail in [3].) Prima facie, the concept of fmp is relative to the kind of models employed, i.e., relational or neighbourhood models in the case of Lemmon & Scott and Segerberg. Nevertheless, it can be shown (cf. [3]) that a logic has the fmp in the relational sense iff it has it in the neighbourhood sense and iff it has it in the boolean sense.
It is also possible to define a somewhat stronger variant of the fmp, where for each non-theorem there exists a computable upper limit to the size of the model that falsifies the formula in question. At the cost of this complication, we no longer need to know that the logic is finitely axiomatizable in order to conclude that it is decidable. For if a logic has this stronger property, we only need to check whether a certain formula is true in all models smaller than the given limit in order to know
whether it is a theorem. In fact, most proofs that have been given that a certain logic has the fmp suffice to show that it has the stronger variant too.
We will not be directly concerned with the fmp, but rather study the finite frame property (ffp). It means that every non-theorem is rejected by some finite frame for the logic. It is trivial that the ffp entails the fmp. In fact, the converse also holds, as shown in [7]. Although the fact that a logic has the ffp is independent of which kind of frames we use, the techniques for proving this may differ in complexity. We will use the boolean semantics developed in [3] to describe a comparatively simple filtration method. In many respects it is a generalization of McKinsey's methods in [5].
A central point is that we know that each classical modal logic is characterized by a single boolean frame, i.e., a pair (W, f), where W is a boolean algebra and f a function from elements of W to elements of W. Such a characteristic frame can be construed in the following way: as elements of W we take the equivalence classes of provably equivalent formulas in the logic in question, with the boolean operations defined in the obvious way. The value of f for the equivalence class |A| determined by the formula A is the equivalence class |□A|.
We now fix our attention on a given logic L. For each non-theorem A of L we want to construe a finite frame for L which rejects A. Let us therefore look at A and all its subformulas. Each of them determines an equivalence class, i.e., an element of the boolean algebra just mentioned. We form the set of all boolean compounds of these elements and thus obtain a finite subalgebra of W which we call B_A. This algebra is to be the first component of the finite frame (B_A, f_A) which we are looking for. As for f_A, we look at f(x) for x in B_A. It may happen that f(x) too is in B_A, and we then define f_A(x) to be f(x). If f(x) is not in B_A, we leave the value of f_A(x) undetermined for the moment.
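The closure step that produces the finite subalgebra can be made concrete. The following minimal sketch (ours, not the paper's) generates the subalgebra from a set of given elements, with finite sets standing in for elements of W:

```python
def boolean_closure(generators, universe):
    """The finite subalgebra generated by `generators`: close the set
    under complement (relative to `universe`) and intersection.
    Unions then come for free by de Morgan."""
    algebra = {frozenset(g) for g in generators}
    algebra |= {frozenset(), frozenset(universe)}
    changed = True
    while changed:
        new = set()
        for x in algebra:
            new.add(frozenset(universe) - x)   # complement
            for y in algebra:
                new.add(x & y)                 # intersection
        changed = not new <= algebra
        algebra |= new
    return algebra
```

One generator inside a three-element universe yields the four-element subalgebra; two disjoint generators yield the eight-element one, matching the count of boolean compounds of the given elements.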
We now have the following lemma:
LEMMA 1. If B_A is as above, f_A is any function on B_A which agrees with f whenever f(x) is in B_A, and B is a formula such that |B| is in B_A and B is not valid in (W, f), then B is not valid in (B_A, f_A).
PROOF. Consider the model obtained from (W, f) by adding the valuation V defined by V(p) = |p| for any propositional variable p. A formula
is true in this model iff it is a theorem of L. But (W, f) is characteristic for L, and therefore the B of Lemma 1 is not true in the model, i.e., V(B) is not the boolean unit element. V restricted to the variables occurring in A is obviously a valuation V_A on (B_A, f_A). When we compute the values of V_A for compound formulas, everything goes as in (W, f) as far as propositional connectives are concerned. When we compute V_A(□C) given V_A(C), we note that f(|C|) = |□C|, and since |□C| is in B_A, f_A(|C|) must agree with f(|C|), so V_A(□C) = V(□C). Therefore, V_A(B) cannot be the unit element.
THEOREM. If, for each formula A, there is a function f_A on B_A, agreeing with f whenever f(x) is in B_A, such that (B_A, f_A) is a frame for L (i.e., a frame which validates all theorems of L), then L has the ffp in the strong sense.
PROOF. Suppose A is a non-theorem of L. Let (B_A, f_A) be as in the assumption of the theorem. By Lemma 1, A is not valid in (B_A, f_A). Since this frame validates all theorems of L and since the cardinality of B_A is at most 2^{2^n} if n is the number of subformulas of A, the proof is complete.
An application where it is easy to find a suitable f_A is the following example.
EXAMPLE 1. E, the smallest classical modal logic, has the ffp. Since (B_A, f_A) is a frame for E for any f_A, the result follows immediately from the theorem.
In general, more ingenuity is required to find an adequate f_A for a given logic. A straightforward method is to try to approximate the value of f as closely as possible. In principle this can be done in two ways: either we take the smallest element in B_A above f(x) or the greatest one below (note that 'smallest' and 'greatest' have a definite meaning since B_A is finite; they simply denote the intersection of all elements above and the union of all those below). The following lemma will provide reason for approximating from below.
LEMMA 2. Let B be an arbitrary boolean algebra and S an arbitrary finite subset of B closed under intersection. Let m(x) be the union of all y's in S
PROOF. m(x n y ) is the union of all Selements below x n y . Each of these is of course also below x. Therefore this union is below or equal to the union of all Selements below x, i.e., rn(x). Similarly, we obtain m(x n y ) Im(y) and hence m(x n y ) Im(x) n rn(y). We now turn to the opposite inclusion. m(x) is the union of all u’s in S below x and m(y) the union of all 0’s in S below y. m(x) n m(y) is thus an intersection of two unions, which by de Morgan’s laws is equal to the union of all elements of the form u n 0. These are all in S and each of them is below x n y. Therefore their union is below or equal to the union of all Selements below x n y, i.e., m(x n y). This completes the proof. We get our approximation from below if we take for S the set of elements of 93,. This construction is sufficient for many standard logics.
EXAMPLE 2. K, the smallest normal modal logic, has the ffp. The function f in the Lindenbaum frame for K has the following properties:

f(1) = 1,
f(x ∩ y) = f(x) ∩ f(y).

We now define f_A(x) as m(f(x)) in the sense of Lemma 2 (with S as B_A). It is clear that f_A(x) = f(x) if f(x) is in B_A. We proceed to show that f_A fulfils the same conditions as f above. It is immediate that f_A(1) = 1, since 1 belongs to B_A. The other condition follows directly from Lemma 2, so our theorem is applicable.
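The two K-conditions can be verified numerically for the approximated operator. In this sketch (ours) the operator f is the "box" of a successor map on a four-element universe, which is multiplicative and maps the unit to the unit; Lemma 2's m is inlined so the example is self-contained:

```python
def m(x, S):
    """Approximation from below: union of all y in S with y <= x."""
    out = frozenset()
    for y in S:
        if y <= x:
            out |= y
    return out

def box(h, x, U):
    """A multiplicative operator on the powerset of U:
    f(x) = {i : h(i) <= x}, the 'box' of the successor map h."""
    return frozenset(i for i in U if h[i] <= x)

U = frozenset({0, 1, 2, 3})
h = {0: frozenset({0}), 1: frozenset({1, 2}),
     2: frozenset({2}), 3: frozenset({3})}
# A four-element subalgebra B_A and the approximated operator fA = m . f:
BA = [frozenset(), frozenset({0, 1}), frozenset({2, 3}), U]
fA = {x: m(box(h, x, U), BA) for x in BA}
```

Here f({0,1}) = {0} falls outside B_A, so f_A genuinely approximates from below, yet f_A(1) = 1 and f_A(x ∩ y) = f_A(x) ∩ f_A(y) still hold on B_A, as the theorem requires.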
EXAMPLE 3. The modal logic T has the ffp. In addition to the properties mentioned above, the function f in the Lindenbaum frame for T fulfils

f(x) ≤ x.

If we take f_A(x) as m(f(x)) again, we only have to show that f_A(x) ≤ x in order to prove that (B_A, f_A) is a frame for T. Since m(f(x)) ≤ f(x) holds generally, this is immediate.
EXAMPLE 4. The Brouwerian system B has the ffp. Frames for B are characterized by the condition

x ≤ f(−f(−x))

in addition to those for T. With the same definition of f_A we only have to show that f_A fulfils the new condition. By the definition of m(f(−x)) we get f_A(−x) ≤ f(−x). Hence −f(−x) ≤ −f_A(−x). By the general rule that f(x) ≤ f(y) follows from x ≤ y, which holds in all extensions of K, we conclude that f(−f(−x)) ≤ f(−f_A(−x)). But our assumption x ≤ f(−f(−x)) implies that x ≤ f(−f_A(−x)). f_A(−f_A(−x)) is the union of all elements in B_A below f(−f_A(−x)). One of these is evidently x. Hence x ≤ f_A(−f_A(−x)).
In our framework, generalization to many-place operators is quite straightforward and our theorem will work as before. As a simple example we take the following system QP of qualitative probability with a two-place operator (to be interpreted as 'at least as probable as').
Modal axioms
Inference rules: substitution, modus ponens, replacement of provable equivalents.
EXAMPLE 5. QP has the ffp. A frame for QP is characterized by the following properties:

f(x, 0) = 1,
f(x, 1) ≤ x,
f(x, y) ∩ f(y, z) ≤ f(x, z),
if x ∩ z = 0 and y ∩ z = 0, then f(x, y) = f(x ∪ z, y ∪ z).
The problem is to find an f_A which has these properties. As before, f_A(x, y) = m(f(x, y)) will do. Only transitivity is not completely trivial. By definition, f_A(x, y) ∩ f_A(y, z) is m(f(x, y)) ∩ m(f(y, z)), which, according to Lemma 2, equals m(f(x, y) ∩ f(y, z)). It follows that

m(f(x, y) ∩ f(y, z)) ≤ m(f(x, z)) = f_A(x, z).

When it comes to S4 and its extensions, the iterated modal operators complicate the picture. In order to take care of them, we have to make a slightly more sophisticated choice of S in Lemma 2. Following an idea of McKinsey's, we choose for S the set of elements of B_A which are in the range of the function f. It is closed under intersection as soon as we are dealing with extensions of K.

EXAMPLE 6. S4 has the ffp. A frame for S4 is characterized by the condition

f(x) = f(f(x))

in addition to those for T. We define f_A(x) as m(f(x)) with the choice of S as above. In principle, we have to check all the T-conditions again, since we have changed the definition of f_A, but the intersection property follows directly from Lemma 2 and the other ones go as before. As for the specific S4 condition, we note that f_A(x) is the union of several elements in S, say f(x_1), ..., f(x_n). For each i we have f(x_i) ≤ f_A(x) and hence

f(x_i) = f(f(x_i)) ≤ f(f_A(x)).

Therefore

f_A(x) = ∪f(x_i) ≤ f(f_A(x)).

The inverse inclusion follows trivially, so in fact we have f_A(x) = f(f_A(x)). Since f_A coincides with f when its value is in B_A, we also have f_A(x) = f_A(f_A(x)) and we are ready.
Up to now we have explicitly based our constructions on the Lindenbaum frame for the relevant logic. This is, however, not necessary. We need not even start with a characteristic frame, for any frame for the logic and any model on that frame which falsifies A will do. We then let B_A be the boolean closure of the set of elements assigned to A and its subformulas according to this model. Our theorem will hold for this B_A too. We use this generalization in the following example.
EXAMPLE 7. S4.3 has the ffp. We know from [1] that there is a frame for S4.3 in which all the f-values are linearly ordered. (The Lindenbaum frame does not have this property, though.) Conversely, any frame in which the f-values are linearly ordered is a frame for S4.3. We therefore continue to prove that they are so when f_A is as in Example 6. Suppose f(x) ≤ f(y). f_A(x) is the union of all f-values in B_A below f(x). All of these are below f(y) too, so f_A(y) is the union of all these f-values and perhaps some more. Hence f_A(x) ≤ f_A(y).
This is perhaps also the place to point out the connection with Bull's famous result [2] that all normal extensions of S4.3 have the ffp (note that a 'model' for Bull is a 'frame' for us). What Bull does is, in our terminology, to give a general method of constructing f_A, a method which he proves to work for every normal extension of S4.3.
Finally, we would like to call attention to the possibility of using the fmp and ffp in proving completeness results. Usually, a logic is first proved to be complete and a specific model (e.g. the so-called canonical model) is then used for obtaining the fmp. If we work in a boolean framework we are always guaranteed that we have a characteristic boolean frame to start with. Since it is proved in [3] that a finite boolean frame is isomorphic to a finite relational frame (if the logic is normal), this fact is sufficient for us to conclude that a logic with the fmp is complete with respect to a class of relational frames. The relational frames can be explicitly constructed by the following rule: the elements (worlds) are to be the atoms of the boolean algebra (they exist since the algebra is finite). The relation R is to hold between x and y if and only if y ≤ z holds for each z in the boolean algebra such that x ≤ f_A(z). We thus see that, e.g.,
T is characterized by a class of reflexive relational frames, since the fact that f_A(x) ≤ x holds entails that x ≤ z holds whenever x ≤ f_A(z) holds, which by the rule above means that x R x holds. Since the T-axioms trivially hold in any reflexive frame, it follows that T is characterized by the set of all reflexive frames. Similarly, the transitivity of R follows from the S4 axiom. For assume x R y and y R z. Then y ≤ f_A(w) whenever x ≤ f_A(f_A(w)), and z ≤ w whenever y ≤ f_A(w), i.e., z ≤ w whenever x ≤ f_A(f_A(w)) = f_A(w) (by the typical S4 condition on f_A), whence x R z. As above we conclude that S4 is characterized by the set of all transitive and reflexive frames.
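The rule for reading off the relational frame can be coded directly for a finite boolean frame. In this sketch (ours), the identity operator stands in for an f_A satisfying the T-condition f_A(z) ≤ z, and the resulting relation comes out reflexive, as the argument above predicts:

```python
def relational_frame(atoms, algebra, fA):
    """Build the relational frame on the atoms of a finite boolean
    algebra: x R y iff y <= z for every z with x <= fA(z)."""
    return {(x, y) for x in atoms for y in atoms
            if all(y <= z for z in algebra if x <= fA(z))}

# A toy T-style operator on the powerset of {0, 1}: fA(z) = z, so fA(z) <= z.
algebra = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
atoms = [frozenset({0}), frozenset({1})]
R = relational_frame(atoms, algebra, lambda z: z)
```

With the identity operator each atom is related exactly to itself, i.e. R is the reflexive diagonal on the two atoms.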
It has sometimes been argued that the Lindenbaum frame is uninteresting since it is so uninformative. That it is uninformative can hardly be denied, but we hope that this paper has given some reason to think that it is not uninteresting, because it can serve as a starting point for completeness and decidability results. It can be compared with the conceptually more complicated canonical model (in neighbourhood semantics) which is also to a large extent ad hoc, but nevertheless quite useful in many applications.
References
[1] R. A. Bull, An algebraic study of Diodorean modal systems, J. Symbolic Logic 30 (1965) 58-64.
[2] R. A. Bull, That all normal extensions of S4.3 have the finite model property, Z. Math. Logik Grundlagen Math. 12 (1966) 341-344.
[3] B. Hansson and P. Gärdenfors, A guide to intensional semantics, in: Modality, morality and other problems of sense and nonsense, Essays dedicated to Sören Halldén (Lund, 1973) 151-167.
[4] E. J. Lemmon and D. Scott, Intensional logic, preliminary draft of initial chapters by E. J. Lemmon (dittoed), Stanford University (1966).
[5] J. C. C. McKinsey, A solution of the decision problem for the Lewis systems S2 and S4, with an application to topology, J. Symbolic Logic 6 (1941) 117-134.
[6] K. Segerberg, Decidability of S4.1, Theoria 34 (1968) 7-20.
[7] K. Segerberg, An essay in classical modal logic, Filosofiska studier utgivna av Filosofiska föreningen och Filosofiska institutionen vid Uppsala universitet nr. 13, Uppsala (1971).
SYSTEMATIZING DEFINABILITY THEORY

Jaakko HINTIKKA and Veikko RANTALA
Academy of Finland, Helsinki, Finland
1. Preliminaries

There exists a fairly extensive literature on the different kinds of definability (identifiability)¹ in first-order theories. (See, e.g., [12].) Especially during the last few decades there has been progress on the questions of definability, as well as a clarification of the concept of definability itself. The purpose of this work is to show that it is possible to systematize (at least to some extent) the questions of the different kinds of definability in first-order (finitary) logic. The main syntactical tool used in this enterprise will be the theory of distributive normal forms developed by Hintikka. (See [3, 4, 6, 7].) The import of distributive normal forms for these questions can be seen in [7]. Here this line of thought will be continued. For a given first-order theory, one can construct certain expressions which have the form of a constituent but which can be inconsistent. In spite of their inconsistency, these expressions can be interpreted model-theoretically, so that we can obtain model-theoretic arguments for results concerning certain familiar kinds of identifiability, instead of the syntactic proofs in [7]. A fully explicit treatment of the model-theoretical import of such inconsistent expressions would obviously require recourse to ideas somewhat different from those of traditional model theory. A set of tools appropriate to this task is in fact found in the surface semantics of Hintikka [8], suitably extended. It would take us too far in this survey to review Hintikka's theory in any detail. For this reason, the model-theoretic significance of the ideas developed here will remain partly implicit. Although the focus of our concepts will thus seem to be syntactic, it is hoped that the main outlines of their model-theoretical applications are apparent enough. In any case, our methods enable us to obtain a convenient overview of a number of earlier results concerning different kinds of definability (identifiability) in first-order theories. As a preparation for this overview, a brief summary of some of the relevant results is presented below in Section 4.

¹ In econometrics, questions of definability are customarily referred to as problems of identifiability. In fact, our main problems may be considered as generalizations of econometricians' identifiability problems. (For these problems, see e.g. [2].) In this work, we shall not examine the interrelations between logicians' and econometricians' problems beyond borrowing the handy term 'identifiability' from the latter.

In most of what follows, we shall be considering an arbitrary fixed first-order theory T. For the sake of expositional simplicity, we shall usually formulate our definitions and results first on the basis of a number of simplifying assumptions concerning T. They are the following:

(a) T is finitely axiomatizable.
(b) T does not contain individual constants.
(c) T does not contain free individual variables.
(d) T contains a finite number of non-logical constants.
Of these, (c) is trivial and needs no special comment, while (d) will be adhered to through most of the discussion. As to the others, it will be indicated in each case what changes in our approach are necessitated by giving them up. We shall normally study the identifiability of one particular one-place predicate P occurring in T in terms of the other constants of T, the set of which will be called φ. Occasionally, the presence of these different constants in a theory or in a sentence will be indicated by writing them out as arguments in the obvious way, e.g. T(φ, P). The number of layers of quantifiers in a sentence or other expression will be called its depth. In other words, the depth of F is the maximum length of nested sequences of quantifiers in F. (In [9, pp. 18-19, 142 (note 33)] and elsewhere, it is indicated how this definition of depth can be sharpened somewhat. However, these refinements are not needed in the present work.) We shall often indicate the depth of our expressions
by superscripts in parentheses, e.g. T^(d) would be a (finitely axiomatizable) theory of depth d. The assumption that P is monadic is not restrictive, since the results of this work can be generalized to the case in which P is not monadic.

2. Distributive normal forms
Distributive normal forms may be thought of as operating as follows: Each sentence S is split up into a number of disjuncts called constituents, each of which describes one of the several alternatives concerning the world that S admits of. A constituent C does this by giving us a ramified list of all the different kinds of finite sequences (of a fixed length) of individuals that can be found in a model which satisfies C. This length equals the depth of C. Moreover, each sentence of depth d can be transformed into a disjunction of constituents of the same depth with the same nonlogical constants. When depth is increased, each constituent is split into a disjunction of deeper constituents. Given a finitely axiomatized theory T of depth d, we thus obtain the kind of tree-like analysis of T shown in Fig. 1.
Fig. 1.
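As a small worked illustration (our addition, not part of the original figure), consider the simplest possible case: a vocabulary consisting of a single monadic predicate P, at depth 1.

```latex
% Worked example (ours): depth-1 constituents for the vocabulary {P},
% P monadic. The depth-0 attributive constituents are Ct_1(x) = Px and
% Ct_2(x) = \neg Px; a depth-1 constituent asserts, for a nonempty
% S \subseteq \{1,2\}, that exactly the kinds in S are realized:
\[
  C_S \;=\; \bigwedge_{i \in S} (\exists x)\, \mathrm{Ct}_i(x)
        \;\wedge\; (\forall x)\, \bigvee_{i \in S} \mathrm{Ct}_i(x).
\]
% The three constituents (S = {1}, {2}, {1,2}) say, respectively: everything
% is P; nothing is P; both kinds of individuals occur. Every depth-1
% sentence in this vocabulary is equivalent to a disjunction of some of
% these C_S.
```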
When T is a tautology of depth zero, the tree diagram of Fig. 1 gives us an analysis of the whole first-order language using the same nonlogical constants as T. For simplicity, we shall often assume that all the inconsistent constituents have been eliminated from Fig. 1. Although this cannot be accomplished effectively, we can use this assumption in defining our metalogical concepts.
Any two constituents of the same depth are mutually incompatible. (That is, one of them always implies the negation of the other. Of course, if inconsistent constituents are admitted, two constituents can nonetheless be equivalent, viz. if they are both inconsistent.) Hence all the constituents in Fig. 1 which are compatible with a given one, say C^{(d+e)}, lie in branches passing through C^{(d+e)}. Each complete consistent theory compatible with T is formulated by the sentences of some branch of Fig. 1. If inconsistent constituents are assumed to be eliminated, the converse relation also holds: the sentences of any one branch constitute a complete consistent theory compatible with T. Thus many questions concerning T are reduced to questions concerning the branches of Fig. 1. In the sequel, it will be assumed that the following is an arbitrary branch (of consistent constituents) of this kind:

C^{(d)} → C^{(d+1)} → C^{(d+2)} → ⋯   (1)
Arrows indicate here logical implications. Normally, they cannot be strengthened into equivalences. When we are considering a theory T which is not finitely axiomatizable, perhaps not axiomatizable at all, pretty much the same things can still be said, provided that only a finite number of nonlogical constants occur in T. Then T is no longer equivalent with the disjunction of all the constituents C^{(d+e)} of each fixed depth d + e in Fig. 1. Rather, the constituents of a given depth d + e occurring in Fig. 1 are all the different constituents of this depth (with the same nonlogical constants as T) logically compatible with T. Again, all the different complete theories compatible with T correspond one-to-one with the branches of (consistent) constituents in Fig. 1. It remains to explain the structure of constituents in less informal terms. We shall first consider the case in which no individual constants are present. For later purposes, we shall highlight the role of the special monadic predicate P whose definability we are studying here. Then an
arbitrary constituent C^{(d+e)} can be said to be of the following form:

(Ex₁)(Px₁ ∧ Ct_{a₁}^{(d+e−1)}(φ, P, x₁)) ∧ (Ex₁)(Px₁ ∧ Ct_{a₂}^{(d+e−1)}(φ, P, x₁)) ∧ ⋯
∧ (Ex₁)(∼Px₁ ∧ Ct_{b₁}^{(d+e−1)}(φ, P, x₁)) ∧ (Ex₁)(∼Px₁ ∧ Ct_{b₂}^{(d+e−1)}(φ, P, x₁)) ∧ ⋯
∧ (x₁)[(Px₁ ∧ Ct_{a₁}^{(d+e−1)}(φ, P, x₁)) ∨ (Px₁ ∧ Ct_{a₂}^{(d+e−1)}(φ, P, x₁)) ∨ ⋯
∨ (∼Px₁ ∧ Ct_{b₁}^{(d+e−1)}(φ, P, x₁)) ∨ (∼Px₁ ∧ Ct_{b₂}^{(d+e−1)}(φ, P, x₁)) ∨ ⋯]   (2)

Here each conjunction ±Px₁ ∧ Ct^{(d+e−1)}(φ, P, x₁) is called an attributive constituent (in the vocabulary φ ∪ {P}).¹ Their structure resembles closely that of (2). In fact, an arbitrary attributive constituent of depth d + e − i + 1 with the set of constants φ ∪ {P} and with the free individual variables x₁, x₂, ..., x_{i−1} is of the following form:

¹ Our notation is therefore somewhat deviant, for usually the symbol Ct is used for the attributive constituents themselves, not for certain parts of theirs, as here. We do not see any serious danger of confusion here, however; the deviation is simply due to our desire to highlight the special role of the predicate P.
(Ey)(Py ∧ Ct_{a₁}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∧ (Ey)(Py ∧ Ct_{a₂}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∧ ⋯
∧ (Ey)(∼Py ∧ Ct_{b₁}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∧ (Ey)(∼Py ∧ Ct_{b₂}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∧ ⋯
∧ (y)[(Py ∧ Ct_{a₁}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∨ (Py ∧ Ct_{a₂}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∨ ⋯
∨ (∼Py ∧ Ct_{b₁}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∨ (∼Py ∧ Ct_{b₂}^{(d+e−i)}(φ, P, x₁, x₂, ..., x_{i−1}, y)) ∨ ⋯]
∧ (±A₁ ∧ ±A₂ ∧ ⋯)   (3)

Here the A_r are all the atomic formulas which we can build up of the members of φ, P, and x₁, x₂, ..., x_{i−1} and which contain x_{i−1}. The symbol ± is a placeholder for the negation symbol or for nothing at all, in all the combinations compatible with the general restrictions that may be put on constituents. In order to make an overview of (3) easier, x_i has been replaced by y in (3). (3) may be considered a (course-of-values) recursive definition of an attributive constituent of a given depth in terms of shallower attributive constituents. These have of course the same constants, but in (3) they contain more free individual variables, whence the course-of-values character of the definition. When d + e − i + 1 = 0, only the last conjunction (that of unnegated or negated atomic formulas) occurs in (3). This gives us a basis for the inductive definition. This definition is of course relative to the vocabulary φ ∪ {P}. For other selections of nonlogical constants, attributive constituents are defined analogously.
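To get a feel for the course-of-values recursion, here is a toy count (our sketch, under a simplifying assumption not made in the paper: a fixed number m of signed atomic formulas at every level, whereas in the actual definition the atoms multiply as free variables accumulate). A depth-(k+1) attributive constituent is fixed by the set of depth-k attributive constituents listed by its initial existential quantifiers (the universal clause is then determined) together with one sign assignment, so the counts iterate an exponential.

```python
# Toy count of attributive constituents (illustrative assumption: a constant
# number m of signed atoms per level).

def count_attributive(depth: int, m: int) -> int:
    """Number of depth-`depth` attributive constituents with m signed atoms."""
    n = 2 ** m                    # depth 0: one +/- sign choice per atom
    for _ in range(depth):
        n = (2 ** n) * (2 ** m)   # choose the existential list, then the signs
    return n
```

Already count_attributive(2, 1) == 512, and the tower grows superexponentially with depth: the tree of Fig. 1 is finitely branching at each level, but enormously wide.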
We shall disregard the order of conjuncts in an attributive constituent as well as possible repetitions of conjuncts. Furthermore, the naming of bound variables (barring conflicts between them) is likewise considered inessential. This suffices to define an attributive constituent. From the concept of an attributive constituent we obtain readily that of a constituent (with free individual variables or with individual constants). All we have to do is to conjoin (3) with a conjunction of the form

±B₁ ∧ ±B₂ ∧ ⋯

where ± again stands for a negation sign or for nothing at all and where the B_r are all the atomic formulas that can be built from the members of φ, from P, and from the variables x₁, x₂, ... . This gives us constituents with free individual variables. Constituents with individual constants are obtained from them by simple substitution. Our notation for constituents with individual constants is very simple: whenever necessary, we just write out the constants as arguments. Likewise free individual variables may be written out, as may nonlogical constants. Thus (1) may be written out more explicitly:

C^{(d)}(φ, P) → C^{(d+1)}(φ, P) → C^{(d+2)}(φ, P) → ⋯
It is perhaps not very hard to see how the formal structure of constituents hangs together with the informal explanation of constituents given earlier in this chapter in terms of the different sequences of individuals one can find in a world (model) in which the constituent in question is true. Let us look at (2) and (3) for the purpose, assuming that the latter occurs as a part of the former. Clearly the different initial existential quantifiers of (2) list all the different kinds of individuals (sequences of individuals of length one) that one can find in a world in which (2) is true. Likewise, in (3) the different initial quantifiers may be thought of as listing the different kinds of next step (next individual to be found) after we have already found the individuals which are the values of x₁, x₂, ..., x_{i−1}. Assuming that a constituent has been fully written out, we can thus say that each sequence of nested quantifiers occurring in it corresponds to a sequence of individuals which one can
find in a world in which the constituent is true. This sequence is described by all the atomic formulas in the constituent whose individual variables are bound to these quantifiers. Conversely, each different sequence of d individuals that can be found in a world in which (3) is true will have to correspond in this sense to a nested sequence of d quantifiers in (3). That is, the atomic formulas whose variables are bound to these quantifiers describe the interrelations of the given sequence of d individuals. From this intuitive idea it is seen at once that certain operations can be performed on constituents and on attributive constituents so that the result is implied by the original. For instance, since the descriptions of different sequences of individuals that the constituents give us can be formulated in vocabularies of different strength, we can simply omit all the atomic formulas containing certain fixed constants from a constituent and obtain a constituent in the poorer vocabulary which is implied by the given one. We shall normally use the convention that a designation for the new constituent is obtained simply by omitting the constants in question from the designation of the original one. For instance, C_i^{(d+e)}(φ, x₁, ..., x_i) is, C_i^{(d+e)}(φ, P, x₁, ..., x_i) being given, the result of omitting from the latter all atomic formulas which contain P. ('Omitting P', we shall say in the sequel.) Intuitively, the former describes the same sequences that can be found in a world in which the latter is true, but without the use of the predicate P. Likewise, we may omit any layer of quantifiers from a constituent and obtain (apart from repetitions) a constituent which is implied by the former. It obviously lists precisely the same sequences of individuals that the former said to exist in the world, but only truncated by one step. For a more formal motivation for these observations, see [4].
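The two omitting operations can be sketched on a toy encoding (ours, not the paper's formalism): an attributive constituent as a finite tree whose node carries the signed atomic formulas about the newest individual and whose children are the deeper attributive constituents listed by its initial existential quantifiers.

```python
# Toy encoding of attributive constituents, with the two operations described
# above: omitting the deepest layer of quantifiers, and omitting a predicate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ct:
    atoms: frozenset          # signed atoms, e.g. ("P", True) for Px
    children: tuple = ()      # deeper attributive constituents

def omit_layer(c: Ct) -> Ct:
    """Truncate the constituent by one layer of quantifiers (one step)."""
    if not c.children:
        return c
    if all(not k.children for k in c.children):
        return Ct(c.atoms, ())
    return Ct(c.atoms, tuple(dict.fromkeys(omit_layer(k) for k in c.children)))

def omit_pred(c: Ct, p: str) -> Ct:
    """Omit all atoms on predicate p (the reduct used later for the uncertainty sets)."""
    return Ct(frozenset(a for a in c.atoms if a[0] != p),
              tuple(dict.fromkeys(omit_pred(k, p) for k in c.children)))
```

Both results are implied by the original, and duplicates arising from the omission collapse, matching the 'apart from repetitions' proviso above.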
The idea that the different sequences of nested quantifiers in a constituent describe the different sequences of individuals that can be found in a model in which the constituent is true can be made more formal in several ways. One of them is Hintikka's game-theoretical interpretation of first-order logic [5], another his 'surface semantics' [8]. For the time being, we shall only use the idea in an informal way, however. Attributive constituents have pretty much the same properties as constituents. For instance, any two different attributive constituents of the same depth with the same constants and with the same free individual variables are incompatible. Of two compatible attributive constituents
(with the same constants and the same free individual variables) the shallower results from the deeper by omitting layers of quantifiers. Obtainability from another attributive constituent by this procedure defines a relation on attributive constituents with the same nonlogical constants and the same free individual variables. This relation clearly imposes a tree structure on the set of all such attributive constituents. The branches of this tree are countable (have the order type ω), and only a finite number of branches diverge at each element. When identities are admitted to our language, the former definition of constituents and attributive constituents can remain unchanged. However, then quantifiers will have to be given what Hintikka [4, Sections 11 and 13] has called an exclusive interpretation. What this interpretation is is easily explained in terms of the idea that nested sequences of quantifiers in a constituent C^{(d+e)} represent the kinds of individuals one can draw from a model of C^{(d+e)}. In the case of an exclusive interpretation (unlike the normal or 'inclusive' one) these draws are performed without replacement. Because of this difference in interpretation, constituents will in the case with identities behave somewhat differently from their namesakes without identity. The most important differences are noted in [4]. We shall refer to these differences in the sequel whenever they are relevant. As a special case we shall admit of an exceptional constituent in which there are no existentially quantified conjuncts in (2) and the disjunction in (2) becomes inconsistent. Such a constituent (which of course could simply be ruled out) is satisfiable in an empty domain only. We shall call it an empty constituent and think of it as containing no quantifiers. (All such constituents are of course equivalent.) In principle, the same thing might of course happen in (3). We shall then use the same conventions as in the case when (2) becomes empty.
A constituent which becomes 'empty' only in some of its inner parts like (3) is easily seen to be inconsistent, however. Since we shall later employ inconsistent constituents for legitimate purposes, we must nevertheless take into account such partially empty constituents (constituents in which some branches stop before their allotted depth). The case in which all the inner parts of a given depth become empty can occur in consistent constituents, satisfied in a finite domain, if the quantifiers are interpreted exclusively.
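The inclusive/exclusive contrast is exactly the contrast between draws with and without replacement; a minimal sketch (our illustration):

```python
# Sketch of the draw metaphor: under the 'inclusive' interpretation, nested
# quantifiers range over all sequences of individuals (draws with
# replacement); under the 'exclusive' interpretation, over sequences of
# distinct individuals (draws without replacement).
from itertools import product, permutations

domain = ["a", "b"]
inclusive = list(product(domain, repeat=2))    # with replacement
exclusive = list(permutations(domain, 2))      # without replacement
```

With a two-element domain and two nested quantifiers, the inclusive reading ranges over four sequences, the exclusive one over only two.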
3. Uncertainty sets

From (2) and (3) we can see at once one kind of index of the uncertainty concerning P left by C^{(d+e)}. Omitting from each of the expressions Ct_{a_i}^{(d+e−1)}(φ, P, x₁) all the atomic formulas containing P (with the associated connectives), we obtain an expression which we shall call Ct_{a_i}(φ, x₁) or, more explicitly, Ct_{a_i}^{(d+e−1)}(φ, x₁). It is clearly an attributive constituent with the constants φ and with the single free individual symbol x₁. We can then put

α(x₁) = {Ct_{a₁}(φ, x₁), Ct_{a₂}(φ, x₁), ...},   (4)
β(x₁) = {Ct_{b₁}(φ, x₁), Ct_{b₂}(φ, x₁), ...},   (5)
γ(x₁) = α(x₁) ∩ β(x₁).   (6)

Let us assume that

γ(x₁) = {Ct_{c₁}(φ, x₁), Ct_{c₂}(φ, x₁), ...}.

Likewise we can form from (3) the sets

α(x₁, ..., x_{i−1}, y) = {Ct_{a₁}(φ, x₁, ..., x_{i−1}, y), Ct_{a₂}(φ, x₁, ..., x_{i−1}, y), ...},   (7)
β(x₁, ..., x_{i−1}, y) = {Ct_{b₁}(φ, x₁, ..., x_{i−1}, y), Ct_{b₂}(φ, x₁, ..., x_{i−1}, y), ...},   (8)
γ(x₁, ..., x_{i−1}, y) = α(x₁, ..., x_{i−1}, y) ∩ β(x₁, ..., x_{i−1}, y).   (9)

Let us assume that here

γ(x₁, ..., x_{i−1}, y) = {Ct_{c₁}(φ, x₁, ..., x_{i−1}, y), Ct_{c₂}(φ, x₁, ..., x_{i−1}, y), ...}.

The depth of the α's, β's and γ's may be indicated by a superscript. The members of any α are called P-positive and the members of any β P-negative. The sets (6) and (9) are called uncertainty sets. Their model-theoretical meaning may be seen as follows: Assume that M is a model of C^{(d+e)}. Assume also that we cannot 'see' directly which elements of the domain
D of M have the property P but that we know how the members of φ are interpreted in D. What can we tell on the basis of this knowledge about the distribution of P in D? It is assumed here that only sequences of individuals of a fixed finite length d + e (at most) may be used in answering this question. According to formula (2), each a ∈ D satisfies one of the expressions Ct_{a_i}^{(d+e−1)}(φ, x₁) or Ct_{b_i}^{(d+e−1)}(φ, x₁). There are three possibilities here:
(i) it satisfies a member of α(x₁) − β(x₁);
(ii) it satisfies a member of β(x₁) − α(x₁);
(iii) it satisfies a member of γ(x₁).
In case (i), we must have Pa. In case (ii), we must have ∼Pa. It is precisely in case (iii), i.e., when a satisfies a member of the uncertainty set γ(x₁), that we cannot, by considering those characteristics which can be expressed by means of d + e − 1 layers of quantifiers at most, tell whether it has the property P or not. (Hence the term 'uncertainty set'.) More than this we cannot say without considering longer sequences of individuals.

Suppose likewise that we have chosen certain elements a₁, ..., a_{i−1} ∈ D which in this order satisfy the attributive constituent Ct(φ, x₁, ..., x_{i−1}) obtained from those attributive constituents in which (3) occurs in (2). Let us suppose further that although we could not initially tell whether a₁, ..., a_{i−1} have P or ∼P, we have somehow assigned one of these two to each of a₁, ..., a_{i−1} and even decided that a₁, ..., a_{i−1} satisfy a certain constituent (with free variables) C(φ, P, x₁, ..., x_{i−1}), which of course will be obtainable from (3) and from the attributive constituents in which (3) occurs in (2). Given now one more element a_i ∈ D, can we tell (on the basis of its characterization in terms of φ, of a₁, ..., a_{i−1}, and of d + e − i layers of quantifiers) whether we have Pa_i or ∼Pa_i? By the same reasoning as before, we can see from (3) that we cannot tell this (on the basis indicated) precisely in case a₁, ..., a_{i−1}, a_i satisfy an attributive constituent belonging to the uncertainty set. Thus the uncertainty set γ(x₁, ..., x_{i−1}, y) again defines those cases in which we remain in uncertainty as to whether P or ∼P applies to a_i, at the level of analysis on which we are moving in (3).
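The depth-1 case can be sketched concretely (our toy code; `phi_type` stands in for an element's full description in the vocabulary φ):

```python
# Toy depth-1 uncertainty sets: alpha collects the phi-types realized by
# P-elements, beta those realized by non-P elements, and the uncertainty set
# gamma is their intersection.

def uncertainty_sets(domain):
    """domain: iterable of (has_P, phi_type) pairs, one per element."""
    alpha = {t for (p, t) in domain if p}
    beta = {t for (p, t) in domain if not p}
    return alpha, beta, alpha & beta

# Elements whose phi-type lies in gamma are exactly those whose P-status the
# phi-information leaves open.
model = [(True, "Q"), (True, "not-Q"), (False, "not-Q")]
alpha, beta, gamma = uncertainty_sets(model)
```

Here every element of type "Q" must have P, while for elements of type "not-Q" the φ-information leaves P undetermined.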
4. Summary of some known results on definability
Several earlier results on different kinds of definability can be formulated in terms of the uncertainty sets. They illustrate the systematizations which are made possible by our concepts, especially by the notion of an uncertainty set. The following is a summary of some main results:

(A) Piecewise definability (Svenonius). P is piecewise definable in T iff in each branch (1) of Fig. 1, γ^{(d+e−1)}(x₁) = ∅ for some finite e. This means that α^{(d+e−1)}(x₁) and β^{(d+e−1)}(x₁) are separated in that branch by at least one set (possibly by several)

δ = δ^{(d+e−1)}(x₁) = {Ct_{d₁}(φ, x₁), Ct_{d₂}(φ, x₁), ...};

α^{(d+e−1)}(x₁) ⊆ δ, β^{(d+e−1)}(x₁) ∩ δ = ∅.

T will imply

⋁_{δ ∈ Δ} (x₁)(Px₁ ≡ Ct_{d₁}(φ, x₁) ∨ Ct_{d₂}(φ, x₁) ∨ ⋯),

where Δ is the set of all the different δ's obtained in the different branches (1) of Fig. 1. Here piecewise definability means the same as Hintikka-Tuomela's 'conditional definability' [10]. It is easily seen to be equivalent to definability in each model of T (see [16], where this was first pointed out). Conversely, it is easily shown (cf. [10]) that whenever P is definable in the complete theory constituted by the sequence (1), then this will have to be betrayed by the separation of α(x₁) and β(x₁) at the depth which equals that of the shallowest explicit definition of P in terms of φ implied by (1). Definability in a given model M means of course that an explicit definition of P of the form

(x₁)(Px₁ ≡ Ct_{d₁}(φ, x₁) ∨ Ct_{d₂}(φ, x₁) ∨ ⋯)

(where δ ∈ Δ) is true in M. What is characteristic of piecewise definability is that it cannot be gathered from the way the members of φ are interpreted in the domain D of M which definition (which δ ∈ Δ) applies in M.
We so to speak have to know how P is interpreted in the domain of M in order to decide how it is to be defined there, although we knew ahead of time that one of a finite list of explicit definitions must be applicable.

(B) Explicit definability. We obtain explicit definability as a special case of piecewise definability when the α's and β's are uniformly separable at some depth in all the different branches (1) of Fig. 1, that is to say, there is a set δ₀(x₁) such that for all the pairs α_i(x₁) and β_i(x₁) obtained from the different branches (1) we have

α_i(x₁) ⊆ δ₀(x₁), β_i(x₁) ∩ δ₀(x₁) = ∅.

Then T implies the explicit definition

(x₁)(Px₁ ≡ Ct_{d₁}(φ, x₁) ∨ Ct_{d₂}(φ, x₁) ∨ ⋯),

where

δ₀(x₁) = {Ct_{d₁}(φ, x₁), Ct_{d₂}(φ, x₁), ...}.
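A standard textbook-style example (our addition, not from the original text) illustrates the gap between (A) and (B):

```latex
% Illustration (ours): let T say that P coincides with Q or with its
% complement,
\[
  T \;=\; \{\, (\forall x)(Px \leftrightarrow Qx)
              \;\vee\; (\forall x)(Px \leftrightarrow \neg Qx) \,\}.
\]
% In every model of T, P is explicitly definable from Q by one of the two
% definitions, so P is piecewise (conditionally) definable in T; but no
% single explicit definition works in all models, so P is not explicitly
% definable in T. Which of the two definitions applies cannot be read off
% from the interpretation of Q alone.
```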
(C) Restricted identifiability (Chang and Makkai). Now that we know what it means for a separation to take place in the outermost attributive constituents of the sequence (1), we are naturally led to ask what happens if a separation is effected deeper in the constituents of this sequence. Again the answer is quite clear-cut. If in each of the branches (1) of Fig. 1 some γ(x₁, ..., x_k) (not necessarily the same in different branches) eventually disappears, then P is what Hintikka [7] calls restrictedly identifiable. It might almost be called countably identifiable, for what we have is that whenever the interpretation of the members of φ in any countably infinite domain D is fixed, there are at most ℵ₀ different interpretations of P which make T true in D. More generally, if the cardinality of D is ξ, the interpretation of φ in any infinite D leaves at most ξ choices open for P. Chang [1] and Makkai [13] have shown (cf. also [12], p. 430) that this is equivalent to the following:
(i) P is identifiable in any infinite domain D to a degree less than 2^ξ.
(ii) There are formulas F₁, ..., F_n with the constants φ and the free individual variables x₁, ..., x_{k−1}, y but without P such that T implies

⋁_{i=1}^{n} (Ex₁) ⋯ (Ex_{k−1}) (y)(Py ≡ F_i(x₁, ..., x_{k−1}, y)).   (12)
Hintikka [7] indicates how to show that (ii) is equivalent to the disappearance of some γ(x₁, ..., x_k) in each sequence (1) of Fig. 1 (not necessarily the same one in different sequences). In fact, if

δ = {Ct_{d₁}(φ, x₁, ..., x_k), Ct_{d₂}(φ, x₁, ..., x_k), ...}

separates α(x₁, ..., x_k) and β(x₁, ..., x_k) in a given branch, then the constituent in which γ(x₁, ..., x_k) disappears implies

(Ex₁) ⋯ (Ex_{k−1}) (y)[Py ≡ (Ct_{d₁}(φ, x₁, ..., x_{k−1}, y) ∨ Ct_{d₂}(φ, x₁, ..., x_{k−1}, y) ∨ ⋯)].   (13)

Hence T implies (in virtue of König's lemma) a disjunction of the form (12). The converse implication (i.e., the implication from (ii) to the disappearance of some γ in each (1)) can be established by studying the constituents of Fig. 1 at the depth of (12). It is easily seen that in each branch a separation must take place at this depth if (12) is to be implied by T. These observations can be considerably sharpened, as we shall point out later in this study.
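The König's lemma step can be glossed as follows (our sketch of the argument, not the paper's wording):

```latex
% Gloss (ours): the constituents compatible with T form a finitely branching
% tree (Fig. 1). Suppose that on every branch some uncertainty set
% \gamma(x_1,\dots,x_k) disappears at a finite depth. The nodes at which no
% disappearance has yet occurred then form a subtree without infinite
% branches; being finitely branching, by K\"onig's lemma it is finite. Hence
% there is a single depth by which every branch has produced an implication
% of the form (13), and T implies the finite disjunction of the
% corresponding formulas, i.e. a sentence of the form (12):
\[
  T \models \bigvee_{i=1}^{n} (\exists x_1)\cdots(\exists x_{k-1})
            (\forall y)\bigl(Py \equiv F_i(x_1,\dots,x_{k-1},y)\bigr).
\]
```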
Hence T implies (in virtue of Konig’s lemma) a disjunction of the form (12). The converse implication (i.e., implication from (ii) to the disappearance of some y in each (1)) can be established by studying the constituents of Fig. 1 at the depth of (12). It is easily seen that in each branch a separation must take place at this depth if (12) is to be implied by T. These observations can be considerably sharpened, as we shall point out later in this study. (D) Finite ident$ubility [12]. It may also happen that not only does some y (xl,..., xk) disappear in each given sequence (l), but that all the uncertainty sets y i (xl,..., x,J disappear that are derived from ‘indistinguishable’ (in v) constituents C(d+e  k) (9,p , x1 xk) 9
occurring as (not necessarily consecutive) parts of the constituent CAd+e) of (1) at which the disappearance takes place. Two such constituents
Cd
(9,p , x1, . * * Y
+e
xk),
c,( d + e  k ) (0,P , XI, ..*,Xk)
will be called indistinguishable in pl iff the reducts C:d+ek’(v?Xlr
+
(d e k)
*.*>
xk),
c,
Y * * . Y
xk)
= Cti
(0, XI,
...,Xk)
are identical. That
Gd
+
 k,
(v, p , x1
(d+ek)
(9,p , x1,
A (&B1 A
&B,
A
xk)
...)
54
JAAKKO HINTIKKA A N D VEIKKO RANTALA
occurs as a part of
(tp,
P ) means that
occurs there as a consecutive part and that the + B , , fB, , ... (with the appropriate ‘signs’) occur there with their variables bound to the same quantifiers as the variables of CtY+ek , ( 9 ) , p , x l ,. * . , x k ) * As shown in [7], in this case C f f e )also satisfies the following condition of Kueker’s: (i),, There are expressions S a n d Fi ( i = 1, ...,n) in the vocabulary 47 (but without P ) with the free individual variables x l , ...,x k and x l , ...,x k , y respectively, such that Cf+=) implies the following: *”
(xi)
(xk)
[
s (XI
9

(Exk) s ( x l ,
(14)
xk)?
i=n *Y
xk)
v (v)(pu
i=l
Fi ( x i 9
*

7
xk
( 15 )
Here n is the number of the different separating sets needed in the different indistinguishable constituents. It is easily seen that if this happens in each sequence (l), then T also implies expressions of the form (14) and (15) (with n now the maximum of the similar parameters in the several branches). Kueker [12] (cf. also [14, 151) shows that (i),, is equivalent to at most nfold identifiability of P in T. 5. Uncertainty descriptions
By means of uncertainty sets, we can define certain first-order expressions which may be said to describe, in a rather vivid sense, the uncertainty which the theory T leaves to the predicate P (in those cases in which it is not definable in T). We shall first explain the formal definition of an uncertainty description. Such a description is relative to a given depth. Hence we have to start from some given constituent, say from C^{(d+e)}(φ, P). It is of the form (2).
The corresponding uncertainty description will be called Unc C^{(d+e)}(φ, P). It is reached by stages as follows:

Stage 1. Omit from (2) all the attributive constituents

±Px₁ ∧ Ct^{(d+e−1)}(φ, P, x₁)

which do not yield a member of γ(x₁) when P is omitted from them.
...
Stage i (1 < i ≤ d + e). Assuming that (3) occurs in the expression obtained at stage i − 1, omit from it all the attributive constituents

±Py ∧ Ct^{(d+e−i)}(φ, P, x₁, ..., x_{i−1}, y)

which do not yield a member of γ(x₁, ..., x_{i−1}, y) when P is omitted from them.
...

This process comes to an end after a finite number of stages. The outcome is Unc C^{(d+e)}(φ, P). From Unc C^{(d+e)}(φ, P) we can form Unc C^{(d+e)}(φ) by omitting from it all the atomic formulas containing P (together with the associated sentential connectives). The latter will be called a reduced uncertainty description. It is easily seen that the expressions Unc C^{(d+e)}(φ, P) and Unc C^{(d+e)}(φ) have the syntactic form of a constituent (in the vocabulary φ ∪ {P} and φ, respectively). However, they are often inconsistent. In particular, they may contain empty parts in the sense explained in Section 2 above. (That is, some of their branches may come to an end before the depth d + e even though other branches do not.) In spite of their inconsistency, such expressions Unc C^{(d+e)}(φ, P) and Unc C^{(d+e)}(φ) can be put to perfectly good use and even given an intuitive model-theoretic explanation. In order to see what this explanation might be, let us assume once again that we are given a model M of T with the domain D. How much can we tell of the definition of P on D on the basis of the way the members of φ are interpreted in D? The answer is relative to the depth to which we are following the interrelations of the elements of D. Let us therefore fix this depth at d + e.
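The staged pruning above can be sketched on a toy tree encoding (ours, not the paper's formalism): at each level, an attributive constituent survives iff its P-omitted φ-description occurs among its siblings with both P-signs, i.e. iff it yields a member of the uncertainty set of that level.

```python
# Toy pruning that produces an uncertainty description: each level is a list
# of attributive constituents (p_sign, phi_desc, deeper_level); keep a node
# iff its phi-description occurs at this level with both P-signs (it lies in
# gamma), then recurse into what remains.

def prune(level):
    pos = {d for (s, d, _) in level if s}
    neg = {d for (s, d, _) in level if not s}
    gamma = pos & neg
    return [(s, d, prune(deeper)) for (s, d, deeper) in level if d in gamma]
```

Branches may die out at different depths, which is precisely how the 'partially empty', and hence often inconsistent, syntactic shape of Unc C^{(d+e)}(φ, P) arises.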
We may again consider the kind of situation mentioned above when we defined the uncertainty sets: We are given a model M of C^{(d+e)}(φ, P) with the domain D. We know how the members of φ are interpreted in D, and on the basis of this we are trying to see precisely when it is that P is undefined on a member a₁ of D, on a second member a₂ of D, and so on. If we choose one member a₁ ∈ D, we do not know whether it has P or ∼P precisely when it satisfies a member of γ(x₁), in other words, satisfies an attributive constituent Ct^{(d+e−1)}(φ, x₁) which is preserved at Stage 1 of the definition of an uncertainty description. We can thus assign to such an a₁ ∈ D either P or ∼P and investigate M further. At each stage, a member a_i ∈ D is chosen, and the question is whether we can tell between Pa_i and ∼Pa_i on the basis of the available information on a₁, ..., a_{i−1}. This information consists of course of our knowing which point in the tree Unc C^{(d+e)}(φ, P) we have reached, which means knowing which constituent

C^{(d+e−i+1)}(φ, P, x₁, ..., x_{i−1})

is satisfied by a₁, ..., a_{i−1} (in this order).¹ From the definition of uncertainty descriptions it is seen that we do not know, on the basis of the
¹ This information includes knowing (having decided) whether each of the a₁, ..., a_{i−1} has P or ∼P. It is not exhausted by the latter information, however, as is readily seen from the following example: Consider a theory T (it can be formulated as a single constituent of depth 3) which contains a two-place predicate R which imposes a discrete linear order without endpoints on the domain, and a one-place predicate P which is carried over to the left in the sense that we have (x)(y)(Rxy ∧ Py ⊃ Px). We are interested in the definability of P. Then, if a₁ is the rightmost individual with P, no uncertainty remains with respect to the other individuals, characterized in terms of their relationship to a₁, whether they have P or ∼P. In other words, we have here an instance of restricted identifiability yielding the definition-like statement (cf. (13)) (Ex)(y)(Py ≡ Ryx). However, such an a₁ cannot be recognized on the sole basis of the interpretation of R plus knowledge whether we have Pa₁ or ∼Pa₁. We have to know which constituent C^{(2)}(R, P, x₁) with P is satisfied by a₁. Whether less information than knowing which constituent is satisfied by a₁, ..., a_{i−1} suffices for restricted definability can be seen by comparing the different branches of C^{(d+e)}(φ, P) which yield a quasi-definition of the form (13). In this way we can, e.g., see when it suffices to know whether each of the a₁, ..., a_{i−1} has P or ∼P. The fact that the latter does not always suffice may perhaps be considered a kind of analogue to the peculiarity of piecewise definability which was registered above.
definition of the members of φ on D and on the basis of the already available information just described, whether Pa_i or ∼Pa_i, precisely when a₁, ..., a_{i−1}, a_i satisfy one of those attributive constituents

Ct^{(d+e−i)}(φ, x₁, ..., x_{i−1}, y)

(cf. (3)) which are preserved at Stage i above. Hence the different sequences of nested quantifiers in Unc C^{(d+e)}(φ, P) or Unc C^{(d+e)}(φ) describe in a sense all the different kinds of sequences (of length ≤ d + e) of individuals that can be chosen from D one by one, preserving all the time the uncertainty as to whether the new individual chosen has P or ∼P. They thus constitute in a rather vivid sense a description of the uncertainty which C^{(d+e)}(φ, P) leaves open for P.¹ (Notice that Unc C^{(d+e)}(φ, P) is independent of M.) This description is accomplished in a way which closely resembles the way an ordinary constituent describes its models. In the latter case, too, what is specified is what kinds of sequences of individuals we can successively draw from a model. This similarity can be spelled out more technically, for instance, in terms of a suitable game-theoretical interpretation of first-order logic. From this point of view, we can also see the reasons for the inconsistency of many of the uncertainty descriptions. It is part and parcel of the usual interpretation of quantified expressions that the draws of individuals from a domain which quantifiers game-theoretically represent are draws from a constant domain (cf. [8, Section 10]). Now the peculiarity of uncertainty descriptions is precisely that the successive draws they describe are not draws from a domain independent of the draw. For the range of individuals of which we do not know whether they have P or ∼P can change.² This uncertainty range can decrease, for in the case of later individuals we have some further information at our disposal which we did not have to begin with, viz. information concerning the individuals chosen earlier. Conversely, our uncertainty may be greater at later stages

¹ If the quantifiers are interpreted inclusively, then (and only then) an individual may occur repeatedly in the same sequence. This merely reflects the fact that the considered information (when identity is not present) is not sufficient for saying whether an individual chosen was chosen earlier.
² That is, the set of the i-th coordinates of the sequences described by Unc C^{(d+e)}(φ, P) may change as i increases in the interval 1 ≤ i ≤ d + e.
JAAKKO HINTIKKA A N D VEIKKO RANTALA
than at earlier ones, for the later individuals, when they are described in terms of the members of φ only, are described by means of fewer layers of quantifiers than earlier ones, thus allowing for a less firm decision between P and ¬P.
6. Uncertainty descriptions and different kinds of definability
In terms of uncertainty descriptions, we can reformulate some of the results explained in Section 4 in a way that brings out more clearly the underlying situation and also yields proofs of most of the results. It is assumed in this chapter that we are dealing with first-order logic with identity, i.e., that the quantifiers have been given an exclusive interpretation. Let us examine a given sequence (1). Let the corresponding sequence of uncertainty descriptions be

Unc C^(d)(φ, P), Unc C^(d+1)(φ, P), ..., Unc C^(d+e)(φ, P), ... .   (17)

It is easily seen that if the last layer of quantifiers is omitted from Unc C^(d+e+1)(φ, P), the result is either identical with (16) or can be obtained from it by omitting some attributive constituents. This simply reflects the fact that one's uncertainty about P grows smaller when a new layer of quantifiers is admitted to the description of individuals in terms of the defining constants φ. Thus each branch of (16) is either continued or cut off in Unc C^(d+e+1)(φ, P). What are now the most important things that can happen in (17), and what do they tell us about the definability of P in the complete theory (1)? The most important things that can happen here are clearly the following:

(a) The members of (17) disappear altogether from some point on, say from (16) on. Then obviously P is definable explicitly in (1), and if this happens in each branch (1) of Fig. 1, P is definable piecewise in the given theory. This is case (A) of Section 4.

(b) All the branches of the members of (17) stop growing from some point on, say from (16) on. Then the description given in Section 4 of case (D) applies a fortiori, and we have a case of finite identifiability. Described in the way done here, however, this case yields Kueker's
SYSTEMATIZING DEFINABILITY THEORY
conditions (14), (15) much more easily than in [7], and in fact yields several parallel conditions. What we obtain immediately is the existence of a number of expressions (in the vocabulary of φ)

F₁(x₁, ..., x_k, y), ..., F_m(x₁, ..., x_k, y)

such that C^(d+e)(φ, P) implies

∧_{j=1}^{m} (∃x₁) ... (∃x_k) S_j(x₁, ..., x_k),   (18)

(x₁) ... (x_k) ∨_{j=1}^{m} [S_j(x₁, ..., x_k) ∧ (y)(Py ↔ F_j(x₁, ..., x_k, y))].   (19)
Here some of the S_j may be identical. Any two different S_j are incompatible, however. The Kueker conditions (14), (15) are obtained from any equivalence class of identical S_j's. It is an interesting result that if (14), (15) hold for one S_j, they hold for each of them. In order to see what these S_j and F_j are, let Ct_k(φ, P, x₁, ..., x_k) (it is of the form (3), with i − 1 = k) occur in C^(d+e)(φ, P) and correspond to a point in Unc C^(d+e)(φ, P) where a branch comes to an end, so that the resulting uncertainty set γ(x₁, ..., x_k, y) is empty. Then we obtain an S_j as the conjunction

Ct₁(φ, P, x₁) ∧ Ct₂(φ, P, x₁, x₂) ∧ ... ∧ Ct_k(φ, P, x₁, ..., x_k),

where Ct₁(φ, P, x₁), ..., Ct_{k−1}(φ, P, x₁, ..., x_{k−1}) are all the attributive constituents in which Ct_k(φ, P, x₁, ..., x_k) occurs in C^(d+e)(φ, P). The corresponding F_j is obtained as the disjunction of the members of any δ which separates the P-positive and P-negative attributive constituents shown in Ct_k(φ, P, x₁, ..., x_k).

Conversely, if one branch in the uncertainty descriptions (17) keeps growing indefinitely, it means that there exists in each infinite model M of (1) a countable sequence of individuals such that at each individual a, P's applying or not applying to a is not determined by the earlier individuals, and at least one of the two choices gives rise to a further uncertainty.¹ But this means that P is not finitely identifiable. Thus the conditions given above are not only sufficient but also necessary conditions of finite identifiability on the basis of (1), and we obtain a simple argument for the Kueker-type criterion of finite identifiability.² If this kind of situation occurs in each branch of Fig. 1, P is finitely identifiable on the basis of the given theory. It is readily seen how conditions (14), (15) or (18), (19) obtained in the different branches are to be combined so as to yield conditions of the same form for the whole theory. (It is easily seen that it does not matter whether in (18), (19) quantifiers are interpreted exclusively or inclusively.)

We can read off further syntactical criteria of finite identifiability from what has been said. Perhaps the simplest is that T should imply, for a suitable k and for suitable F_j(x₁, ..., x_k, y) (in the vocabulary of φ) with j = 1, 2, ..., m,

(x₁) ... (x_k) ∨_{j=1}^{m} (y)(Py ↔ F_j(x₁, ..., x_k, y)).   (20)

On the inclusive interpretation this could be written, as one can easily see,

(x₁) ... (x_k) [x₁ ≠ x₂ ∧ ... ∧ x_{k−1} ≠ x_k ⊃ ∨_{j=1}^{m} (y)(Py ↔ G_j(x₁, ..., x_k, y))],   (21)

with suitable G_j(x₁, ..., x_k, y) (in the vocabulary of φ) with j = 1, 2, ..., m.³ As a further consequence, it is seen that P is finitely identifiable on the basis of T iff it is finitely definable (identifiable) in each model M of T, in an obvious sense of finite definability in a model.

¹ Of course, for each stage of choices there must be at least two individuals such that P can be given to one of them and ¬P to the other. It should also be noted that here identity is supposed to be present.
² That the terminating of every branch is the necessary condition can be proved also syntactically, by showing that if some branch does not terminate in (17), then the condition in Section 4 (at the very beginning of part (D)) is not satisfied.
³ It is obvious that (20) is also a sufficient condition for finite identifiability when identity is present, since (21) is of the form (15) and (∃x₁) ... (∃x_k)(x₁ ≠ x₂ ∧ ... ∧ x_{k−1} ≠ x_k) holds in each infinite model of T.
(c) The third main question concerning (17) is whether any branch terminates (stops growing) in the uncertainty descriptions (17). It was already seen in Section 4, part (C), that in this case we have restricted identifiability. The Chang–Makkai condition (12) was also seen to result trivially in this case. Conversely, if no branch terminates in (17), it means that we have in any model M of (1) a countable sequence of choices whether to assign P or ¬P to certain individuals. Each choice is independent of the earlier ones, and either alternative leads to further choices. This clearly means that P cannot be identifiable to any degree smaller than 2^ℵ₀, however.¹ By extending this argument somewhat, it can also be shown (we shall not do it here) that if no branch terminates in (17), the Chang–Makkai condition (i) (Section 4, part (C)) is not satisfied for infinite domains D with cardinality > ℵ₀. Again, it is seen that P is restrictedly identifiable on the basis of a theory iff it is restrictedly identifiable in each model of this theory. Thus the most important known results on identifiability find their niche in terms of uncertainty descriptions.
¹ This can be proved also syntactically by showing that if no branch terminates in (17), then no uncertainty set disappears in (1), either.

References
[1] C. C. Chang, Some new results in definability, Bull. Am. Math. Soc. 70 (1964) 808-813.
[2] F. M. Fisher, The identification problem in econometrics (McGraw-Hill, New York, 1966).
[3] J. Hintikka, Distributive normal forms and deductive interpolation, Z. Math. Logik Grundlagen Math. 10 (1964) 185-191.
[4] J. Hintikka, Distributive normal forms in first-order logic, in: Formal systems and recursive functions, J. N. Crossley and M. A. E. Dummett, Eds. (North-Holland, Amsterdam, 1965) 47-90.
[5] J. Hintikka, Language-games for quantifiers, in: Studies in logical theory, American Philosophical Quarterly, Monograph Series No. 2 (Blackwell's, Oxford, 1968) 46-72. Reprinted in [9].
[6] J. Hintikka, Surface information and depth information, in: Information and inference, J. Hintikka and P. Suppes, Eds. (D. Reidel, Dordrecht, 1970) 263-297.
[7] J. Hintikka, Constituents and finite identifiability, J. Phil. Logic 1 (1972) 45-52.
[8] J. Hintikka, Surface semantics: definition and its motivation, in: Truth, syntax and modality, H. Leblanc, Ed. (North-Holland, Amsterdam, 1973).
[9] J. Hintikka, Logic, language-games and information: Kantian themes in the philosophy of logic (The Clarendon Press, Oxford, 1973).
[10] J. Hintikka and R. Tuomela, Towards a general theory of auxiliary concepts and definability in first-order theories, in: [11] 298-330.
[11] J. Hintikka and P. Suppes, Eds., Information and inference (Reidel, Dordrecht, 1970).
[12] D. W. Kueker, Generalized interpolation and definability, Ann. Math. Logic 1 (1970) 423-468.
[13] M. Makkai, A generalization of a theorem of E. W. Beth, Acta Math. Acad. Sci. Hungar. 15 (1964) 227-235.
[14] G. E. Reyes, Local definability theory, Ann. Math. Logic 1 (1970) 95-137.
[15] S. Shelah, Remarks to 'Local definability theory' of Reyes, Ann. Math. Logic 2 (1971) 441-447.
[16] L. Svenonius, A theorem about permutation in models, Theoria 25 (1959) 173-178.
CONSERVATIVE END-EXTENSIONS AND THE QUANTIFIER 'THERE EXIST UNCOUNTABLY MANY'
Herman Ruge JERVELL
University of Tromsø, Tromsø, Norway
0.
In this paper, we will give some new connections between ordered models and the quantifier Qx, 'there exist uncountably many x'. For some of the known results, see the papers by Fuhrken [1] and Keisler [2]. We work with the languages L, L1, LR and LQ. L is an ordinary first-order countable relational language. L1 is the language we get from L by adding a new binary relation Rxy. LQ is got from L by adding the quantifier Qx. Below we will define LR as a certain sublanguage of L1. Formulas are defined as usual. A sentence is a formula without free variables. We map the formulas of LQ into L1 as follows:

(F)* = F for F atomic,
(¬F)* = ¬(F)*,
(F ⊃ G)* = F* ⊃ G*,
(F ∧ G)* = F* ∧ G*,
(F ∨ G)* = F* ∨ G*,
(∀x Fx)* = ∀x (Fx)*,
(∃x Fx)* = ∃x (Fx)*,
(Qx Fx)* = ∀y ∃x [Ryx ∧ (Fx)*].
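The *-mapping just defined is a straightforward structural recursion and can be sketched as follows. The tuple representation of formulas, the constructor tags, and the naive fresh-variable choice are our own illustrative conventions, not notation from the paper.

```python
# A minimal sketch of the *-translation from LQ into L1 described above.
# Formulas are nested tuples with our own tags: "atom", "not", "and",
# "or", "imp", "all", "ex", "Q".

def star(f):
    """Map an LQ-formula to its L1 image; Qx F becomes Ay Ex (Ryx & F*)."""
    tag = f[0]
    if tag == "atom":
        return f                                  # (F)* = F for F atomic
    if tag == "not":
        return ("not", star(f[1]))                # (~F)* = ~(F*)
    if tag in ("and", "or", "imp"):
        return (tag, star(f[1]), star(f[2]))      # connectives commute with *
    if tag in ("all", "ex"):
        return (tag, f[1], star(f[2]))            # Ax, Ex commute with *
    if tag == "Q":
        x = f[1]
        y = x + "'"                               # a (naively chosen) fresh variable
        return ("all", y, ("ex", x,
                ("and", ("atom", "R", y, x), star(f[2]))))
    raise ValueError(tag)

# (Qx Px)* becomes Ax' Ex (Rx'x & Px):
print(star(("Q", "x", ("atom", "P", "x"))))
```

The fresh-variable choice would of course need care in a real implementation; here it only illustrates the shape of the Q-clause.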
HERMAN RUGE JERVELL
LQ* is the *-image of LQ in L1. Whenever the meaning is clear from the context, we omit * from LQ* or from F*. LR is the sublanguage of L1 consisting of LQ*-formulas and their subformulas. In LR, we have formulas of the following four types (Fx is an LQ*-formula, s and t are terms):
(1) LQ*-formulas,
(2) ∃x [Rsx ∧ Fx],
(3) Rst ∧ Ft,
(4) Rst.
Let 𝔄 be a model. If we extend the language L with names for the individuals of 𝔄, we get the language L𝔄. Similarly we get LR𝔄 and LQ𝔄. An LQ-model is an uncountable model in the language L. If we wanted to, we could have admitted countable models as LQ-models. This would have necessitated some trivial changes in the theory below. An ordered model is a model in the language LR where Rxy is a total, irreflexive, linear ordering. Let 𝔄 and 𝔅 be two ordered models. We then say that
(i) 𝔅 is a proper extension of 𝔄 if 𝔅 is an extension of 𝔄 and there is b ∈ 𝔅 such that for all a ∈ 𝔄, 𝔅 ⊨ Rab;
(ii) 𝔅 is a conservative extension of 𝔄 if 𝔅 is an extension of 𝔄 and for all F ∈ LR𝔄, 𝔄 ⊨ F ⇔ 𝔅 ⊨ F;
(iii) 𝔅 is an end-extension of 𝔄 if it is an extension of 𝔄 such that for all a ∈ |𝔄|, b ∈ 𝔅: if 𝔅 ⊨ Rba, then b ∈ 𝔄.
We could have changed the definition in (ii) to: 'For all F ∈ L1𝔄, 𝔄 ⊨ F ⇔ 𝔅 ⊨ F'. The first part of the theory below (results (A) and (B)) will still go through with only minor changes, but we then get into difficulties when we try to tie our results up with the result of Keisler (result (C)). Let 𝒜 be the class of all countable ordered models which have proper conservative end-extensions. We can now formulate the main results of this paper:
(A) We give a necessary and sufficient criterion for a countable model to be an element of 𝒜. This criterion can be used to axiomatize 'true in all models of 𝒜'.
CONSERVATIVE ENDEXTENSIONS AND THE QUANTIFIER
(B) For all F ∈ LQ: F is valid iff F* is true in all models of 𝒜.
(C) In [2], Keisler gave another axiomatization of LQ-validity, and a completeness proof for it. His proof is in two parts. First, he proved completeness of 'true in all weak models satisfying certain schemata' using an ordinary Henkin construction. Then, and this is the hard part, he proved that from each such weak model he could construct an LQ-elementarily equivalent LQ-model. Below we prove that there is an easy connection between such weak models and the models in 𝒜. From this we get a new proof of Keisler's completeness result. The results are all for first-order logic. It is straightforward to extend them to ω-logic and to L_{ω₁ω}.
1. We now want to characterize the class 𝒜 of all countable ordered models which have proper conservative end-extensions. A model 𝔄 in the language L1 is ordered if and only if it satisfies M1-M3.

M1  ∀x ∀y [Rxy ∨ x = y ∨ Ryx],
M2  ∀x ∀y ∀z [Rxy ∧ Ryz ⊃ Rxz],
M3  ∀x ¬Rxx.

Then we note that an ordered model 𝔄 has a proper conservative extension if and only if it satisfies

M4  ∀x ∃y Rxy.

In fact, we get by compactness that 𝔄 then has a proper L1-conservative extension. The claim now is that a countable model 𝔄 in the language L1 is in 𝒜 if and only if it satisfies M1-M5, where

M5  [Qx ∃y Fxy ⊃ Qy ∃x Fxy ∨ ∃y Qx Fxy]*

for all LQ-formulas Fxy.
First, we prove the necessity. Let 𝔄 be an ordered model and 𝔅 a proper conservative end-extension of 𝔄. 𝔄 will then obviously satisfy M1-M4. We will prove that it also satisfies M5. Let Fxy be a formula in LQ𝔄. Assume

𝔄 ⊨ Qx ∃y Fxy.

Then

𝔅 ⊨ Qx ∃y Fxy,
𝔅 ⊨ ∃y Fby for some b ∈ 𝔅 − 𝔄,
𝔅 ⊨ Fbc for some c.

There are now two cases.

Case 1. c ∈ 𝔄. To each a ∈ 𝔄,

𝔅 ⊨ Rab ∧ Fbc,
𝔅 ⊨ ∃x (Rax ∧ Fxc),
𝔄 ⊨ ∃x (Rax ∧ Fxc).

Hence

𝔄 ⊨ ∀y ∃x (Ryx ∧ Fxc),
𝔄 ⊨ Qx Fxc,
𝔄 ⊨ ∃y Qx Fxy.

Case 2. c ∈ 𝔅 − 𝔄. To each a ∈ 𝔄,

𝔅 ⊨ Rac ∧ Fbc,
𝔅 ⊨ Rac ∧ ∃x Fxc,
𝔅 ⊨ ∃y [Ray ∧ ∃x Fxy],
𝔄 ⊨ ∃y [Ray ∧ ∃x Fxy].

Hence

𝔄 ⊨ Qy ∃x Fxy.

From the two cases we conclude

𝔄 ⊨ ∃y Qx Fxy ∨ Qy ∃x Fxy,

and 𝔄 satisfies M5.
LEMMA 1. 𝔄 ∈ 𝒜 ⇒ 𝔄 satisfies M1-M5.

We use the remainder of this section to show the converse. Assume that 𝔄 is a countable model satisfying M1-M5. By M1-M3, we get that 𝔄 is ordered. By M4, 𝔄 has a proper conservative extension. We must prove that 𝔄 has a proper conservative end-extension. Let 𝔅 be a proper conservative extension of 𝔄. A subset N of 𝔅 is small (relative to 𝔄) if N ⊆ {b ∈ 𝔅 | 𝔅 ⊨ Fb}, where Fx ∈ LQ𝔄 and 𝔄 ⊭ Qx Fx. b ∈ 𝔅 is small (relative to 𝔄) if it is included in a small set (relative to 𝔄). b ∈ 𝔅 is large (relative to 𝔄) if for all a ∈ 𝔄, 𝔅 ⊨ Rab.

LEMMA 2. No element of 𝔅 is both small and large.

PROOF. Assume b ∈ 𝔅 is small. Then for some Fx ∈ LQ𝔄, 𝔅 ⊨ Fb and 𝔄 ⊭ Qx Fx. Hence there is a ∈ 𝔄 such that

𝔄 ⊭ ∃x (Rax ∧ Fx),
𝔅 ⊭ ∃x (Rax ∧ Fx).

Since 𝔅 ⊨ Fb, we have 𝔅 ⊭ Rab, and b is not large.

Given an ordered model 𝔔 with linear order R, let R* be another linear order obtained from R by permuting the elements {c | 𝔔 ⊨ Rcd}, where d is a fixed element. Let 𝔔* be the model obtained from 𝔔 by exchanging R with R*. Then, by a straightforward argument, 𝔔 and 𝔔* are LQ𝔔-elementarily equivalent. It is more difficult to give results on the preservation of LR𝔔-formulas. The LR𝔔-formulas are of the following four types:
(1) LQ𝔔-formulas,
(2) ∃x [Rsx ∧ Fx], where Fx ∈ LQ𝔔,
(3) Rst ∧ Ft, where Ft ∈ LQ𝔔,
(4) Rst.
The problem is to get control over formulas of the second type. We call such formulas bound formulas.

LEMMA 3. There is a proper conservative extension of 𝔄 such that each element in the extension is either small or large.
PROOF. Let 𝔅 be a proper conservative extension of 𝔄. We divide the elements of 𝔅 into three disjoint classes:
S: the small elements. (Note that 𝔄 ⊆ S.)
L: the large elements.
C = 𝔅 − (S ∪ L).

We define a new binary relation R*xy on 𝔅 as follows:

R*cd ⇔ c ∈ L ∧ Rcd
  ∨ d ∈ L ∧ Rcd
  ∨ c ∈ S ∧ d ∈ S ∧ Rcd
  ∨ c ∈ C ∧ d ∈ C ∧ Rcd
  ∨ c ∈ S ∧ d ∈ C.

𝔅* is the model we get from 𝔅 by exchanging R with R*. It is straightforward that 𝔅* is a proper extension of 𝔄 and that 𝔅 and 𝔅* are LQ𝔄-elementarily equivalent. We shall prove that 𝔅* is a conservative extension of 𝔄. The only problem comes with the bound formulas. It suffices to prove: Let Fx ∈ LQ𝔄 and a ∈ 𝔄. Then

(*)  𝔅 ⊨ ∃x [Rax ∧ Fx] ⇔ 𝔅* ⊨ ∃x [Rax ∧ Fx].

Assume

𝔅 ⊨ ∃x [Rax ∧ Fx],
𝔅 ⊨ Rab ∧ Fb for some b.

Since a ∈ 𝔄, a ∈ S. From the definition of R*, R*ab. By LQ𝔄-elementary equivalence, 𝔅* ⊨ Fb. Hence

𝔅* ⊨ Rab ∧ Fb,
𝔅* ⊨ ∃x [Rax ∧ Fx].

Now assume

𝔅* ⊨ ∃x [Rax ∧ Fx],
𝔅 ⊭ ∃x [Rax ∧ Fx].

Let b ∈ 𝔅 be such that 𝔅* ⊨ Rab. If 𝔅 ⊨ Rab, then 𝔅 ⊨ ¬Fb and 𝔅* ⊨ ¬Fb. We then get 𝔅* ⊨ Rab ⊃ ¬Fb.

Assume 𝔅 ⊭ Rab. Then b ∈ C. By assumption 𝔅 ⊭ ∃x [Rax ∧ Fx], so 𝔄 ⊭ Qx Fx. Since b ∉ S, 𝔅 ⊨ ¬Fb. By LQ𝔄-elementary equivalence, 𝔅* ⊨ ¬Fb. In this case we also get 𝔅* ⊨ Rab ⊃ ¬Fb. We conclude 𝔅* ⊭ ∃x [Rax ∧ Fx]. This proves (*).

It only remains to observe that the small elements of 𝔅* are exactly S, and the large elements are exactly C ∪ L. 𝔅* is a proper conservative extension of 𝔄 such that each element is either small or large.

We have given the model 𝔄, satisfying M1-M5. If we now apply Lemma 3 ω times, then we get models 𝔅₀, 𝔅₁, ..., 𝔅ₙ, ..., n < ω, such that 𝔅₀ is the model 𝔄, and for each i, 𝔅ᵢ₊₁ is a proper conservative extension of 𝔅ᵢ where each element is either small or large (relative to 𝔅ᵢ). Some of the properties of the chain of models 𝔅₀, 𝔅₁, ... are preserved under permutations. (We put 𝔅 = ∪_{n<ω} 𝔅ₙ.)
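The reordering R* used in this proof can be tried out on a finite toy example. The three classes S, C, L and the base order below are our own stand-ins for the (infinite) model 𝔅; the point is only that R* keeps each class ordered as before, keeps the large elements on top, and moves every element of C above every element of S.

```python
# A toy illustration of the reordering R* from the proof of Lemma 3.

S = {0, 1}          # "small" elements (including the original model A)
C = {2, 3}          # elements that are neither small nor large
L = {4, 5}          # "large" elements

R = lambda c, d: c < d            # the original linear order on B

def Rstar(c, d):
    # Clauses mirror the definition of R* in the proof:
    if c in L or d in L:
        return R(c, d)            # anything involving L keeps the old order
    if (c in S and d in S) or (c in C and d in C):
        return R(c, d)            # within S, and within C, the order is unchanged
    return c in S and d in C      # all of S is pushed below all of C

# Every small element now lies below every element of C, and L stays on top:
print(all(Rstar(s, c) for s in S for c in C))          # True
print(all(Rstar(x, l) for x in (S | C) for l in L))    # True
```

The resulting order is again linear: the blocks are arranged S, then C, then L, each with its internal order preserved.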
DEFINITION. A chain permutation π is a permutation of the elements of 𝔅 = ∪_{n<ω} 𝔅ₙ ... is a consistent triple. This is obvious. In Lemma 6 we summarize the standard results in this type of construction.
LEMMA 6.
(i) If (T ∪ {¬F}, I, τ) is a consistent triple, then so is also (T ∪ {¬F}, I ∪ {F}, τ).
(ii) If (T, I ∪ {¬F}, τ) is a consistent triple, then so is also (T ∪ {F}, I ∪ {¬F}, τ).
(iii) If (T ∪ {F ∧ G}, I, τ) is a consistent triple, then so is also (T ∪ {F ∧ G, F, G}, I, τ).
(iv) If (T, I ∪ {F ∧ G}, τ) is a consistent triple, then so is also either (T, I ∪ {F ∧ G, F}, τ) or (T, I ∪ {F ∧ G, G}, τ).
(v) If (T ∪ {F ∨ G}, I, τ) is a consistent triple, then so is also (T ∪ {F ∨ G, F}, I, τ) or (T ∪ {F ∨ G, G}, I, τ).
(vi) If (T, I ∪ {F ∨ G}, τ) is a consistent triple, then so is also (T, I ∪ {F ∨ G, F, G}, τ).
(vii) If (T ∪ {∀x Fx}, I, τ) is a consistent triple and b is an element of the triple, then (T ∪ {∀x Fx, Fb}, I, τ) is a consistent triple.
(viii) If (T, I ∪ {∃x Fx}, τ) is a consistent triple and b is an element of the triple, then (T, I ∪ {∃x Fx, Fb}, τ) is a consistent triple.
(ix) Suppose ∀x Fx ∈ LR𝔄. If (T, I ∪ {∀x Fx}, τ) is a consistent triple, then for some element a of 𝔄, Fa ∈ I.
(x) Suppose ∃x Fx ∈ LR𝔄. If (T ∪ {∃x Fx}, I, τ) is a consistent triple, then for some element a of 𝔄, Fa ∈ T.
LEMMA 7. Suppose ∀x Fx ∈ LQ𝔅 − LR𝔄, Fx ∉ LQ𝔅, and (T, I ∪ {∀x Fx}, ⟨b₁, ..., b_N⟩) is a consistent triple. Let b_{N+1} be a large element of 𝔅_{N+1} relative to 𝔅_N. Then

(T, I ∪ {∀x Fx, Fb_{N+1}}, ⟨b₁, ..., b_N, b_{N+1}⟩)

is a consistent triple.

PROOF. ∀x Fx must be of the form ∀x ∃y [Rxy ∧ Gy]. Let π, σ be strong chain permutations such that (σ𝔅, I ∪ 𝔅) extends (πT, π(I ∪ {∀x Fx})). Then πb_{N+1} is a large element of σ𝔅_{N+1} relative to σ𝔅_N and

σ𝔅 ⊭ π∀x Fx,
σ𝔅_N ⊭ π∀x ∃y [Rxy ∧ Gy],
σ𝔅_{N+1} ⊭ π∃y [Rb_{N+1}y ∧ Gy],
σ𝔅_{N+1} ⊭ πFb_{N+1},

and the triple above is a consistent triple.
LEMMA 8. Suppose that ∃x Fx ∉ (LQ𝔅 ∪ LR𝔄) and (T ∪ {∃x Fx}, I, ⟨b₁, ..., b_N⟩) is a consistent triple. Then there is b_{N+1} such that ...
f(x, y) = ∪ {f(x, eₖ) | eₖ ⊆ y}.

If we like, these can be replaced by one equation:

f(x, y) = ∪ {f(eₙ, eₖ) | eₙ ⊆ x, eₖ ⊆ y}.
Those familiar with the product topology will now see that this is the same as having f continuous on the product space Pω × Pω, but we need not enter into details. More important is the question of substitution.
PROPOSITION 1.3. Continuous functions of several variables on Pω are closed under substitution.

PROOF. By 'substitution' here we understand generalized composition. We can analyze the process into two steps: first substitution with distinct variables, then identification of variables. By way of example we might have first f(g(x, v), h(z, w, u)), and then pass to, say,

f(g(x, y), h(y, z, z)).

Since each variable is treated separately, the proof can be reduced to two special cases simply by neglecting the remaining variables. The special cases are ordinary composition, f(g(x)), and identification of two variables, h(x, x), where f, g and h are given continuous functions. For the sake of completeness we show the main steps of the easy proofs. In the first case we use characterization (ii) in Proposition 1.1:

eₘ ⊆ f(g(x)) ⇔ ∃eₙ ⊆ g(x). eₘ ⊆ f(eₙ)
⇔ ∃eₖ ⊆ x. ∃eₙ ⊆ g(eₖ). eₘ ⊆ f(eₙ)
⇔ ∃eₖ ⊆ x. eₘ ⊆ f(g(eₖ)).
DANA SCOTT
In the second case:

eₘ ⊆ h(x, x) ⇔ ∃eₙ ⊆ x. eₘ ⊆ h(eₙ, x)
⇔ ∃eₙ ⊆ x. ∃eₖ ⊆ x. eₘ ⊆ h(eₙ, eₖ).

Now if we think of eⱼ = eₙ ∪ eₖ and use the monotonicity of h, we find

eₘ ⊆ h(x, x) ⇔ ∃eⱼ ⊆ x. eₘ ⊆ h(eⱼ, eⱼ),

which shows that h(x, x) is continuous in x. It is no surprise that the theory of functions of several variables is closely related to that of one variable, especially as the product space Pω × Pω is homeomorphic to Pω. Without the use of general topology, however, we shall exhibit a more immediate connection, well known to the practitioners of λ-calculus, in the next section when we discuss iterated application. Before we turn to the algebra of continuous functions, however, there are two general results involving limits which it will be useful to record here: the theorem on fixed points and the theorem on extending continuous functions to larger domains. Both of these theorems can be given in greater generality, but the plan here is to give very elementary proofs.
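The finitary characterization used in these calculations says that every element of f(x) already appears in f(e) for some finite e ⊆ x; equivalently, f(x) is the union of f(e) over the finite subsets e of x. This is easy to check concretely. The function f and the test set below are our own illustrative choices, not examples from the paper.

```python
# A finite-approximation check of the continuity characterization:
# f(x) equals the union of f(e) over all finite subsets e of x.
from itertools import chain, combinations

def f(x):
    """A simple continuous (indeed distributive) function: pointwise successor."""
    return {n + 1 for n in x}

def finite_subsets(x):
    xs = sorted(x)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

x = {0, 2, 5}
lhs = f(x)
rhs = set().union(*(f(set(e)) for e in finite_subsets(x)))
print(lhs == rhs)   # True: f(x) is recovered from its finite approximations
```

For an infinite x the same identity holds, but of course only finitely many approximations can be inspected at a time, which is exactly the computational point made later in the paper.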
PROPOSITION 1.4. Every continuous function f: Pω → Pω has a least fixed point given by the formula

fix(f) = ∪ {fⁿ(∅) | n ∈ ω},

where ∅ is the empty set and fⁿ is the n-fold composition of f with itself.

PROOF. This argument is well known. Suppose f has a fixed point a = f(a). Then since ∅ ⊆ a and f is monotonic, we argue inductively that fⁿ(∅) ⊆ a for all n ∈ ω. Thus fix(f) ⊆ a; hence, if fix(f) is itself a fixed point, it must be the least one. Now we remark that fⁿ(∅) ⊆ fⁿ⁺¹(∅); hence, if eₖ ⊆ fix(f), we have eₖ ⊆ fⁿ(∅) for some n, because eₖ is finite. Thus

m ∈ f(fix(f)) ⇔ ∃eₖ ⊆ fix(f). m ∈ f(eₖ)
⇔ ∃n ∈ ω. ∃eₖ ⊆ fⁿ(∅). m ∈ f(eₖ)
⇔ ∃n ∈ ω. m ∈ fⁿ⁺¹(∅)
⇔ m ∈ fix(f),

and so f(fix(f)) = fix(f).
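The formula fix(f) = ∪ₙ fⁿ(∅) of Proposition 1.4 can be watched in action whenever the iteration stabilizes after finitely many steps. The operator `step` below is our own toy example (closing a set under n ↦ n + 2, capped so the process terminates), not anything from the paper.

```python
# Computing the least fixed point of Proposition 1.4 by literal iteration
# from the empty set, for a monotone operator that closes off finitely.

def fix(f, rounds=1000):
    """Iterate f from the empty set until a fixed point is reached."""
    x = frozenset()
    for _ in range(rounds):
        y = f(x)
        if y == x:          # f(x) = x: by the proposition, the LEAST fixed point
            return x
        x = y
    raise RuntimeError("no fixed point reached within the given rounds")

def step(x):
    # Our toy operator: contains 0 and is closed under adding 2, capped at 20.
    return frozenset({0}) | {n + 2 for n in x if n < 20}

print(sorted(fix(step)))   # the even numbers 0, 2, ..., 20
```

For a general continuous f on Pω the union is genuinely infinite; the finite case only illustrates the shape of the construction.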
LAMBDA CALCULUS AND RECURSION THEORY
The remaining results in this section require some knowledge of general topology, but not much; we shall indicate the elementary content of the theorems.

PROPOSITION 1.5. Let X and Y be arbitrary topological spaces, where X ⊆ Y as a subspace. Every continuous function f: X → Pω can be extended to a continuous function f̄: Y → Pω defined by means of the formula

f̄(y) = ∪ { ∩ {f(x) | x ∈ X ∩ U} | y ∈ U },

where y ∈ Y and U ranges over the open sets of Y.
PROOF. Recalling the base for the topology of Pω, we note that

Uₙ = {z ∈ Pω | eₙ ⊆ z} = ∩ { {z | m ∈ z} | m ∈ eₙ },

a finite intersection. Thus to prove that f̄ is continuous on Y, we need only show that the inverse images of the sets {z | m ∈ z} are open in Y. To this end, calculate

m ∈ f̄(y) ⇔ ∃U [y ∈ U ∧ m ∈ ∩ {f(x) | x ∈ X ∩ U}],

so that

{y ∈ Y | m ∈ f̄(y)} = ∪ {U | m ∈ ∩ {f(x) | x ∈ X ∩ U}}.

This set is obviously open; thus f̄ is continuous. To prove that f̄ extends f, note first that when x ∈ X, it is obvious from the definition that f̄(x) ⊆ f(x). For the opposite inclusion, suppose that m ∈ f(x). Since f is continuous, there is an open subset V of X such that x' ∈ V always implies m ∈ f(x'). But X is a subspace of Y, so V = X ∩ U for some open subset U of Y. Therefore,

m ∈ ∩ {f(x') | x' ∈ X ∩ U},

and we have m ∈ f̄(x), as was to be shown.

The purpose of stressing the Extension Theorem (Proposition 1.5) is to show that there are many Pω-valued continuous functions. The reason is that there are so many subspaces of Pω, as we shall see in the Embedding Theorem (Proposition 1.6). Continuous functions between subspaces can always be regarded as restrictions of continuous functions f: Pω → Pω, as Proposition 1.5 shows. This remark justifies our interest in and concentration on the continuous functions defined over Pω. For readers not as familiar with general topology we may remark that Proposition 1.5 can be turned into a definition. Suppose that X ⊆ Pω is a subset of Pω. We can of course regard it as a subspace with the relative topology, but what interests us are the continuous functions defined on X. From Proposition 1.5, we can see that a necessary and sufficient condition for f: X → Pω to be continuous is that for x ∈ X and m ∈ ω we have

m ∈ f(x) iff ∃eₙ ⊆ x ∀x' ∈ X [eₙ ⊆ x' → m ∈ f(x')].

We note that for this definition, because eₙ ∉ X in general, we have to replace the clause m ∈ f(eₙ) in (i) of Proposition 1.1 by something more complicated. In this way we have an elementary characterization which does not employ general notions.
PROPOSITION 1.6. Every T₀-space X with a countable basis for its topology can be embedded as a subspace of Pω.
PROOF. The proof is actually quite trivial. Let the basis for the topology of X be the countable family of open sets

{Vₙ | n ∈ ω}.

That X is a T₀-space means that each point of X is uniquely determined by the Vₙ that contain it. Thus if we define a function f: X → Pω by the equation

f(x) = {n ∈ ω | x ∈ Vₙ},

then the function will be one-one from X to a subset of Pω. Note next that {x ∈ X | n ∈ f(x)} = Vₙ; hence f: X → Pω is continuous. Finally we must show that f maps open subsets of X to relatively open subsets of f(X) ⊆ Pω. It is enough to show that the image of each Vₙ is open. But in view of the last equation we can easily check that

f(Vₙ) = f(X) ∩ {y ∈ Pω | n ∈ y},

which shows that f(Vₙ) is open in f(X).
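The embedding map f(x) = {n | x ∈ Vₙ} from this proof can be tried out on a small example. The three-point T₀ space and its basis below are our own toy data, chosen only to show that distinct points receive distinct sets of basis indices.

```python
# A tiny instance of the Embedding Theorem's map f(x) = {n | x in V_n}.

X = {"a", "b", "c"}
# A basis for a T0 topology on X (a Sierpinski-like chain of opens):
V = [{"a", "b", "c"}, {"b", "c"}, {"c"}]

def f(x):
    return frozenset(n for n, Vn in enumerate(V) if x in Vn)

images = {x: f(x) for x in sorted(X)}
print(images)

# T0 means points are separated by the basis, so f is one-one:
assert len(set(images.values())) == len(X)
```

Here f("a") = {0}, f("b") = {0, 1}, f("c") = {0, 1, 2}: the chain structure of the opens is visible in the images.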
The simplicity of the proof of Proposition 1.6 is nothing to be held against it: now that we know that Pω is rich in subspaces, we can study them in more detail. All 'classical' spaces are included, and such non-classical ones as the continuous lattices [5] have been shown to be of some interest also. In this paper, however, we shall not continue this line of study since our main purpose is to establish the link with recursion theory. Nevertheless, to grasp the significance of the model Pω, we needed some background on continuous functions. In summary we can list our results, indicating each with a short descriptive name:

Proposition 1.1, the Characterization Theorem;
Proposition 1.2, the Graph Theorem;
Proposition 1.3, the Substitution Theorem;
Proposition 1.4, the Fixed Point Theorem;
Proposition 1.5, the Extension Theorem;
Proposition 1.6, the Embedding Theorem.
By giving names to these propositions, we do not mean to claim any originality or depth for the results. What makes them pleasant is rather that they are easy to prove and together show the coherence and naturalness of the theory. But still all the discussion has been quite abstract in that no applications for the theory have been illustrated. We have restricted attention to one space Pω, for the most part, but we have never said what the elements of Pω could be used for. This is like introducing real numbers as the completion of the rationals without ever mentioning geometry or measurement. It is mathematics, but is it life? Clearly not, and we must hasten to establish the inevitability and indispensability of the model by showing how to interpret the elements of Pω and how to do 'algebra' on them.

2. Computability and definability

Let x ∈ Pω. What can x represent? If x = ∅, there is not much x can represent, without some artificial convention. But if x = {n}, a singleton, there is something (some quantity) for x to represent: the number n itself. This is so natural that we shall assume the identification as part of our mathematical background; that is, we suppose it to be a fact
that for all n ∈ ω,

n = {n}.

As a consequence, we can then write

ω ⊆ Pω,

because every integer is by assumption a set of integers. We note that this identification is at variance with the construction of the integers in certain systems of set theory, where we may find that

n = {0, 1, ..., n − 1}.

The point here is that we do not care how the integers are constructed; we feel we know enough about them to take them for granted, or better: to take them as given axiomatically. Thus we are free to identify them with whatever sets we wish, since we are interested now in finding more ways of using them. So much for the singletons; what about other sets? The answer will not be unique. In the first place every integer n can be taken as a pair n = (n₀, n₁) for suitable n₀, n₁. Therefore every set of integers can be constructed as a relation between integers (a set of ordered pairs). We shall make full use of this familiar idea shortly. Note too the correspondence between finite sets and integers (eₙ corresponds to n); thus a set of integers could just as well represent a set of (finite) sets, each of which represents a (finite) set of (finite) sets. As we shall see, the relation between finite sets and single integers (which can be represented by sets) will play a central role. The moral of this discussion is that even though a set is a set, it has many aspects or layers. We can X-ray it to find internal structure, and pictures taken from different directions will reveal different structures. Thus the same set can have many different meanings, and the meaning of the moment is only going to be clear from the context of use. Fortunately the result of this situation is richness and variety and not incoherence. Before we get on with the X-raying, we should ask what can be seen on the surface. The answer is obvious: a set is made up of its elements. We can write:

x = {n | n ∈ x} = ∪ {n | n ∈ x} = ∪_{n∈x} n.
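The correspondence between finite sets and integers alluded to above (eₙ corresponds to n) and the pairing n = (n₀, n₁) are usually realized as follows. The binary-expansion coding and the Cantor-style pairing below are standard choices, sketched on our own initiative since the excerpt does not spell them out.

```python
# One standard realization of e_n <-> n and of integer pairing.

def e(n):
    """The finite set e_n: positions of the 1-bits in the binary expansion of n."""
    s, k = set(), 0
    while n:
        if n & 1:
            s.add(k)
        n >>= 1
        k += 1
    return frozenset(s)

def code(s):
    """Inverse of e: the integer coding a finite set of integers."""
    return sum(1 << k for k in s)

def pair(a, b):
    """Cantor-style pairing, so an integer n can play the role of (n0, n1)."""
    return (a + b) * (a + b + 1) // 2 + b

print(e(5))          # e_5 = {0, 2}, since 5 = 2^0 + 2^2
print(code({0, 2}))  # 5
```

With this coding, an element of Pω can indeed be read in several ways at once: as a set of integers, as a set of finite sets (via e), or as a relation (via the pairing), which is the "many layers" point of the surrounding discussion.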
In case x is empty or a singleton, that is all there is to say (on the surface). Otherwise we may find

x = 0 ∪ 1 ∪ 2 ∪ 666

in the finite case, or in an infinite case

x = 1 ∪ 2 ∪ 4 ∪ 8 ∪ 16 ∪ 32 ∪ ... ∪ 2ⁿ ∪ ...

Not just one but a multitude of integers. A set is in general a multiplicity, a multiple integer, if you like. (Caution: each integer that occurs does so only once, but many integers are allowed.) The idea can be put in another way. We are very used to thinking of single-valued number-theoretic functions

p: ω → ω.

Since ω ⊆ Pω according to our agreement above, we may also regard

p: ω → Pω,

but p is special because p(n) ∈ ω for all n ∈ ω. What can be said for an arbitrary mapping

q: ω → Pω,

where q(n) need not be a singleton? Such a function is very conveniently regarded as a multivalued function. Thus instead of writing

q(n) = m,

we write

m ∈ q(n),

and there may very well be many such m (even none). In this way partial functions are also included, since we can read the equation q(n) = ∅ as saying that q is undefined at n. We are beginning to see a connection between ordinary functions and functions on Pω, but there is still an imbalance. Namely, we have multivalued functions (of an integer variable), but we have not yet connected these with multi-argumented functions (that is, functions f: Pω → Pω). That is not quite true, actually, for remember that ω ⊆ Pω. Indeed, as
168
DANA SCOTT
a subspace of Pω, the set ω gets the proper discrete topology (all points are open). By the Extension Theorem, any continuous function can be continuously extended; and, since ω is discrete, arbitrary functions are continuous. The extension of q : ω → Pω to q̄ : Pω → Pω is not all that interesting, however; for by the formula of Proposition 1.5 we find that if x ∈ Pω, then

q̄(x) = ⋂{q(n) | n ∈ ω}    if x = ∅;
     = q(n)                if x = n ∈ ω;
     = ω                   otherwise.
The function q̄ is continuous but rather extreme in its behaviour. A better connection defines q̂ : Pω → Pω by the equation:

q̂(x) = ⋃{q(n) | n ∈ x},

which is easily seen to be a continuous extension of q. (In fact, in a suitable sense q̂ is the minimal extension while q̄ is the maximal one.) It is also easy to see that a function f : Pω → Pω is of the form q̂ for some q iff f satisfies the equations

f(⋃{xₙ | n ∈ ω}) = ⋃{f(xₙ) | n ∈ ω},
f(∅) = ∅,

for all systems where xₙ ∈ Pω for all n ∈ ω. Such functions we call (infinitely) distributive; they are a very special case of continuous functions. (A continuous function which distributes over finite unions is infinitely distributive, by the way.) In view of this discussion, then, we can feel free to concentrate on the arbitrary continuous f : Pω → Pω, since these encompass ordinary (multivalued) functions. Further we know what to look for to single out the ordinary functions. Clearly it is better to have arguments and values of the same type for the sake of symmetry, but is the added generality really significant? To answer the question, we must imagine how functions of this kind are to be computed. Consider computing f(n) ∈ Pω. The function is given its argument n, which causes it to start a process of generating all the m ∈ f(n). An infinite set cannot be calculated at once: each element must be produced in turn. (And that does not mean in numerical order!) Of course, we are considering functions abstractly in extension, so we do
not record the steps of the process but just collect the results into a set f(n). So much for values; now what about arguments? When we calculate f(x) where x ∈ Pω, we cannot say that x is given all in one go: it must be generated. Again, the order of generation must not count, only the elements. As each n ∈ x is brought forth, a part of the process for f can be set in motion. And now we can see the difference between a distributive f and the more general continuous f. A distributive f treats each n independently, because it is characterized as satisfying the equation

f(x) = ⋃{f(n) | n ∈ x}.

A continuous f, on the other hand, allows for cooperation between finitely many of the elements of its argument. Thus it can wait for a finite subset eₙ ⊆ x before it even gives out m ∈ f(eₙ) ⊆ f(x). The calculation, if it is to be effective, can only depend on finite portions of x, but the dependence must obey the monotonic law. Why? Because we never know that a particular integer is to be excluded from x, since we generate only what is in x. In case the process puts m ∈ f(eₙ), it must also put m ∈ f(eₖ) in case eₙ ⊆ eₖ; because eₖ ⊆ x might also be possible, it cannot be excluded in finite time, so to speak. This is why computable f : Pω → Pω are continuous, but so far we have only a few bare hints as to why continuous functions are interesting. To get the full impact of the idea we need examples, and the best examples have to go below the surface. Recall the Graph Theorem (Proposition 1.2). Every continuous function can be recaptured from a set; in fact, if u = graph(f), then f(x) = fun(u)(x). Let us now take a closer look at the binary operation on sets that we are going to call application:

u(x) = fun(u)(x).

We knew this was continuous in x, but it is also easy to prove it is continuous in u (indeed it is distributive in u). With suitable choices of u we obtain any desired continuous function, but with combinations involving application we can define new functions. A well-worn example is composition: u(v(x)), where u, v are fixed and x is the variable. A new and relatively bizarre example is self-application: x(x), where x is the variable. Note here how x is being used in two different ways: first as a
graph (where we go below the surface) and then as an argument (where x is taken at face value). This shows how the same object can be given different 'meanings'. But this is not very odd: an integer has a different meaning according as the occurrence is in the numerator or denominator of a fraction, though we must admit that x(x) is rather more difficult to understand than n/n. Consider now any such combination (…x…) which is continuous in the variable x. Mathematically this defines a function f : Pω → Pω, where

f(x) = (…x…)

for all x ∈ Pω. By Proposition 1.2 we can reduce this function to a set:

u = graph(f),

where in terms of the newly defined operation of application we find:

u(x) = (…x…)

for all x ∈ Pω. In this way we have made the mathematical notion (or at least: notation) of function redundant in the continuous case, since they can all be represented by sets. Going a step further, we introduce a non-alphabetical name for this set as follows:

λx. (…x…) = graph(f).

That is, the denotation of the λ-expression is the graph of the function defined by abstraction on the variable indicated. In outline, this is the graph model for the λ-calculus, and it is in terms of this notation that we will begin to see how interesting the continuous functions can be. To make this more precise we introduce a formal language of terms (called LAMBDA) which consists of the expressions of the pure λ-calculus augmented by arithmetic primitives appropriate to our plan of constructing the model from sets of integers. Explanations follow, but it will be seen at once how the arithmetic primitives are distributive extensions of well-known point functions. Also, to make the definition clearer, we write out in full the denotations of application and abstraction. One could imagine more primitives for the language, but we shall establish later a result that explains the scope of those chosen here.
DEFINITION. The syntax and semantics of the term language LAMBDA is indicated by these six equations:

0 = {0},
x + 1 = {n + 1 | n ∈ x},
x − 1 = {n | n + 1 ∈ x},
z ⊃ x, y = {n ∈ x | 0 ∈ z} ∪ {m ∈ y | ∃k. k + 1 ∈ z},
u(x) = {m | ∃eₙ ⊆ x. (n, m) ∈ u},
λx. τ = {(n, m) | m ∈ τ[eₙ/x]}.

The definition is somewhat informal, but it was thought that too much formality would make the plan too difficult to understand. On the left we find the LAMBDA notation: there is one constant, two unary operations, one ternary operation, one binary operation, and finally one variable-binding operator. The notation is perfectly algebraic, and these operations can be combined in any order to give compound terms in the familiar way. (In λx. τ we had to be a bit more formal to allow τ to be an arbitrary compound term; in the other equations variables were sufficient to convey the ideas. Of course, in place of the 'x' we could use any other variable.) On the right we see the denotations of the terms. Strictly speaking we have a confusion here between the form of the variables and the denotation or value of the variables. Form on the left, value on the right; or if you like, object language on the left, metalanguage on the right. (There are standard ways to resolve the confusion, but since LAMBDA is such a simple language the extra complication would not be fruitful.) Let us read the equations. The symbol 0 denotes the number 0 (remember: {0} = 0 by convention in Pω). For sets x, the operation x + 1 adds one to all elements of the set. (There is then no ambiguity about n + 1, whether we think of n ∈ ω or n ∈ Pω.) In the case of x − 1, we subtract one from all positive members of x. (Note: 0 − 1 = ∅.) The fourth equation defines the meaning of the conditional expression. It
will be clearer written out by cases:

z ⊃ x, y = ∅        if z = ∅;
         = x        if z = 0;
         = y        if 0 ∉ z but k + 1 ∈ z;
         = x ∪ y    otherwise.
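The primitives just defined act on finite sets in a completely effective way, which can be sketched in Python. The two codings used below (the binary coding of eₙ, and the diagonal pairing (n, m) = (n + m)(n + m + 1)/2 + m) are illustrative assumptions rather than something fixed by the text at this point, though they do agree with the particular values computed later, e.g. 0 = (0, 0) and 1 = (1, 0).

```python
# Sketch of the LAMBDA primitives acting on finite subsets of ω.
# Assumed codings (illustrative, consistent with values computed later):
#   e(n)       -- the n-th finite set, read off the binary digits of n;
#   pair(n, m) -- the diagonal pairing (n + m)(n + m + 1)//2 + m.

def e(n):
    """The n-th finite set e_n: the bit positions set in n."""
    s, i = set(), 0
    while n:
        if n & 1:
            s.add(i)
        n, i = n >> 1, i + 1
    return s

def pair(n, m):
    return (n + m) * (n + m + 1) // 2 + m

def unpair(k):
    """Inverse of pair."""
    s = 0
    while (s + 1) * (s + 2) // 2 <= k:
        s += 1
    m = k - s * (s + 1) // 2
    return s - m, m

def suc(x):
    """x + 1: add one to every element."""
    return {n + 1 for n in x}

def pred(x):
    """x - 1: subtract one from every positive element."""
    return {n - 1 for n in x if n > 0}

def cond(z, x, y):
    """z ⊃ x, y: contribute x when 0 ∈ z, y when some k + 1 ∈ z."""
    return ({n for n in x if 0 in z}
            | {m for m in y if any(k >= 1 for k in z)})

def app(u, x):
    """Application u(x) = {m | ∃ e_n ⊆ x. (n, m) ∈ u}, for finite u, x."""
    out = set()
    for k in u:
        n, m = unpair(k)
        if e(n) <= set(x):
            out.add(m)
    return out
```

For instance, the one-entry graph u = {pair(2, 5)} = {33} has e(2) = {1}, so app(u, {1, 3}) yields {5}: the entry fires as soon as the finite subset {1} of the argument has been generated.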
Finally, application and abstraction are defined as before, except we have been more formal about the τ. On the right, τ[eₙ/x] indicates that x should be replaced by eₙ (or better: valued as eₙ) throughout τ. Since x is a bound variable, it does not really occur on the right. By virtue of the definition, every LAMBDA term has a denotation or value once the values of the free variables are given: the value is a function of the values of the variables. Functions (of several variables) defined in this way by LAMBDA terms are obviously called LAMBDA-definable functions. Here is the first result.
THEOREM 2.1. All LAMBDA-definable functions are continuous.

PROOF. A function defined by a constant or a single variable is obviously continuous. It is left to the reader to check in detail that x + 1, x − 1, z ⊃ x, y, and u(x) are continuous. For λ-abstraction we do a special case. Suppose τ(x, y) is continuous in x and y (this is an informal notation to display variables). What we must show is that λx. τ(x, y) remains continuous in y. We calculate:

λx. τ(x, y) = {(n, m) | m ∈ τ(eₙ, y)}
            = {(n, m) | ∃eₖ ⊆ y. m ∈ τ(eₙ, eₖ)}
            = ⋃{{(n, m) | m ∈ τ(eₙ, eₖ)} | eₖ ⊆ y}
            = ⋃{λx. τ(x, eₖ) | eₖ ⊆ y}.
(Note that we did not use the continuity in x to prove it about y; the assumption on x is used in Theorem 2.2.) Finally, we appeal to Proposition 1.3 to take care of compound terms formed by substitution. The fundamental Graph Theorem (Proposition 1.2) can now be stated in LAMBDA notation, thus justifying the rules of conversion:
THEOREM 2.2. Axioms (α), (β), (ξ) are all satisfied in the model. Stated more formally we have:

(α) λx. τ = λy. τ[y/x], provided y is not free in τ;
(β) (λx. τ)(v) = τ[v/x];
(ξ) λx. τ = λx. σ ↔ ∀x. τ = σ.
Actually, Proposition 1.2 (the Graph Theorem) is needed only for (β) and (ξ), because (α) is obvious from the definition. A well-known consequence of the rules of conversion can be obtained in a sharpened form with the aid of Theorem 2.1.

THEOREM 2.3. If f is a continuous function of k variables, then there is a set u ∈ Pω such that

u(x₀)(x₁)⋯(x_{k−1}) = f(x₀, x₁, …, x_{k−1})

holds for all x₀, x₁, …, x_{k−1} ∈ Pω.

PROOF. In the proof of Theorem 2.1 we could have included f as a new k-ary primitive. This would allow us to define:

u = λx₀. λx₁. ⋯ λx_{k−1}. f(x₀, x₁, …, x_{k−1}).
The theorem then results by applying (β) k times. In this way we show that functions of several variables can be reduced to functions of one variable with the help of iterated application. Another well-known consequence introduces the combinators.

THEOREM 2.4. Every LAMBDA-definable function can be defined by iterated application starting from variables and these six constants:

0 = 0,
suc = λx. x + 1,
pred = λx. x − 1,
cond = λx. λy. λz. z ⊃ x, y,
K = λx. λy. x,
S = λu. λv. λx. u(x)(v(x)).
The proof can be taken over from [1], [2], or [4]. An improvement of the result special to this model is given below in Section 3. More interesting at the moment is the behaviour of other combinators.

DEFINITION. The paradoxical combinator is defined by the equation:

Y = λu. (λx. u(x(x))) (λx. u(x(x))).
THEOREM 2.5. If u is the graph of a continuous function f, then Y(u) is the least fixed point of f.

PROOF. What we must prove is that

(λx. u(x(x))) (λx. u(x(x))) = fix(f).

By way of shorthand, let

a = fix(f),
d = λx. u(x(x)).

Calculate first by (β):

d(d) = u(d(d)) = f(d(d)).

Thus d(d) = Y(u) is a fixed point of f, and so a ⊆ d(d). Suppose for the sake of contradiction that a ≠ d(d). By continuity of application we note

d(d) = ⋃{eₗ(eₗ) | eₗ ⊆ d}.

Let l then be the least integer such that eₗ ⊆ d but eₗ(eₗ) ⊈ a. There must then be an integer k ∈ eₗ(eₗ) where k ∉ a. By definition of application there is an eₙ ⊆ eₗ where (n, k) ∈ eₗ. But then eₙ ⊆ d and (n, k) ∈ d. By definition of abstraction, k ∈ u(eₙ(eₙ)), and so

u(eₙ(eₙ)) ⊈ a.

Moreover,

eₙ(eₙ) ⊈ a,

since otherwise, by monotonicity, u(eₙ(eₙ)) ⊆ u(a) = f(a) = a. But n < (n, k) and (n, k) < l. Thus n is a smaller integer which satisfies the conditions that gave us the choice of l as the least such.

This is the last time we shall distinguish a continuous function from its graph. It is now clear that they are interchangeable. Every set u can represent a function, because u(x) is continuous in x. Those that are graphs satisfy

u = λx. u(x),

a condition which is written out in more primitive terms in (iii) of Proposition 1.2. From now on f is just another variable which we may single out stylistically when we are thinking of functions. As a corollary of Theorem 2.5 we may remark that Y(f) = fix(f) is continuous in f, where we now regard f as just an element of Pω. This could have been proved directly from the definition of fix in Proposition 1.4 if we had given a topology to the function space. Instead we derive a topology from that of Pω by thinking of the function space as a subset of Pω:

{u | u = λx. u(x)} ⊆ Pω.

A complete analysis of the function-space topology can be given, but we shall not repeat it here. We do note that it is the topology of pointwise convergence, that it is a T₀-topology, and that the space is naturally partially ordered in a pointwise fashion. This can be expressed as a stronger form of extensionality:

(ξ*) λx. τ ⊆ λx. σ ↔ ∀x. τ ⊆ σ.
Theorem 2.5 is far from being a curiosity, as we shall make it the basis of our representation of recursion within LAMBDA. Of course, if it by chance had not turned out valid, we would have taken fix as a primitive, since it is continuous and computable. Then Y would have been just a curiosity. Before we turn to a precise definition of computability, however, we need to show how pairs, triples, and sequences are represented in the language. A group of specific definitions and lemmata is required to reach the goal.
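The stage-by-stage picture of what Y computes can be sketched concretely: for continuous f, fix(f) is the union of the iterates fᵏ(∅). The operator f below is a hypothetical example chosen for illustration, not one from the text, and the sketch cuts the iteration off after finitely many stages rather than computing the graph-model term Y itself.

```python
# Sketch: fix(f) = union over k of f^k(∅), the least fixed point of a
# continuous operator f on Pω, approximated by finitely many stages.

def fix_approx(f, stages):
    """Union of the first few iterates of f starting from ∅ (= ⊥)."""
    x = set()
    for _ in range(stages):
        x = x | f(x)
    return x

# Illustrative operator: f(x) = {0} ∪ {n + 2 | n ∈ x}.  Its least fixed
# point is the set of even numbers; the stages approximate it from below.
evens_approx = fix_approx(lambda x: {0} | {n + 2 for n in x}, 5)
```

Each stage adds only what finitely much of the previous stage forces into the set, which is exactly the continuity discipline described earlier.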
DEFINITION. ⊥ = (λx. x(x)) (λx. x(x)).

LEMMA 1. ⊥ = ∅.

PROOF. By definition, ⊥ is the least fixed point of the identity function, in view of Theorem 2.5 and the definition of Y. The least fixed point is obviously the empty set, for which ⊥ is a more dramatic name.
LEMMA 2. 0, 1 ∈ λx. 0.

PROOF. By definition of abstraction,

λx. 0 = {(n, m) | m ∈ 0[eₙ/x]}
      = {(n, m) | m ∈ 0}
      = {(n, 0) | n ∈ ω}.

By virtue of our chosen coding of integer pairs we find that 0 = (0, 0) and 1 = (1, 0).
LEMMA 3. x ∪ y = (λx. 0) ⊃ x, y.

This last is an immediate corollary of Lemma 2 and shows that (binary) union of sets is LAMBDA-definable. It hardly needs to be mentioned that the separate integers 1 = 0 + 1, 2 = 1 + 1, 3 = 2 + 1, etc., are also definable; hence all finite subsets are definable. Of course, λx. 0 is an infinite set. What about other infinite sets?
DEFINITION. ⊤ = Y(λg. λx. x ∪ g(x + 1))(0).

LEMMA 4. ⊤ = ω.

PROOF. Let g be the indicated fixed point in the definition, so that ⊤ = g(0). To prove the result, we must prove something more general, namely that for all m ∈ ω we have

∀x. x + m ⊆ g(x),

where we can define

x + m = {n + m | n ∈ x}.

We argue by induction. For m = 0 we have

g(x) = x ∪ g(x + 1) ⊇ x.

Assume the result for m. Specialize x to x + 1, so that

(x + 1) + m ⊆ g(x + 1) ⊆ x ∪ g(x + 1) = g(x).

Thus x + (m + 1) ⊆ g(x) follows. We can then conclude that m ∈ g(0) for all m ∈ ω.
The trick just used of specializing a more general recursion can be used in many places. We prefer the more symmetrical (better: dual) notation ⊥, ⊤ to the conventional set-theoretical notation ∅, ω, but this is not an important point. The elements ⊥, ⊤ are (graphs of) functions, by the way; but as the next lemma shows, they play a rather special role.
LEMMA 5. The only fixed points of K are ⊥ and ⊤.

PROOF. We must show a = λx. a iff a = ⊥ or a = ⊤. In one direction we calculate

λx. ⊥ = {(n, m) | m ∈ ⊥} = ⊥,
λx. ⊤ = {(n, m) | m ∈ ⊤} = ⊤.

In the other direction, suppose a = λx. a but a ≠ ⊥. We note that

a = {(n, m) | m ∈ a}.

This means that first

m ∈ a always implies (n, m) ∈ a,

and secondly,

k ∈ a always implies k = (n, m) for some m ∈ a.

Let k be the least element of a. By the second implication, k = (n, m) and m ≤ k. Since m ∈ a, it follows that m = k. But this implies k = 0 by the nature of our integer pairing function; thus, 0 ∈ a. By the first implication, (n, 0) ∈ a for all n. In particular, 1 = (1, 0) ∈ a; therefore, (n, 1) ∈ a for all n. In particular, 2 = (0, 1) ∈ a; therefore, (n, 2) ∈ a for all n. Backtracking, 3 = (2, 0) ∈ a; therefore, (n, 3) ∈ a for all n. Continuing in this way, all pairs belong to a, and so a = ⊤. We now turn to the definitions of tuples and sequences.
DEFINITION.

() = ⊥,
(x₀, x₁, …, xₖ) = λz. z ⊃ x₀, (x₁, …, xₖ)(z − 1).
LEMMA 6. (x₀, x₁, …, xₖ)(n) = xₙ for n ≤ k, and = ⊥ otherwise.

PROOF. The proof is by a double induction: first on k, and then on n. The case k = 0 is clear in view of our definition of the empty or 0-tuple. Assume the result for k (and all n). We now prove it for k + 1 by induction on n. In case n = 0, it is clear from the definition. Assume it for n and do n + 1. By the definition this will fall back on what we assumed about k.

The definition of ordered tuples was arranged for uniformity. Note such convenient facts as (x₀, …, x_{k−1}) = (x₀, …, x_{k−1}, ⊥), so every k-tuple is at the same time a (k + 1)-tuple.
The point of Lemma 6 is that a k-tuple is a function such that application to n gives the nth coordinate. This is so convenient that we give the subscript notation an official definition.

DEFINITION. uₓ = u(x).

But we shall usually only employ this when x ∈ ω. If the reader does not enjoy the confusion between 2-tuples and 3-tuples, he can use (2, x, y) and (3, x, y, z), respectively, and similarly in other dimensions. Caution: Not every element of Pω is an ordered couple in the sense of the definition. If the full homeomorphism between Pω × Pω and Pω is desired, we must use some function such as

[x, y] = {2n | n ∈ x} ∪ {2m + 1 | m ∈ y}.
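The merge pairing [x, y] can be sketched directly: every set splits uniquely into an even part and an odd part, which is what makes it a bijection (indeed a homeomorphism) between Pω × Pω and Pω. The function names merge and split below are our own, not the text's.

```python
# Sketch of the merge pairing [x, y] = {2n | n ∈ x} ∪ {2m + 1 | m ∈ y}.

def merge(x, y):
    return {2 * n for n in x} | {2 * m + 1 for m in y}

def split(z):
    """Inverse of merge: recover the even and odd parts."""
    return ({n // 2 for n in z if n % 2 == 0},
            {n // 2 for n in z if n % 2 == 1})
```

For example, merge({0, 3}, {1}) = {0, 6, 3}, and split recovers the pair ({0, 3}, {1}).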
This is not a LAMBDA definition, but one will be seen to be possible after we discuss primitive recursion. Another caution: Do not confuse (x, y) and (n, m), since the latter is but an integer function. If we wanted to extend this to Pω, the natural
definition would read:

(x, y) = {(n, m) | n ∈ x, m ∈ y}.

This is more like a Cartesian product and is not satisfactory as a pairing function on all of Pω, because (⊥, y) = (x, ⊥) = ⊥ for all x, y ∈ Pω. Turning now to infinite sequences, we define the sequential combinator.
DEFINITION. $ = Y(λs. λu. λz. z ⊃ u₀, s(λt. u(t + 1))(z − 1)).
This generalizes the idea of our definition of tuples, though its formal expression may hide its significance.
LEMMA 7. For all u ∈ Pω we have:

$(u) = $($(u)) = λz. ⋃{uₙ | n ∈ z},

and so the range of $ consists exactly of the distributive functions.

PROOF (outline). By definition, $ is the least function satisfying the equation:

$(u)(z) = z ⊃ u₀, $(λt. u(t + 1))(z − 1).

By induction on n one shows first that

∀u ∀z [n ∈ z → uₙ ⊆ $(u)(z)].

This establishes that

⋃{uₙ | n ∈ z} ⊆ $(u)(z),

and so

λz. ⋃{uₙ | n ∈ z} ⊆ $(u).

For the other inclusion, define

s = λu. λz. ⋃{uₙ | n ∈ z},

which is reasonable since the expression is continuous in u and z. Then check that s satisfies the fixed-point equation for $. But then $ ⊆ s, as desired.
The proof that $(u) = $($(u)) rests on the fact that $(u)ₙ = uₙ for all n ∈ ω. Finally, note that the range of $ is the same as the set of fixed points of $. Now the equation

u = λz. ⋃{uₙ | n ∈ z},

as we have noted before, exactly characterizes the distributive functions.

DEFINITION. λt ∈ ω. τ = $(λz. τ[z/t]).

The point of this definition of λ-abstraction relativized to integers is that it is LAMBDA-definable, and it produces the expected distributive function; that is the way we represent functions of an ordinary integer variable in Pω. This has been a long sequence of lemmas, but the result allows us to transcribe all of recursion theory on integers over into our LAMBDA notation in a wholesale way. In particular, we can now easily do all primitive recursions.

LEMMA 8. If p : ω → ω is a primitive recursive function, then the corresponding distributive function p̂ : Pω → Pω is LAMBDA-definable.
PROOF (outline). Again, to establish this specific result (about functions of one variable) we have to prove something more general (about functions of several variables). Now the primitive recursive functions are generated from the zero, successor, and identity functions by substitution and the scheme of primitive recursion. Only the last will give us any trouble. Let us take an example. Suppose

p(0) = k,
p(n + 1) = q(n, p(n)),

where q is a function of two variables that we already know about. That is,

q̂ = λx. λy. {q(n, m) | n ∈ x, m ∈ y}

is LAMBDA-definable. The trick with p is to note that

p̂ = Y(λf. λn ∈ ω. n ⊃ k, q̂(n − 1)(f(n − 1))),
or, in less formal terms, p̂ is the least solution of the equation:

p̂ = λn ∈ ω. n ⊃ k, q̂(n − 1)(p̂(n − 1)).
This will work with more variables also, and it shows why p̂ is LAMBDA-definable; indeed the LAMBDA definition can be written down directly from its recursive definition. Having encompassed this part of recursion theory, we must ask a more expansive question: when is an arbitrary continuous function to be called computable?
DEFINITION. A continuous function f of k variables is computable iff the relationship

m ∈ f(e_{n₀}, e_{n₁}, …, e_{n_{k−1}})

is recursively enumerable in the integer variables m, n₀, n₁, …, n_{k−1}.

The definition is chosen on the one hand to be a direct generalization of that of partial recursive function. For as we know, a partial function p : ω → ω ∪ {⊥} is partial recursive iff the relationship

m = p(n)

is r.e. (recursively enumerable) in n and m. Passing now to the distributive function p̂ : Pω → Pω, we note that:

m ∈ p̂(eₖ) iff ∃n ∈ eₖ. m = p(n),

and so this relationship is r.e. iff p is partial recursive. On the other hand, the definition is naturally motivated. How to compute y = f(x)? If x and y are infinite, we can do it only little by little. Thus start generating the eₙ ⊆ x. As you find these (better: as they are given to you), hand them over to f. The function starts a process of generating the elements m ∈ f(eₙ). We are saying that this process is effective iff the generation is r.e. in the usual finitary sense. In this way computations with infinite objects are reduced back to computations with finite objects (numbers). The definition of the computability of f is thus not an independent one, because it is given in terms of the already understood notion r.e. However, in Theorem 2.6 we shall see that a direct, non-reductive definition is at hand. This is interesting when we recall the point of making our generalization (which is not just to have a model for
the λ-calculus). In ordinary recursion theory, when we write m = p(n) there is a distinction in type between n, m and p. The first are finitary integers, whereas the second is a function, an infinite object. In the present theory, when we write y = f(x), there is no distinction in type between x, y and f: they are all sets of integers. Of course, there can be a distinction in kind: some sets are finite, others are r.e., others are neither. It is very similar to the situation with real numbers: some are rational, others algebraic, others transcendental. But it is not for the sake of this analogy that we pursue the generalization. Rather it is the realization that the ideas of recursion theory apply just as well to computations on functions as on integers. The distinction between finite and infinite is not as important as getting at the idea of a computable process. These computation procedures can just as well take other procedures as arguments as they can take integers. Our aim then is to show that the unified, 'type-free' theory is not only possible but better.
THEOREM 2.6. Let f be a continuous function of k variables. The following three conditions are equivalent:
(i) f is computable;
(ii) λx₀. λx₁. ⋯ λx_{k−1}. f(x₀, x₁, …, x_{k−1}) as a set is r.e.;
(iii) λx₀. λx₁. ⋯ λx_{k−1}. f(x₀, x₁, …, x_{k−1}) is LAMBDA-definable.

PROOF. For the equivalence of (i) and (ii), we take a special case of functions of two variables. By definition,

λx. λy. f(x, y) = λx. {(k, m) | m ∈ f(x, eₖ)}
               = {(n, m′) | m′ ∈ {(k, m) | m ∈ f(eₙ, eₖ)}}
               = {(n, (k, m)) | m ∈ f(eₙ, eₖ)}.

The equivalence is now obvious.
The equivalence is now obvious. For the equivalence of (ii) and (iii) we have to prove two things: LAMBDAdefinable functions are computable (actually that amounts to: (iii) implies (i)), and every r.e. set is LAMBDAdefinable. For the first fact (say in the case of two variables), consider the logical form of the predicate m E Len/x, ek/rl, where z is a LAMBDAterm. By the definition of the semantics of our language, this can be written as an arithmetic predicate involving such
recursive predicates as j = i + 1, j ∈ eₙ, k = (i, j), conjunction and disjunction, existential number quantifiers, and bounded universal quantifiers (∀i ∈ eₙ). Thus the predicate is r.e. To prove the second fact, suppose a is r.e. If a = ⊥, we use Lemma 1. If a is nonempty, there is a primitive recursive function p : ω → ω such that a = {p(n) | n ∈ ω}. But this is the same as:

a = p̂(⊤),

and by Lemmas 4 and 8, we see that a is LAMBDA-definable.

Let RE ⊆ Pω be the class of r.e. sets. Of course, the finite sets belong to RE. An interesting corollary to Theorem 2.6 is the fact that the countable class RE forms a model for the λ-calculus. Indeed, RE is the least class containing the combinators of Theorem 2.4 and closed under application, and any such class satisfies (α), (β) and (ξ*). The first two are obvious. The implication from left to right in (ξ*) is also obvious. Suppose then that λx. τ(x) ⊄ λx. σ(x). There is then some set x ∈ Pω such that τ(x) ⊈ σ(x), but this x need not belong to the subclass. But in any case, there will be some m ∈ ω where m ∈ τ(x) but m ∉ σ(x). By continuity, m ∈ τ(eₙ), where eₙ ⊆ x. By monotonicity, m ∉ σ(eₙ), and eₙ does belong to the subclass. In Section 3 we shall note how many closed classes there are. This completes the foundations of our recursion theory. In summary we list short descriptive names for the results of this section:
Theorem 2.1, the Continuity Theorem;
Theorem 2.2, the Conversion Theorem;
Theorem 2.3, the Reduction Theorem;
Theorem 2.4, the Combinator Theorem;
Theorem 2.5, the Recursion Theorem (or: Fixed-Point Theorem);
Theorem 2.6, the Definability Theorem.

Needless to say, along the way we have noted many related and auxiliary results too numerous to name. In the next section we discuss some standard applications of recursion theory in the new context, to give more weight to the argument that λ-calculus and recursion theory make a good combination.
3. Enumeration and degrees
To be able to enumerate things we shall have to introduce Gödel numbers. A system of numbering can sometimes be rather complex, but in our theory the algebraic notation of the combinators makes everything easy. In fact, all we need is one combinator and iterated application. The proof is facilitated by a lemma on ordered pairs.

LEMMA 1. cond(x)(y)(cond(x)(y)) = y.

PROOF. In case x = y = ⊥, we note that cond(⊥)(⊥) = ⊥ and ⊥(⊥) = ⊥, so the equation checks in this case. Recall

cond(x)(y) = λz. z ⊃ x, y.

Thus 0 ∉ cond(x)(y), because 0 = (n, m) iff n = 0 = m; furthermore, e₀ = ⊥ and ⊥ ⊃ x, y = ⊥, and 0 ∉ ⊥. Also

cond(x)(y)(0) = x  and  cond(x)(y)(1) = y,

so if either x ≠ ⊥ or y ≠ ⊥, then cond(x)(y) ≠ ⊥. In this case, cond(x)(y) must contain positive elements. The result now follows.

THEOREM 3.1. There is a single combinator G such that all other LAMBDA-definable elements can be obtained from it by iterated application.
PROOF. The combinator is G = cond(a)(0), where a is yet to be determined. By Lemma 1 we see

G(G) = 0,
G(G(G)) = a.

So let

a = (suc, pred, cond, K, S),

and all the combinators of Theorem 2.4 can be generated. The advantage of having a single generator and the one binary operation of application is that the recursion equations for the effective enumeration of all LAMBDA-definable elements are particularly simple.
THEOREM 3.2. There are combinators val and apply such that, for all n, m ∈ ω:

(i) val(0) = G;
(ii) apply(n)(m) ∈ ω;
(iii) val(apply(n)(m)) = val(n)(val(m));
(iv) therefore val enumerates all LAMBDA-definable elements; that is, RE = {val(n) | n ∈ ω}.

PROOF. The pairing function (n, m) is primitive recursive; and, by the method of Lemma 8 of Section 2, it is represented by a combinator. Just which is unimportant. We let

apply = λn ∈ ω. λm ∈ ω. (n + 1, m).

We also need the combinators corresponding to the primitive recursive functions fst and snd, where

fst((n, m)) = n,  snd((n, m)) = m.

By the fixed-point theorem we define val so that

val = λk ∈ ω. fst(k) ⊃ G, val(fst(k) − 1)(val(snd(k))).

Properties (i)–(iii) are immediate. We can apply Theorem 3.2 in a standard way to get Kleene's Second Recursion Theorem. First we need one other combinator.
LEMMA 2. There is a combinator num such that for all n ∈ ω we have:

(i) num(n) ∈ ω,
(ii) val(num(n)) = n.

PROOF. We must refer back to the proof of Theorem 3.1. Since (1, 0) = 1, we have

val(1) = 0.

Then, since (1, 1) = 4, we have

val(4) = a.

Next, (5, 1) = 22, so

val(22) = suc.

Now we can define the combinator num by the fixed-point theorem so that

num = λn ∈ ω. n ⊃ 1, apply(22)(num(n − 1)).

Properties (i) and (ii) are then proved by induction.
THEOREM 3.3. Given any LAMBDA-definable element u ∈ Pω, we can effectively find an integer n ∈ ω such that val(n) = u(n).

PROOF. Since u is definable, so is

w = λm ∈ ω. u(apply(m)(num(m))).

By looking at the definition, we can find an integer k ∈ ω where val(k) = w, by virtue of Theorem 3.2. Let

n = apply(k)(num(k)),

and then calculate:

val(n) = val(k)(val(num(k)))
       = w(k)
       = u(apply(k)(num(k)))
       = u(n),

as desired.

The effective character of the proof of Theorem 3.3 can be embodied in a combinator rec (which we leave to the reader to define explicitly), which is analogous to Y, except that it is an integer function. The properties of rec are that rec(k) ∈ ω for all k ∈ ω, and that

val(rec(k)) = val(k)(rec(k)).
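The enumeration val can be read purely syntactically: unfolding its recursion turns every Gödel number into an application tree over the single generator G. The sketch below assumes the diagonal pairing (an illustrative choice, though it is consistent with (1, 0) = 1 and (1, 1) = 4 as used in Lemma 2) and returns symbolic trees instead of sets.

```python
# Symbolic reading of val: val(k) = G when fst(k) = 0, and otherwise
# val(k) = val(fst(k) - 1)(val(snd(k))).  A nested tuple (f, a) stands
# for "f applied to a".

def pair(n, m):
    """Assumed diagonal pairing (n, m) = (n + m)(n + m + 1)//2 + m."""
    return (n + m) * (n + m + 1) // 2 + m

def unpair(k):
    """fst and snd together: the inverse of pair."""
    s = 0
    while (s + 1) * (s + 2) // 2 <= k:
        s += 1
    m = k - s * (s + 1) // 2
    return s - m, m

def apply_code(n, m):
    """The code apply(n)(m) = (n + 1, m) of the application val(n)(val(m))."""
    return pair(n + 1, m)

def val(k):
    """Decode k into an application tree built from the generator G."""
    n, m = unpair(k)
    if n == 0:
        return 'G'
    return (val(n - 1), val(m))
```

In particular val(1) decodes to G(G) and val(4) to G(G(G)), matching val(1) = 0 and val(4) = a from Lemma 2; and val(apply_code(n, m)) is always the tree (val(n), val(m)), which is property (iii) in symbolic form.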
Theorem 3.3 has many applications (cf. [7, §§ 11.2–11.8]). We give a few hints. By specializing u, we can of course find n such that

val(n) = n,

or

val(n) = n + 100.

The point being that the enumeration by val is exceptionally repetitious and stumbles all over itself countless times. It is a bit more difficult to get an effective infinite sequence of hits. For example, can we have a recursive function r, where for all i ∈ ω we have r(i) ∈ ω and

val(r(i)) = r(i + 1),

where, say, r(i) < r(i + 1)? Let us make a guess that

r = λi ∈ ω. apply(k)(num(i))

for some choice of k, and see what comes out. If this were correct, then for all i ∈ ω,

val(r(i)) = val(k)(i).

So, we should like

val(k) = λi ∈ ω. apply(k)(num(i + 1)).

Does such an integer k exist? Yes, by Theorem 3.3. (This proof seems to be rather more understandable than that of [7, p. 186].) The apply combinator is very handy. For example, if u ∈ RE, we can at once find a primitive recursive function s such that for all m ∈ ω

val(s(m)) = u(m).

Because, let u = val(k); then what we want is

s = λm ∈ ω. apply(k)(num(m)).
The guiding idea behind these arguments is that of Gödel numbering; but since our syntax is so simple, the use of ordered pairs of integers is sufficient, and easy. As another application of Theorem 3.2, we give a version of the incompleteness theorem.
THEOREM 3.4. The relationship

    val(n) = val(m)

is not r.e. in n, m ∈ ω. Therefore, there can be no formal system with effectively given rules and axioms that will generate all true equations τ = σ between LAMBDA-terms.

In fact, it can be shown that the set

    b = {n ∈ ω | val(n) = ⊥}
is not r.e. The proof can be given along standard lines using the usual trick of diagonalization. We do not include the details here.

We turn now to a short discussion of degrees of sets in Pω. First some definitions.

DEFINITION. A subclass A ⊆ Pω is a subalgebra iff RE ⊆ A and A is closed under application.

DEFINITION. For x, y ∈ Pω, we write

    Deg(x) ≤ Deg(y)

to mean that for some u ∈ RE, x = u(y).

In other words, in this last definition we want x to be computable from y. These are exactly the enumeration degrees of Rogers [7, pp. 146-147]. Note that

    {x | Deg(x) ≤ Deg(y)} = {u(y) | u ∈ RE};

hence, we could simply define:

    Deg(y) = {u(y) | u ∈ RE},

and the ordering of degrees above would be simple class inclusion. We suppose this is done. (Caution: Deg(y) consists of all objects reducible to y, not just those equivalent, or of the same degree as y.) Degrees and subalgebras are very closely related.
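Application in the graph model Pω, the operation underlying RE, Deg, and the subalgebras below, has a simple finitary description: u(x) is the set of m such that some pair ⟨n, m⟩ in u has its finite set e_n contained in x. A small sketch (our own notation; the Cantor pairing and binary coding of finite sets are standard choices made here for illustration):

```python
import math

def pair(n, m):
    # Cantor pairing <n, m>
    return (n + m) * (n + m + 1) // 2 + m

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2
    m = z - w * (w + 1) // 2
    return w - m, m

def e(n):
    # e_n: the n-th finite set, read off the binary digits of n
    s, i = set(), 0
    while n:
        if n & 1:
            s.add(i)
        n >>= 1
        i += 1
    return s

def apply_op(u, x):
    # graph-model application: m is emitted once some finite
    # approximation e_n of the input x is covered by an axiom <n, m> in u
    out = set()
    for z in u:
        n, m = unpair(z)
        if e(n) <= x:
            out.add(m)
    return out

u = {pair(1, 7)}            # single axiom: if {0} is contained in x, emit 7
assert apply_op(u, {0, 5}) == {7}
assert apply_op(u, {5}) == set()
# monotone, hence continuous: more input can only yield more output
assert apply_op(u, {5}) <= apply_op(u, {0, 5})
```

Monotonicity of this operation is what makes "x = u(y) for some u ∈ RE" a reasonable notion of "x computable from y".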
THEOREM 3.5. The (enumeration) degrees are exactly the finitely generated subalgebras of Pω, and every one can be generated by a single element.
PROOF. First let A = Deg(y). We want to show that it is a subalgebra. Let u ∈ RE; then K(u) ∈ RE also. But then

    u = K(u)(y) ∈ A.

So RE ⊆ A. Next consider two elements of A. They are of the form u(y) and v(y) where u, v ∈ RE. But then

    u(y)(v(y)) = (λz ∈ Pω. u(z)(v(z)))(y)

is of the same form; hence, A is closed under application. But clearly y ∈ A, and it generates A as a subalgebra; therefore A is finitely generated.

In the other direction, let A be any finitely generated subalgebra. Suppose the generators are z_0, z_1, ..., z_{k-1}. Using the same trick as in the proof of Theorem 3.1, let y combine the generators by iterated pairing. By a similar argument, y generates A and indeed

    A = Deg(y),

as desired.
This theorem seems to explain something about degrees. They form a semilattice with the join operation characterized by

    Deg(x) ∪ Deg(y) = Deg(⟨x, y⟩).
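The pairing ⟨x, y⟩ of sets can be realized, for instance, by interleaving evens and odds; both projections are then themselves monotone (enumeration) operators, which is why Deg(⟨x, y⟩) sits above both Deg(x) and Deg(y). A small illustration (the even/odd coding is one standard choice, not necessarily the paper's):

```python
def pair_sets(x, y):
    # code x on the evens, y on the odds
    return {2 * n for n in x} | {2 * n + 1 for n in y}

def fst(z):
    return {n // 2 for n in z if n % 2 == 0}

def snd(z):
    return {n // 2 for n in z if n % 2 == 1}

x, y = {1, 3, 5}, {2, 4}
z = pair_sets(x, y)
assert fst(z) == x and snd(z) == y
# fst and snd are monotone, so x and y are each computable
# from <x, y>: Deg(x), Deg(y) <= Deg(<x, y>)
```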
This is just the semilattice of finitely generated subalgebras of Pω, which is a part of the complete lattice of all subalgebras. Since Pω is uncountable and each finitely generated subalgebra is countable, it is trivial to prove there exists a chain of degrees:

    A₀ ⊆ A₁ ⊆ ... ⊆ Aₙ ⊆ ...
such that the union is not finitely generated. However, if we let Aₙ = Deg(xₙ), then

    ⋃_{n∈ω} Aₙ ⊆ Deg(λn ∈ ω. xₙ).

Otherwise put: any countable subalgebra is contained in a finitely generated one. The proof that the intersection of two finitely generated subalgebras need not be finitely generated is rather more complicated. Not much else seems to be known about the structure of enumeration degrees. (Rogers in [7, pp. 151-153] relates enumeration degrees to Turing degrees.) Perhaps one should study the lattice of subalgebras of Pω as the completion of the semilattice of degrees and try to find out the structure of the lattice.

Our last application of the techniques of enumeration concerns the so-called effective operations and the Myhill-Shepherdson Theorem ([5, p. 313] or [7, p. 359]). Suppose q = val(k) and consider the function

    p = λn ∈ ω. apply(k)(n).

This function has the following extensionality property:

    (*) val(n) = val(m) always implies val(p(n)) = val(p(m)),

because by construction we have for all n ∈ ω:

    val(p(n)) = q(val(n)).

Can there be other such extensional mappings on Gödel numbers? No, not if they are recursive.
THEOREM 3.6. Suppose that p(n) ∈ ω for all n ∈ ω and that p satisfies (*). If p ∈ RE, then there is a function q ∈ RE such that for all n ∈ ω,

    val(p(n)) = q(val(n)).

PROOF. It is boring, but there is no difficulty, in finding a combinator fin such that:

    (i) fin(n) ∈ ω,
    (ii) val(fin(n)) = eₙ,
for all n ∈ ω. We can then define q by analogy with our definition of λ-abstraction:

    q = {⟨n, m⟩ | m ∈ val(p(fin(n)))}.

Clearly, if p ∈ RE, then so is q. This is obviously what we want if p and q are related by the equation of the theorem, but the equation remains to be proved. The desired connection is a consequence of the fact that any effective mapping satisfying (*) is necessarily 'continuous' in the sense of:

    (**) val(p(n)) = ⋃ {val(p(fin(j))) | eⱼ ⊆ val(n)},

which is to hold for all n ∈ ω. Suppose by way of contradiction that (**) does not hold for a particular n. There are two cases. Suppose first that we have k ∈ val(p(n)), but for all j ∈ ω with eⱼ ⊆ val(n) we have

    k ∉ val(p(fin(j))).
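The continuity property (**) says that the value on a set is the union of the values on its finite approximations. For a monotone operation on finite data this can be observed directly; a sketch with our own coding of finite sets (not the paper's machinery):

```python
def e(n):
    # the n-th finite set, via binary digits
    s, i = set(), 0
    while n:
        if n & 1:
            s.add(i)
        n >>= 1
        i += 1
    return s

def F(x):
    # a monotone "effective" operation: collect successors of elements
    return {i + 1 for i in x}

x = {0, 2, 3}
# all finite pieces e_j contained in x (elements of x are below 4)
pieces = [e(j) for j in range(2 ** 4) if e(j) <= x]
union = set().union(*(F(p) for p in pieces))
assert F(x) == union   # F on x is the union of F over finite pieces of x
```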
The trick here is that the single integer k forces a distinction between finite and infinite in a way that is too effective to be true. Let r be a (primitive) recursive function whose range is not recursive. Thus,

    {m | m ∉ {r(i) | i ∈ ω}} ∉ RE.

Note that the relationship

    j ∈ val(n) ∧ m ∉ {r(i) | i < j}

is r.e. in the variables m and j. This means we can define a (primitive) recursive function s such that

    val(s(m)) = {j ∈ val(n) | m ∉ {r(i) | i < j}}

for all m ∈ ω. Note that val(s(m)) is always a subset of the infinite set val(n); further it is finite if m is in the range of r, otherwise it is equal to val(n). By virtue of (*) and what we assumed about k above, we can conclude:

    k ∈ val(p(s(m))) iff m ∉ {r(i) | i ∈ ω}.
But this is impossible, because the predicate on the left is r.e. in m, and that on the right is not. That was the first case; suppose next that instead we had eⱼ ⊆ val(n), and for some k ∈ ω,

    k ∈ val(p(fin(j))) but k ∉ val(p(n)).

Again, we see an integer k forcing a decision. Let

    t = λm ∈ ω. eⱼ ∪ (val(m) ⊃ val(n), val(n)).

This function has the property that for m ∈ ω,

    t(m) = eⱼ     if val(m) = ⊥,
         = val(n) otherwise.

Let the (primitive) recursive function u be chosen so that for m ∈ ω:

    val(u(m)) = t(m).

We then find that, by virtue of (*) again,

    k ∈ val(p(u(m))) iff t(m) = eⱼ iff val(m) = ⊥.

But as we noted in the proof of Theorem 3.4, the predicate on the right is not r.e. in m; while on the other hand, the predicate on the left is r.e.

Theorem 3.6 has a very satisfying interpretation stated syntactically. Consider the closed LAMBDA-terms (that is, no free variables). Suppose that π[τ] is an effectively defined syntactical mapping that maps closed terms to closed terms. Suppose further that the mapping is extensional in the sense that whenever the equation τ = σ is true in Pω, then so is π[τ] = π[σ]. We can conclude from Theorem 3.6 that in this case there is a closed LAMBDA-term p such that the equation
    π[τ] = p(τ)

is true in Pω for all closed terms τ. Informally this means that if a mapping effectively defined by 'symbol manipulation' outside the language has good semantical sense, then an extensionally equivalent mapping
can already be defined inside the language. Thus we have a completeness theorem for definability for the language LAMBDA.

This completes our short survey of the theory of enumerability as based on the λ-calculus. Here is a summary of the theorems just proved: Theorem 3.1, the Generator Theorem; Theorem 3.2, the Enumeration Theorem; Theorem 3.3, the Second Recursion Theorem; Theorem 3.4, the Incompleteness Theorem; Theorem 3.5, the Subalgebra Theorem; Theorem 3.6, the Completeness Theorem (for Definability). This would seem to be justification enough for the program of passing from the pure λ-calculus to the applied calculus based on LAMBDA. What remains to be done is to show how this new calculus relates to recursion theory on domains other than ω or Pω and to discuss the appropriate proof theory.
Note

An expanded and revised version of this paper will be published under the title Data Types as Lattices for the Kiel Summer School in Logic (July 1974).

References

[1] A. Church, The calculi of lambda-conversion, Princeton (1941).
[2] H.B. Curry et al., Combinatory logic, Vols. 1 and 2 (North-Holland, Amsterdam, 1958, 1972).
[3] E. Elgot and S. Eilenberg, Recursiveness (Academic Press, New York, 1970).
[4] J.R. Hindley et al., Introduction to Combinatory Logic, London Math. Soc. Lecture Notes, Vol. 7, CUP (1972).
[5] J. Myhill and J.C. Shepherdson, Effective operations on partial recursive functions, Z. Math. Logik Grundlagen Math. 1 (1955) 310-317.
[6] G.D. Plotkin, A set-theoretical definition of application, School of A.I. Memo MIP-R-95, Edinburgh (1972).
[7] H. Rogers, A theory of recursive functions and effective computability (McGraw-Hill, New York, 1967).
[8] D. Scott, Continuous lattices, Lecture Notes in Mathematics, Vol. 274 (Springer, Berlin, 1972) pp. 97-136.
[9] D. Scott, Models for various type-free calculi, in: Logic, Methodology and Philosophy of Science IV, ed. P. Suppes (North-Holland, Amsterdam, 1973) pp. 157-187.
THAT EVERY EXTENSION OF S4.3 IS NORMAL

Krister SEGERBERG
Åbo Academy, Åbo, Finland
Schiller Joe Scroggs's paper [5] is one of the classical contributions to modal logic; it has been a source of inspiration for many later modal logicians. One way of putting the main result of his paper is this:

SCROGGS'S FIRST THEOREM. For every proper normal extension L of S5 there is some finite positive index frame i (where thus 0 < i < ω) such that L is determined by i. S5 is determined by the class of all finite positive index frames of length 1, as well as by the single index frame ω.

It is that result that is most often associated with Scroggs's name. However, his paper also contains another result which is not without interest if one considers the discovery of McKinsey and Tarski, published in [4], of a non-normal extension of S4:

SCROGGS'S SECOND THEOREM. Every extension of S5 is normal.

Yet another celebrated contribution to modal logic is R.A. Bull's result, first proved in [1], that every normal extension of S4.3 has the finite model property. That Bull's result may be considered as a generalization of Scroggs's First Theorem is evident if it is given a formulation such as the following:
BULL'S THEOREM. For every normal extension L of S4.3 there is some class C of finite positive index frames ⟨i₁, ..., iₙ⟩ of finite length (where thus n > 0 and 0 < i₁, ..., iₙ < ω) such that L is determined by C. S4.3 is determined by the class of all finite positive index frames of finite length, as well as by the single index frame ⟨ω, ω, ω, ...⟩.

For terminology not explained here, see [7]. Indices were first introduced in [6].
The purpose of this note is to show that also Scroggs's Second Theorem can be generalized:

THEOREM. Every extension of S4.3 is normal.

In order to keep down the length of the paper we shall make use of two results in the literature. One is [7, Lemma 3.5.1]: If C is a class of transitive connected Kripke frames no cluster of which is degenerate except possibly the first one, then C determines a normal logic. (By a Kripke frame we understand a structure F = ⟨t, U, R⟩ such that ⟨U, R⟩ is a frame (in the ordinary sense) generated by t. A formula α is valid in F if, for every model M = ⟨t, U, R, V⟩, M ⊨_t α.) The other result is due to Kit Fine and Håkan Franzén, independently of one another, and may be extracted from either [2] or [8]: If L is an extension of S4.3, not necessarily normal, and α is a formula not a theorem of L, then there exists a finite Kripke frame F = ⟨t, U, R⟩ such that the following conditions are satisfied:

(i) F is a Kripke frame for L; that is, every theorem of L is valid in F.
(ii) There is some u ∈ U and some model M on F such that M ⊭_u α.
(iii) If u R t, then u = t.

PROOF OF THE THEOREM. Let L and α be as specified. By the former result it is more than enough to show that there exists some transitive, strongly connected Kripke frame for L in which α fails to be valid. Let F, M and u be as in the formulation of Fine's and Franzén's result. Note that since F is a Kripke frame for L, and L ⊇ S4.3, F will be transitive and strongly connected. If u = t, there is nothing more to prove. Suppose therefore that u ≠ t. By (iii), not u R t. We define a function f on U as follows:

    f(x) = x if u R x,
         = u if not u R x.

Let M^u = ⟨u, U^u, R^u, V^u⟩ be the submodel generated by u from M. It is easy to see that f is a p-morphism from ⟨U, R, V⟩ to ⟨U^u, R^u, V^u⟩ stable on every propositional letter, and since f(u) = u we may therefore conclude that, for every β, M ⊨_u β if and only if M^u ⊨_u β. Hence by virtue
of the p-morphism Theorem (suitably modified) and (i), F^u = ⟨u, U^u, R^u⟩ is a Kripke frame for L. Moreover, by (ii) α is not valid in F^u.

Fine has improved Bull's result considerably (see [3]). Our result can also be extended, but not nearly as widely. For example, whereas Scroggs's First Theorem can be extended to 04.3 (this was proved by Fine and, independently, by Franzén), Scroggs's Second Theorem cannot. For consider the Kripke frame F = ⟨t, U, R⟩, where t = 0, U = {0, 1, 2}, and R = {⟨0, 0⟩, ⟨0, 1⟩, ⟨0, 2⟩,

Ψ(α) such that if α, β ∈ T and α ≤ β, then Φ(f, α)(i₁, ..., iₙ) = Φ(f, β)(i₁, ..., iₙ) for all i₁, ..., iₙ ∈ Ψ(α). Φ is also such that for all i₁, ..., iₙ and all j₁, ..., jₙ in Ψ(α) such that iₖ and jₖ are p-identical, 1 ≤ k ≤ n, we have

    Φ(f, α)(i₁, ..., iₙ) = Φ(f, α)(j₁, ..., jₙ).
We shall define a partial function V_G of two arguments, the first argument being closed term-expressions and formula-expressions and the second argument being elements of T. The values of V_G will be either individuals or one of the truth-values 1 (truth) or 0 (falsity). We extend our language by introducing names for the individuals in

    ⋃_{α∈T} Ψ(α).

a, b, c, ... will be used as notations for such names and ā, b̄, c̄, ... will denote the corresponding individuals. We introduce, of course, exactly one name for each individual. Let α ∈ T. The names of the individuals of Ψ(α) are 0-place function symbols, so we add axioms for each name a such that ā ∈ Ψ(α), and in the definition of V_G we let the notions of a term, term-expression, formula, etc. refer to this enlargement of the language relative to G and α. V_G is then defined by induction on the construction of term-expressions and formula-expressions as follows:

(i) V_G(a, α) is defined if ā ∈ Ψ(α) and then V_G(a, α) = ā.

(ii) V_G(f t₁ ... tₙ, α) is defined if V_G(t₁, α), ..., V_G(tₙ, α) are all defined, and then

    V_G(f t₁ ... tₙ, α) = Φ(f, α)(V_G(t₁, α), ..., V_G(tₙ, α)).

(iii) V_G(℩x A(x), α) is defined if for each β ∈ T such that α ≤ β and each ā ∈ Ψ(β), V_G(A(a), β) is defined, and the set of individuals ā such that V_G(A(a), β) = 1 for some such β contains only p-identical individuals. If V_G(℩x A(x), α) is defined, it is equal to one of these individuals in Ψ(α).
DESCRIPTIONS IN INTUITIONISTIC LOGIC
(iv) V_G(P t₁ ... tₙ, α) is defined if V_G(t₁, α), ..., V_G(tₙ, α) are all defined, and then V_G(P t₁ ... tₙ, α) is the truth-value determined by the interpretation of P in G. V_G(t = s, α) is defined if V_G(t, α) and V_G(s, α) are both defined, and then V_G(t = s, α) = 1 if V_G(t, α) and V_G(s, α) are p-identical, = 0 otherwise.

(v) V_G(⊥, α) is defined and = 0.

(vi) V_G(A & B, α) is defined if V_G(A, α) and V_G(B, α) are both defined, and then

    V_G(A & B, α) = 1 if V_G(A, α) = V_G(B, α) = 1,
                  = 0 otherwise.

(vii) V_G(A ∨ B, α) is defined if V_G(A, α) and V_G(B, α) are both defined, and then

    V_G(A ∨ B, α) = 1 if V_G(A, α) = 1 or V_G(B, α) = 1,
                  = 0 otherwise.

(viii) V_G(A ⊃ B, α) is defined if for each β ∈ T such that α ≤ β,

    (1) V_G(A, β) is defined,
    (2) V_G(A, β) = 1 only if V_G(B, β) is defined.

If V_G(A ⊃ B, α) is defined, then

    V_G(A ⊃ B, α) = 1 if for each β ∈ T such that α ≤ β, V_G(A, β) = 0 or V_G(B, β) = 1,
                  = 0 otherwise.
SÖREN STENLUND
(ix) V_G(∀x A(x), α) is defined if for each β ∈ T such that α ≤ β and each ā ∈ Ψ(β), V_G(A(a), β) is defined, and then

    V_G(∀x A(x), α) = 1 if for all β ∈ T such that α ≤ β and all ā ∈ Ψ(β), V_G(A(a), β) = 1,
                    = 0 otherwise.

(x) V_G(∃x A(x), α) is defined if for each β ∈ T such that α ≤ β and each ā ∈ Ψ(β), V_G(A(a), β) is defined, and then

    V_G(∃x A(x), α) = 1 if for some ā ∈ Ψ(α), we have V_G(A(a), α) = 1,
                    = 0 otherwise.
This completes the definition of V_G. It is easy to verify that for each structure G = ⟨T, ≤, Ψ, Φ⟩ and for all α, β ∈ T such that α ≤ β, if V_G(t, α) is defined, then V_G(t, β) is defined and V_G(t, α) = V_G(t, β) ∈ Ψ(α). Also, if V_G(A, α) is defined, then V_G(A, β) is defined, and if V_G(A, α) = 1, then V_G(A, β) = 1. A closed term-expression t is said to have a reference in G if V_G(t, α) is defined for all α ∈ T. A closed formula-expression A is said to express a proposition in G if V_G(A, α) is defined for all α ∈ T, and A is said to be valid in G if V_G(A, α) = 1 for all α ∈ T. It is then possible to establish the following (classical) soundness and completeness results as in [2]:
THEOREM 4. (i) t is a term iff t has a reference in each structure G. (ii) A is a formula iff A expresses a proposition in each structure G. (iii) A is a theorem iff A is valid in each structure G. Here t and A range over closed term-expressions and formula-expressions in the original language of Section 3.
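The propositional clauses (v)-(viii) and the persistence property noted above can be tried out on a two-stage structure. A minimal sketch (our own encoding: formulas as nested tuples, 'false' for the falsity constant; everything here is our illustration, not Stenlund's machinery):

```python
# Two stages 0 <= 1; the letter p becomes true only at stage 1.
T = [0, 1]
leq = {(0, 0), (0, 1), (1, 1)}
up = lambda a: [b for b in T if (a, b) in leq]
letters = {('p', 0): 0, ('p', 1): 1}

def V(A, a):
    if A == 'false':                       # clause (v)
        return 0
    if isinstance(A, str):                 # propositional letter
        return letters[(A, a)]
    op, B, C = A
    if op == '&':                          # clause (vi)
        return int(V(B, a) == 1 == V(C, a))
    if op == 'v':                          # clause (vii)
        return int(V(B, a) == 1 or V(C, a) == 1)
    if op == '->':                         # clause (viii): quantify over later stages
        return int(all(V(B, b) == 0 or V(C, b) == 1 for b in up(a)))

neg_p = ('->', 'p', 'false')
assert V(('v', 'p', neg_p), 0) == 0        # excluded middle fails at stage 0
assert V(('->', 'p', 'p'), 0) == 1
# persistence: whatever is true at 0 stays true at 1
for A in ['p', neg_p, ('v', 'p', neg_p), ('->', 'p', 'p')]:
    if V(A, 0) == 1:
        assert V(A, 1) == 1
```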
References

[1] P. Martin-Löf, An intuitionistic theory of types, mimeographed, Stockholm (1973).
[2] S. Stenlund, The logic of description and existence, Philosophical Studies (Philosophical Society and the Department of Philosophy, University of Uppsala, Uppsala, 1973).
AUTHOR INDEX*

Aczel, P., 1-14
Aho, A., 145, 152
Bell, J.L., 16, 31
Bull, R.A., 38, 39, 194, 196
Chang, C.C., 52, 61
Church, A., 96, 108, 174, 193
Culik II, K., 145, 150-152
Curry, H.B., 81, 95, 96, 108, 154, 174, 193
Ehrenfeucht, A., 146, 152
Eilenberg, S., 193
Elgot, E., 193
Feys, R., 81, 95, 96, 108
Fine, K., 15-31, 111, 114, 140, 142, 195, 196
Fisher, F.M., 40, 61
Fitch, F.B., 118, 142
Frege, G., 93, 105, 108
Fuhrken, G., 63, 80
Gabbay, D.M., 141, 142
Gale, D., 5, 14
Gärdenfors, P., 32-39
Gerson, M., 111, 142
Girard, J.Y., 82, 85, 87, 97, 98, 108
Gödel, K., 106, 108
Goldblatt, R.I., 15, 31
Hanson, W.H., 118, 127, 142
Hansson, B., 32-39
Hayashi, T., 145, 152
Herman, G., 146, 149, 152
Hindley, J.R., 154, 174, 193
Hinman, P.G., 13, 14
Hintikka, J., 40-62
Howard, W.A., 81, 82, 95, 100, 108
Jervell, H.R., 63-80
Jónsson, B., 114, 142
Keisler, H.J., 63, 65, 78, 80
Kleene, S.C., 82, 108
Kochen, S., 20, 24, 31
Kreisel, G., 82, 90, 103, 108
Kripke, S.A., 127, 129, 142
Kueker, D.W., 40, 52-54, 62
Lachlan, A.H., 15, 31
Lee, K., 146, 149, 152
Lemmon, E.J., 16, 20, 28, 31, 32, 39, 111, 113-115, 117, 118, 120, 124, 127, 133, 137, 138, 142
Lindenmayer, A., 144, 152
Makkai, M., 52, 62
Martin-Löf, P., 81-109, 198, 199, 212
McKinsey, J.C.C., 31, 33, 39, 194, 196
Moschovakis, Y.N., 1, 7, 11, 13, 14
Myhill, J., 155, 190, 193
Opatrny, J., 150, 152
Paz, A., 147, 152
Plotkin, G.D., 155, 193
Prawitz, D., 81, 96, 104, 108, 109
Prior, A., 112, 116, 142
Rantala, V., 40-62
Reyes, G.E., 54, 62
Rogers, H., 154, 155, 187, 188, 190, 193
Rozenberg, G., 144, 146, 149, 152, 153
Sahlqvist, H., 110-143
Salomaa, A., 144-153
Schumm, G., 128, 142
Scott, D., 16, 20, 28, 31, 32, 39, 111, 114, 115, 117, 118, 120, 124, 127, 133, 137, 138, 142, 154-193
Scroggs, S.J., 194, 196
Segerberg, K., 16, 24, 31-33, 39, 110, 112, 118, 120, 123, 126, 131-133, 140-143, 194-196
Shelah, S., 54, 62
Shepherdson, J.C., 155, 190, 193
Slomson, A.B., 16, 31
Stenlund, S., 82, 85, 89, 109, 197-212
Stewart, F.M., 5, 14
Suppes, P., 62
Svenonius, L., 51, 62
Szilard, A., 147, 153
Tait, W.W., 82, 87, 107, 109
Tarski, A., 114, 142, 194, 196
Thomason, S., 22, 31, 110-112, 114, 118, 133, 140, 142, 143
Troelstra, A.S., 82, 90, 108
Tuomela, R., 51, 62
Van Leeuwen, J., 146, 149, 152, 153
Vesley, R.E., 82, 108
Vitányi, P., 147, 148, 153
Wood, D., 144, 153

* The page numbers of authors' contributions are in italics.