] This code is used in section 5.1.

5.6.3.3. The structure of refine is closely analogous to that of its type in that it also assumes the premises of transitivity of reification; that is, a version v1 is assumed to be a reification of a version v2 wrt. the retrieve function retr12, and the version v2 is assumed to be a reification of a version v3 wrt. a retrieve function retr23:

⟨Proof of transitivity. 5.6.3.3⟩ ≡
[ import VersionsAndRetrieveFunctions
⊢ [ reif12 : v1 ⊑retr12 v2; reif23 : v2 ⊑retr23 v3
  ⊢ ⟨Proof of v1 ⊑retr13 v3. 5.6.3.4⟩ ∴ v1 ⊑retr13 v3
] ]
This code is used in section 5.6.3.2.
5.6.3.4. The proof that v1 is a reification of the version v3 wrt. the retrieve function retr12 o> retr23 is decomposed into proofs of three lemmas:
5.6. Proof of Transitivity of Reification in Deva
⟨Proof of v1 ⊑retr13 v3. 5.6.3.4⟩ ≡
[ ⟨Verification of the retrieve condition. 5.6.4.1⟩
; ⟨Transitivity of operator reification. 5.6.5.1⟩
; ⟨Transitivity of the reification condition. 5.6.6.1⟩
⊢ ⟨Proof assembly. 5.6.7⟩ ∴ v1 ⊑retr13 v3
]
This code is used in section 5.6.3.3.

5.6.4 Verification of the Retrieve Condition
5.6.4.1. The first proposition to be shown is that the composed retrieve function retr13 satisfies the retrieve condition. The property of invariant preservation (preservation) is derived by the logical cut of the preservation components of the two assumptions reif12 and reif23. The proof of the completeness condition (completeness) is not yet detailed at this point.

⟨Verification of the retrieve condition. 5.6.4.1⟩ ≡
retrieval13 :=
⦉ preservation := reif12 . retrieval . preservation
    ∴ [ st1 ? state1 ⊢ [ inv1(st1) ⊢ inv2(retr12(st1)) ] ]
    o> reif23 . retrieval . preservation
    ∴ [ st1 ? state1 ⊢ [ inv1(st1) ⊢ inv3(retr13(st1)) ] ]
, completeness := ⟨Proof of completeness condition. 5.6.4.2⟩
⦊ ∴ valid_retrieve(inv1, inv3, retr13)
This code is used in section 5.6.3.4.
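Outside Deva, the content of this condition can be illustrated by a small executable sketch. The following Python fragment is entirely our own construction (toy state spaces, invariants, and retrieve functions; none of these names occur in the case study): it shows how two invariant-preservation facts compose, by a cut, into the preservation fact for the composed retrieve function retr13.

```python
# Hypothetical illustration (not Deva): composing retrieve functions and
# their invariant-preservation properties.  state1 = lists of digits,
# state2 = integers, state3 = parities -- all toy stand-ins.

def retr12(st1):                     # state1 -> state2
    return int("".join(map(str, st1)) or "0")

def retr23(st2):                     # state2 -> state3
    return st2 % 2

def retr13(st1):                     # the composition retr12 o> retr23
    return retr23(retr12(st1))

def inv1(st1): return all(0 <= d <= 9 for d in st1)
def inv2(st2): return st2 >= 0
def inv3(st3): return st3 in (0, 1)

# "inv_c |- inv_a o retr" checked on sample states: the preservation
# component of a retrieve condition, as a runnable predicate.
def preserves(inv_c, inv_a, retr, samples):
    return all(inv_a(retr(s)) for s in samples if inv_c(s))

samples1 = [[1, 2, 3], [0], [9, 9]]
assert preserves(inv1, inv2, retr12, samples1)                     # reif12 part
assert preserves(inv2, inv3, retr23, [retr12(s) for s in samples1])  # reif23 part
assert preserves(inv1, inv3, retr13, samples1)                     # the cut
```

The two `assert`s for the components correspond to reif12.retrieval.preservation and reif23.retrieval.preservation; the last one is the composed condition derived in the chunk above.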
5.6.4.2. The proof of the completeness condition consists of showing that a given abstract state st3 : state3 that satisfies the abstract invariant is in the range of the retrieve function retr13 from state1 to state3, where state1 is further restricted by the concrete invariant inv1(st1). The proof idea is to make a block-structured proof using the two completeness assumptions contained in the reification assumptions reif12 and reif23. The proof is given in a top-down presentation involving three parts. Unfortunately, it looks technically rather heavy; this is mainly due to the elimination law of existential quantification. The reader is advised to study this law again (ex_elim, see section 4.2.1.7) to understand the scoping structure of the proof.
5. Case Study on VDM-Style Developments
⟨Proof of completeness condition. 5.6.4.2⟩ ≡
[ st3 ? state3; inv_hyp : inv3(st3)
⊢ reif23 . retrieval . completeness (inv_hyp)
  ∴ ∃[ st2 : state2 ⊢ retr23(st2) = st3 ∧ inv2(st2) ]
  \ [ st2 : state2; hyp : retr23(st2) = st3 ∧ inv2(st2)
      ⊢ ⟨First block. 5.6.4.3⟩
      ∴ ∃[ st1 : state1 ⊢ retr13(st1) = st3 ∧ inv1(st1) ]
    ] \ ex_elim
  ∴ ∃[ st1 : state1 ⊢ retr13(st1) = st3 ∧ inv1(st1) ]
]
This code is used in section 5.6.4.1.
5.6.4.3.

⟨First block. 5.6.4.3⟩ ≡
reif12 . retrieval . completeness (conj_elimr(hyp))
∴ ∃[ st1 : state1 ⊢ retr12(st1) = st2 ∧ inv1(st1) ]
\ [ st1 : state1; hyp1 : retr12(st1) = st2 ∧ inv1(st1)
    ⊢ ⟨Second block. 5.6.4.4⟩
    ∴ ∃[ st1 : state1 ⊢ retr13(st1) = st3 ∧ inv1(st1) ]
  ] \ ex_elim
∴ ∃[ st1 : state1 ⊢ retr13(st1) = st3 ∧ inv1(st1) ]
This code is used in section 5.6.4.2.

5.6.4.4. Given hyp and hyp1, we now use a simple law of equational reasoning (eq_compose, see section 4.2.2.6) to establish that st3 is in the range of retr13.
⟨Second block. 5.6.4.4⟩ ≡
conj_intro ( eq_compose(conj_eliml(hyp1), conj_eliml(hyp)) ∴ retr13(st1) = st3 )
           ( conj_elimr(hyp1) ∴ inv1(st1) )
∴ retr13(st1) = st3 ∧ inv1(st1)
\ ex_intro
∴ ∃[ st1 : state1 ⊢ retr13(st1) = st3 ∧ inv1(st1) ]
This code is used in section 5.6.4.3.

5.6.5 Transitivity of Operator Reification
5.6.5.1. The second property needed is a lemma stating transitivity of the operator reification condition. Assuming this condition to hold for operator
pairs op1, op2 and op2, op3 respectively, the condition must be shown for the pair op1, op3:

⟨Transitivity of operator reification. 5.6.5.1⟩ ≡
op_refine := [ import Operations
⊢ [ op_reif12 : op1 ⊑inv1,retr12 op2; op_reif23 : op2 ⊑inv2,retr23 op3
  ⊢ ⟨Proof of op1 ⊑inv1,retr13 op3. 5.6.5.2⟩
  ∴ op1 ⊑inv1,retr13 op3
] ]
This code is used in section 5.6.3.4.
5.6.5.2. A proof of the latter proposition is presented below. Its assumptions have the same structure as those of the definition of operation reification, i.e., it is assumed that the concrete invariant holds (inv1_hyp) and that the precondition of the abstract operation is satisfied (op3_pre_hyp). In order to make use of the assumptions about operation reification (op_reif12, op_reif23), it is necessary to first derive the intermediate invariant (inv2_proof) and the precondition of the intermediate operation (op2_pre_proof). Note that the first proof makes use of the overall hypothesis reif12; more precisely, it needs the assumption that retr12 preserves the invariants. In the proof body, the fact that the precondition is preserved (domain) follows from the corresponding fact contained in the assumption op_reif12.

⟨Proof of op1 ⊑inv1,retr13 op3. 5.6.5.2⟩ ≡
[ i ? in; st1_i ? state1
⊢ [ st2_i := retr12(st1_i); st3_i := retr23(st2_i)
⊢ [ inv1_hyp : inv1(st1_i); op3_pre_hyp : op3(i, st3_i).pre
⊢ [ inv2_proof := reif12 . retrieval . preservation (inv1_hyp) ∴ inv2(st2_i)
  ; op2_pre_proof := op_reif23 (inv2_proof, op3_pre_hyp) . domain ∴ op2(i, st2_i).pre
⊢ ⦉ domain := op_reif12 (inv1_hyp, op2_pre_proof) . domain ∴ op1(i, st1_i).pre
  , result := ⟨Proof of the result case. 5.6.5.3⟩
  ⦊
] ] ] ]
∴ op1 ⊑inv1,retr13 op3
This code is used in section 5.6.5.1.

5.6.5.3. The fact that the postcondition is preserved (result) can be shown by cutting the corresponding facts contained in the two assumptions op_reif12 and op_reif23:

⟨Proof of the result case. 5.6.5.3⟩ ≡
[ o ? out; st1_o ? state1
⊢ [ st2_o := retr12(st1_o); st3_o := retr23(st2_o)
⊢ op_reif12 (inv1_hyp, op2_pre_proof) . result
  ∴ [ op1(i, st1_i).post(o, st1_o) ⊢ op2(i, st2_i).post(o, st2_o) ]
  o> op_reif23 (inv2_proof, op3_pre_hyp) . result
  ∴ [ op1(i, st1_i).post(o, st1_o) ⊢ op3(i, st3_i).post(o, st3_o) ]
] ]
This code is used in section 5.6.5.2.

5.6.6 Transitivity of the Reification Condition
5.6.6.1. To lift transitivity of operator reification to the level of transitivity of the version reification condition, an inductive proof will be used.

⟨Transitivity of the reification condition. 5.6.6.1⟩ ≡
[ trans_version_reifctn :=
  [ v1 : version(inv1); v2 : version(inv2); v3 : version(inv3)
  ⊢ version_reif(v1, v2, retr12) ⊢ version_reif(v2, v3, retr23) ⊢ version_reif(v1, v3, retr13)
  ]
; ⟨Inductive proof of the transitivity condition. 5.6.6.2⟩
]
This code is used in section 5.6.3.4.

5.6.6.2. The inductive proof makes use of the induction principle for version triples, instantiated to the property of transitivity of the version reification condition. The inductive base and the inductive step are given separately. At the end, after the induction principle has been applied, the conditions requiring equal length are resolved.
⟨Inductive proof of the transitivity condition. 5.6.6.2⟩ ≡
trans_base := ⟨Base case. 5.6.6.3⟩
; trans_step := ⟨Inductive step. 5.6.6.4⟩
; trans_version_reifctn_proof :=
  version_triple_induction (P := trans_version_reifctn, base := trans_base, step := trans_step)
  ∴ [ import Versions
      ⊢ [ vlength(v1) = vlength(v2); vlength(v2) = vlength(v3)
          ⊢ trans_version_reifctn(v1, v2, v3) ] ]
  / equal_length(reif12.reification)
  / equal_length(reif23.reification)
  ∴ trans_version_reifctn(v1, v2, v3)
This code is used in section 5.6.6.1.
5.6.6.3. For the base case of empty versions, it essentially suffices to use the case of empty versions in the definition of the reification condition.

⟨Base case. 5.6.6.3⟩ ≡
[ version_reif(cinv := inv1, ainv := inv2)([], [], retr12)
⊢ [ version_reif(cinv := inv2, ainv := inv3)([], [], retr23)
  ⊢ def_version_reif(cinv := inv1, ainv := inv3) . empty
  ∴ version_reif(cinv := inv1, ainv := inv3)([], [], retr13)
  ] \ imp_intro
] \ imp_intro
∴ trans_version_reifctn([], [], [])
This code is used in section 5.6.6.2.
5.6.6.4. For the inductive step, it is necessary to prove transitivity of the "consed" versions under the inductive assumption of transitivity of the original versions (inductive_hyp).

⟨Inductive step. 5.6.6.4⟩ ≡
[ import VersionsAndOperations
⊢ [ inductive_hyp : trans_version_reifctn(v1, v2, v3)
  ⊢ ⟨Proof of trans_version_reifctn(op1 @ v1, op2 @ v2, op3 @ v3). 5.6.6.5⟩
] ]
This code is used in section 5.6.6.2.

5.6.6.5.
The transitivity proof can be further broken down in an obvious way.
⟨Proof of trans_version_reifctn(op1 @ v1, op2 @ v2, op3 @ v3). 5.6.6.5⟩ ≡
[ reif12_hyp : version_reif(op1 @ v1, op2 @ v2, retr12)
⊢ [ reif23_hyp : version_reif(op2 @ v2, op3 @ v3, retr23)
  ⊢ ⟨Proof of version_reif(op1 @ v1, op3 @ v3, retr13). 5.6.6.6⟩
  ] \ imp_intro
] \ imp_intro
∴ trans_version_reifctn(op1 @ v1, op2 @ v2, op3 @ v3)
This code is used in section 5.6.6.4.

5.6.6.6. We now come to the core of the proof: the hard part (newop) consists of showing the operation reification condition for the operator pair op1 and op3. The idea is to use transitivity of operator reification, derived above (op_refine). The easy part of the condition (base) follows from the inductive hypothesis.

⟨Proof of version_reif(op1 @ v1, op3 @ v3, retr13). 5.6.6.6⟩ ≡
⦉ base := inductive_hyp
    \ imp_elim (def_version_reif . cons . down (reif12_hyp) . base)
    \ imp_elim (def_version_reif . cons . down (reif23_hyp) . base)
    ∴ version_reif(v1, v3, retr13)
, newop := op_refine
    / def_version_reif . cons . down (reif12_hyp) . newop
    / def_version_reif . cons . down (reif23_hyp) . newop
    ∴ op1 ⊑inv1,retr13 op3
⦊ \ def_version_reif . cons . up
∴ version_reif(op1 @ v1, op3 @ v3, retr13)
This code is used in section 5.6.6.5.
5.6.7 Proof Assembly
Finally the above results can be put together to prove that version v1 is a reification of version v3 wrt. the retrieve function retr13. Remember that, according to the definition of reification, there are three conditions to be satisfied: the version verification condition (which follows directly from the reification assumptions), the conditions for the composed retrieve function (which have been verified in Sect. 5.6.4), and the version reification condition (which follows from the derivations in Sect. 5.6.6, using the version reification conditions of the assumptions).

⟨Proof assembly. 5.6.7⟩ ≡
⦉ version := ⦉ concrete := reif12 . version . concrete
             , abstract := reif23 . version . abstract
             ⦊
, retrieval := retrieval13 ∴ valid_retrieve(inv1, inv3, retr13)
, reification := trans_version_reifctn_proof
    \ imp_elim (reif12.reification)
    \ imp_elim (reif23.reification)
    ∴ version_reif(v1, v3, retr13)
⦊
This code is used in section 5.6.3.4.

5.7 Discussion
Currently the VDM methodology is applied predominantly in a semi-formal style. In this case study we have experimented with a complete formalization of VDM developments. We will try to evaluate this experiment at the end of the book, i.e., discuss drawbacks and benefits, also in comparison with the case study on algorithm calculation. At this place we would like to briefly discuss some phenomena that are more specific to the nature of VDM. The formalization of VDM datastructures has been incomplete but typical. It was not intended to match the currently evolving VDM standardisation; however, we hope that the reader is convinced that the given formalization could be extended to cover the VDM datastructures and notations to a very wide degree. A similar remark applies to the formalization of LPF and to the formalization of the HLA development. Furthermore, we did not tackle at all topics such as operation decomposition and modules in VDM. Transitivity of VDM reification is neither deep nor trivial to prove. The proof was elaborated in a mathematical style on paper before being formalized in Deva. Proving properties like transitivity of reification helps to better understand formal methods. For example, it is instructive to see, using the cross-reference tables at the end of the book, where and how precisely the condition
that the retrieve function preserves invariants (reif12.retrieval.preservation) is used in the transitivity proof.
6 Case Study on Algorithm Calculation
This chapter presents a formalization of some selected parts of an algorithmic calculus for developing programs from specifications, known as the Bird-Meertens formalism or "Squiggol" [14], [78]. This calculus derives its power from recognizing some basic, yet powerful, laws that come with algebraic data types such as trees, lists or bags. In particular, these laws express the properties of a few higher-order operators over these types. Together with an economic and concise, APL-like, notation, these laws allow the calculation of an efficient functional program from a (possibly) inefficient functional specification without resorting to inductive proofs. The design of the Bird-Meertens formalism began with the development of a theory of lists centered around some select problem classes. However, the generality of this approach to algorithm calculation was quickly recognized and the calculus is now being actively extended by various groups: On the one hand, new theories for specific data types and specific problem classes are developed (cf., e.g., [58] and [15]). On the other hand, the calculus is redesigned in a relational setting (in contrast to the original functional one) in order to better cope with polymorphism and non-determinism (cf. [7] and [8]). A readily obtainable overview of recent developments is the paper by Malcolm [76]. He extends earlier work by Hagino [49] in categorical programming and shows how key transformation laws for (categorically defined) data types can be derived from their definition.

6.1 Overview
The formalization we want to present in this chapter covers some basic data structures, their properties, and the calculations of two algorithms. In fact, it is a formalization of what Bird describes in Secs. 1 and 3 of [14]. The basic theory for algorithm calculation consists, in this setting, of a sorted predicate calculus with an extensional notion of equality and some elementary notions of algebra. See Chap. 4 for the definition of the context CalculationalBasics. The formalization begins by introducing join lists, i.e., lists with constructors for the empty list, for the singleton list, and for the concatenation (or join) of two lists. By excluding the constructor for the empty list, one obtains non-empty join lists, which will come up in Sect. 6.7. Some basic theory about join lists is then developed; in particular, the "promotion theorem" is stated and two instances are derived. Some of the results are adapted to the case of non-empty join lists. After that, the notion of segments is introduced and a development law resembling the Horner scheme for computing polynomials is presented. The given laws are then used to develop an efficient algorithm solving a segment problem: a linear-time algorithm for computing the maximum segment sum of a list of integers. Finally, non-empty lists are used to introduce the data type of trees with finite but non-zero branching, and an important optimization of tree search algorithms is derived: it is shown how the αβ-pruning strategy for two-player game trees can be derived starting from the minimax evaluation scheme.
context CalculationalDevelopment :=
[ import BasicTheories
; import CalculationalBasics
; ⟨Join lists. 6.2⟩
; ⟨Non-empty join lists. 6.3⟩
; ⟨Some theory of join lists. 6.4⟩
; ⟨Some theory of non-empty join lists. 6.5⟩
; ⟨Segment problems. 6.6⟩
; ⟨Tree evaluation problems. 6.7⟩
]
6.2 Join Lists
Join lists, just like the data types presented in Chap. 4, will be formalized by first declaring the constructors, and then their defining axioms. In addition, we define several higher-order operators on join lists.

⟨Join lists. 6.2⟩ ≡
context JoinLists :=
[ list : [sort ⊢ sort]
; ⟨Constructors of join lists. 6.2.1⟩
; ⟨Axioms of join lists. 6.2.2⟩
; ⟨Map and reduce operators for join lists. 6.2.3⟩
; ⟨Left and right reduce operators for join lists. 6.2.4⟩
]
This code is used in section 6.1.

6.2.1. Join lists are built from three constructors: the empty list (1⧺), the singleton list constructor (⟨·⟩), and the concatenation or "join" operation (⧺). For example, the expression ⟨a⟩ ⧺ (⟨b⟩ ⧺ ⟨c⟩) is a list consisting of the three elements a, b, and c, where the type declaration of ⧺ requires a, b, and c to be of the same sort. In the sequel, such right-bracketed lists shall be abbreviated to ⟨a, b, c⟩.

⟨Constructors of join lists. 6.2.1⟩ ≡
1⧺ : [s ? sort ⊢ list(s)]
; ⟨·⟩ : [s ? sort ⊢ [s ⊢ list(s)]]
; (·) ⧺ (·) : [s ? sort ⊢ [list(s); list(s) ⊢ list(s)]]
This code is used in section 6.2.
6.2.2. The singleton list constructor is an injection and the join operation is associative with the empty list as identity, i.e., join lists have a monoid structure. In fact, they are the free monoid structure over the base sort, and this property is characterized by an induction axiom.

⟨Axioms of join lists. 6.2.2⟩ ≡
psingleton : injective(⟨·⟩)
; pjoin : monoid(⧺, 1⧺)
; join_induction :
  [ s ? sort; P ? [list(s) ⊢ prop]
  ⊢ P(1⧺); [x : s ⊢ P(⟨x⟩)]; [xs, ys ? list(s) ⊢ P(xs); P(ys) ⊢ P(xs ⧺ ys)]
  ⊢ [xs ? list(s) ⊢ P(xs)]
  ]
This code is used in section 6.2.

6.2.3. In the context of algorithm calculation, there are two very important operators on join lists: map and reduce. The map operator (*) applies a function to each element of a list and returns the list of results, i.e.,

f* ⟨a1, . . . , an⟩ = ⟨f(a1), . . . , f(an)⟩.

Mapping a function over the empty list returns the empty list, which makes the "section" f* a homomorphism on the monoid of join lists. The reduce operator '/', in effect, "feeds" the elements of a list to a monoid operator '⊕', i.e.,

⊕/ ⟨a1, . . . , an⟩ = a1 ⊕ · · · ⊕ an.
Reducing the empty list by a monoid operator returns the unit of the operator, which again makes the section ⊕/ a homomorphism on the monoid of join lists. These two operators are now formally specified in Deva by the following declarations.

⟨Map and reduce operators for join lists. 6.2.3⟩ ≡
(·) * (·) : [s, t ? sort ⊢ [[s ⊢ t]; list(s) ⊢ list(t)]]
; map_def :
  [ s, t ? sort; f ? [s ⊢ t]
  ⊢ ⦉ empty := f * 1⧺ = 1⧺
    , singleton := [x ? s ⊢ f * ⟨x⟩ = ⟨f(x)⟩]
    , join := [xs, ys ? list(s) ⊢ f * (xs ⧺ ys) = (f * xs) ⧺ (f * ys)]
    ⦊ ]
; (·) / (·) : [s ? sort ⊢ [[s; s ⊢ s]; list(s) ⊢ s]]
; reduce_def :
  [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]; 1⊕ ? s
  ⊢ monoid(⊕, 1⊕)
  ⊢ ⦉ empty := (⊕) / 1⧺ = 1⊕
    , singleton := [x ? s ⊢ (⊕) / ⟨x⟩ = x]
    , join := [xs, ys ? list(s) ⊢ (⊕) / (xs ⧺ ys) = ((⊕) / xs) ⊕ ((⊕) / ys)]
    ⦊ ]
This code is used in section 6.2.
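For readers who prefer an executable model, the following Python sketch (all names are our own; Deva contains no such encoding) mirrors the three constructors of join lists and shows that map and reduce are determined by their behaviour on those constructors, exactly as in map_def and reduce_def:

```python
# A throwaway Python model of join lists and of the map (*) and reduce (/)
# operators.  Empty / Single / Join correspond to 1++, <.> and ++.
from dataclasses import dataclass

class JoinList: pass

@dataclass
class Empty(JoinList): pass            # the empty list 1++

@dataclass
class Single(JoinList):                # the singleton <x>
    x: object

@dataclass
class Join(JoinList):                  # the join xs ++ ys
    left: JoinList
    right: JoinList

def jmap(f, xs):                       # f * xs, defined by cases on constructors
    if isinstance(xs, Empty):  return Empty()
    if isinstance(xs, Single): return Single(f(xs.x))
    return Join(jmap(f, xs.left), jmap(f, xs.right))

def jreduce(op, unit, xs):             # (op)/ xs, for a monoid (op, unit)
    if isinstance(xs, Empty):  return unit
    if isinstance(xs, Single): return xs.x
    return op(jreduce(op, unit, xs.left), jreduce(op, unit, xs.right))

# <1, 2, 3> built with right bracketing, as in the text
xs = Join(Single(1), Join(Single(2), Single(3)))
assert jreduce(lambda a, b: a + b, 0, jmap(lambda x: x * 10, xs)) == 60
```

Because the representation is a free term algebra rather than a quotient, two structurally different trees can denote the same join list; the monoid axioms (pjoin) are what identify them.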
6.2.4. There are two more related reduce operators for lists: left-reduce '→/' and right-reduce '←/', which feed the elements of a list to any binary operator in a fixed order, starting with some seed, i.e.,

⊕→/a ⟨a1, . . . , an⟩ = ((a ⊕ a1) ⊕ a2) · · · ⊕ an,
⊕←/a ⟨a1, . . . , an⟩ = a1 ⊕ (· · · (an−1 ⊕ (an ⊕ a))).

For an illustration of such a directed reduction, consider the following equation, which reminds one of Horner's rule for computing polynomials:

(a1 × a2 × · · · × an) + (a2 × a3 × · · · × an) + · · · + an + 1
= { multiplication distributes over addition }
(((a1 + 1) × a2 + 1) × · · ·) × an + 1

The latter expression can be expressed in terms of a left-reduction:

(((a1 + 1) × a2 + 1) × · · ·) × an + 1
= { define a ⊕ b ≙ (a × b) + 1 }
((1 ⊕ a1) ⊕ a2) ⊕ · · · ⊕ an
= { definition of →/ }
⊕→/1 ⟨a1, . . . , an⟩
The defining equations for left- and right-reduction complete the formalization of join lists.

⟨Left and right reduce operators for join lists. 6.2.4⟩ ≡
(·)→/(·)(·) : [s, t ? sort ⊢ [[t; s ⊢ t]; t; list(s) ⊢ t]]
; (·)←/(·)(·) : [s, t ? sort ⊢ [[s; t ⊢ t]; t; list(s) ⊢ t]]
; lreduce_def :
  [ s, t ? sort; (·)⊕(·) ? [t; s ⊢ t]; x ? t
  ⊢ ⦉ base := ⊕→/x 1⧺ = x
    , rec := [y ? s; ys ? list(s) ⊢ ⊕→/x (ys ⧺ ⟨y⟩) = (⊕→/x ys) ⊕ y]
    ⦊ ]
; rreduce_def :
  [ s, t ? sort; (·)⊕(·) ? [s; t ⊢ t]; x ? t
  ⊢ ⦉ base := ⊕←/x 1⧺ = x
    , rec := [y ? s; ys ? list(s) ⊢ ⊕←/x (⟨y⟩ ⧺ ys) = y ⊕ (⊕←/x ys)]
    ⦊ ]
This code is used in section 6.2.
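Left- and right-reduce correspond to the familiar fold functions; a minimal Python rendering (foldl and foldr are our names for ⊕→/x and ⊕←/x) is:

```python
# Left- and right-reduce over ordinary Python lists.
from functools import reduce

def foldl(op, x, xs):          # ((x op a1) op a2) ... op an
    return reduce(op, xs, x)

def foldr(op, x, xs):          # a1 op (... (a(n-1) op (an op x)))
    acc = x
    for a in reversed(xs):
        acc = op(a, acc)
    return acc

# A non-associative operator makes the bracketing visible:
assert foldl(lambda a, b: a - b, 100, [1, 2, 3]) == 94   # ((100-1)-2)-3
assert foldr(lambda a, b: a - b, 0, [1, 2, 3]) == 2      # 1-(2-(3-0))
```

For a monoid operator both directions (and the undirected reduce) agree, which is what the specialization theorems of Sect. 6.4.2 exploit.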
6.3 Non-empty Join Lists

Non-empty join lists are obtained from join lists by removing the constructor of the empty list and suitably adapting all further declarations. This adaptation is straightforward and the result is listed below. The reader may compare with the corresponding declarations for (ordinary) join lists.

⟨Non-empty join lists. 6.3⟩ ≡
context NonEmptyJoinLists :=
[ ne_list : [sort ⊢ sort]
; ⟨Constructors of non-empty join lists. 6.3.1⟩
; ⟨Axioms of non-empty join lists. 6.3.2⟩
; ⟨Map operator for non-empty join lists. 6.3.3⟩
; ⟨Reduce operator for non-empty join lists. 6.3.4⟩
; ⟨Left and right reduce operators for non-empty join lists. 6.3.5⟩
]
This code is used in section 6.1.
6.3.1.

⟨Constructors of non-empty join lists. 6.3.1⟩ ≡
⟨·⟩ : [s ? sort ⊢ [s ⊢ ne_list(s)]]
; (·) ⧺ (·) : [s ? sort ⊢ [ne_list(s); ne_list(s) ⊢ ne_list(s)]]
This code is used in section 6.3.
6.3.2.

⟨Axioms of non-empty join lists. 6.3.2⟩ ≡
psingleton : injective(⟨·⟩)
; pjoin : associative(⧺)
; join_induction :
  [ s ? sort; P ? [ne_list(s) ⊢ prop]
  ⊢ [x : s ⊢ P(⟨x⟩)]; [xs, ys ? ne_list(s) ⊢ P(xs); P(ys) ⊢ P(xs ⧺ ys)]
  ⊢ [xs ? ne_list(s) ⊢ P(xs)]
  ]
This code is used in section 6.3.

6.3.3.
⟨Map operator for non-empty join lists. 6.3.3⟩ ≡
(·) * (·) : [s, t ? sort ⊢ [[s ⊢ t]; ne_list(s) ⊢ ne_list(t)]]
; map_def :
  [ s, t ? sort; f ? [s ⊢ t]
  ⊢ ⦉ singleton := [x ? s ⊢ f * ⟨x⟩ = ⟨f(x)⟩]
    , join := [xs, ys ? ne_list(s) ⊢ f * (xs ⧺ ys) = (f * xs) ⧺ (f * ys)]
    ⦊ ]
This code is used in section 6.3.

6.3.4.
⟨Reduce operator for non-empty join lists. 6.3.4⟩ ≡
(·) / (·) : [s ? sort ⊢ [[s; s ⊢ s]; ne_list(s) ⊢ s]]
; reduce_def :
  [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]
  ⊢ associative(⊕)
  ⊢ ⦉ singleton := [x ? s ⊢ (⊕) / ⟨x⟩ = x]
    , join := [xs, ys ? ne_list(s) ⊢ (⊕) / (xs ⧺ ys) = ((⊕) / xs) ⊕ ((⊕) / ys)]
    ⦊ ]
This code is used in section 6.3.

6.3.5.
⟨Left and right reduce operators for non-empty join lists. 6.3.5⟩ ≡
(·)→/(·)(·) : [s, t ? sort ⊢ [[t; s ⊢ t]; t; ne_list(s) ⊢ t]]
; (·)←/(·)(·) : [s, t ? sort ⊢ [[s; t ⊢ t]; t; ne_list(s) ⊢ t]]
; lreduce_def :
  [ s, t ? sort; (·)⊕(·) ? [t; s ⊢ t]; x ? t; y ? s
  ⊢ ⦉ base := ⊕→/x ⟨y⟩ = x ⊕ y
    , rec := [ys ? ne_list(s) ⊢ ⊕→/x (ys ⧺ ⟨y⟩) = (⊕→/x ys) ⊕ y]
    ⦊ ]
; rreduce_def :
  [ s, t ? sort; (·)⊕(·) ? [s; t ⊢ t]; x ? t; y ? s
  ⊢ ⦉ base := ⊕←/x ⟨y⟩ = y ⊕ x
    , rec := [ys ? ne_list(s) ⊢ ⊕←/x (⟨y⟩ ⧺ ys) = y ⊕ (⊕←/x ys)]
    ⦊ ]
] This code is used in section 6.3.

We have chosen to introduce non-empty join lists separately from join lists. Alternatively, it would have been possible to first introduce non-empty join lists, and then obtain (arbitrary) join lists by adding the empty list and extending the defining laws of the operators. While technically perfectly feasible, this approach leads to a somewhat implicit formalization of join lists, emphasizing very much the differences to non-empty join lists. Since join lists are one of the "classical" datastructures of algorithm calculation, we have chosen to formalize them explicitly in the first place. Note that in Deva it is not possible to obtain non-empty join lists by "hiding" the empty list in join lists. The problem is that many laws of join lists involve the empty list, and it is not clear how these laws should be adapted in general. Note further that we used identical names for the operators and laws of both kinds of lists. Thus name clashes would arise when importing in parallel the contexts formalizing the two types of lists. These name clashes would have to be resolved by renaming, as done for example in 4.4.1. However, in this chapter the two types of lists will not be used at the same time.

6.4 Some Theory of Join Lists

⟨Some theory of join lists. 6.4⟩ ≡
context TheoryOfJoinLists :=
[ import JoinLists
; ⟨Map distribution. 6.4.1⟩
; ⟨Catamorphisms. 6.4.2⟩
; ⟨Promotion theorems. 6.4.3⟩
]
This code is used in section 6.1.
6.4.1. One of the properties of the map operator which can be derived from the laws given in the previous chapter is that it distributes over composition, i.e.,

(f*) ∘ (g*) = (f ∘ g)*

for functions f and g of proper type. An inductive proof of this property is left as an easy exercise for the reader. Since its formalization would somewhat distract from the line of this presentation, it is omitted. For examples of formalized proofs by join list induction in Deva see [104].

⟨Map distribution. 6.4.1⟩ ≡
map_distribution : [s, t, u ? sort; f ? [t ⊢ u]; g ? [s ⊢ t] ⊢ (f*) ∘ (g*) = (f ∘ g)*]
This code is used in section 6.4.
6.4.2. A homomorphism whose domain is the monoid of join lists is called a (join-list) catamorphism. A central theorem of the theory of lists states that any catamorphism can be uniquely factorized into a reduce and a map, i.e., for any catamorphism h there exist a monoid operator ⊕ with identity 1⊕ and a function f such that

h = (⊕/) ∘ (f*),

i.e.,

h(1⧺) = 1⊕,
h(⟨a⟩) = f(a),
h(x ⧺ y) = h(x) ⊕ h(y).

Because this factorization is unique, we can introduce a special notation to denote catamorphisms: ([⊕, f]) ≙ (⊕/) ∘ (f*). A further consequence is that any catamorphism can be written as a left- or a right-reduction:

([⊕, f]) = ⊗→/1⊕, where a ⊗ b ≙ a ⊕ f(b),
([⊕, f]) = ⊗←/1⊕, where a ⊗ b ≙ f(a) ⊕ b.
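The factorization and its left-reduce specialization are easy to check on ordinary Python lists. The sketch below uses our own names (cata, cata_as_foldl) and the length function as a sample catamorphism; it is an illustration of the theorem, not part of the Deva development:

```python
# A catamorphism as reduce-after-map, and as a single left fold.
from functools import reduce

def cata(op, unit, f, xs):             # ([op, f]) = (op/) . (f*)
    return reduce(op, (f(x) for x in xs), unit)

def cata_as_foldl(op, unit, f, xs):    # specialization: a (x) b = a op f(b)
    return reduce(lambda a, b: op(a, f(b)), xs, unit)

# h = length is a catamorphism: op = +, unit = 0, f = const 1
xs = [7, 8, 9]
assert cata(lambda a, b: a + b, 0, lambda _: 1, xs) == len(xs)
assert cata_as_foldl(lambda a, b: a + b, 0, lambda _: 1, xs) == len(xs)
```

The left-fold form traverses the list once without building the intermediate mapped list, which is exactly why the specialization theorem matters for deriving efficient programs.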
Proofs of these specialization properties of catamorphisms are again left to the reader as an exercise; we just state the corresponding declarations:

⟨Catamorphisms. 6.4.2⟩ ≡
([(·), (·)]) := [s, t ? sort ⊢ [(·)⊕(·) : [t; t ⊢ t]; f : [s ⊢ t] ⊢ (⊕/) ∘ (f*)]]
; specialization :
  [ s, t ? sort; (·)⊕(·) ? [t; t ⊢ t]; 1⊕ ? t; f ? [s ⊢ t]
  ⊢ monoid(⊕, 1⊕)
  ⊢ ⦉ left := [ (·)⊗(·) := [a : t; b : s ⊢ a ⊕ f(b)] ⊢ ([⊕, f]) = ⊗→/1⊕ ]
    , right := [ (·)⊗(·) := [a : s; b : t ⊢ f(a) ⊕ b] ⊢ ([⊕, f]) = ⊗←/1⊕ ]
    ⦊ ]
This code is used in section 6.4.

6.4.3. A central notion of algorithm calculation is promotability: A function f is ⊕ → ⊗-promotable, where both binary operators are assumed to be monoid operators, iff

f(1⊕) = 1⊗,
f(a ⊕ b) = f(a) ⊗ f(b).

This says nothing but that f is a monoid homomorphism; the purpose of the notation is to emphasize how f behaves with respect to the monoid operators. Clearly, any catamorphism ([⊕, f]) is (⧺ → ⊕)-promotable. One is interested in the promotability properties of a function because they imply powerful transformation laws. This is stated by the promotion theorem: A function f is ⊕ → ⊗-promotable iff

f ∘ (⊕/) = (⊗/) ∘ (f*).

The equivalence of these two characterizations of promotability is formalized in Deva as follows.

⟨Promotion theorems. 6.4.3⟩ ≡
promotion :
  [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]; 1⊕ ? s; (·)⊗(·) ? [t; t ⊢ t]; 1⊗ ? t; f : [s ⊢ t]
  ⊢ monoid(⊕, 1⊕); monoid(⊗, 1⊗)
  ⊢ ⦉ f(1⊕) = 1⊗, [a, b ? s ⊢ f(a ⊕ b) = f(a) ⊗ f(b)] ⦊
    ⟺ f ∘ ((⊕)/) = ((⊗)/) ∘ (f*)
  ]
See also sections 6.4.4 and 6.4.5. This code is used in section 6.4.

6.4.4. We can use the promotion theorem to prove two theorems of the theory of lists: map promotion
(f*) ∘ ((⧺)/) = ((⧺)/) ∘ ((f*)*),

i.e., f* is ⧺ → ⧺-promotable, and reduce promotion

((⊕)/) ∘ ((⧺)/) = ((⊕)/) ∘ (((⊕)/)*),

i.e., ⊕/ is ⧺ → ⊕-promotable, under the assumption that ⊕ is a monoid operator. Intuitively, these two laws correspond to the equations

(f*)(x1 ⧺ · · · ⧺ xn) = (f* x1) ⧺ · · · ⧺ (f* xn),
(⊕/)(x1 ⧺ · · · ⧺ xn) = (⊕/ x1) ⊕ · · · ⊕ (⊕/ xn).

Both theorems are obtained as simple instantiations of the promotion theorem.

⟨Promotion theorems. 6.4.3⟩ +≡
map_promotion :=
  [ s, t ? sort; f ? [s ⊢ t]
  ⊢ promotion (⦉ pjoin, pjoin ⦊, (f*)) . down (⦉ map_def.empty, map_def.join ⦊)
  ]
  ∴ [ s, t ? sort; f ? [s ⊢ t] ⊢ (f*) ∘ ((⧺)/) = ((⧺)/) ∘ ((f*)*) ]
; reduce_promotion :=
  [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]; 1⊕ ? s
  ⊢ [ plus_props : monoid(⊕, 1⊕)
    ⊢ promotion (⦉ pjoin, plus_props ⦊, ((⊕)/))
        . down (⦉ reduce_def(plus_props).empty, reduce_def(plus_props).join ⦊)
    ] ]
  ∴ [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]; 1⊕ ? s ⊢ monoid(⊕, 1⊕) ⊢ ((⊕)/) ∘ ((⧺)/) = ((⊕)/) ∘ (((⊕)/)*) ]
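Both promotion laws are easy to test on a small example. In the following Python sketch (our own encoding) ordinary nested lists stand in for join lists and concat plays the role of ⧺/:

```python
# Checking map promotion and reduce promotion on small inputs.
from functools import reduce
from itertools import chain

def concat(xss):                       # (++/) on lists of lists
    return list(chain.from_iterable(xss))

xss = [[1, 2], [], [3]]
f = lambda x: x + 1
plus = lambda a, b: a + b

# map promotion: f* . (++/)  =  (++/) . (f*)*
assert [f(x) for x in concat(xss)] == concat([[f(x) for x in xs] for xs in xss])

# reduce promotion (for the monoid (+, 0)): +/ . (++/)  =  +/ . (+/)*
assert reduce(plus, concat(xss), 0) == \
       reduce(plus, [reduce(plus, xs, 0) for xs in xss], 0)
```

The second assertion is just "summing a concatenation equals summing the partial sums", which is the intuitive reading given above.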
6.4.5. We combine these two laws in the form of a simple promotion tactic which, when used, tries to unfold with one of them. It will be used in Sect. 6.6. Note that this tactic is conditional, i.e., it depends on an additional argument, namely plus_props, which is required by the reduce promotion law.

⟨Promotion theorems. 6.4.3⟩ +≡
; promotion_tac :=
  [ s ? sort; (·)⊕(·) ? [s; s ⊢ s]; 1⊕ ? s
  ⊢ [ plus_props : monoid(⊕, 1⊕)
    ⊢ ⦉ unfold(map_promotion), unfold(reduce_promotion(plus_props)) ⦊
    ] ]
6.5 Some Theory of Non-Empty Join Lists

A theory of non-empty join lists, sufficient for the purposes of this case study, is obtained by selection and adaptation of the material of the previous section. Essentially everything involving empty lists and identity elements of binary operators is removed. The specialization rule is stated in a stronger form, i.e., with a weaker condition: the initialization element e of a left-reduction ⊕→/e has to be an identity on the range of f only. This stronger form is useful for the example in Sect. 6.7.

⟨Some theory of non-empty join lists. 6.5⟩ ≡
context TheoryOfNonEmptyJoinLists :=
[ import NonEmptyJoinLists
; ⟨Map distribution (for non-empty join lists). 6.5.1⟩
; ⟨Catamorphisms (for non-empty join lists). 6.5.2⟩
; ⟨Promotion theorem (for non-empty join lists). 6.5.3⟩
]
This code is used in section 6.1.
6.5.1.

⟨Map distribution (for non-empty join lists). 6.5.1⟩ ≡
map_distribution : [s, t, u ? sort; f ? [t ⊢ u]; g ? [s ⊢ t] ⊢ (f*) ∘ (g*) = (f ∘ g)*]
This code is used in section 6.5.

6.5.2.

⟨Catamorphisms (for non-empty join lists). 6.5.2⟩ ≡
([(·), (·)]) := [s, t ? sort ⊢ [(·)⊕(·) : [t; t ⊢ t]; f : [s ⊢ t] ⊢ (⊕/) ∘ (f*)]]
; specialization :
  [ s, t ? sort; (·)⊕(·) ? [t; t ⊢ t]; e ? t; f ? [s ⊢ t]
  ⊢ associative(⊕); [b ? s ⊢ e ⊕ f(b) = f(b)]
  ⊢ ⦉ left := [ (·)⊗(·) := [a : t; b : s ⊢ a ⊕ f(b)] ⊢ ([⊕, f]) = ⊗→/e ]
    , right := [ (·)⊗(·) := [a : s; b : t ⊢ f(a) ⊕ b] ⊢ ([⊕, f]) = ⊗←/e ]
    ⦊ ]
This code is used in section 6.5.

6.5.3.

⟨Promotion theorem (for non-empty join lists). 6.5.3⟩ ≡
promotion :
  [ s, t ? sort; (·)⊕(·) ? [s; s ⊢ s]; (·)⊗(·) ? [t; t ⊢ t]; f ? [s ⊢ t]
  ⊢ associative(⊕); associative(⊗); [a, b ? s ⊢ f(a ⊕ b) = f(a) ⊗ f(b)]
  ⊢ f ∘ ((⊕)/) = ((⊗)/) ∘ (f*)
  ]
This code is used in section 6.5.

6.6 Segment Problems
The first development presented in this chapter is taken from a class of problems involving segments (i.e., contiguous sublists). Before presenting the problem and its solution, some theory about segments is necessary.

⟨Segment problems. 6.6⟩ ≡
context SegmentProblems :=
[ import TheoryOfJoinLists
; ⟨Initial and final segments. 6.6.1⟩
; ⟨Horner's rule. 6.6.2⟩
; ⟨Left and right accumulations. 6.6.3⟩
; ⟨The "maximum segment sum" problem. 6.6.4⟩
]
This code is used in section 6.1.

6.6.1.
Initial and final segments of a list are recursively defined by

inits(⟨⟩) = ⟨⟨⟩⟩
inits(⟨a⟩ ++ x) = ⟨⟨⟩⟩ ++ ((⟨a⟩ ++) * inits(x))
tails(⟨⟩) = ⟨⟨⟩⟩
tails(x ++ ⟨a⟩) = ((++ ⟨a⟩) * tails(x)) ++ ⟨⟨⟩⟩

i.e.,

inits⟨a₁, …, aₙ⟩ = ⟨⟨⟩, ⟨a₁⟩, ⟨a₁, a₂⟩, …, ⟨a₁, …, aₙ⟩⟩
tails⟨a₁, …, aₙ⟩ = ⟨⟨a₁, …, aₙ⟩, ⟨a₂, …, aₙ⟩, …, ⟨aₙ⟩, ⟨⟩⟩
The segments of a list are obtained by first taking the list of initial segments, then forming the tails of each initial segment, and finally joining the resulting three-level list into a two-level list:

segs = (++/) ∘ (tails*) ∘ inits
Note that, since tails yields the empty list as a component for any input, segs will, in general, contain multiple occurrences of the empty list.
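These definitions translate directly into executable form. The following Python sketch (our own rendering, using list slices in place of join lists) also exhibits the repeated empty segments just mentioned:

```python
# Executable counterparts of inits, tails, and segs.
def inits(xs):
    return [xs[:i] for i in range(len(xs) + 1)]

def tails(xs):
    return [xs[i:] for i in range(len(xs) + 1)]

def segs(xs):
    # (++/) o (tails*) o inits : join the tails of every initial segment.
    return [t for i in inits(xs) for t in tails(i)]

assert inits([1, 2]) == [[], [1], [1, 2]]
assert tails([1, 2]) == [[1, 2], [2], []]
# tails always ends in the empty list, so segs contains [] once per init:
assert segs([1, 2]).count([]) == 3
```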
⟨Initial and final segments. 6.6.1⟩ ≡
  inits, tails : [ s ? sort ; list(s) ⊢ list(list(s)) ]
; def_inits :
  ⟨ empty := inits(⟨⟩) = ⟨⟨⟩⟩
  , cons := [ s ? sort ; x ? s ; xs ? list(s) ⊢ inits(⟨x⟩ ++ xs) = ⟨⟨⟩⟩ ++ ((⟨x⟩ ++) * inits(xs)) ]
  ⟩
; def_tails :
  ⟨ empty := tails(⟨⟩) = ⟨⟨⟩⟩
  , cons := [ s ? sort ; x ? s ; xs ? list(s) ⊢ tails(xs ++ ⟨x⟩) = ((++ ⟨x⟩) * tails(xs)) ++ ⟨⟨⟩⟩ ]
  ⟩
; segs := [ s ? sort ⊢ ((++)/) ∘ (tails*) ∘ inits ]
This code is used in section 6.6.
6.6.2. rule
Using tails, we are now able to give a concise formulation of Homer's (al x a2 x .-. x a,~) + (a2 x a3 x -.- x a , ~ ) + . . . + a,~+ 1 (((al+l)
x a2+l) x-..)xan+l
Recall (p. 184) that the latter expression can be denoted as |
( a l , . . . ,an},
where a | b ~ ( a x b) + 1. For the first expression, we calculate: (al • 2 1 5 =
an)+(a2 x-..•
an)+.-.+an+l
{ definition of + / } + ~ ( a t x a2 x ... x a~,a2 x .-- x a n , . . . , a n , l )
=
{ definition of x / } + I(xl{al,...
=
, an}, x l ( a 2 , . . .
, a n ) , . . . , x l ( a n ), x/l~_)
{ definition of ( x / ) * } ( ( + / ) o ( x / ) * ) { ( a l , . . . , an},(a2,... , an},... ,(an), 1_~}
=
{ definition of tails } ( ( + / ) o ( x / ) . o t a i l s ) ( a l , . . . , a~}
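The calculation can be replayed on concrete numbers. A Python check of this instance of Horner's rule (function names are ours; `math.prod` of the empty tail is 1, the identity of ×):

```python
from functools import reduce
from math import prod

def tails(xs):
    return [xs[i:] for i in range(len(xs) + 1)]

# (+/) o ((x/)*) o tails : sum of the products of all tails.
def horner_lhs(xs):
    return sum(prod(t) for t in tails(xs))

# The single left-reduction with a (.) b = a*b + 1, seeded with 1.
def horner_rhs(xs):
    return reduce(lambda a, b: a * b + 1, xs, 1)

xs = [2, 3, 4]
assert horner_lhs(xs) == horner_rhs(xs) == 41   # 24 + 12 + 4 + 1
```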
By abstracting from the concrete operations + and ×, Horner's rule can now be formulated completely inside the calculus: Assume that ⊕ and ⊗ are monoid operators and that the following distributivity property holds: a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c). Then, Horner's rule states

(⊕/) ∘ ((⊗/)*) ∘ tails = ⊙ →/ 1⊗

where a ⊙ b ≙ (a ⊗ b) ⊕ 1⊗. The Deva formalization is a direct transcription of the above formulation.
⟨Horner's rule. 6.6.2⟩ ≡
horner_rule :
  [ s ? sort ; (·)⊕(·), (·)⊗(·) ? [s; s ⊢ s] ; 1⊕, 1⊗ ? s
  ; (·)⊙(·) := [ a, b : s ⊢ ((a ⊗ b) ⊕ 1⊗) ]
  ⊢ distrib_monoids(⊕, 1⊕, ⊗, 1⊗)
  ⊢ ((⊕)/) ∘ (((⊗)/)*) ∘ tails = (⊙) →/ 1⊗
  ]
This code is used in section 6.6.
6.6.3. Finally, two accumulation operators, whose purpose is to construct lists of successive intermediate results of directed reductions, can be formulated in terms of the directed reduction operators and initial segments. The operators of left-accumulation ⊕ ⇸ a and right-accumulation ⊕ ⇷ a are characterized by the equations

⊕ ⇸ a = ((⊕ →/ a)*) ∘ inits    and    ⊕ ⇷ a = ((⊕ ←\ a)*) ∘ tails

or, more descriptively, by

⊕ ⇸ a ⟨a₁, …, aₙ⟩ = ⟨a, a ⊕ a₁, …, ((a ⊕ a₁) ⊕ a₂) ⊕ ⋯ ⊕ aₙ⟩
⊕ ⇷ a ⟨a₁, …, aₙ⟩ = ⟨a₁ ⊕ (⋯ ⊕ (aₙ ⊕ a)), …, aₙ₋₁ ⊕ (aₙ ⊕ a), aₙ ⊕ a, a⟩

Note that, assuming ⊕ can be computed in constant time, both expressions on the right-hand side can be computed in linear time depending on the length of the input list. The formulation in Deva is again a straightforward transcription:

⟨Left and right accumulations. 6.6.3⟩ ≡
  (·) ⇸ (·)(·) := [ s, t ? sort ⊢ [ ⊕ : [t; s ⊢ t] ; a : t ⊢ ((⊕ →/ a)*) ∘ inits ] ]
; (·) ⇷ (·)(·) := [ s, t ? sort ⊢ [ ⊕ : [s; t ⊢ t] ; a : t ⊢ ((⊕ ←\ a)*) ∘ tails ] ]
This code is used in section 6.6.
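The quadratic characterization ((⊕ →/ a)*) ∘ inits and the linear left-to-right computation can be compared directly in Python; `itertools.accumulate` plays the role of the linear left-accumulation (the naming is ours):

```python
from itertools import accumulate
from functools import reduce

def inits(xs):
    return [xs[:i] for i in range(len(xs) + 1)]

# Naive quadratic definition: map a seeded left-reduction over inits.
def left_accumulate(op, a, xs):
    return [reduce(op, i, a) for i in inits(xs)]

xs = [1, 2, 3]
quadratic = left_accumulate(lambda x, y: x + y, 10, xs)
linear = list(accumulate(xs, lambda x, y: x + y, initial=10))
assert quadratic == linear == [10, 11, 13, 16]
```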
6.6.4. The theory presented in the last section will now be applied to develop an efficient algorithm for the maximum segment sum problem. The problem is, for a given list of integers, to compute the maximum of the sums of its segments. For example, the maximum segment sum (or mss) of the list ⟨−1, 3, −1, 2, −4, 3⟩ is 4, the sum of the segment ⟨3, −1, 2⟩. A first systematic approach to the solution is to proceed in three steps: Step A: compute all segments, Step B: compute all their sums, and finally Step C: compute the maximum of all these sums. In the algorithm calculation style this specification-like algorithm is written as follows ('↑' denotes the maximum operator):

mss = (↑/) ∘ ((+/)*) ∘ segs
       C      B         A
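The cubic specification is directly executable. A Python sketch (slice-based segments; our own rendering, including the empty segment with sum 0):

```python
# All contiguous segments of xs, including empty ones.
def segs(xs):
    return [xs[i:j] for i in range(len(xs) + 1)
                    for j in range(i, len(xs) + 1)]

# Step A: segments; Step B: their sums; Step C: the maximum.
def mss_spec(xs):
    return max(sum(seg) for seg in segs(xs))

assert mss_spec([-1, 3, -1, 2, -4, 3]) == 4   # the segment (3, -1, 2)
```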
Note that +/ computes the sum of a list, and thus (+/)* computes the list of sums of a list of lists. Intuitively, this algorithm is cubic, since it maps a linear function (+/) over a list consisting of a quadratic number of lists. This first, rather natural but computationally expensive solution is therefore inappropriate to be considered as an implementation. A second, more constrained attempt is to proceed as follows: Step A': go through the elements of the list from left to right, Step B': for every element, compute the maximum segment sum of all segments ending in that element, and finally Step C': compute the maximum of all these sums. This algorithm is written as follows:

mss = (↑/) ∘ ((↑/) ∘ (+/)* ∘ tails)* ∘ inits
       C'     B'                       A'
The nice property of this expression is that Horner's rule can be applied to B'. The following proof in the calculus first shows the equivalence of the two solutions using tactics for promotion and distribution, and then transforms the second solution via Horner's rule to yield a linear algorithm. The laws necessary for the development are associativity (of both operators '↑' and '+'), existence of identities (⊥ for ↑, 0 for +), and the distributivity property (a ↑ b) + c = (a + c) ↑ (b + c). Consequently, we have a distributive monoid pair.

⟨The "maximum segment sum" problem. 6.6.4⟩ ≡
  s : sort
; (·)↑(·), (·)+(·) : [ s; s ⊢ s ]
; max_plus_props : distrib_monoids((↑), ⊥, (+), 0)
; max_props := max_plus_props.mon_plus
; mss := ((↑)/) ∘ (((+)/)*) ∘ segs ∴ [ list(s) ⊢ s ]
; development := ⟨Implicit development of mss (not checked). 6.6.5⟩
This code is used in section 6.6.
6.6.5. The development itself is presented in forward direction by equational reasoning, similar to the original presentation in the literature. In fact, Bird's original development, which appeared in [14], is reproduced in Fig. 15. Comparing that rigorous but still informal development with the formal, machine-checked Deva development reveals that there is actually not that much difference between the two. The "hints" of Bird, which refer to the rule(s) that justify each transformation step, translate into the "application" of those rules. The progress of the development is indicated by the judgements. It is this close (syntactic) similarity between the calculations in the Squiggol calculus and their counterpart in Deva which gives us reason to claim that we have indeed faithfully modeled the Squiggol calculus.

6.6.6. The reader will have noticed that this development is quite implicit, i.e., a lot of details are left unstated. The development becomes a bit more explicit if the two loops are unfolded into two single steps each.

⟨First explicitation of the development of mss (not checked). 6.6.6⟩ ≡
frefl ∴ mss = ((↑)/) ∘ (((+)/)*) ∘ ((++)/) ∘ (tails*) ∘ inits
\ funfold(map_promotion) ∴ mss = ((↑)/) ∘ ((++)/) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(reduce_promotion(max_props)) ∴ mss = ((↑)/) ∘ (((↑)/)*) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(map_distribution) ∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*))*) ∘ (tails*) ∘ inits
\ funfold(map_distribution) ∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*) ∘ tails)*) ∘ inits
\ funfold(horner_rule(max_plus_props)) ∴ mss = ((↑)/) ∘ ([a, b : s ⊢ (a + b) ↑ 0] ⇸ 0)
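The end result of the development is the familiar linear-time algorithm (Kadane's scheme). As a sanity check, here is a Python rendering of the derived expression (↑/) ∘ (⊙ ⇸ 0) with a ⊙ b = (a + b) ↑ 0, compared against a brute-force specification (all names are our own):

```python
from itertools import accumulate

# Linear algorithm: left-accumulation with a (.) b = max(a + b, 0),
# seeded with 0, followed by one max-reduction.
def mss(xs):
    return max(accumulate(xs, lambda a, b: max(a + b, 0), initial=0))

# Brute-force specification over all (possibly empty) segments.
def mss_brute(xs):
    return max(sum(xs[i:j]) for i in range(len(xs) + 1)
                            for j in range(i, len(xs) + 1))

data = [-1, 3, -1, 2, -4, 3]
assert mss(data) == mss_brute(data) == 4
```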
6.6.7. In this presentation, enough details were available to trace the transformations step by step. However, the unfold rule is used without explicitly stating the actual variant used in each application; in this example the variant funfold is always used (cf. Chap. 4.4.1). Similarly, no explicit substitutions are given for the implicit parameters of unfold. For example, its implicitly defined functional parameter F describes the position at which the law of the unfold
mss
= { definition }
↑/ ∘ (+/)* ∘ segs
= { definition of segs }
↑/ ∘ (+/)* ∘ ++/ ∘ tails* ∘ inits
= { map and reduce promotion }
↑/ ∘ (↑/ ∘ (+/)* ∘ tails)* ∘ inits
= { Horner's rule with a ⊙ b = (a + b) ↑ 0 }
↑/ ∘ (⊙ →/ 0)* ∘ inits
= { accumulation lemma }
↑/ ∘ ⊙ ⇸ 0

⟨Implicit development of mss (not checked). 6.6.5⟩ ≡
frefl ∴ mss = ((↑)/) ∘ (((+)/)*) ∘ segs
\ definition of segs ∴ mss = ((↑)/) ∘ (((+)/)*) ∘ ((++)/) ∘ (tails*) ∘ inits
\ loop promotion_tac(max_props) ∴ mss = ((↑)/) ∘ (((↑)/)*) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ loop unfold(map_distribution) ∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*) ∘ tails)*) ∘ inits
\ unfold(horner_rule(max_plus_props)) ∴ mss = ((↑)/) ∘ (([a, b : s ⊢ (a + b) ↑ 0] →/ 0)*) ∘ inits
\ definition of left accumulation ∴ mss = ((↑)/) ∘ ([a, b : s ⊢ (a + b) ↑ 0] ⇸ 0)
This code is used in section 6.6.4.

Fig. 15. The classic development of the maximum segment sum algorithm and its Deva formalization
rule is applied. When specifying the precise variant of unfold used and adding the substitutions for the parameter F, the development looks as follows.

⟨Second explicitation of the development of mss. 6.6.7⟩ ≡
frefl ∴ mss = ((↑)/) ∘ (((+)/)*) ∘ ((++)/) ∘ (tails*) ∘ inits
\ funfold(F := [ f : [list(list(list(s))) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ (tails*) ∘ inits ], map_promotion)
∴ mss = ((↑)/) ∘ ((++)/) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(F := [ f : [list(list(s)) ⊢ s] ⊢ f ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits ], reduce_promotion(max_props))
∴ mss = ((↑)/) ∘ (((↑)/)*) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(F := [ f : [list(list(list(s))) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ (tails*) ∘ inits ], map_distribution)
∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*))*) ∘ (tails*) ∘ inits
\ funfold(F := [ f : [list(list(s)) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ inits ], map_distribution)
∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*) ∘ tails)*) ∘ inits
\ funfold(F := [ f : [list(s) ⊢ s] ⊢ ((↑)/) ∘ (f*) ∘ inits ], horner_rule(max_plus_props))
∴ mss = ((↑)/) ∘ ([a, b : s ⊢ (a + b) ↑ 0] ⇸ 0)
6.6.8. The preceding version is still far from being fully explicit; there are many implicitly defined parameters of the unfold rule and the involved laws that are not explicitly given. After inserting explicit substitutions for these parameters, the development looks as follows.

⟨Third explicitation of the development of mss. 6.6.8⟩ ≡
frefl ∴ mss = ((↑)/) ∘ (((+)/)*) ∘ ((++)/) ∘ (tails*) ∘ inits
\ funfold(f := (((+)/)*) ∘ ((++)/), g := ((++)/) ∘ ((((+)/)*)*),
    F := [ f : [list(list(list(s))) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ (tails*) ∘ inits ],
    map_promotion(f := ((+)/)))
∴ mss = ((↑)/) ∘ ((++)/) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(f := ((↑)/) ∘ ((++)/), g := ((↑)/) ∘ (((↑)/)*),
    F := [ f : [list(list(s)) ⊢ s] ⊢ f ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits ],
    reduce_promotion(⊕ := (↑), 1⊕ := ⊥, max_props))
∴ mss = ((↑)/) ∘ (((↑)/)*) ∘ ((((+)/)*)*) ∘ (tails*) ∘ inits
\ funfold(f := (((↑)/)*) ∘ ((((+)/)*)*), g := ((((↑)/) ∘ (((+)/)*))*),
    F := [ f : [list(list(list(s))) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ (tails*) ∘ inits ],
    map_distribution(f := ((↑)/), g := ((+)/)*))
∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*))*) ∘ (tails*) ∘ inits
\ funfold(f := ((((↑)/) ∘ (((+)/)*))*) ∘ (tails*), g := (((↑)/) ∘ (((+)/)*) ∘ tails)*,
    F := [ f : [list(list(s)) ⊢ list(s)] ⊢ ((↑)/) ∘ f ∘ inits ],
    map_distribution(f := ((↑)/) ∘ (((+)/)*), g := tails))
∴ mss = ((↑)/) ∘ ((((↑)/) ∘ (((+)/)*) ∘ tails)*) ∘ inits
\ funfold(f := ((↑)/) ∘ (((+)/)*) ∘ tails, g := [a, b : s ⊢ (a + b) ↑ 0] →/ 0,
    F := [ f : [list(s) ⊢ s] ⊢ ((↑)/) ∘ (f*) ∘ inits ],
    horner_rule(⊕ := (↑), ⊗ := (+), 1⊕ := ⊥, 1⊗ := 0, max_plus_props))
∴ mss = ((↑)/) ∘ ([a, b : s ⊢ (a + b) ↑ 0] ⇸ 0)

This Deva text looks quite horrible, yet it is still far from being completely explicit: there remain those implicit arguments which instantiate the constructions to the proper sorts. Their number is vast; e.g., every use of / and * has two such implicit arguments. A fully explicit version of the development is not presented; it would be an explosion wrt. the first development. These consecutive explicitations of the maximum segment sum development have been shown in order to motivate the need for suppressing annoying details of developments. The developer should be allowed to think only in terms of the implicit development or its first explicitation. The second and third explicitations, and even further ones, should be consulted only occasionally to check things such as instantiation details.

6.7 Tree Evaluation Problems
The second development presented in this chapter concerns the αβ-pruning strategy for two-player game-trees.

⟨Tree evaluation problems. 6.7⟩ ≡
context TreeEvaluationProblems := [
  import TheoryOfNonEmptyJoinLists
; ⟨Arbitrary branching trees. 6.7.1⟩
; ⟨Minimax evaluation. 6.7.2⟩
; ⟨Interval windows. 6.7.3⟩
; ⟨αβ-pruning. 6.7.13⟩
]
This code is used in section 6.1.
6.7.1. The context below defines the constructors for non-empty trees with an arbitrary but finite branching degree. Trees have values at their tips and a non-empty list of subtrees at their forks. It is possible to introduce the second-order operators on trees which correspond to those on lists (cf. [79]).

⟨Arbitrary branching trees. 6.7.1⟩ ≡
  tree : [ sort ⊢ sort ]
; tip : [ s ? sort ; s ⊢ tree(s) ]
; fork : [ s ? sort ; ne_list(tree(s)) ⊢ tree(s) ]
This code is used in section 6.7.

6.7.2. A game-tree is a tree that describes the possible moves of a two-player game up to some given depth. The minimax evaluation of a game-tree starts with a rating given for all the leaf positions of the tree. It then computes the ratings of the moves further up in the tree in a bottom-up, minimax fashion. The game-tree in Figure 16 illustrates the minimax evaluation scheme. A circular node denotes a move of the player whereas a square node denotes a move of her opponent. The tree shows two possible moves of the player, which both have two possible reactions of the opponent. Further levels are not shown. The numbers (ranging over the set of integers) denote the rating associated with each move: At a circular node they are a lower bound for the value to be reached by the player at that move, whatever the opponent will do. Conversely, at a square node they are an upper bound for the value to be reached by the opponent at that move, whatever the player will do. The numbers can be computed bottom-up by the so-called minimax evaluation: the rating of a leaf is given and left unchanged. The rating of a circular node results from taking the maximum of the ratings associated with its immediate descendant square nodes, while the rating of a square node results from taking the minimum of its immediate descendant circular nodes.
Fig. 16. Minimax evaluation
Minimax evaluation eval(t) of a tree is recursively defined below: While on tip positions the evaluation just returns the value associated with the tip, on fork positions all branches are recursively evaluated and the results of this evaluation, which correspond to the player's (or the opponent's) optimization, are negated and their maximum is computed, i.e.,

eval(tip(n)) = n
eval(fork(tl)) = ↑/((− ∘ eval) * tl)
              = ⦇(↑), (−) ∘ eval⦈(tl)    { notation for list catamorphisms }
Equivalently, the minimum of the results could be computed and then negated. Because of the double negation law −(−a) = a, the evaluation successively minimizes and maximizes on each level up. The formalization in Deva abstracts from the concrete type of integers as used in Fig. 16. Instead, an arbitrary sort s of values is considered, which we assume to be equipped with the structure of a boolean algebra.
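The recursive definition of eval is a negamax scheme and can be transcribed directly. A Python sketch over tagged tuples (the tree encoding `('tip', n)` / `('fork', subtrees)` is our own choice):

```python
# Minimax evaluation as negamax: negate child values and take the maximum.
def eval_tree(t):
    tag, payload = t
    if tag == 'tip':
        return payload
    return max(-eval_tree(c) for c in payload)

# Player to move at the root; each child is an opponent node.
game = ('fork', [
    ('fork', [('tip', 2), ('tip', 7)]),   # opponent can force 2 here
    ('fork', [('tip', 1), ('tip', 9)]),   # opponent can force 1 here
])
assert eval_tree(game) == 2   # the player guarantees the value 2
```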
⟨Minimax evaluation. 6.7.2⟩ ≡
  s : sort
; ⊥, ⊤ : s
; −(·) : [ s ⊢ s ]
; (·)↑(·), (·)↓(·) : [ s; s ⊢ s ]
; ba : boolean_algebra(⊥, ⊤, (↑), (↓), −)
; eval : [ tree(s) ⊢ s ]
; eval_def :
  ⟨ tips := eval ∘ tip = id
  , forks := eval ∘ fork = ⦇(↑), (−) ∘ eval⦈
  ⟩
This code is used in section 6.7.

6.7.3. The idea underlying αβ-pruning is to record intervals within which the minimax value of the node may fall. Once one knows for sure that the rating associated with an immediate descendant node falls outside this interval, further evaluation of the subtree (of which the descendant node is the root) may be discontinued. The context we are about to define introduces the function I_α^β which "coerces" its argument into the interval [α, β]. First, the definition of I_α^β is given; note that the theory of the partial ordering induced by the boolean algebra is imported, making available all the laws introduced in Sect. 4.4.3.

⟨Interval windows. 6.7.3⟩ ≡
  import PartialOrdering(ba)
; I_(·)^(·) := [ α, β : s ⊢ (β ↓) ∘ (α ↑) ]
; ⟨Properties of interval windows. 6.7.5⟩
This code is used in section 6.7.

6.7.4. Note that the definition of I_α^β is too loose, since it does not enforce the constraint that α ⊑ β. Alternatively, the constraint could be introduced directly as a condition of the coercion function; however, this would cause notational clutter. With the given definition, the constraint will come up anyway as a condition in a lot of the desired properties of interval windows. The first such property is that if the interval is [⊥, ⊤] then the interval window is the identity function. The derivation of this property consists of a straightforward boolean simplification, details of which are suppressed by using the tactic bool_simp repeatedly. As illustrated in the derivation of the maximum segment sum, the iteration can be unfolded into individual boolean transformation steps.

6.7.5.
⟨Properties of interval windows. 6.7.5⟩ ≡
I_bottop := [ a ? s ; LHS := I_⊥^⊤(a)
  ⊢ trefl ∴ LHS = ⊤ ↓ (⊥ ↑ a)
  \ loop bool_simp(ba) ∴ LHS = a
  ]
  \ extensionality.down ∴ I_⊥^⊤ = id
See also sections 6.7.6, 6.7.7, 6.7.8, 6.7.9, 6.7.10, and 6.7.11.
This code is used in section 6.7.3.

6.7.6. The next property states that the interval window function always delivers results smaller than its upper bound. The proof consists of a single application of the absorption law.

⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_zero := [ α, β, a ? s ; LHS := β ↑ I_α^β(a)
  ⊢ trefl ∴ LHS = β ↑ (β ↓ (α ↑ a))
  \ unfold(ba.absorp.join) ∴ LHS = β
  ]
∴ [ α, β, a ? s ⊢ β ↑ I_α^β(a) = β ]
6.7.7. An interval window I_α^β is narrowed by post-composing it with (a ↑), for some value a with α ⊑ a ⊑ β. The proof is a slightly more complex boolean transformation; two transformation steps depend upon the hypotheses α ⊑ a and a ⊑ β, one step applies the associativity law.

⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_narrow := [ α, β, a, b ? s ; hyp : ⟨α ⊑ a, a ⊑ β⟩ ; LHS := a ↑ I_α^β(b)
  ⊢ trefl ∴ LHS = a ↑ (β ↓ (α ↑ b))
  \ fold(exch_meet_join(hyp.2)) ∴ LHS = β ↓ (a ↑ (α ↑ b))
  \ fold(ba.assoc.join) ∴ LHS = β ↓ ((a ↑ α) ↑ b)
  \ fold(ba.commut.join) ∴ LHS = β ↓ ((α ↑ a) ↑ b)
  \ unfold(hyp.1) ∴ LHS = β ↓ (a ↑ b)
  ∴ LHS = I_a^β(b)
  ]
∴ [ α, β, a, b ? s ⊢ ⟨α ⊑ a, a ⊑ β⟩ ⊢ a ↑ (I_α^β(b)) = I_a^β(b) ]
6.7.8. I_narrow can be specialized to the case of narrowing an interval window with its own lower bound.

⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_one := [ α, β, a ? s ; hyp : α ⊑ β
  ⊢ I_narrow(⟨refl_smth ∴ α ⊑ α, hyp⟩) ∴ α ↑ I_α^β(a) = I_α^β(a)
  ]
∴ [ α, β, a ? s ⊢ α ⊑ β ⊢ α ↑ I_α^β(a) = I_α^β(a) ]

6.7.9. Applying an interval window to a negated value is equivalent to negating and exchanging the bounds of the interval window, applying it, and negating the result. The proof, which makes use of the tactic moving a negation inside a boolean expression, can be presented more elegantly when starting from the right-hand side of the desired equation.
⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_neg_shift := [ α, β ? s ; hyp : α ⊑ β
  ⊢ [ a ? s ; RHS := −I_{−β}^{−α}(a)
    ⊢ trefl ∴ RHS = −(−α ↓ (−β ↑ a))
    \ loop bool_neg_simp(ba) ∴ RHS = α ↑ (β ↓ −a)
    \ fold(exch_meet_join(hyp)) ∴ RHS = β ↓ (α ↑ −a)
    ∴ RHS = I_α^β(−a)
    \ sym ∴ I_α^β(−a) = RHS
    ]
  \ extensionality.down
  ]
∴ [ α, β ? s ⊢ α ⊑ β ⊢ I_α^β ∘ (−) = (−) ∘ I_{−β}^{−α} ]
6.7.10. Interval windows commute with ↑. The proof involves reflexivity, some associative-commutative manipulations, and finally a distributivity law.

⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_max_shift := [ α, β, a, b ? s ; LHS := I_α^β(a ↑ b)
  ⊢ trefl ∴ LHS = β ↓ (α ↑ (a ↑ b))
  \ fold(refl_smth) ∴ LHS = β ↓ ((α ↑ α) ↑ (a ↑ b))
  \ loop bool_ac(ba) ∴ LHS = β ↓ ((α ↑ a) ↑ (α ↑ b))
  \ unfold(ba.distrib.meet) ∴ LHS = (β ↓ (α ↑ a)) ↑ (β ↓ (α ↑ b))
  ∴ LHS = I_α^β(a) ↑ I_α^β(b)
  ]
∴ [ α, β, a, b ? s ⊢ I_α^β(a ↑ b) = I_α^β(a) ↑ I_α^β(b) ]

6.7.11. I_max_shift is now used to resolve the condition of the promotion theorem (cf. Sect. 6.4.3), when proving that interval windows can be promoted within catamorphisms of the form ⦇↑, f⦈.
⟨Properties of interval windows. 6.7.5⟩+ ≡
; I_hom_promote := [ t ? sort ; α, β ? s ; f ? [t ⊢ s] ; LHS := I_α^β ∘ ⦇(↑), f⦈
  ⊢ frefl ∴ LHS = I_α^β ∘ ((↑)/) ∘ (f*)
  \ unfold(promotion(⟨ba.assoc.join, ba.assoc.join⟩, I_α^β).down)(I_max_shift)
    ∴ LHS = ((↑)/) ∘ (I_α^β*) ∘ (f*)
  \ unfold(map_distribution) ∴ LHS = ((↑)/) ∘ ((I_α^β ∘ f)*)
  ∴ LHS = ⦇(↑), I_α^β ∘ f⦈
  ]
∴ [ t ? sort ; α, β ? s ; f ? [t ⊢ s] ⊢ I_α^β ∘ ⦇(↑), f⦈ = ⦇(↑), I_α^β ∘ f⦈ ]
6.7.12. The αβ-algorithm prunes the game tree by computing boundaries [α, β] within which the minimax value of a node will fall: For example, assume that one player has already evaluated one of her possible moves to, say, a rating of a. Now, take any of her other possible moves (call it A). Assume that her opponent can respond to A by a move B rated x such that x ⊑ a, i.e., a move worse than a from her point of view. The αβ-pruning strategy now allows one to conclude that there is no purpose in evaluating any further reactions to A, because the opponent is assumed to play optimally and thus will choose a reaction to A which is at least as good (from his view) as B. This strategy can be illustrated on the game-tree shown in Fig. 17. The left subtree corresponds to one of the two possible moves of the player, and yields at least a value of 2, whatever action the opponent chooses. Now, consider the move corresponding to the right subtree: Since the opponent's left reaction already yields a value of 1, it is unnecessary to inspect his right reaction, since the player's second move will never obtain a value greater than 1. This strategy requires some bookkeeping in the algorithm, which is achieved here in the form of intervals in which the minimax values must fall in order to prevent pruning. Further, it requires some form of direction of the evaluation, which is a left-order tree traversal in this presentation. In Figure 17, the tree is decorated with such intervals, computed by a left-order traversal. If there is a node on the level below an interval whose value does not fit into the interval, the tree can be pruned. Since the value 1 of the third node on the bottom level does not fall within the previously computed interval [2, ⊤], the subtree belonging to the node marked with * can be cut off.
The development of the algorithm proceeds in several steps: First, a bounded variant leval_α^β of the minimax evaluation scheme can be defined by coercing the values of each player node into the given interval [α, β].

6.7.13.
⟨αβ-pruning. 6.7.13⟩ ≡
  leval := [ α, β : s ⊢ I_α^β ∘ eval ]
Fig. 17. αβ-pruning
See also sections 6.7.14, 6.7.15, 6.7.18, 6.7.19, 6.7.20, 6.7.21, 6.7.22, and 6.7.24.
This code is used in section 6.7.

6.7.14. Using an elementary property of interval windows, it can then be shown that leval_⊥^⊤ is equivalent to the minimax evaluation scheme.

⟨αβ-pruning. 6.7.13⟩+ ≡
; correctness := frefl ∴ leval_⊥^⊤ = I_⊥^⊤ ∘ eval
  \ unfold(I_bottop) ∴ leval_⊥^⊤ = id ∘ eval
  \ unfold(pid.left) ∴ leval_⊥^⊤ = eval
6.7.15. A recursive characterization of leval_α^β can be derived by a simple development.

⟨αβ-pruning. 6.7.13⟩+ ≡
; leval_rec :=
  [ α, β ? s ; hyp : α ⊑ β ; LHS1 := leval_α^β ∘ tip ; LHS2 := leval_α^β ∘ fork
  ⊢ ⟨ tips := ⟨Calculation of leval_α^β ∘ tip. 6.7.16⟩
    , forks := ⟨Calculation of leval_α^β ∘ fork. 6.7.17⟩
    ⟩
  ]
∴ [ α, β ? s ⊢ α ⊑ β
  ⊢ ⟨ tips := leval_α^β ∘ tip = I_α^β
    , forks := leval_α^β ∘ fork = ⦇(↑), (−) ∘ leval_{−β}^{−α}⦈
    ⟩
  ]

6.7.16.
⟨Calculation of leval_α^β ∘ tip. 6.7.16⟩ ≡
frefl ∴ LHS1 = I_α^β ∘ eval ∘ tip
\ unfold(eval_def.tips) ∴ LHS1 = I_α^β ∘ id
\ unfold(pid.right) ∴ LHS1 = I_α^β
This code is used in section 6.7.15.

6.7.17.
⟨Calculation of leval_α^β ∘ fork. 6.7.17⟩ ≡
frefl ∴ LHS2 = I_α^β ∘ eval ∘ fork
\ unfold(eval_def.forks) ∴ LHS2 = I_α^β ∘ ⦇(↑), (−) ∘ eval⦈
\ unfold(I_hom_promote) ∴ LHS2 = ⦇(↑), I_α^β ∘ (−) ∘ eval⦈
\ unfold(I_neg_shift(hyp)) ∴ LHS2 = ⦇(↑), (−) ∘ I_{−β}^{−α} ∘ eval⦈
∴ LHS2 = ⦇(↑), (−) ∘ leval_{−β}^{−α}⦈
This code is used in section 6.7.15.

6.7.18. In the second step, the (currently undirected) evaluation of the subtrees will be specialized into a directed evaluation from left to right. In algorithm calculation, this design step corresponds to specializing the catamorphism ⦇↑, (−) ∘ leval_{−β}^{−α}⦈ derived above into a left-reduction. This specialization will introduce an auxiliary operator next, which updates the current minimax value a by consideration of the next subtree t. next is defined as follows:
⟨αβ-pruning. 6.7.13⟩+ ≡
; next := [ α, β : s ; a : s ; t : tree(s) ⊢ a ↑ (−(leval_{−β}^{−α}(t))) ]

6.7.19. In order to apply the specialization law of non-empty lists to a catamorphism ⦇↑, f⦈ (cf. Sect. 6.5), a left-identity of ↑ on the range of f must be found. The lemma below establishes α as a left-identity of ↑ on the range of (−) ∘ leval_{−β}^{−α}.

⟨αβ-pruning. 6.7.13⟩+ ≡
; spec_cond := [ α, β, b ? s ; hyp : α ⊑ β ; LHS := α ↑ (−(I_{−β}^{−α}(b)))
  ⊢ trefl ∴ LHS = α ↑ ((−) ∘ I_{−β}^{−α})(b)
  \ fold(I_neg_shift(hyp)) ∴ LHS = α ↑ I_α^β(−b)
  \ unfold(I_one(hyp)) ∴ LHS = (I_α^β ∘ (−))(b)
  \ unfold(I_neg_shift(hyp)) ∴ LHS = −(I_{−β}^{−α}(b))
  ]
∴ [ α, β, b ? s ⊢ α ⊑ β ⊢ α ↑ (−(I_{−β}^{−α}(b))) = −(I_{−β}^{−α}(b)) ]
6.7.20. A straightforward application of specialization now yields the desired left-reduction.

⟨αβ-pruning. 6.7.13⟩+ ≡
; specialize := [ α, β ? s ; hyp : α ⊑ β ; LHS := leval_α^β ∘ fork
  ⊢ [ lemma := spec_cond(hyp) ∴ [ t ? tree(s) ⊢ α ↑ (−I_{−β}^{−α}(eval(t))) = −I_{−β}^{−α}(eval(t)) ]
    ⊢ leval_rec(hyp).forks ∴ LHS = ⦇(↑), (−) ∘ leval_{−β}^{−α}⦈
    \ unfold(specialization(ba.assoc.join, lemma).left) ∴ LHS = next(α, β) →/ α
    ]
  ]
∴ [ α, β ? s ⊢ α ⊑ β ⊢ leval_α^β ∘ fork = next(α, β) →/ α ]
6.7.21. In the third and crucial step, two properties about the operator next are derived which will allow the actual pruning of game-trees: The first property is the fact that the evaluation of the moves already considered may lift the lower bound for the next move. The proof is quite analogous to the above proof of the specialization condition, except that this time the rule I_narrow is used.

⟨αβ-pruning. 6.7.13⟩+ ≡
; lift := [ α, β, a ? s ; t ? tree(s)
  ; hyp : ⟨α ⊑ a, a ⊑ β⟩
  ; lemma := trans_smth(hyp.1, hyp.2) ∴ α ⊑ β
  ; LHS := next(α, β, a, t)
  ⊢ trefl ∴ LHS = a ↑ ((−) ∘ I_{−β}^{−α})(eval(t))
  \ fold(I_neg_shift(lemma)) ∴ LHS = a ↑ I_α^β(−(eval(t)))
  \ unfold(I_narrow(hyp)) ∴ LHS = (I_a^β ∘ (−))(eval(t))
  \ unfold(I_neg_shift(hyp.2)) ∴ LHS = −(leval_{−β}^{−a}(t))
  ]
∴ [ α, β, a ? s ; t ? tree(s) ⊢ ⟨α ⊑ a, a ⊑ β⟩ ⊢ next(α, β, a, t) = −(leval_{−β}^{−a}(t)) ]
6.7.22. The second property of next is the fact that in case the evaluation ever reaches the upper bound, the remaining moves can be pruned. The derivation consists essentially of applying the law I_zero.
( (~/3-pruning.
6.7.13 )-t- -
; p r u n e := [(~,/37 s; t ? tree(s); hyp :(~ ___/~; L H S := n e x t ( ~ , f l , ~ , t )
trefl .'. L H S = ~ T ( ( - ) ~ ,-_~)(eval(t))
~ fold( I_neg_shift( hyp ) ) .'. L H S =/3 T ]~(- eval (t)) \ unfold(I_zero) .'. L H S = fl
]
~E9 .. [a, Z ? s ; t ? tree (s) ~-
next (~, 9, 9, t) = ~]
6.7.23.
Finally, the results can be summarized into a recursive system of conditional equations for lev~l~. Recursive equations for a/3-pruning. 6.7.23 )
eval = leval~ , lev~t~
,~
o tip =
, leval~ o fork = next(ce, 13) 74~
,It ? tree (s) ~- next(,:~,/3, a, t) = - ( , e v ~ l = ~ ( t ) ) ] ,It ? tree (s) ~- next((~,/3,/3, t) = 13]
This code is used in section 6.7.24.

6.7.24. These equations can be seen as a functional program for αβ-pruning. The conditions in these equations (i.e., α ⊑ a and a ⊑ β) arise from several sources: First, although defined without any conditions, the notion of interval windows I_α^β only makes sense for those values α and β for which α ⊑ β holds. This is reflected by the conditions which most of the properties about interval windows require. Second, the condition that the argument of next must fall within the interval between α and β is actually an invariant assertion of the algorithm.
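Read operationally, the equations are exactly the classic αβ-procedure. A hedged Python transcription (the tree encoding and all names are our own; the prune test compares against β directly, which is justified by the invariant a ⊑ β):

```python
import math

# leval(alpha, beta, t): eval coerced into [alpha, beta]. At a fork the
# children are scanned left to right by the operator "next"; once the
# running value a reaches beta, the remaining children are pruned.
def leval(alpha, beta, t):
    tag, payload = t
    if tag == 'tip':
        return min(beta, max(alpha, payload))       # I[alpha,beta]
    a = alpha
    for child in payload:
        a = -leval(-beta, -a, child)                # next(alpha, beta, a, t)
        if a == beta:                               # next(alpha, beta, beta, t) = beta
            break
    return a

# Unbounded minimax for comparison (correctness: leval over [BOT,TOP] = eval).
def eval_tree(t):
    tag, payload = t
    if tag == 'tip':
        return payload
    return max(-eval_tree(c) for c in payload)

game = ('fork', [
    ('fork', [('tip', 2), ('tip', 7)]),
    ('fork', [('tip', 1), ('tip', 9)]),   # the 9 is never inspected: pruned
])
assert leval(-math.inf, math.inf, game) == eval_tree(game) == 2
```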
⟨αβ-pruning. 6.7.13⟩+ ≡
; αβ_pruning := [ α, β, a ? s
  ⊢ [ hyp : ⟨α ⊑ a, a ⊑ β⟩
    ; lemma := trans_smth(hyp.1, hyp.2) ∴ α ⊑ β
    ⊢ ⟨ sym(correctness)
      , leval_rec(lemma).tips
      , specialize(lemma)
      , lift(hyp)
      , prune(lemma)
      ⟩
    ]
  ]
∴ [ α, β, a ? s ⊢ ⟨Recursive equations for αβ-pruning. 6.7.23⟩ ]

6.8 Discussion
Currently, algorithm calculation is used predominantly as a paper-and-pencil method. In this case study we have made an attempt to completely formalize its
developments and their underlying theory. We will try to evaluate this attempt in the next chapter, i.e., discuss drawbacks and benefits, also in comparison with the VDM case study. At this point we would like to briefly discuss some phenomena that are more specific to the nature of algorithm calculation. From a syntactical point of view, we can say that the use of (sectioned) infix symbols and of super- and subscripts was sufficient to capture many common notations of algorithm calculation. On the other hand, some notational conventions exceed infix operators, such as the notation for finite lists (cf. Sect. 6.2). As for all other examples, we do not make any semantic adequacy proofs in this book. In the case of algorithm calculation, this would hardly be possible anyway, since there has not yet been developed any "definite" theory. Intuitively, however, we claim to have captured essential aspects of the methodology. Remember that the development of a tree pruning strategy (cf. Sect. 6.7) led to a system of conditional equations. Actually, such conditions are not explicitly mentioned in the semi-formal presentation in [14], but tacitly used in the developments. The system of equations derived there corresponds exactly to the one derived in this presentation, only the conditions are missing. It is interesting that the presence of these conditions was discovered during the attempt to type-check a naive formalization of the development without conditions on the computer, which uncovered typing errors. This fact points to a significant advantage of the use of Deva to investigate existing methods and developments: the strict logical control (via typing) may uncover hidden assumptions and omitted proof steps.
One could ask whether the unconditional equation system could not be derived regardless of these technical difficulties, since the pruning algorithm is initialized with the unproblematic interval from ⊥ to ⊤ and, as mentioned above, the conditions are actually invariant properties. However, this is not a Deva issue but an issue of algorithm calculation.
7 Conclusion
In the previous chapters we introduced the generic development language Deva and reported on two experiments in which certain aspects of formal software development methods were completely expressed in the Deva language. We now go on to discuss some of the benefits and shortcomings of these experiments and, in the light of these, to reexamine the major design choices of Deva.

Provable Adequacy

An important question which can be asked of our Deva formalizations is whether they do indeed formalize what they claim to formalize, namely VDM and calculational developments. We wish to examine this question of adequacy from two angles: the theoretical and the pragmatic. The most satisfying answer would, of course, be a proof of adequacy. In fact, in Chap. 3 we showed a framework in which such proofs of adequacy could be performed, but we have not given such proofs for our formalizations. There are two reasons why we have not done so. First, we shunned the amount of work associated with such proofs. We knew that in principle it could be done and that it has been done, at least for small select pieces of formalizations, such as predicate logic or Peano arithmetic (cf. [30]). Second, beyond the logical basis and the data types of the development methods, it was not at all clear against what adequacy proofs should be performed, simply because the development methods were not formally defined. This situation has recently changed somewhat as a draft VDM standard [2] and publications like [58] have become available. Since we have not given proofs of the adequacy of our formalizations, an understanding of the Deva language and its mathematical properties must suffice for the reader to trust the formalizations. This situation is admittedly far from ideal and must be improved upon. It is, however, common to other work in this area. A reassuring fact, though, is that the properties of Deva ensure that Deva itself does not introduce new errors, i.e.,
any error, such as logical inconsistency, in a formalized theory must lie in the formalization itself.

Pragmatic Adequacy
It is more important for us to assess the adequacy of our experiments from a pragmatic point of view: Did our experiments cover the pragmatic aspects of VDM and calculational development? The honest and straightforward answer is that, while essential methodological aspects were adequately expressed in Deva, the complete formalization approach made VDM and calculational development too complex and time-consuming for practical use. We illustrate this by discussing some underlying technical issues in more detail. Deva enforces the formalization of all the necessary basic theories and the theories underlying the design methods of the case studies. This problem was tackled basically by constructing successive formal approximations in Deva and
validating them with reference to the case studies. In retrospect, while the development of the basic theories involved a quite straightforward transcription of material available in good handbooks, a fairly large amount of effort went into getting the method formalizations "right". We found it much harder to develop a precise understanding of a design method as a set of theories than to use Deva to formalize these theories. In fact, it appears very important to separate two activities, at least conceptually: first, to elaborate a clear and complete theory of a method, with elegant proofs, and to express it semi-formally in a model handbook; second, to "code" this theory in Deva or in some other similar language or system. The problem was that complete theories of design methods are rarely available, so to use Deva we were forced to develop them on our own. The effort involved, however, can be considered worthwhile because it promotes understanding of methodological issues and it results in a quite usable fragment of a general library. We believe that the scaling-up of formal developments will go hand in hand with the development of extensive and well-structured libraries of theories containing notations, laws, development schemes, and tactics and ranging from basic mathematics to development methodologies to concrete application areas. Given the availability of such libraries, it is tempting to experiment with novel combinations of the calculi and data types underlying the different methodologies. For example, on the basis of the theories presented in this book, it is, technically, a simple matter to experiment with a variant of VDM using the calculationally oriented presentation of sequences from the second case study. Deva enforces the explicit type declaration of each object involved in a formalization. This requirement is certainly debatable since many simple object declarations can be inferred by inspection of the kinds of uses of the objects.
One can envisage an ML-like mechanism to automatically infer missing declarations. On the other hand, we quite frequently found (especially in the more complex cases) that this added type information enhanced clarity and understanding of a development. Still, the user should have the choice, and so this is another topic for further research. Deva requires the explicit statement of all side conditions in rules and developments. From a logical point of view, this is definitely an advantage. Pragmatically, however, developments become more complex since side conditions must be discharged by auxiliary proofs before the rule itself can be applied. In our experience, the crucial challenge in formalizing developments is to identify a set of abstract, yet illuminating, laws from which all auxiliary proofs can be economically composed. Deva requires the explicit evaluation of all syntactic operations as used by specific methods. Examples of such operations rarely occur in the formalizations presented in this book, but one example is the evaluation of selection in composite VDM objects. A more complex example would be the normalization of schemas in Z. In some such cases, one can deal with syntactic operations by defining a suitable tactic, for example, the selection tactic to evaluate selections from composite VDM objects. However, the non-recursive tactics in Deva are
often too weak for that purpose. Z schema normalization cannot, for example, be evaluated by Deva tactics in a natural way. The lack of recursive tactics in Deva is probably the language's most serious expressive shortcoming. Such expressive shortcomings of languages are frequently overcome by support tools in the form of preprocessors. For instance, one can imagine a preprocessor for Deva which performs Z schema normalization. However, these tools have to be carefully designed so that error reports, for example, are given in terms of the original input, and not in terms of the translation. Deva requires that the logical chain in developments be made fully explicit. In rigorous developments on paper this chain is usually ensured by appropriate comments in between the individual development steps (cf. the informal and the formalized version of the proof of the binomial formula). Well-presented Deva developments are, in fact, not too far from this presentation style: A comment is replaced by an application of (or a cut with) a law or a tactic. In general, we can identify three levels on which Deva assists in the (static) presentation of formal developments and formalizations: Deva's own syntax, the possibility of defining mixfix operators, and the facilities introduced by the WEB system. The design of Deva's own syntax was mainly driven by experiments with expressing program deductions. More experiments might well suggest adaptations (see below). The possibility of declaring or defining infix, prefix, and postfix operators was essential to cover a sufficient amount of the methods' notation. There is, however, room for improvement here. For example, precedence parsing is inadequate for handling arbitrary context-free notations, such as the VDM or Squiggol notations for enumerated lists. Notations like these are, however, not in principle beyond the scope of a generic development language.
After all, many generic parsing tools are available and can be used in a preprocessing step. The WEB system frees the user from the straitjacket of a formal language. We have not made full use of WEB's features, mostly because the system was not available when we developed the formalizations. Also, it is still a prototype and can be improved in several ways. One such improvement, for example, would be to control the spacing around operators depending on their height in the syntax tree. What we have not yet paid any attention to is the dynamic design and presentation of a development and its formalizations. At the moment, we do not have a sophisticated graphical editor for entering and editing Deva documents but continue to use conventional text editors. This is, of course, not what the current state of the art would lead us to expect. Graphical editors such as MathPad (cf. [9]) or G:F (cf. [40]) provide nice examples of what such editors might look like. Another exciting idea suggested by the web-like structure of the presentation of developments is to extend the editor by incorporating hypertext facilities so that users can navigate through their formalizations, zoom into a development to view its internal details, or merely examine it on a more abstract level. This idea is also put forward by Lamport in his short paper on how to write proofs [69]. Such a tool should, of course, be integrated into the complete support environment. In particular, it should have a natural interface to the checker.
Within the formalizations, the degree of automation was rather low. Usually proof step details or details of reasoning chains were left implicit by synthesis of arguments through pattern-matching or by using alternatives and iterations. But there were no sophisticated induction tactics and no reasoning modulo a ruleset, such as associativity and commutativity. With regard to these very important kinds of automation, the current Deva version cannot compete with many related approaches, especially those that are tailored to a particular method. In this sense, our Deva formalizations of VDM and calculational development remain at the prototyping level. This situation could probably be improved in the future by interfacing Deva with more specialized and more convincingly automated proof support systems. We believe that the use of such systems is quite indispensable when trying to tackle more significant applications than those presented in this book. Of course, the combination of such systems with Deva poses a number of engineering, as well as theoretical, challenges. All these technical points raise a more general question: What is the relation between performing developments rigorously and formalizing them in Deva? How much extra effort is involved when using Deva? Clearly, there is the extra effort of learning the Deva language. We hope that the present book makes this a feasible task, although we do not claim it is an easy one. Apart from that, it is very difficult to give a plausible answer to this question, because so little experimental material is yet available as evidence. Nevertheless, for the reader's interest, amusement, or disgust, we propose a rather quantitative answer: the formalization of a rigorous development in Deva leads to an expansion resulting from two sources: the internal "formal noise" of Deva, and the amount of "real" additional information needed to formalize the rigorous development.
In our experience, the first source is characterized by an expansion factor usually located within the interval [1, 2]; in other words, the formal noise of (the current version of) Deva rarely more than doubles the original size of a carefully presented rigorous development. We merely wish to measure the expansion factor of a Deva development relative to a rigorous development. Hence, this figure does not take into account the size of the formalizations of the necessary basic theories, which can, admittedly, be considerable. Future language developments should try to reduce the expansion factor; it should, ideally, be located within the interval [0, 1] (excluding, of course, 0), i.e. formalization of developments should, ideally, lead to a reduction in size, as is the case with the formalization of algorithmic descriptions in good programming languages. Of course, these comparisons are based on the size of "mature" formalizations, i.e. repeatedly reviewed, restructured, and simplified ones, and not on initial versions for martyrs. The second source of expansion differs according to specific methodologies and their respective requirements as to how much information should be given in a rigorous development. For example, we found calculational developments to require less additional information than VDM developments. In the case of this second expansion factor, we therefore feel unable to set an upper bound valid for any method and any developer using it. Ideally, a rigorous development should already contain all information necessary for its correctness, in other words, the
factor should be 1. However, such program derivations are rarely available, and the use of Deva forced us to develop them on our own, a problem that was already mentioned above in the context of the formalization of programming methods. This statement about the extra effort involved when using Deva should be taken with a pinch of salt. First, the distinction between the two sources of expansion is not quite sharp, for example, because the need for additional information can often arguably be seen as a weakness of the current Deva tactics. Second, purely quantitative measurement is clearly insufficient; it does not, for example, cover good structuring and readability. A particularly intriguing set of future experiments might be to rewrite existing case studies and good textbooks on rigorous software design in Deva and then compare the expansion factors. Of course, such exercises should be performed parallel to the use of approaches other than Deva, e.g. those discussed in the introduction. This would allow a comparison of the expansion factors, with respect to both size and obscurity, yielded by Deva with those of related approaches.
Review of the General Design of Deva

In our introduction, we listed a number of ideal requirements for generic development languages. We then went on to illustrate how developments can be formalized on the basis of specific design choices made with respect to the Deva language. Previous discussions have, essentially, reviewed these results from an internal viewpoint, i.e., from inside the Deva framework. Going on from this, we now want to step outside this framework and review the general design of Deva itself. This is particularly useful since the Deva language is in a process of active evolution, and future versions of the language should, hopefully, reduce the drawbacks of the current one. In one sentence, the general design of (the current version of) Deva can be summarized as the adoption of a generalized typed λ-calculus as a basic framework and its extension to cover products, sums, compositions, theories, and implicit developments. Adopting a typed λ-calculus allowed us to encode natural deduction-style proofs (cf. the figures on pages 40 and 39) and calculational style proofs (cf. the figures on page 32). Allowing types to be λ-structured themselves effected a very economic generalization of this approach, and this provided the essential expressive power to formalize logics. Nevertheless, there are some drawbacks to this approach: In typed λ-calculi, assumptions are recorded as type declarations, a technique which, essentially, amounts to introducing pointers, i.e., identifiers to formulas. If enforced, as in Deva, this pointer technique can lead to some unnecessary technical complications. For example, it would sometimes be more elegant to record assumptions directly as sets of assumed formulas, as is done in sequent calculus. The bottom-up grafting of various structuring concepts on the basic λ-calculus framework was essentially driven by experiments with expressing program deductions. The cut as a key concept of compositional development remained extremely technical in its bottom-up definition, though, ideally, composition should be as simple and basic as application. A major drawback of our approach to implicit developments, i.e., considering implicit developments to be valid if they can be explained by explicitly valid developments, is its lack of real compositionality, i.e., the implicit validity of a constructed development is not described in terms of the implicit validity of its parts. Rather, it is described in terms of the explanations, i.e., compilations into explicit developments, of its parts. Moreover, our approach has led to a two-level, i.e., implicit/explicit, view of developments. Instead of such a black or white situation, in many cases one would prefer a continuum stretching from highly implicit to fully explicit developments. One would, ideally, like to understand a development language from looking at its algebraic properties. Unfortunately, our λ-calculus-based approach led to an operational, i.e., reduction-based, language definition whose consistency rests on rather deep confluence and strong normalization theorems. It would be nice to have, at least in addition, a more algebraic specification of the essential concepts underlying Deva. Current work is elaborating an algebraic framework in which the basic ideas underlying Deva can be captured by a set of crisp algebraic laws and which avoids many of the drawbacks discussed above (see [94] for a first approximation).
Future Work
Let us briefly summarize what we anticipate to be potential areas for future research. In order to better understand and isolate the key concepts for expressing and supporting formal software developments, more case studies must be conducted. Such case studies should guide and inspire the further development of the Deva language and its implementation. Then there are syntactic issues: What facilities are needed to express formalizations even closer to the syntax of development methods? Is the idea of using a preprocessor to translate, for example, VDM developments into Deva expressions reasonable? And there are important semantic issues: How can Deva be extended by a more powerful concept of tactics, including typed recursion? How can interfaces to (external, specific) provers be defined and related to Deva? Is there a set of algebraic laws that characterizes the key concepts of developments as identified in Deva? Finally, there are the issues of tool support: extending and adapting a library of theories for software developments, supporting the interactive development and "debugging" of formalizations, implementing interfaces to external systems such as provers and preprocessors, and designing a graphical user interface with hypertext features extending the current WEB system. Thus, not too surprisingly, future research areas include logic and algebra as well as database management and interface design.
Final Remarks

The main objective of this book was to show how the generic development language Deva can be used to express formal software developments. The long-term goal is to tackle case studies of a more significant size. This can only be achieved through exchange and cooperation with related research fields such as mathematics of programming, proof languages, support systems, and specific application domains. We hope that the book will prove a helpful tool in this interaction, and we would very much like to see other approaches to formal software development published in a similar, and, of course, eventually much better, form.
A Machine-level Definition of the Deva Kernel
As promised in Sect. 3.2.6, we will present a formal definition of the kernel of the explicit part of Deva using de Bruijn indices. We will proceed exactly as in Sect. 3.2 and will point out the major differences.
A.1 Formation
Let V_t denote the (denumerable) set of text identifiers, V_c the (denumerable) set of context identifiers, and N the positive natural numbers; then the syntax of the kernel of the explicit part of Deva is specified by the following grammar:
T ::= prim | N | [C ⊢ T] | T(N := T) | T ∴ T
C ::= V_t : T | V_t := T | context V_c := C | import N | [C; C]
Note that the only difference to Deva is that the use of a text or context identifier is now denoted by a natural number. In the following, indices are typeset in boldface in order to make them more easily recognizable.
A.2 Environments
Since environments are the major tool for identifying bindings, it should be obvious that most of the changes to the definition of Deva given in the main text will involve the handling of environments. Again, an environment is a linear list of bindings, where a binding is either a declaration, a definition, or a context definition. The partial evaluation of context operations being carried out while pushing a context onto an environment is somewhat complicated to define. Since de Bruijn indices represent the static distance to a binding occurrence, they have to be adjusted during a partial evaluation of context operations. This adjustment of indices is accomplished by the shift function, which takes as arguments an environment E, a text or a context e, and two integers i and k, and returns as a result a text or a context which is derived from e by adjusting by i all indices (relative to E) except those smaller than or equal to k. As an abbreviation, we will write shift_E(e, i) for shift_E(e, i, 0). The definition of the shift function is now presented together with the definition of pushing a context onto an environment in a mutually recursive way. First of all, in the following cases the definition of the shift function is pretty straightforward:

shift_E(prim, i, k) = prim
shift_E(t1(n := t2), i, k) = shift_E(t1, i, k)(n := shift_E(t2, i, k))
shift_E(t1 ∴ t2, i, k) = shift_E(t1, i, k) ∴ shift_E(t2, i, k)
shift_E(x : t, i, k) = x : shift_E(t, i, k)
shift_E(x := t, i, k) = x := shift_E(t, i, k)
shift_E(context p := c, i, k) = context p := shift_E(c, i, k)

An index n is shifted if it points behind the first k bindings:

shift_E(n, i, k) = n + i          if n > k
shift_E(n, i, k) = n              otherwise

shift_E(import n, i, k) = import n + i   if n > k
shift_E(import n, i, k) = import n       otherwise
In the following two situations, those indices which point to bindings contained in the context c must not be shifted; this explains the need for the third argument of the shift function.

shift_E([c ⊢ t], i, k) = [c' ⊢ shift_{E⊕c'}(t, i, k + k')]
shift_E([c; c1], i, k) = [c'; shift_{E⊕c'}(c1, i, k + k')]

where c' = shift_E(c, i, k) and k' = #(E ⊕ c') − #E.

Finally, we define the operation of pushing a context onto an environment. Note that the lookup function defined in the main text may now be replaced by the expression E.n denoting the n-th entry (from the right) in the environment E. In case this entry does not exist (because n is too large), E.n is undefined. This should not cause major concern since, in general, all the definitions given in this text make sense only if they are closed relative to some environment.
E ⊕ x : t = E, (x : t),
E ⊕ x := t = E, (x := t),
E ⊕ context p := c = E, (context p := c),
E ⊕ import n = E ⊕ shift_E(c, n)   if E.n = context p := c,
E ⊕ [c1; c2] = (E ⊕ c1) ⊕ c2.
A.3 The definition of τ and κ
The definition of the bijection τ (restricted to closed texts and contexts), which translates, relative to an environment, a text or a context using concrete names to the corresponding text or context using de Bruijn's indexing scheme, is given
A.4. Closed texts and closed contexts
223
below:
τ_E(prim) = prim
τ_E(x) = pos_E(x)
τ_E([c ⊢ t]) = [τ_E(c) ⊢ τ_{E⊕c}(t)]
τ_E(t1(n := t2)) = τ_E(t1)(n := τ_E(t2))
τ_E(t1 ∴ t2) = τ_E(t1) ∴ τ_E(t2)
τ_E(x : t) = x : τ_E(t)
τ_E(x := t) = x := τ_E(t)
τ_E(context p := c) = context p := τ_E(c)
τ_E(import p) = import pos_E(p)
τ_E([c1; c2]) = [τ_E(c1); τ_{E⊕c1}(c2)]
Here, pos_E(x) denotes the position of the first link describing x counting from the right. Again, this operation may be undefined if x is not closed relative to E. The definition of κ is trivial since κ just steps recursively through its argument and replaces any identifier at a binding by the standard identifier 'u'. We only give the following three equations since in all other situations κ is applied recursively to the components of its argument.
κ(x : t) = u : κ(t)
κ(x := t) = u := κ(t)
κ(context p := c) = context u := κ(c)
A.4 Closed texts and closed contexts
The definition of closed texts and closed contexts is almost exactly the same as in Sect. 3.2.4. Of course, identifiers have to be replaced by indices, and the crucial two rules which check whether an identifier is defined within the environment are replaced by
⊢_d E    E.n is defined
-----------------------
E ⊢_d n

and

⊢_d E    E.n is defined
-----------------------
E ⊢_d import n

A.5 Reduction of texts and contexts
Reduction is defined similarly to the definition given in Sect. 3.2.5. But now, one has to be careful about possible adjustment of indices during the reduction process. Altogether, except for the obvious changes, the following axioms replace
224
A. Machine-level Definition of the Deva Kernel
their respective counterparts:

E ⊢ n ▷₁ shift_E(t, n)   if E.n = x := t,
E ⊢ [x : t1 ⊢ t2](1 := t) ▷₁ [x := t ⊢ t2],
E ⊢ [x : t1 ⊢ t2](n + 1 := t) ▷₁ [x : t1 ⊢ t2(n := shift_E(t, 1))],
E ⊢ [x := t1 ⊢ shift_E(t2, 1)] ▷₁ t2,
E ⊢ [context p := c ⊢ shift_E(t, 1)] ▷₁ t,
E ⊢ import n ▷₁ shift_E(c, n)   if E.n = context p := c.

A.6 Conversion
The rules for conversion are obvious and are thus omitted.

A.7 Type assignment
The rules for type assignment are exactly the same as in Sect. 3.2.7 except for the following:

typ_E(n) = shift_E(t, n)           if E.n = x : t
typ_E(n) = typ_E(shift_E(t, n))    if E.n = x := t
typ_E(n) = undefined               if E.n = undefined

A.8 Auxiliary functions and predicates for validity
The name-irrelevant conversion is defined as in Sect. 3.2.8. As for the definition of the 'domain' relation, only the following rules need to be adjusted:

E ⊢ typ_E(t1) ≈ [x : t2 ⊢ t3]    E ⊢_cl t4
n-dom_{E⊕x:t2}(shift(t1, 1)(1 := x), shift_E(t4, 1))
----------------------------------------------------
(n + 1)-dom_E(t1, t4)

E ⊢ t1 ◁ [x : t2 ⊢ t3]    E ⊢_cl t4
n-dom_{E⊕x:t2}(t3, shift_E(t4, 1))
----------------------------------
(n + 1)-dom_E(t1, t4)

A.9 Validity
The same changes apply to the definition of validity as for the definition of closed texts or contexts.
B Index of Deva Constructs
Contexts
V_t : T ............. see Context declaration
V_t := T ............ see Context definition
V_c := C ............ see Context definition
V_t ? T ............. see Implicit definition
import V_c .......... see Context inclusion
import context V_c .. see Context inclusion
C(N := T) ........... see Context application
C(T) ................ see Context, direct application
C[V_t =: V_t] ....... see Renaming
C[V_c =: V_c] ....... see Renaming
[C; C] .............. see Context join

Texts
prim ................ see Primitive text
[C ⊢ T] ............. see Abstraction
T(N := T) ........... see Application
T / T ............... see Application
T \ T ............... see Application
T(T) ................ see Application, direct
T(V_t := T) ......... see Application, named
T ∴ T ............... see Judgement
⟨V_t := T, ..., V_t := T⟩ .. see Product
T.N ................. see Product projection
T.V_t ............... see Product, named projection
T where N := T ...... see Product modification
T where V_t := T .... see Product, named modification
alt[T, ..., T] ...... see Alternative
loop T .............. see Iteration
A
Abstraction ......... 17-18, 21, 47
Alternative ......... 30, 43, 73
Application ......... 18, 21, 47
  direct ............ 73
  named ............. 73
C
Case distinction .... 43, 60
Context ............. 14
  Abstraction ....... 25
  Application ....... 64
  Declaration ....... 16, 21, 47
  Definition ........ 14, 47
  Direct application  25, 73
  Implicit definition 21, 22, 73
  Inclusion ......... 25, 43, 47
  Join .............. 14, 47
  Named application . 73
  Renaming .......... 64
Cut ................. 39, 42, 62
  Direct ............ 39, 42, 73
  Named ............. 42, 73
D
Declaration ......... 21, 47
Definition .......... 14, 47
I
Implicit definition . 21, 22, 73
Iteration ........... 30, 73
J
Judgement ........... 20, 47
⟨V_t := T | ... | V_t := T⟩ .. see Sum
case T of T ......... see Case distinction
T upto N o> T at N .. see Cut
T o> T .............. see Cut, direct
T o> T at V_t ....... see Cut, named
P
Primitive text ...... 17, 47
Product ............. 26, 58
  Modification ...... 42, 58
  Named modification  73
  Named projection .. 73
  Projection ........ 26, 58
R
Renaming ............ 64
S
Sum ................. 43, 60
  Case distinction .. 43, 60
T
Text ................ 15
  Abstraction ....... 17, 21, 47
  Alternative ....... 30, 43, 73
  Application ....... 18, 21, 47, 73
  Case distinction .. 60
  Cut ............... 39, 42, 62, 73
  Identifier ........ 21, 47
  Iteration ......... 30, 73
  Judgement ......... 20, 47
  Primitive text .... 17, 47
  Product ........... 26, 58
  Product modification 42, 58, 73
  Product projection  26, 58, 73
  Sum ............... 43, 60
C Crossreferences

C.1 Table of Deva Sections Defined in the Tutorial
⟨A first version of a parametric equality. 2.1.4⟩
⟨A proof with local scope. 2.2.3.1⟩ This code is used in section 2.2.3.
⟨Axioms. 2.1.4.2⟩ This code is used in section 2.1.4.
⟨Constructors of natural numbers. 2.1.5.1⟩ This code is used in section 2.1.5.
⟨Core of the proof. 2.2.3.5⟩ This code is used in section 2.2.3.4.
⟨Derivation of reverse substitution I. 2.2.4.3⟩
⟨Derivation of reverse substitution II. 2.2.4.4⟩
⟨Derivation of reverse substitution. 2.2.4.2⟩ This code is used in section 2.2.4.1.
⟨Doubling proof. 2.2.2⟩
⟨Equality of natural numbers. 2.1.5.2⟩ This code is used in section 2.1.5.
⟨First scope. 2.2.3.2⟩ This code is used in section 2.2.3.1.
⟨Further properties of the operators on natural numbers. 2.1.5.5, 2.1.5.6⟩ This code is used in section 2.1.5.4.
⟨Minimal Logic. 2.2.3⟩
⟨Natural numbers. 2.1.5⟩ This code is used in section 2.1.2.
⟨Operator declaration. 2.1.4.1⟩ This code is used in section 2.1.4.
⟨Operators on natural numbers. 2.1.5.3⟩ This code is used in section 2.1.5.
⟨Parametric equality. 2.1.4.5⟩ This code is used in section 2.1.2.
⟨Preliminaries. 2.1.3⟩ This code is used in section 2.1.2.
⟨Proof body. 2.1.6.1⟩ This code is used in section 2.1.6.
⟨Proof development. 2.1.6.2, 2.1.6.3, 2.1.6.4⟩ This code is used in section 2.1.6.1.
⟨Proof of a simple law. 2.1.4.3⟩ This code is used in section 2.1.4.
⟨Proof of symmetry. 2.1.4.4⟩ This code is used in section 2.1.4.3.
⟨Proof of the binomial formula. 2.1.6⟩
⟨Proof text for unfold. 2.1.4.6⟩
⟨Properties of the operators on natural numbers. 2.1.5.4⟩ This code is used in section 2.1.5.
⟨Reverse Substitution. 2.2.4.1⟩
⟨Second scope. 2.2.3.3⟩ This code is used in section 2.2.3.2.
⟨Third scope. 2.2.3.4⟩ This code is used in section 2.2.3.3.
⟨Transformation tactics. 2.2.2.1⟩ This code is used in section 2.2.2.
C.2 Index of Variables Defined in the Tutorial

+: 2.1.5.3, 2.1.5.4, 2.1.5.5, 2.1.5.6, 2.1.6, 2.1.6.1, 2.1.6.2, 2.1.6.3, 2.1.6.4, 2.2.2.
2.1.3, 2.1.4, 2.1.4.1, 2.1.4.2, 2.1.4.3, 2.1.4.4, 2.1.4.5, 2.1.4.6, 2.1.5, 2.1.5.1, 2.1.5.2, 2.1.5.3, 2.1.5.4, 2.1.5.5, 2.1.5.6, 2.1.6, 2.1.6.1, 2.1.6.2, 2.1.6.3, 2.1.6.4, 2.2.2, 2.2.2.1, 2.2.3, 2.2.3.1, 2.2.3.2, 2.2.3.3, 2.2.3.4, 2.2.3.5, 2.2.4.1, 2.2.4.2, 2.2.4.3, 2.2.4.4.
2.2.3, 2.2.3.1, 2.2.3.2, 2.2.3.3, 2.2.3.4, 2.2.3.5.
•: 2.1.5.3, 2.1.5.4, 2.1.5.5, 2.1.5.6,
2.1.6, 2.1.6.1, 2.1.6.2, 2.1.6.3, 2.1.6.4, 2.2.2.
(-)('): 2.1.5.3, 2.1.5.5, 2.1.5.6, 2.1.6, 2.1.6.1, 2.1.6.4.
add_def: 2.1.5.3, 2.1.5.6, 2.2.2.1.
add_props: 2.1.5.4, 2.1.5.6, 2.1.6.3, 2.1.6.4, 2.2.2.1.
base_lemma: 2.2.2.
BinomialFormulaDevelopment: 2.1.2.
distr: 2.1.5.5, 2.1.5.6, 2.1.6.2, 2.1.6.4.
doubling: 2.1.5.5, 2.1.5.6, 2.1.6.4.
doubling_thesis: 2.2.2.
DoublingProof: 2.2.2.
elim: 2.2.3, 2.2.3.5.
exp_def: 2.1.5.3, 2.1.5.6.
fold: 2.1.4.4, 2.1.4.5, 2.1.6.4, 2.2.2, 2.2.2.1.
induction: 2.1.5.6, 2.2.2.
intro: 2.2.3, 2.2.3.2, 2.2.3.3, 2.2.3.4, 2.2.3.5.
MinimalLogic: 2.2.3.
mult_def: 2.1.5.3, 2.1.5.6, 2.2.2.
mult_props: 2.1.5.4, 2.1.5.6, 2.1.6.3, 2.1.6.4.
nat: 2.1.5, 2.1.5.1, 2.1.5.2, 2.1.5.3, 2.1.5.4, 2.1.5.5, 2.1.5.6, 2.1.6, 2.1.6.4, 2.2.2.
NaturalNumbers: 2.1.5, 2.1.5.6, 2.1.6, 2.1.6.4, 2.2.2.
ParametricEquality: 2.1.4, 2.1.4.5, 2.1.5.2, 2.1.5.6.
proof: 2.1.6, 2.1.6.4, 2.2.2, 2.2.3.1.
ProofOfBinomialFormula: 2.1.6, 2.1.6.4.
prop: 2.1.3, 2.1.4.1, 2.1.4.2, 2.1.4.4, 2.1.4.5, 2.1.5.6, 2.2.1, 2.2.3, 2.2.3.1, 2.2.3.5, 2.2.4.1, 2.2.4.3, 2.2.4.4.
recur_lemma: 2.2.2.
refl: 2.1.4.2, 2.1.4.4, 2.1.4.5, 2.2.2.
rsubst: 2.2.4.1.
s: 2.1.4, 2.1.4.5.
sort: 2.1.3, 2.1.4, 2.1.4.5, 2.1.5, 2.1.5.6, 2.2.1.
squaring: 2.1.5.5, 2.1.5.6, 2.1.6.1, 2.1.6.4.
subst: 2.1.4.2, 2.1.4.4, 2.1.4.5, 2.1.4.6, 2.2.4.2, 2.2.4.3, 2.2.4.4.
succ: 2.1.5.1, 2.1.5.3, 2.1.5.6, 2.2.2.
succ_lemma: 2.2.2.
symmetry: 2.1.4.5, 2.2.4.2, 2.2.4.3, 2.2.4.4.
symmetry_proof: 2.1.4.3.
symmetry_proposition: 2.1.4.3, 2.1.4.4.
thesis: 2.1.2, 2.1.6, 2.1.6.1, 2.1.6.4, 2.2.3.1, 2.2.3.5.
transform: 2.2.2.
unfold: 2.1.4.4, 2.1.4.5, 2.1.6.2, 2.1.6.3, 2.1.6.4, 2.2.2, 2.2.2.1.

C.3 Table of Deva Sections Defined in the Case Studies
⟨α-pruning. 6.7.13, 6.7.14, 6.7.15, 6.7.18, 6.7.19, 6.7.20, 6.7.21, 6.7.22, 6.7.24⟩ This code is used in section 6.7.
⟨Abstract initialization. 5.5.6.13⟩ This code is used in section 5.5.6.12.
⟨Abstract version verification. 5.5.6.10⟩ This code is used in section 5.5.6.5.
⟨Arbitrary branching trees. 6.7.1⟩ This code is used in section 6.7.
⟨Assembly of main verification. 5.5.6.5⟩ This code is used in section 5.5.6.
⟨Auxiliary Deductions. 5.5.7.3⟩ This code is used in section 5.5.7.2.
⟨Axioms of finite maps. 4.3.5.2⟩ This code is used in section 4.3.5.
⟨Axioms of finite sets. 4.3.2.2⟩ This code is used in section 4.3.2.
⟨Axioms of join lists. 6.2.2⟩ This code is used in section 6.2.
⟨Axioms of natural numbers. 4.3.1.2⟩ This code is used in section 4.3.1.
⟨Axioms of non-empty join lists. 6.3.2⟩ This code is used in section 6.3.
⟨Axioms of propositional logic. 4.2.1.3⟩ This code is used in section 4.2.1.1.
⟨ Axioms of sequences. 4.3.3.2 ⟩ This code is used in section 4.3.3. ⟨ Axioms of tuples. 4.3.4.2 ⟩ This code is used in section 4.3.4. ⟨ Base case. 5.6.6.3 ⟩ This code is used in section 5.6.6.2. ⟨ Basic theories for algorithm calculation. 4.1.3 ⟩ This code is used in section 4.1. ⟨ Basic theories of VDM. 4.1.2 ⟩ This code is used in section 4.1. ⟨ Body of the proof of the second retrieve lemma. 5.5.5.1 ⟩ This code is used in section 5.5.5. ⟨ Boolean algebra. 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8 ⟩ This code is used in section 4.4.3.1. ⟨ Calculation of ⦇eval⦈ ∘ fork. 6.7.17 ⟩ This code is used in section 6.7.15. ⟨ Calculation of ⦇eval⦈ ∘ tip. 6.7.16 ⟩ This code is used in section 6.7.15. ⟨ Case of negative reaction. 5.5.8.7 ⟩ This code is used in section 5.5.8.4. ⟨ Case of positive reaction (not checked). 5.5.8.5 ⟩ ⟨ Case of positive reaction. 5.5.8.6 ⟩ This code is used in section 5.5.8.4. ⟨ Catamorphisms (for non-empty join lists). 6.5.2 ⟩ This code is used in section 6.5. ⟨ Catamorphisms. 6.4.2 ⟩ This code is used in section 6.4. ⟨ Concrete initialization. 5.5.6.9 ⟩ This code is used in section 5.5.6.8. ⟨ Concrete version verification. 5.5.6.6 ⟩ This code is used in section 5.5.6.5. ⟨ Construction of the abstract version. 5.5.2.8 ⟩ This code is used in section 5.5.2.9. ⟨ Construction of the concrete version. 5.5.3.3 ⟩ This code is used in section 5.5.3.4. ⟨ Construction of versions. 5.3.2.1 ⟩ This code is used in section 5.3.2.4. ⟨ Constructors of finite maps. 4.3.5.1 ⟩ This code is used in section 4.3.5. ⟨ Constructors of finite sets. 4.3.2.1 ⟩ This code is used in section 4.3.2. ⟨ Constructors of join lists. 6.2.1 ⟩ This code is used in section 6.2. ⟨ Constructors of natural numbers. 4.3.1.1 ⟩ This code is used in section 4.3.1. ⟨ Constructors of non-empty join lists. 6.3.1 ⟩ This code is used in section 6.3. ⟨ Constructors of sequences. 4.3.3.1 ⟩ This code is used in section 4.3.3. ⟨ Constructors of tuples. 4.3.4.1 ⟩ This code is used in section 4.3.4. ⟨ Declaration of the other abstract operations. 5.5.2.7 ⟩ This code is used in section 5.5.2.9. ⟨ Declaration of the other concrete operations. 5.5.3.2 ⟩ This code is used in section 5.5.3.4. ⟨ Definition of VDM reification. 5.3.3.5 ⟩ This code is used in section 5.3.3.6. ⟨ Definition of the retrieve function. 5.5.4.1 ⟩ This code is used in section 5.5.4.6. ⟨ Derivation of selection laws. 5.5.2.1 ⟩ This code is used in section 5.5.2.9. ⟨ Derived laws of propositional logic. 4.2.1.6 ⟩ This code is used in section 4.2.1.1. ⟨ Derived operators and laws of finite sets. 4.3.2.3, 4.3.2.4, 4.3.2.5, 4.3.2.6, 4.3.2.7, 4.3.2.8, 4.3.2.9 ⟩ This code is used in section 4.3.2. ⟨ Derived operators and laws of sequences. 4.3.3.3, 4.3.3.4, 4.3.3.5, 4.3.3.6, 4.3.3.7 ⟩ This code is used in section 4.3.3. ⟨ Derived operators of finite maps. 4.3.5.3, 4.3.5.4 ⟩ This code is used in section 4.3.5. ⟨ Derived operators of natural numbers. 4.3.1.3 ⟩ This code is used in section 4.3.1. ⟨ Derived operators of propositional logic. 4.2.1.4 ⟩ This code is used in section 4.2.1.1.
C. Crossreferences
⟨ Derived properties of tuples. 4.3.4.4, 4.3.4.5, 4.3.4.6 ⟩ This code is used in section 4.3.4. ⟨ Distributive monoid pair. 4.4.3.4 ⟩ This code is used in section 4.4.3.1. ⟨ Empty sort. 5.3.1.2 ⟩ This code is used in section 5.3.1.4. ⟨ Extensional equality of terms or functions. 4.4.1.1 ⟩ This code is used in section 4.1.3. ⟨ Finite maps. 4.3.5 ⟩ This code is used in section 4.1.2. ⟨ Finite sets. 4.3.2 ⟩ This code is used in section 4.1.2. ⟨ First block. 5.6.4.3 ⟩ This code is used in section 5.6.4.2. ⟨ First explicitation of the development of mss (not checked). 6.6.6 ⟩ ⟨ First five abstract operations. 5.5.6.11 ⟩ This code is used in section 5.5.6.10. ⟨ First five concrete operations. 5.5.6.7 ⟩ This code is used in section 5.5.6.6. ⟨ First five operations. 5.5.6.15 ⟩ This code is used in section 5.5.6.14. ⟨ First three abstract operations. 5.5.6.12 ⟩ This code is used in section 5.5.6.11. ⟨ First three concrete operations. 5.5.6.8 ⟩ This code is used in section 5.5.6.7. ⟨ First three operations. 5.5.6.16 ⟩ This code is used in section 5.5.6.15. ⟨ Fold rules for the overloaded equality. 4.4.1.5 ⟩ This code is used in section 4.4.1.1. ⟨ Frequently used contexts. 5.6.1 ⟩ This code is used in section 5.6.3.2. ⟨ Further axioms and laws of predicate logic. 4.2.1.8, 4.2.1.9, 4.2.1.10, 4.2.1.11, 4.2.1.12 ⟩ This code is used in section 4.2.1.7. ⟨ General evaluation tactic for the HLA case study. 5.5.4.4 ⟩ This code is used in section 5.5.4.6. ⟨ Generic algebraic properties. 4.4.3.2 ⟩ This code is used in section 4.4.3.1. ⟨ HLA abstract specification of the state and the invariant. 5.5.2 ⟩ This code is used in section 5.5.2.9. ⟨ HLA abstract specification. 5.5.2.9 ⟩ This code is used in section 5.5. ⟨ HLA concrete specification of the state and the invariant. 5.5.3 ⟩ This code is used in section 5.5.3.4. ⟨ HLA concrete specification. 5.5.3.4 ⟩ This code is used in section 5.5. ⟨ HLA global parameters. 5.5.1 ⟩ This code is used in section 5.5. ⟨ HLA retrieve function. 5.5.4.6 ⟩ This code is used in section 5.5. ⟨ HLA verification. 5.5.6 ⟩ This code is used in section 5.5. ⟨ Horner's rule. 6.6.2 ⟩ This code is used in section 6.6. ⟨ Human leukocyte antigen typing problem. 5.5 ⟩ This code is used in section 5.1. ⟨ Implicit development of mss (not checked). 6.6.5 ⟩ This code is used in section 6.6.4. ⟨ Imports needed by CalculationalBasics. 4.4.1 ⟩ This code is used in section 4.1.3. ⟨ Imports needed by VDMBasics. 4.3 ⟩ This code is used in section 4.1.2. ⟨ Induced Partial Ordering. 4.4.4 ⟩ This code is used in section 4.1.3. ⟨ Induction on version triples. 5.6.2.4 ⟩ This code is used in section 5.6.3.2. ⟨ Induction on versions. 5.3.2.3 ⟩ This code is used in section 5.3.2.4. ⟨ Induction principle. 5.6.2.1 ⟩ This code is used in section 5.6.2.4. ⟨ Inductive proof of the transitivity condition. 5.6.6.2 ⟩ This code is used in section 5.6.6.1. ⟨ Inductive step. 5.6.6.4 ⟩ This code is used in section 5.6.6.2.
⟨ Initial and final segments. 6.6.1 ⟩ This code is used in section 6.6. ⟨ Initialization. 5.5.6.17 ⟩ This code is used in section 5.5.6.16. ⟨ Interval windows. 6.7.3 ⟩ This code is used in section 6.7. ⟨ Invariants. 5.6.1.1 ⟩ This code is used in section 5.6.1. ⟨ Join lists. 6.2 ⟩ This code is used in section 6.1. ⟨ Law of the excluded middle. 4.2.1.5 ⟩ This code is used in section 4.2.1.1. ⟨ Left and right accumulations. 6.6.3 ⟩ This code is used in section 6.6. ⟨ Left and right reduce operators for join lists. 6.2.4 ⟩ This code is used in section 6.2. ⟨ Left and right reduce operators for non-empty join lists. 6.3.5 ⟩ This code is used in section 6.3. ⟨ Length of versions. 5.6.2.2 ⟩ This code is used in section 5.6.2.4. ⟨ Logical basis. 4.1.1 ⟩ This code is used in section 4.1. ⟨ Map and reduce operators for join lists. 6.2.3 ⟩ This code is used in section 6.2. ⟨ Map distribution (for non-empty join lists). 6.5.1 ⟩ This code is used in section 6.5. ⟨ Map distribution. 6.4.1 ⟩ This code is used in section 6.4. ⟨ Map operator for non-empty join lists. 6.3.3 ⟩ This code is used in section 6.3. ⟨ Minimax evaluation. 6.7.2 ⟩ This code is used in section 6.7. ⟨ Monoid. 4.4.3.3 ⟩ This code is used in section 4.4.3.1. ⟨ Natural numbers. 4.3.1 ⟩ This code is used in section 4.1.2. ⟨ Non-empty join lists. 6.3 ⟩ This code is used in section 6.1. ⟨ Operations. 5.6.1.2 ⟩ This code is used in section 5.6.1. ⟨ Operators of propositional logic. 4.2.1.2 ⟩ This code is used in section 4.2.1.1. ⟨ Overloading equality. 4.4.1.3 ⟩ This code is used in section 4.4.1.1. ⟨ Parametric equality of functions. 4.2.3 ⟩ This code is used in section 4.1.1. ⟨ Parametric equality of terms. 4.2.2 ⟩ This code is used in section 4.1.1. ⟨ Partially discharged proof obligations. 5.5.6.1 ⟩ This code is used in section 5.5.6. ⟨ Predicate Logic. 4.2.1.7 ⟩ This code is used in section 4.1.1. ⟨ Principle of extensionality. 4.4.1.2 ⟩ This code is used in section 4.4.1.1. ⟨ Promotion theorem (for non-empty join lists). 6.5.3 ⟩ This code is used in section 6.5. ⟨ Promotion theorems. 6.4.3, 6.4.4, 6.4.5 ⟩ This code is used in section 6.4. ⟨ Proof assembly. 5.6.7 ⟩ This code is used in section 5.6.3.4. ⟨ Proof obligations for operations. 5.3.1.3 ⟩ This code is used in section 5.3.1.4. ⟨ Proof obligations for the abstract operations. 5.5.6.2 ⟩ This code is used in section 5.5.6.1. ⟨ Proof obligations for the concrete operations. 5.5.6.3 ⟩ This code is used in section 5.5.6.1. ⟨ Proof obligations for the operation reifications. 5.5.6.4 ⟩ This code is used in section 5.5.6.1. ⟨ Proof obligations for the retrieve function. 5.3.3.4 ⟩ This code is used in section 5.3.3.6. ⟨ Proof obligations for versions. 5.3.2.2 ⟩ This code is used in section 5.3.2.4. ⟨ Proof of op1 ⊑ op3. 5.6.5.2 ⟩ This code is used in section 5.6.5.1.
⟨ Proof of v1 ⊑retr13 v3. 5.6.3.4 ⟩ This code is used in section 5.6.3.3. ⟨ Proof of version_reif(op1 @ v1, op3 @ v3, retr13). 5.6.6.6 ⟩ This code is used in section 5.6.6.5. ⟨ Proof of trans_version_reifctn(op1 @ v1, op2 @ v2, op3 @ v3). 5.6.6.5 ⟩ This code is used in section 5.6.6.4. ⟨ Proof of completeness condition. 5.6.4.2 ⟩ This code is used in section 5.6.4.1. ⟨ Proof of inductive base (not checked). 5.5.8.2 ⟩ ⟨ Proof of inductive base. 5.5.8.3 ⟩ This code is used in section 5.5.8. ⟨ Proof of inductive step. 5.5.8.4 ⟩ This code is used in section 5.5.8. ⟨ Proof of the first retrieve lemma. 5.5.8 ⟩ This code is used in section 5.5.4.5. ⟨ Proof of the second retrieve lemma. 5.5.5 ⟩ This code is used in section 5.5.4.5. ⟨ Proof of the DetPosNeg operation reification. 5.5.7 ⟩ This code is used in section 5.5.6.4. ⟨ Proof of the Fail equation. 5.5.7.6 ⟩ This code is used in section 5.5.7.2. ⟨ Proof of the Neg equation. 5.5.7.5 ⟩ This code is used in section 5.5.7.2. ⟨ Proof of the Pos equation. 5.5.7.4 ⟩ This code is used in section 5.5.7.2. ⟨ Proof of the Resexp equation. 5.5.7.7 ⟩ This code is used in section 5.5.7.2. ⟨ Proof of the result case. 5.6.5.3 ⟩ This code is used in section 5.6.5.2. ⟨ Proof of transitivity. 5.6.3.3 ⟩ This code is used in section 5.6.3.2. ⟨ Proof of DetPosNeg(i, s_i).post(o, s_o). 5.5.7.2 ⟩ This code is used in section 5.5.7. ⟨ Properties of interval windows. 6.7.5, 6.7.6, 6.7.7, 6.7.8, 6.7.9, 6.7.10, 6.7.11 ⟩ This code is used in section 6.7.3. ⟨ Properties of parametric equality of functions. 4.2.3.1 ⟩ This code is used in section 4.2.3. ⟨ Properties of parametric equality of terms. 4.2.2.1, 4.2.2.2, 4.2.2.3, 4.2.2.4, 4.2.2.5, 4.2.2.6 ⟩ This code is used in section 4.2.2. ⟨ Propositional Logic. 4.2.1.1 ⟩ This code is used in section 4.1.1. ⟨ Recursive equations for αβ-pruning. 6.7.23 ⟩ This code is used in section 6.7.24. ⟨ Reduce operator for non-empty join lists. 6.3.4 ⟩ This code is used in section 6.3. ⟨ Reification condition. 5.3.3.2 ⟩ This code is used in section 5.3.3.1. ⟨ Reification of operations. 5.3.3.1 ⟩ This code is used in section 5.3.3.6. ⟨ Reification of versions. 5.3.3.3 ⟩ This code is used in section 5.3.3.6. ⟨ Reification verification. 5.5.6.14 ⟩ This code is used in section 5.5.6.5. ⟨ Reified versions are of equal length. 5.6.2.3 ⟩ This code is used in section 5.6.2.4. ⟨ Retrieve functions. 5.6.1.4 ⟩ This code is used in section 5.6.1. ⟨ Second block. 5.6.4.4 ⟩ This code is used in section 5.6.4.3. ⟨ Second explicitation of the development of mss. 6.6.7 ⟩ ⟨ Segment problems. 6.6 ⟩ This code is used in section 6.1. ⟨ Selection from a tuple. 4.3.4.3 ⟩ This code is used in section 4.3.4. ⟨ Sequences. 4.3.3 ⟩ This code is used in section 4.1.2. ⟨ Signature of VDM operations. 5.3.1.1 ⟩ This code is used in section 5.3.1.4. ⟨ Some Bits of Algebra. 4.4.3.1 ⟩ This code is used in section 4.1.3. ⟨ Some laws of partial orderings. 4.4.4.1, 4.4.4.2 ⟩ This code is used in section 4.4.4. ⟨ Some theory of join lists. 6.4 ⟩ This code is used in section 6.1. ⟨ Some theory of non-empty join lists. 6.5 ⟩ This code is used in section 6.1.
⟨ Specification of DetPosNeg. 5.5.2.6 ⟩ This code is used in section 5.5.2.9. ⟨ Specification of cDetPosNeg. 5.5.3.1 ⟩ This code is used in section 5.5.3.4. ⟨ Tactic for evaluating the retrieve function. 5.5.4.3 ⟩ This code is used in section 5.5.4.6. ⟨ Terms involving functions. 4.4.2.1 ⟩ This code is used in section 4.1.3. ⟨ The "maximum segment sum" problem. 6.6.4 ⟩ This code is used in section 6.6. ⟨ Third explicitation of the development of mss. 6.6.8 ⟩ ⟨ Transitivity of operator reification. 5.6.5.1 ⟩ This code is used in section 5.6.3.4. ⟨ Transitivity of reification. 5.6.3.2 ⟩ This code is used in section 5.1. ⟨ Transitivity of the reification condition. 5.6.6.1 ⟩ This code is used in section 5.6.3.4. ⟨ Tree evaluation problems. 6.7 ⟩ This code is used in section 6.1. ⟨ Tuples. 4.3.4 ⟩ This code is used in section 4.1.2. ⟨ Two auxiliary specifications. 5.5.4 ⟩ This code is used in section 5.5.4.6. ⟨ Two retrieve lemmas. 5.5.4.5 ⟩ This code is used in section 5.5.4.6. ⟨ Unfold rules for the overloaded equality. 4.4.1.4 ⟩ This code is used in section 4.4.1.1. ⟨ VDM operations. 5.3.1.4 ⟩ This code is used in section 5.3. ⟨ VDM reification. 5.3 ⟩ This code is used in section 5.1. ⟨ VDM tactics. 4.3.6 ⟩ This code is used in section 4.1.2. ⟨ VDM version reification. 5.3.3.6 ⟩ This code is used in section 5.3. ⟨ VDM versions. 5.3.2.4 ⟩ This code is used in section 5.3. ⟨ Verification of the retrieve condition. 5.6.4.1 ⟩ This code is used in section 5.6.3.4. ⟨ Versions and operations. 5.6.1.5 ⟩ This code is used in section 5.6.1. ⟨ Versions and retrieve functions. 5.6.1.6 ⟩ This code is used in section 5.6.1. ⟨ Versions. 5.6.1.3 ⟩ This code is used in section 5.6.1. ⟨ Fail selection. 5.5.2.5 ⟩ This code is used in section 5.5.2.1. ⟨ Neg selection. 5.5.2.4 ⟩ This code is used in section 5.5.2.1. ⟨ Pos selection. 5.5.2.3 ⟩ This code is used in section 5.5.2.1. ⟨ Resexp selection. 5.5.2.2 ⟩ This code is used in section 5.5.2.1.

C.4 Index of Variables Defined in the Case Studies

∗: 6.2.3, 6.3.3, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.5.1, 6.5.2, 6.5.3, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.11. +: 4.3.1.3, 4.3.2.4, 4.3.3.5, 5.6.2.2, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8. ⧺: 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.4.4, 6.6.1, 6.6.5, 6.6.6, 6.6.7, 6.6.8. −: 4.3.1.3, 6.7.2, 6.7.9, 6.7.15, 6.7.17, 6.7.18, 6.7.19, 6.7.20, 6.7.21, 6.7.22, 6.7.23. /: 6.2.3, 6.3.4, 6.4.2, 6.4.3, 6.4.4, 6.5.2, 6.5.3, 6.6.1, 6.6.2, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.11. ∧: 4.2.1.2, 4.2.1.3, 4.2.1.4, 4.2.1.10, 4.3.2.9, 4.3.3.2, 4.3.3.7, 4.3.4.2, 4.3.4.5, 5.3.3.4, 5.4.2, 5.5.2.6,
<: 4.3.1.3. >: 4.3.1.3. ∨: 4.2.1.2, 4.2.1.3, 4.2.1.5, 4.2.1.10, 4.3.1.3, 4.3.2.5, 4.3.2.9, 5.5.1, 5.5.8.5, 5.5.8.6, 5.5.8.7. A: 5.5.1, 5.5.2, 5.5.2.1, 5.5.3, 5.5.4, 5.5.4.5, 5.5.5.1, 5.5.8. ⊥: 6.6.4, 6.6.8, 6.7.2, 6.7.5, 6.7.14, 6.7.23. def∈: 4.3.2.5, 5.5.8.3, 5.5.8.6, 5.5.8.7. c-<: 4.3.3.4. ⊑: 4.4.4, 4.4.4.1, 4.4.4.2, 6.7.7, 6.7.8, 6.7.9, 6.7.15, 6.7.19, 6.7.20, 6.7.21, 6.7.22, 6.7.23, 6.7.24. l: 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4, 4.4.4.2. ⦇(·), (·)⦈: 6.4.2, 6.5.2, 6.7.2, 6.7.11, 6.7.15, 6.7.17, 6.7.20. -: 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4. |: 4.3.4, 4.3.4.1, 4.3.4.3, 4.3.4.5, 4.3.4.6, 4.3.5.1, 5.5.2, 5.5.3. @: 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.3.3, 5.5.2.8, 5.5.3.3, 5.6.2.1, 5.6.2.2, 5.6.6.5, 5.6.6.6. def~: 4.3.3.4. def,/: 5.3.2.2, 5.5.6.6, 5.5.6.7, 5.5.6.8, 5.5.6.9, 5.5.6.10, 5.5.6.11, 5.5.6.12, 5.5.6.13. defE>s,,: 4.3.3.6, 4.3.6, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7.
def,~e,: 4.3.3.6, 4.3.6, 5.5.8.3, 5.5.8.6. def~: 4.3.2.3, 4.3.6. def,: 4.3.2.7, 4.3.6. defu: 4.3.2.8. : 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.6.1. T: 4.3.4, 4.3.4.1, 4.3.4.2, 4.3.4.3, 4.3.4.5, 4.3.4.6, 4.3.5.1, 5.5.2, 5.5.3. ⊆: 4.3.2.8. ⊤: 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4, 4.4.4.1, 4.4.4.2. ∪: 4.3.2.8, 4.3.2.9. •: 4.3.1.3. −: 5.5.1, 5.5.2.6, 5.5.3.1, 5.5.7.5. ∘: 4.4.2.1, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.5.1, 6.5.2, 6.5.3, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.2, 6.7.3, 6.7.9, 6.7.11, 6.7.13, 6.7.14, 6.7.15, 6.7.16, 6.7.17, 6.7.19, 6.7.20, 6.7.21, 6.7.22, 6.7.23. (·) ⊑op(·) (·): 5.3.3.1, 5.3.3.3, 5.5.6.4, 5.6.5.1, 5.6.5.2, 5.6.6.6. +: 5.5.1, 5.5.2.6, 5.5.3.1, 5.5.7.4. (·)^(·): 4.3.1.3. (·) ⊑(·) (·): 5.3.3.5, 5.5.6, 5.6, 5.6.3.3, 5.6.3.4. ⊤: 6.7.2, 6.7.5, 6.7.14, 6.7.23. version_reif((·), (·), (·)): 5.3.3.3, 5.3.3.5, 5.5.6.5, 5.5.6.17, 5.6.2.3, 5.6.6.1, 5.6.6.3, 5.6.6.5, 5.6.6.6, 5.6.7. ↓: 6.7.2, 6.7.3, 6.7.5, 6.7.6, 6.7.7, 6.7.9, 6.7.10. ↑: 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.2, 6.7.3, 6.7.5, 6.7.6, 6.7.7, 6.7.8, 6.7.9, 6.7.10, 6.7.11, 6.7.15, 6.7.17, 6.7.18, 6.7.19, 6.7.20, 6.7.21, 6.7.22. : 4.2.1.4, 4.2.1.5, 4.2.1.10, 4.2.2.2, 4.3.1.2, 4.3.1.3, 4.3.2.3, 4.3.2.4, 4.3.2.5, 4.3.3.2, 4.3.3.6, 4.3.5.2, 4.3.5.3, 4.3.5.4, 5.5.8.4. absorptive: 4.4.3.2, 4.4.3.5. add_def: 4.3.1.3. Algebra: 4.4.3.1. apply: 4.3.5.3, 4.3.5.4. apply_def: 4.3.5.3. associative: 4.4.3.2, 4.4.3.3, 4.4.3.5, 6.3.2, 6.3.4, 6.5.2, 6.5.3. be: 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4, 4.4.4.1, 4.4.4.2, 6.7.2, 6.7.3, 6.7.5, 6.7.6, 6.7.7, 6.7.9, 6.7.10, 6.7.11, 6.7.20. BasicTheories: 4.1, 5.1, 6.1. bool_ac: 4.4.3.8, 6.7.10. bool_neg_simp: 4.4.3.7, 6.7.9. bool_simp: 4.4.3.6, 6.7.4, 6.7.5. boolean_algebra: 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4, 6.7.2. CalculationalBasics: 4.1.3, 6.1. card: 4.3.2.4, 4.3.3.7, 5.4.2. card_def: 4.3.2.4, 4.3.6. cCell: 5.5.3, 5.5.3.1, 5.5.4, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. cDetFail: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cDetFail_valid: 5.5.6.3, 5.5.6.8. cDetPosNeg_valid: 5.5.6.3, 5.5.6.8. cDetSer: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cDetSer_valid: 5.5.6.3, 5.5.6.6. cElimNeg: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cElimNeg_valid: 5.5.6.3, 5.5.6.7. cElimPos: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cElimPos_valid: 5.5.6.3, 5.5.6.7.
cElimPos2: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cElimPos2_valid: 5.5.6.3, 5.5.6.6. Cell: 5.5.2, 5.5.2.6, 5.5.4.5, 5.5.5, 5.5.7.4, 5.5.7.5. cExper: 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.5, 5.5.5, 5.5.8, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. cFail: 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.1, 5.5.7.6. cInit: 5.5.3.2, 5.5.3.3, 5.5.6.3, 5.5.6.4. cInit_valid: 5.5.6.3, 5.5.6.9. cInvHLA: 5.5.3, 5.5.3.3, 5.5.6.1, 5.5.6.3, 5.5.6.4, 5.5.6.5, 5.5.6.9, 5.5.6.17, 5.5.7. cNeg: 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.1. commutative: 4.4.3.2, 4.4.3.5. conj: 4.2.1.3, 5.5.5, 5.5.7.2, 5.5.7.3, 5.6.4.3, 5.6.4.4. conj_props: 4.2.1.10, 4.2.1.12, 5.5.8.7. cPos: 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.1, 5.5.7.4. cred: 5.5.1. cRes: 5.5.3, 5.5.3.1, 5.5.4, 5.5.8.3, 5.5.8.4, 5.5.8.5, 5.5.8.6, 5.5.8.7. cResexp: 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.1, 5.5.7.4. cStateHLA: 5.5.3, 5.5.3.1, 5.5.3.2, 5.5.4.1, 5.5.7. cVerHLA: 5.5.3.3, 5.5.6, 5.5.6.5. def_inits: 6.6.1. def_length: 5.6.2.2. def_tails: 6.6.1. def_version_reif: 5.3.3.3, 5.5.6.14, 5.5.6.15, 5.5.6.16, 5.5.6.17, 5.6.6.3, 5.6.6.6. DetFail: 5.5.2.7, 5.5.2.8, 5.5.3.2, 5.5.6.2, 5.5.6.4. DetFail_reif: 5.5.6.4, 5.5.6.16. DetFail_valid: 5.5.6.2, 5.5.6.12. DetPosNeg: 5.5.2.6, 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4, 5.5.7, 5.5.7.1, 5.5.7.2, 5.5.7.6. DetPosNeg_reif: 5.5.6.4, 5.5.6.16, 5.5.7. DetPosNeg_valid: 5.5.6.2, 5.5.6.12. DetSer: 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4. DetSer_reif: 5.5.6.4, 5.5.6.14. DetSer_valid: 5.5.6.2, 5.5.6.10. development: 6.6.4. disj: 4.2.1.3, 5.5.8.4. disj_props: 4.2.1.10, 4.2.1.12, 5.5.8.7. distrib_monoids: 4.4.3.4, 6.6.2, 6.6.4. distributive: 4.4.3.2, 4.4.3.4, 4.4.3.5. dom: 4.3.5.1, 4.3.5.2, 4.3.5.3, 4.3.5.4. dom_restriction: 4.3.5.4. elems: 4.3.3.7, 5.4.2, 5.5.4, 5.5.8.5, 5.5.8.6, 5.5.8.7. elems_def: 4.3.3.7. ElimNeg: 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4. ElimNeg_reif: 5.5.6.4, 5.5.6.15. ElimNeg_valid: 5.5.6.2, 5.5.6.11. ElimPos: 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4. ElimPos_reif: 5.5.6.4, 5.5.6.15. ElimPos_valid: 5.5.6.2, 5.5.6.11. ElimPos2: 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4. ElimPos2_reif: 5.5.6.4, 5.5.6.14. ElimPos2_valid: 5.5.6.2, 5.5.6.10. eq_compose: 4.2.2.6, 5.6.4.4. equal_length: 5.6.2.3, 5.6.6.2. equiv_props: 4.2.1.10, 4.2.1.12, 5.5.5.1, 5.5.8, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. eval: 6.7.2, 6.7.13, 6.7.14, 6.7.16, 6.7.17, 6.7.20, 6.7.21, 6.7.22, 6.7.23. eval_def: 6.7.2, 6.7.16, 6.7.17. ex: 4.2.1.7, 5.6.4.2, 5.6.4.3, 5.6.4.4. exch_meet_join: 4.4.4.2, 6.7.7, 6.7.9. excluded_middle: 4.2.1.5, 5.5.8.4. exp_def: 4.3.1.3. Exper: 5.5.2, 5.5.2.1, 5.5.2.6, 5.5.2.7, 5.5.3.2, 5.5.4, 5.5.4.5, 5.5.5, 5.5.5.1, 5.5.7.4, 5.5.7.5. ExtensionalEquality: 4.4.1.1.
extensionality: 4.4.1.2, 4.4.2.1, 6.7.5, 6.7.9. Fail: 5.5.2, 5.5.2.1, 5.5.2.5, 5.5.2.6, 5.5.7.6, 5.5.7.7. false: 4.2.1.2, 4.2.1.3, 4.2.1.4, 4.2.1.6, 4.2.1.10, 4.2.1.12, 5.5.8.3, 5.5.8.7. false_elim: 4.2.1.3. feq: 4.4.1.1, 4.4.1.2, 4.4.1.3. ffold: 4.4.1.1, 4.4.1.5. FiniteMaps: 4.3.5. fold: 4.2.2.5, 4.2.3.1, 4.4.1.1, 4.4.1.5, 4.4.3.8, 4.4.4.1, 6.7.7, 6.7.9, 6.7.10, 6.7.19, 6.7.21, 6.7.22. fork: 6.7.1, 6.7.2, 6.7.15, 6.7.17, 6.7.20, 6.7.23. frefl: 4.4.1.1, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.11, 6.7.14, 6.7.16, 6.7.17. FrequentContexts: 5.6.1, 5.6.3.1. frsubst: 4.4.1.1, 4.4.1.3. fsubst: 4.4.1.1, 4.4.1.3. fsym: 4.4.1.1, 4.4.1.3, 4.4.1.5. ftfold: 4.4.1.5. ftrans: 4.4.1.1, 4.4.1.3. ftunfold: 4.4.1.4, 4.4.1.5. FunctionalTerms: 4.4.2.1. funfold: 4.4.1.1, 4.4.1.4, 6.6.6, 6.6.7, 6.6.8. gen_distributive: 4.4.3.2, 4.4.3.5. hd: 4.3.3.3. hd_def: 4.3.3.3. HLACaseStudy: 5.5. HLAEvaluation: 5.5.4.4, 5.5.8.2, 5.5.8.5. horner_rule: 6.6.2, 6.6.5, 6.6.6, 6.6.7, 6.6.8. hyp_Fail: 5.5.7.3, 5.5.7.6. hyp_Neg: 5.5.7.3, 5.5.7.5. hyp_Pos: 5.5.7.3, 5.5.7.4. hyp_Resexp: 5.5.7.3, 5.5.7.7. id: 4.4.2, 4.4.2.1, 6.7.2, 6.7.5, 6.7.14, 6.7.16. imp: 4.2.1.3, 4.2.1.6, 5.6.6.3, 5.6.6.5, 5.6.6.6, 5.6.7. induction: 4.3.1.2. Init: 5.5.2.7, 5.5.2.8, 5.5.6.2, 5.5.6.4.
Init_reif: 5.5.6.4, 5.5.6.17. Init_valid: 5.5.6.2, 5.5.6.13. inits: 6.6.1, 6.6.3, 6.6.5, 6.6.6, 6.6.7, 6.6.8. injective: 4.4.2.1, 6.2.2, 6.3.2. InvHLA: 5.5.2, 5.5.2.8, 5.5.6.1, 5.5.6.2, 5.5.6.5, 5.5.6.13, 5.5.6.17. Invariants: 5.6.1.1, 5.6.1.3, 5.6.2.1. join_induction: 6.2.2, 6.3.2. len: 4.3.3.5, 4.3.3.7, 5.4.2. len_def: 4.3.3.5. leq_def: 4.3.1.3. list: 4.3.4, 6.2, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.6.1, 6.6.4, 6.6.7, 6.6.8. LogicalBasis: 4.1.1, 4.3, 4.4.1. LogicalSimplification: 4.2.1.12, 4.3.6. lreduce_def: 6.2.4, 6.3.5. map: 4.3.5, 4.3.5.1, 4.3.5.2, 4.3.5.3, 4.3.5.4. map_commute: 4.3.5.2. map_def: 6.2.3, 6.3.3, 6.4.4. map_distribution: 6.4.1, 6.5.1, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.7.11. map_dom: 4.3.5.2, 4.3.6. map_induction: 4.3.5.2. map_overwrite: 4.3.5.2. map_promotion: 6.4.4, 6.4.5, 6.6.6, 6.6.7, 6.6.8. max_plus_props: 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8. max_props: 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8. mk: 4.3.4.1, 4.3.4.2, 4.3.4.3, 4.3.4.5, 5.5.2.1, 5.5.4.1. mk_injective: 4.3.4.2, 4.3.6. mkbin_injective: 4.3.4.5, 5.5.8.6, 5.5.8.7. mkmt: 4.3.4.1, 4.3.4.5, 5.5.2.1, 5.5.4.1. monoid: 4.4.3.3, 4.4.3.4, 6.2.2, 6.2.3, 6.4.2, 6.4.3, 6.4.4, 6.4.5. mss: 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8. mt: 4.3.4, 4.3.4.1, 4.3.4.5, 4.3.4.6, 4.3.5.1, 5.5.2, 5.5.3. mult_def: 4.3.1.3.
nat: 4.2.1.7, 4.3.1, 4.3.1.1, 4.3.1.2, 4.3.1.3, 4.3.2.4, 4.3.3.5, 4.3.4, 5.6.2.2. NaturalNumbers: 4.3.1. ne_list: 6.3, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.7.1. Neg: 5.5.2, 5.5.2.1, 5.5.2.4, 5.5.2.6, 5.5.7.5. no_dupl: 4.3.3.7, 5.4.2, 5.5.3. no_other_results: 5.5.1. observ: 4.3.2.6, 5.5.5.1. OP: 5.3.1.1, 5.3.1.3, 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.3.1, 5.3.3.3, 5.5.2.6, 5.5.2.7, 5.5.3.1, 5.5.3.2, 5.6.1.2, 5.6.2.2. op_refine: 5.6.5.1, 5.6.6.6. op_reif12: 5.6.5.1, 5.6.5.2, 5.6.5.3. op_reif23: 5.6.5.1, 5.6.5.2, 5.6.5.3. op_valid: 5.3.1.3, 5.3.2.2, 5.5.6.2, 5.5.6.3. Operations: 5.6.1.2, 5.6.1.5, 5.6.5.1. PartialOrdering: 4.4.4, 6.7.3. peano3: 4.3.1.2. peano4: 4.3.1.2, 4.3.6. pid: 4.4.2.1, 6.7.14, 6.7.16. pjoin: 6.2.2, 6.3.2, 6.4.4. Pos: 5.5.2, 5.5.2.1, 5.5.2.3, 5.5.2.6, 5.5.7.4, 5.5.7.5. pred: 4.3.1.3. pred_def: 4.3.1.3. promotion: 6.4.3, 6.4.4, 6.5.3, 6.7.11. promotion_tac: 6.4.5, 6.6.5. prop: 4.1.1, 4.2.1.2, 4.2.1.3, 4.2.1.4, 4.2.1.5, 4.2.1.7, 4.2.1.8, 4.2.1.9, 4.2.1.10, 4.2.1.11, 4.2.2, 4.2.2.5, 4.2.3, 4.2.3.1, 4.3.1.2, 4.3.1.3, 4.3.2.2, 4.3.2.3, 4.3.2.5, 4.3.2.9, 4.3.3.2, 4.3.3.6, 4.3.5.2, 5.3.1.1, 5.3.1.3, 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.3.1, 5.3.3.3, 5.3.3.4, 5.3.3.5, 5.5.1, 5.6.1.1, 5.6.2.1, 5.6.2.2, 5.6.2.3, 6.2.2, 6.3.2. prop_id: 4.2.1.4. PropositionalLogic: 4.2.1.1, 4.2.1.7. props_in: 4.3.2.9, 5.5.5, 5.5.5.1.
prsubst: 4.2.1.9, 5.5.5.1, 5.5.8.3, 5.5.8.6, 5.5.8.7. psingleton: 6.2.2, 6.3.2. psubst: 4.2.1.8, 4.2.1.12, 4.3.6, 5.5.5.1, 5.5.8.5, 5.5.8.6. reduce_def: 6.2.3, 6.3.4, 6.4.4. reduce_promotion: 6.4.4, 6.4.5, 6.6.6, 6.6.7, 6.6.8. refine: 5.6.3.1, 5.6.3.2, 5.6.3.3. refl: 4.2.1.10, 4.2.1.12, 4.2.2, 4.2.2.1, 4.2.3, 4.4.1.1, 5.5.2.1, 5.5.2.2, 5.5.2.3, 5.5.2.4, 5.5.2.5, 5.5.5.1, 5.5.7.6, 5.5.7.7, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. refl_smth: 4.4.4.1, 6.7.8, 6.7.10. reif12: 5.6.3.3, 5.6.4.1, 5.6.4.2, 5.6.4.3, 5.6.5.2, 5.6.6.2, 5.6.7, 5.7. reif23: 5.6.3.3, 5.6.4.1, 5.6.4.2, 5.6.6.2, 5.6.7. Res: 5.5.2, 5.5.2.1, 5.5.2.6, 5.5.4.5, 5.5.5, 5.5.5.1, 5.5.7.4, 5.5.7.5. Resexp: 5.5.2, 5.5.2.1, 5.5.2.2, 5.5.2.6, 5.5.7.4, 5.5.7.5, 5.5.7.7. Result: 5.5.1, 5.5.2, 5.5.2.1, 5.5.3, 5.5.3.1, 5.5.4.5, 5.5.5, 5.5.8. RetrHLA: 5.5.4.1, 5.5.6, 5.5.6.1, 5.5.6.4, 5.5.6.5, 5.5.6.17, 5.5.7. Retr1: 5.5.4, 5.5.4.1, 5.5.4.5, 5.5.5, 5.5.5.1, 5.5.7.4, 5.5.8, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. Retr1_def: 5.5.4, 5.5.4.3, 5.5.8.3, 5.5.8.6, 5.5.8.7. retr12: 5.6, 5.6.1.4, 5.6.3.3, 5.6.3.4, 5.6.4.1, 5.6.4.3, 5.6.5.1, 5.6.5.2, 5.6.5.3, 5.6.6.1, 5.6.6.3, 5.6.6.5. retr13: 5.6.1.4, 5.6.3.3, 5.6.3.4, 5.6.4.1, 5.6.4.2, 5.6.4.3, 5.6.4.4, 5.6.5.2, 5.6.6.1, 5.6.6.3, 5.6.6.6, 5.6.7. retr23: 5.6, 5.6.1.4, 5.6.3.3, 5.6.3.4, 5.6.4.2, 5.6.5.1, 5.6.5.2, 5.6.5.3, 5.6.6.1, 5.6.6.3, 5.6.6.5. retr_lemma1: 5.5.4.5, 5.5.5.1. retr_lemma2: 5.5.4.5, 5.5.7.4, 5.5.7.5. RetrEvaluation: 5.5.4.3, 5.5.4.4.
retrieval13: 5.6.4.1, 5.6.7. retrieve_lemma: 5.5.6.1, 5.5.6.5. RetrieveFunctions: 5.6.1.4, 5.6.1.6. rng: 4.3.5.4. rreduce_def: 6.2.4, 6.3.5. rsubst: 4.2.2.5, 4.2.2.6, 4.2.3.1, 4.3.6, 4.4.1.1, 4.4.1.3, 4.4.1.5, 5.5.4.3, 5.5.7.4, 5.5.7.5, 5.5.7.6, 5.5.7.7, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. s: 4.4.4, 6.6.4, 6.7.2. segs: 6.6.1, 6.6.4, 6.6.5. sel: 4.3.4.3, 4.3.4.6, 5.5.2, 5.5.2.1, 5.5.2.2, 5.5.2.3, 5.5.2.4, 5.5.2.5, 5.5.3. SelHLA: 5.5.2.1, 5.5.7.4, 5.5.7.5, 5.5.7.6, 5.5.7.7. selRes: 5.5.2.1, 5.5.5.1. sel_def: 4.3.4.3, 4.3.4.4. seq: 4.3.3, 4.3.3.1, 4.3.3.2, 4.3.3.3, 4.3.3.4, 4.3.3.5, 4.3.3.6, 4.3.3.7, 5.4.2, 5.5.3, 5.5.3.1, 5.5.4, 5.5.4.5, 5.5.5, 5.5.8. seq_free1: 4.3.3.2. seq_free2: 4.3.3.2, 4.3.6. seq_induction: 4.3.3.2, 5.5.8. Sequences: 4.3.3, 5.3.2.2. set: 4.3.2, 4.3.2.1, 4.3.2.2, 4.3.2.3, 4.3.2.4, 4.3.2.5, 4.3.2.6, 4.3.2.7, 4.3.2.8, 4.3.2.9, 4.3.3.7, 4.3.4.6, 4.3.5.1, 4.3.5.4, 5.5.2, 5.5.2.1, 5.5.2.7, 5.5.3.2, 5.5.4, 5.5.4.5, 5.5.5.1, 5.5.8. set_absorp: 4.3.2.2. set_commut: 4.3.2.2. set_induction: 4.3.2.2. sort: 4.1.1, 4.2.1.7, 4.2.1.11, 4.2.2, 4.2.2.1, 4.2.2.2, 4.2.2.3, 4.2.2.4, 4.2.2.5, 4.2.2.6, 4.2.3, 4.2.3.1, 4.3.1, 4.3.2, 4.3.2.1, 4.3.2.2, 4.3.2.3, 4.3.2.4, 4.3.2.5, 4.3.2.6, 4.3.2.7, 4.3.2.8, 4.3.2.9, 4.3.3, 4.3.3.1, 4.3.3.2, 4.3.3.3, 4.3.3.4, 4.3.3.5, 4.3.3.6, 4.3.3.7, 4.3.4, 4.3.4.1, 4.3.4.2, 4.3.4.3, 4.3.4.5, 4.3.4.6, 4.3.5, 4.3.5.1, 4.3.5.2,
4.3.5.3, 4.3.5.4, 4.4.1.2, 4.4.1.4, 4.4.1.5, 4.4.2.1, 4.4.3.2, 4.4.3.3, 4.4.3.4, 4.4.3.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4, 5.3.1.1, 5.3.1.2, 5.3.1.3, 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.3.1, 5.3.3.3, 5.3.3.4, 5.3.3.5, 5.4.2, 5.5.1, 5.6.1.1, 5.6.1.2, 5.6.1.4, 5.6.1.5, 5.6.1.6, 5.6.2.2, 5.6.2.3, 6.2, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.3, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.5.1, 6.5.2, 6.5.3, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.7.1, 6.7.2, 6.7.11. sortlist: 4.3.4, 4.3.4.1, 4.3.4.2, 4.3.4.3. specialization: 6.4.2, 6.5.2, 6.7.20. StateHLA: 5.5.2, 5.5.2.6, 5.5.2.7. state1: 5.6.1.1, 5.6.1.2, 5.6.1.4, 5.6.1.5, 5.6.1.6, 5.6.4.1, 5.6.4.2, 5.6.4.3, 5.6.4.4, 5.6.5.2, 5.6.5.3. state2: 5.6.1.1, 5.6.1.2, 5.6.1.4, 5.6.1.5, 5.6.1.6, 5.6.4.2. state3: 5.6.1.1, 5.6.1.2, 5.6.1.4, 5.6.1.5, 5.6.1.6, 5.6.4.2. sub_def: 4.3.1.3. subst: 4.2.2, 4.2.2.1, 4.2.2.3, 4.2.2.4, 4.2.2.5, 4.2.3, 4.4.1.1, 4.4.1.3, 4.4.1.4, 4.4.1.5. succ: 4.3.1.1, 4.3.1.2, 4.3.1.3. sym: 4.2.1.10, 4.2.2.1, 4.2.2.3, 4.2.2.5, 4.2.3.1, 4.4.1.1, 4.4.1.3, 5.5.5.1, 5.5.8.5, 5.5.8.6, 6.7.9, 6.7.24. symneg: 4.2.2.2, 5.5.8.7. tails: 6.6.1, 6.6.2, 6.6.3, 6.6.5, 6.6.6, 6.6.7, 6.6.8. teq: 4.4.1.1, 4.4.1.2, 4.4.1.3. tffold: 4.4.1.5. tfold: 4.4.1.1, 4.4.1.5. tfunfold: 4.4.1.4, 4.4.1.5. tip: 6.7.1, 6.7.2, 6.7.15, 6.7.16, 6.7.23. tl: 4.3.3.3. tl_def: 4.3.3.3. trans: 4.2.1.10, 4.2.2.3, 4.2.3.1, 4.4.1.1, 4.4.1.3. trans_base: 5.6.6.2.
trans_smth: 4.4.4.1, 6.7.21, 6.7.24. trans_step: 5.6.6.2. trans_version_reifctn_proof: 5.6.6.2, 5.6.7. TransitivityOfReification: 5.6.3.2. tree: 6.7.1, 6.7.2, 6.7.18, 6.7.20,
6.7.22, 6.7.23. trefl: 4.4.1.1, 4.4.2.1, 4.4.4.1, 6.7.5, 6.7.6, 6.7.7, 6.7.9, 6.7.10, 6.7.19, 6.7.21, 6.7.22. triv: 4.4.4.2. trsubst: 4.4.1.1, 4.4.1.3. true: 4.2.1.4, 4.2.1.6, 4.2.1.10, 4.2.1.12, 5.5.2, 5.5.2.6, 5.5.3.1, 5.5.5.1, 5.5.7, 5.5.8.2, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. true_proof: 4.2.1.6, 5.5.5.1, 5.5.7, 5.5.8.2, 5.5.8.3, 5.5.8.5, 5.5.8.6, 5.5.8.7. tsubst: 4.4.1.1, 4.4.1.3. tsym: 4.4.1.1, 4.4.1.3, 4.4.1.5. ttrans: 4.4.1.1, 4.4.1.3. tunfold: 4.4.1.1, 4.4.1.4. tuple_prop: 4.3.4.6, 5.5.5.1. tuple_unf_tactic: 4.3.4.4, 5.5.2.1, 5.5.2.2, 5.5.2.3, 5.5.2.4, 5.5.2.5. Tuples: 4.3.4. unfold: 4.2.2.4, 4.2.2.5, 4.2.3.1, 4.3.4.4, 4.4.1.1, 4.4.1.4, 4.4.1.5, 4.4.3.6, 4.4.3.7, 4.4.3.8, 4.4.4.1, 6.4.5, 6.6.5, 6.6.7, 6.7.6, 6.7.7, 6.7.10, 6.7.11, 6.7.14, 6.7.16, 6.7.17, 6.7.19, 6.7.20, 6.7.21, 6.7.22. unit: 4.4.3.2, 4.4.3.3, 4.4.3.5, 4.4.3.6, 4.4.4.1. univ: 4.2.1.7, 5.5.5.1. valid_retrieve: 5.3.3.4, 5.3.3.5, 5.5.6.1, 5.5.6.5, 5.6.4.1, 5.6.7. VDMBasics: 4.1.2, 5.1. VDMCaseStudy: 5.1. VDMDecomposition: 4.3.6. VDMEvaluation: 4.3.6, 5.5.4.4. VDMReification: 5.3. VDMSimplification: 4.3.6.
VDMTactics: 4.3.6, 5.1. VerHLA: 5.5.2.8, 5.5.6, 5.5.6.5. version: 5.3.2.1, 5.3.2.2, 5.3.2.3, 5.3.3.3, 5.3.3.5, 5.5.2.8, 5.5.3.3, 5.5.6.5, 5.6.1.3, 5.6.2.1, 5.6.2.2, 5.6.2.3, 5.6.6.1, 5.6.7. version_induction: 5.3.2.3. version_triple_induction: 5.6.2.1, 5.6.6.2. Versions: 5.6.1.3, 5.6.1.5, 5.6.1.6, 5.6.2.1, 5.6.6.2.
VersionsAndOperations: 5.6.1.5, 5.6.2.1, 5.6.6.4. VersionsAndRetrieveFunctions: 5.6.1.6, 5.6.3.3. vlength: 5.6.2.1, 5.6.2.2, 5.6.2.3, 5.6.6.2. void: 4.3.5.1, 4.3.5.2, 4.3.5.4, 5.3.1.2, 5.5.2.6, 5.5.2.7, 5.5.3.1, 5.5.3.2, 5.5.7. zero: 4.4.3.2, 4.4.3.5, 4.4.3.6, 4.4.4.2.
D References
1. J. R. Abrial. The B-Tool (abstract). In R. Bloomfield, L. Marshall, and C. Jones, editors, VDM'88 - The Way Ahead. Springer-Verlag, 1988.
2. D. Andrews. VDM specification language: Proto-Standard. Draft proposal, BSI IST/5/50, December 1992.
3. M. Anlauff. A Support Environment for a Generic Development Language. Forthcoming dissertation, TU Berlin, 1994.
4. R. D. Arthan. On formal specification of a proof tool. In VDM'91 Formal Software Development Methods, volume 551 of LNCS, pages 356-370, 1991.
5. A. Avron, F. Honsell, I. A. Mason, and R. Pollack. Using typed λ-calculus to implement formal systems on a machine. Journal of Automated Reasoning, 9(3):309-354, 1992.
6. R. C. Backhouse. Making formality work for us. EATCS Bulletin, 38:219-249, June 1989.
7. R. C. Backhouse, P. J. de Bruin, P. Hoogendijk, G. Malcolm, T. S. Voermans, and J. van der Woude. Polynomial relators. In Proceedings of the 2nd Conference on Algebraic Methodology and Software Technology, AMAST '91, Workshops in Computing, pages 303-362. Springer-Verlag, 1992.
8. R. C. Backhouse, P. de Bruin, G. Malcolm, T. S. Voermans, and J. van der Woude. Relational catamorphisms. In B. Möller, editor, Proceedings of the IFIP TC2/WG2.1 Working Conference on Constructing Programs, pages 287-318. Elsevier Science Publishers B.V., 1992.
9. R. C. Backhouse, R. Verhoeven, and O. Weber. Mathpad. Technical report, Technical University of Eindhoven, Department of Computer Science, 1993.
10. H. Barendregt. Introduction to generalised type systems. Journal of Functional Programming, 1(4):375-416, 1991.
11. H. Barringer, J. H. Cheng, and C. B. Jones. A logic covering undefinedness in programs. Acta Informatica, 5:251-259, 1984.
12. M. Beyer. Specification of a LEX-like scanner. Forthcoming technical report, TU Berlin, 1993.
13. M. Biersack, R. Raschke, and M. Simons. DVWEB: A web for the generic development language Deva. Forthcoming technical report, TU Berlin, 1993.
14. R. Bird. Lectures on constructive functional programming. In M. Broy, editor, Constructive Methods in Computer Science, volume F69 of NATO ASI Series, pages 151-216. Springer-Verlag, 1989.
15. R. Bird and O. de Moor. List partitions. Formal Aspects of Computing, 5(1):255-279, 1993.
16. Second Workshop on Logical Frameworks, 1992. Preliminary proceedings.
17. S. Brien and J. Nicholls. Z base standard (version 1.0). Oxford University Computing Laboratory, Programming Research Group, November 1992.
18. M. Broy and C. B. Jones, editors. Programming Concepts and Methods. North Holland, 1990.
19. L. Cardelli and G. Longo. A semantic basis for Quest. Research Report 55, Digital Systems Research Center, Palo Alto, 1990.
20. J. Cazin, P. Cros, R. Jacquart, M. Lemoine, and P. Michel. Construction and reuse of formal program developments. In Proceedings of TAPSOFT '91, volume 494 of LNCS. Springer-Verlag, 1991.
21. P. Cazin and P. Cros. Several play and replay scenarii for the HLA program development. Technical Report RR.T1-89.d, CERT, Toulouse, 1989.
22. R. Constable et al. Implementing Mathematics with the NuPRL Proof Development System. Prentice Hall, 1986.
23. C. Coquand. A proof of normalization for simply typed λ-calculus written in ALF. In BRA Logical Frameworks [16]. Preliminary proceedings.
24. T. Coquand and G. Huet. The calculus of constructions. Information and Computation, 76:95-120, 1988.
25. P. Cros et al. HLA problem oriented specification. Technical Report RR.T3-89c, CERT, Toulouse, 1989.
26. N. G. de Bruijn. Lambda calculus notation with nameless dummies. Indagationes Mathematicae, 34:381-392, 1972.
27. N. G. de Bruijn. A survey of the project AUTOMATH. In To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 579-606. Academic Press, 1980.
28. N. G. de Bruijn. Generalizing AUTOMATH by means of a lambda-typed lambda calculus. In Proceedings of the Maryland 1984-1985 Special Year in Mathematical Logic and Theoretical Computer Science, 1985.
29. N. G. de Bruijn. A plea for weaker frameworks. In Huet and Plotkin [54], pages 123-140.
30. P. de Groote. Définition et Propriétés d'un Métacalcul de Représentation de Théories. PhD thesis, University of Louvain, 1990.
31. P. de Groote. Nederpelt's Calculus Extended with a Notion of Context as a Logical Framework, pages 69-88. Cambridge University Press, 1991.
32. P. de Groote. The conservation theorem revisited. In Conference on Typed λ-Calculi and Applications, volume 664 of LNCS, pages 163-174. Springer-Verlag, 1993.
33. P. de Groote. Defining λ-typed λ-calculi by axiomatizing the typing relation. In STACS '93, volume 665 of LNCS, pages 712-723. Springer-Verlag, 1993.
34. E. Dijkstra and W. H. J. Feijen. A Method of Programming. Addison-Wesley, 1988.
35. E. W. Dijkstra and C. Scholten. Predicate Calculus and Program Semantics. Springer-Verlag, 1990.
36. G. Dowek et al. The Coq proof assistant user's guide. Technical report, INRIA Rocquencourt, 1991.
37. R. Dulbecco, editor. Encyclopedia of Human Biology, volume 4. Academic Press, 1991.
38. A. Felty and D. Miller. Encoding a dependent-type λ-calculus in a logic programming language. In Proceedings of CADE 1990, volume 449 of LNCS, pages 221-236. Springer-Verlag, 1990.
39. B. Fields and M. Elvang-Gøransson. A VDM case study in mural. IEEE Transactions on Software Engineering, 18(4):279-295, 1992.
40. R. Gabriel. The automatic generation of graphical user-interfaces. In System Design: Concepts, Methods and Tools, pages 589-606. IEEE Computer Society Press, 1988.
41. R. Gabriel, editor. ESPRIT Project ToolUse, Final Report of the Deva Support Task: Retrospective and Manuals, number 425 in "Arbeitspapiere der GMD". Gesellschaft für Mathematik und Datenverarbeitung, 1990.
42. S. J. Garland and J. V. Guttag. An overview of LP, the Larch prover. In Proceedings of the Third International Conference on Rewriting Techniques and Applications, volume 355 of LNCS, pages 137-151, Chapel Hill, N.C., 1989.
43. J.-Y. Girard, P. Taylor, and Y. Lafont. Proofs and Types. Cambridge University Press, 1989.
44. M. J. Gordon, R. Milner, and C. P. Wadsworth. Edinburgh LCF: A Mechanized Logic of Computation, volume 78 of LNCS. Springer-Verlag, 1979.
45. M. J. C. Gordon. HOL: A proof generating system for higher-order logic. In G. Birtwistle and P. A. Subrahmanyam, editors, VLSI Specification, Verification and Synthesis. Kluwer, 1987.
46. M. J. C. Gordon and T. F. Melham, editors. Introduction to HOL: A Theorem Proving Environment. Cambridge University Press, 1993. To appear.
47. D. Gries. The Science of Programming. Springer-Verlag, 1981.
48. J. Guttag and J. Horning. Larch: Languages and Tools for Formal Specification. Springer-Verlag, 1993.
49. T. Hagino. A typed lambda calculus with categorical type constructors. In D. H. Pitt, A. Poigné, and D. E. Rydeheard, editors, Category Theory and Computer Science, volume 283 of LNCS, pages 140-157. Springer-Verlag, 1987.
50. R. Harper, F. Honsell, and G. Plotkin. A framework for defining logics. In Proceedings of the Symposium on Logic in Computer Science, pages 194-204. IEEE, 1987.
51. M. Heisel, W. Reif, and W. Stephan. Formal software development in the KIV system. In M. R. Lowry and R. D. McCartney, editors, Automating Software Design, pages 547-576. The MIT Press, 1991.
52. J. R. Hindley and J. P. Seldin. Introduction to Combinators and λ-Calculus. Cambridge University Press, 1986.
53. G. Huet, editor. Logical Foundations of Functional Programming. Addison-Wesley, 1990.
54. G. Huet and G. Plotkin, editors. Proceedings of the First Workshop on Logical Frameworks. Cambridge University Press, 1991.
55. Special issue on formal methods. IEEE Software, September 1990.
56. B. Jacobs and T. Melham. Translating dependent type theory into higher-order logic. In Conference on Typed λ-Calculi and Applications, volume 664 of LNCS, pages 209-229. Springer-Verlag, 1993.
57. S. Jähnichen and R. Gabriel. ToolUse: A uniform approach to formal program development. Technique et Science Informatiques, 9(2), 1990.
58. J. Jeuring. Theories of Algorithm Calculation. PhD thesis, University of Utrecht, 1993.
59. C. B. Jones. Systematic Software Development using VDM. Prentice Hall, 1986.
60. C. B. Jones. Systematic Software Development using VDM, second edition. Prentice Hall, 1990.
61. C. B. Jones, K. D. Jones, P. A. Lindsay, and R. Moore. Mural: A Formal Development Support System. Springer, 1991.
62. S. L. Peyton Jones. The Implementation of Functional Programming Languages. Prentice Hall International, 1987.
63. D. Knuth. Literate programming. The Computer Journal, 27(2):97-111, May 1984.
64. D. Knuth. Literate Programming. Center for the Study of Language and Information, 1992.
65. G. Koletsos. Sequent calculus and partial logic. Master's thesis, Manchester University, 1976.
66. C. Lafontaine. Writing tactics in Deva.1: Play and replay of VDM proof obligations. Technical Report RR89-9, Université de Louvain, 1989.
67. C. Lafontaine. Formalization of the VDM reification in the Deva meta-calculus. In Broy and Jones [18], pages 333-368.
68. C. Lafontaine, Y. Ledru, and P. Schobbens. Two approaches towards the formalisation of VDM. In D. Bjørner, C. Hoare, and H. Langmaack, editors, Proceedings of VDM'90: VDM and Z, volume 428 of LNCS, pages 370-398. Springer, 1990.
69. L. Lamport. How to write a proof. Technical Report 94, DEC Systems Research Center, 1993.
70. M. Leeser. Using NuPRL for the verification and synthesis of hardware. In C. A. R. Hoare and M. J. C. Gordon, editors, Mechanized Reasoning and Design, pages 49-68. Prentice Hall International, 1992.
71. M. H. Liégeois. Development and replay of the HLA typing case study using VDM. Technical Report RR89-22, Université de Louvain, 1989.
72. Z. Luo. An Extended Calculus of Constructions. PhD thesis, University of Edinburgh, 1990.
73. Z. Luo. Program specification and data refinement in type theory. In TAPSOFT '91, volume 493 of LNCS, pages 143-168. Springer-Verlag, 1991.
74. Z. Luo and R. Pollack. The LEGO proof development system: A user's manual. Technical Report ECS-LFCS-92-211, University of Edinburgh, LFCS, 1992.
75. L. Magnusson. The new implementation of ALF. In BRA Logical Frameworks [16]. Preliminary proceedings.
76. G. Malcolm. Data structures and program transformation. Science of Computer Programming, 14:255-279, 1990.
77. L. Marshall. Using B to replay VDM proof obligations. Technical Report RR 87-30, Université Catholique de Louvain, 1987.
78. L. Meertens. Algorithmics: Towards programming as a mathematical activity. In J. W. de Bakker, M. Hazewinkel, and J. K. Lenstra, editors, Proceedings of the CWI Symposium on Mathematics and Computer Science, volume 1, pages 289-334. North Holland, 1986.
79. L. Meertens. Variations on trees. International Summer School on Constructive Algorithmics, September 1989.
80. R. P. Nederpelt. An approach to theorem proving on the basis of a typed lambda calculus. In W. Bibel and R. Kowalski, editors, 5th Conference on Automated Deduction, volume 87 of LNCS, pages 182-194. Springer, 1980.
81. P. A. J. Noel. Experimenting with Isabelle in ZF set theory. Journal of Automated Reasoning, 10(1):15-58, 1993.
82. L. C. Paulson. Logic and Computation. Cambridge University Press, 1987.
83. L. C. Paulson. The foundation of a generic theorem prover. Journal of Automated Reasoning, 5:363-397, 1989.
84. P. Pepper. A simple calculus for program transformation (inclusive of induction). Science of Computer Programming, 9(3):221-262, 1987.
85. F. Pfenning. Logic programming in the LF logical framework. In Huet and Plotkin [54], pages 149-181.
86. F. Pfenning. A proof of the Church-Rosser theorem and its representation in a logical framework. Technical Report CMU-CS-92-186, Carnegie Mellon University, 1992.
87. R. Pollack. Implicit syntax. In First Workshop on Logical Frameworks, 1990. Preliminary proceedings.
88. D. Prawitz. Natural Deduction. Almqvist & Wiksell, Stockholm, 1965.
89. T. Santen. Formalization of the SPECTRUM methodology in Deva: Signature and logical calculus. Technical Report 93-04, TU Berlin, 1993.
90. M. Simons. Basic contexts. Internal communication, GMD Karlsruhe, 1990.
91. M. Sintzoff. Suggestions for composing and specifying program design decisions. In B. Robinet, editor, Proc. 4th Symposium on Programming, volume 83 of LNCS, pages 311-326, 1980.
92. M. Sintzoff. Understanding and expressing software construction. In P. Pepper, editor, Program Transformations and Programming Environments, pages 169-180. Springer-Verlag, 1980.
93. M. Sintzoff. Expressing program developments in a design calculus. In M. Broy, editor, Logic of Programming and Calculi of Discrete Design, volume F36 of NATO ASI Series, pages 343-356. Springer-Verlag, 1986.
94. M. Sintzoff. Endomorphic typing. In B. Möller, H. Partsch, and S. Schuman, editors, Formal Program Development. Springer-Verlag, 1993. To appear.
95. M. Sintzoff, M. Weber, P. de Groote, and J. Cazin. Definition 1.1 of the generic development language Deva. ToolUse research report, Unité d'Informatique, Université Catholique de Louvain, Belgium, 1989.
96. D. R. Smith. KIDS - a knowledge based software development system. In M. R. Lowry and R. D. McCartney, editors, Automating Software Design, pages 483-514. The MIT Press, 1991.
97. Special issue on formal methods: part 1. The Computer Journal, 35(5), 1992.
98. Special issue on formal methods: part 2. The Computer Journal, 35(6), 1992.
99. L. S. van Benthem Jutting. Checking Landau's "Grundlagen" in the Automath system. PhD thesis, Eindhoven Technical University, 1979.
100. D. T. van Daalen. The Language Theory of AUTOMATH. PhD thesis, Technische Hogeschool Eindhoven, 1980.
101. M. Weber. Explaining implicit proofs by explicit proofs. Private communication.
102. M. Weber. Formalization of the Bird-Meertens algorithmic calculus in the Deva meta-calculus. In Broy and Jones [18], pages 201-232.
103. M. Weber. Deriving transitivity of VDM reification in the Deva meta-calculus. In S. Prehn and W. J. Toetenel, editors, VDM'91 Formal Software Development Methods, volume 551 of LNCS, pages 406-427. Springer, 1991.
104. M. Weber. A Meta-Calculus for Formal System Development. Oldenbourg Verlag, 1991.
105. M. Weber. Definition and basic properties of the Deva meta-calculus. Formal Aspects of Computing, 1993. To appear.
Vol. 738: M. Weber, M. Simons, Ch. Lafontaine, The Generic Development Language Deva. XI, 246 pages. 1993.