Unification Grammars

Like computer programs, the grammars of natural languages can be expressed as mathematical objects. Such a formal presentation of grammars facilitates mathematical reasoning with grammars (and the languages they denote), on one hand, and the computational implementation of grammar processors, on the other hand. This book presents one of the most commonly used grammatical formalisms, unification grammars, which underlies such contemporary linguistic theories as lexical-functional grammar (LFG) and head-driven phrase structure grammar (HPSG). The book provides a robust and rigorous exposition of the formalism that is both mathematically well founded and linguistically motivated. Although the material is presented formally, and much of the text is mathematically oriented, a core chapter of the book addresses linguistic applications and the implementation of several linguistic insights in unification grammars. The authors provide dozens of examples and numerous exercises (many with solutions) to illustrate key points. Graduate students and researchers in both computer science and linguistics will find this book a valuable resource.

NISSIM FRANCEZ is professor emeritus of computer science at the Technion-Israel Institute of Technology. His research for the last twenty years has focused on computational linguistics in general, and on the formal semantics of natural language in particular, mainly within the framework of type-logical grammar. His most recent research topic is proof-theoretic semantics for natural language. He is the author of several books and approximately 150 scientific articles. He regularly serves on editorial boards and program committees of several major conferences, including the committee of the FoLLI (Association for Logic, Language, and Information) Beth Dissertation Award.

SHULY WINTNER is associate professor of computer science at the University of Haifa, Israel. His areas of interest include computational linguistics and natural language processing, with a focus on formal grammars, morphology, syntax, and Semitic languages. He is the author or editor of nearly 100 publications. He has served on the program committees of numerous conferences, as a member of the standing committee overseeing the yearly Formal Grammar conference, and as the editor-in-chief of the journal Research in Language and Computation.
Unification Grammars NISSIM FRANCEZ Technion-Israel Institute of Technology, Haifa, Israel SHULY WINTNER University of Haifa, Haifa, Israel
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City Cambridge University Press 32 Avenue of the Americas, New York, NY 10013-2473, USA www.cambridge.org Information on this title: www.cambridge.org/9781107014176 © Nissim Francez and Shuly Wintner 2012 This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 2012 A catalog record for this publication is available from the British Library. Library of Congress Cataloging in Publication Data Francez, Nissim. Unification grammars / Nissim Francez, Shuly Wintner. p. cm. Includes bibliographical references and index. ISBN 978-1-107-01417-6 (hardback) 1. Grammar, Comparative and general–Mathematical models. 2. Unification grammar. 3. Lexical-functional grammar. 4. Head-driven phrase structure grammar. I. Wintner, Shuly, 1963- II. Title. P151.F6775 2011 2011031353 415.01 51–dc23 ISBN 978-1-107-01417-6 Hardback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.
To Tikva, for a never failing unification. To Galia, Tomer and Inbal, with all my love. “Join them one to another into one stick, that they may become one in your hand” (Ezekiel 37:17)
Contents

Preface ix
Acknowledgments xii
1 Introduction 1
 1.1 Syntax: the structure of natural languages 3
 1.2 Linguistic formalisms 4
 1.3 A gradual description of language fragments 6
 1.4 Formal languages 12
 1.5 Context-free grammars 14
 1.6 CFGs and natural languages 22
 1.7 Mildly context-sensitive languages 29
 1.8 Motivating an extended formalism 30
2 Feature structures 34
 2.1 Motivation 35
 2.2 Feature graphs 37
 2.3 Feature structures 52
 2.4 Abstract feature structures 54
 2.5 Attribute-value matrices 64
 2.6 The correspondence between feature graphs and AVMs 74
 2.7 Feature structures in a broader context 83
3 Unification 85
 3.1 Feature structure unification 85
 3.2 Feature-graph unification 86
 3.3 Feature structure unification revisited 93
 3.4 Unification as a computational process 94
 3.5 AFS unification 99
 3.6 Generalization 108
4 Unification grammars 115
 4.1 Motivation 116
 4.2 Multirooted feature graphs 118
 4.3 Abstract multirooted structures 125
 4.4 Multi-AVMs 130
 4.5 Unification revisited 137
 4.6 Rules and grammars 146
 4.7 Derivations 151
 4.8 Derivation trees 157
5 Linguistic applications 165
 5.1 A basic grammar 166
 5.2 Imposing agreement 167
 5.3 Imposing case control 172
 5.4 Imposing subcategorization constraints 174
 5.5 Subcategorization lists 178
 5.6 Long-distance dependencies 185
 5.7 Relative clauses 191
 5.8 Subject and object control 197
 5.9 Constituent coordination 201
 5.10 Unification grammars and linguistic generalizations 208
 5.11 Unification-based linguistic formalisms 209
6 Computational aspects of unification grammars 213
 6.1 Expressiveness of unification grammars 214
 6.2 Unification grammars and Turing machines 226
 6.3 Off-line parsability 233
 6.4 Branching unification grammars 239
 6.5 Polynomially parsable unification grammars 244
 6.6 Unification grammars for natural languages 251
 6.7 Parsing with unification grammars 253
7 Conclusion 275
Appendix A List of symbols 277
Appendix B Preliminary mathematical notions 280
Appendix C Solutions to selected exercises 284
Bibliography 299
Index 307
Preface
This book grew out of lecture notes for a course we started teaching at the Department of Computer Science at the Technion, Haifa, in the spring of 1996, and later also taught at the University of Haifa. The students were advanced undergraduates and graduates who had good knowledge of formal languages but limited background in linguistics. We intended it to be an introductory course in computational linguistics, but we wanted to focus on contemporary linguistic theories and the necessary mechanisms for reasoning about and implementing them, rather than on traditional (statistical, corpus-based) natural language processing (NLP) techniques. We realized that no good textbook existed that covered the material we wanted to teach. Although quite a few good introductions to NLP exist, including Pereira and Shieber (1987), Gazdar and Mellish (1989), Covington (1994), Allen (1995), and more recently, Manning and Schütze (1999), Jurafsky and Martin (2000), and Bird et al. (2009), none of them provides the mathematical and computational infrastructure needed for our purposes. The focus of this book is two dimensional. On one hand, we focus on a certain formalism, unification grammars, for presenting, studying, and reasoning about grammars. Although it is not the sole formalism used in computational linguistics,1 it has gained much popularity, and it now underlies many ongoing projects. On the other hand, we also focus on fundamental natural language syntactic constructions, and the way they are specified in grammars expressed in this formalism. Other monographs are either too informal (Shieber, 1986) or too advanced for the students we are addressing (Carpenter, 1992; Shieber, 1992). Of course, the linguistic texts (Pollard and Sag, 1994; Dalrymple et al.,
1
Notably, type-logical (categorial) grammars (Moortgat, 1997; Steedman, 2000) promote a radically different view of syntax.
1995; Sag and Wasow, 1999; Butt et al., 1999) are inadequate for our purposes because their emphasis is on linguistic issues, rather than the underlying mathematical and computational formulations. Since 1996 we have taught courses and tutorials based on drafts of this book at universities and research institutes worldwide. Unification grammars had become part of the curriculum in several introductory computational linguistics programs, and there was still no adequate textbook on the subject. With this book, we hope to fill a gap on the introductory computational linguistics bookshelf. Whom is this book for? Computational linguistics is a relatively young field of research that lies at the interface of linguistics and computer science. Students of computational linguistics usually come from one of these two disciplines and frequently lack background in the other. This book is an introductory textbook, and as such is oriented toward students with little background in either. However, we do assume at least some basic knowledge of both paradigms, linguistics and mathematics. More specifically, we assume that the reader has a fair knowledge (equivalent to what is usually acquired in one introductory course) of syntax, as well as some acquaintance with its elementary terminology. In the same way, we assume a background of at least one year of undergraduate study in mathematics. In particular, we assume acquaintance with elementary set theory, formal language theory, basic algorithms and graph theory, some basic universal algebra, and some logic. As far as programming is concerned, acquaintance with some logic programming language (such as Prolog) can be helpful, but it is definitely not necessary to understand this book. As an aid to readers, and to establish a common mathematical terminology, Appendix B contains a brief overview of some of the basic mathematical concepts we use in the book. The organization of the book Since this book is intended to be accessible to readers with some background in computer science or a related discipline, the text consists of mathematical presentations of concepts in the study of natural language syntax. After an introductory chapter in which we outline the problems and recapitulate basic concepts (in particular, context-free grammars), Chapter 2 introduces feature structures, the main building blocks of unification grammars. The unification operation is discussed in Chapter 3, and Chapter 4 defines grammars and their
languages. The linguistic applications of the formalism we develop are discussed in Chapter 5, where a series of grammar fragments are presented for various natural language phenomena. The book does not deal with theories of meaning (semantics). Chapter 6 discusses computational issues, and in particular, parsing with unification grammars. The discussion is limited to untyped unification formalisms; extension to the typed case is planned for the future. As we intend the book to serve as a textbook, we have scattered various exercises throughout the text. Some of them are simple, intended to apply certain ideas discussed in the text; some complete the text (for example, ask the reader to complete some proofs); and some call for a deeper understanding and creative thought. Solving the exercises is, in our mind, a good way of internalizing the concepts discussed in the text. Nevertheless, it is not required in order to follow the text. We provide sketches of solutions to selected exercises (those marked by ‘(*)’) in Appendix C. At the end of each chapter we list references to the original publications on which our presentation is based, as well as suggestions for further reading. General conventions Throughout the book, Sans Serif font is used for depicting phrases in natural languages. When the examples are not drawn from English, glosses are provided. Ungrammatical strings are preceded by ‘∗’, as is common in the linguistic literature. We use italics for emphasis and bold face for introducing new concepts. When a symbol is referenced, rather than used, we usually enclose it within single quotes. For example, the symbol ‘’ denotes unification. When the context in which a symbol appears eliminates possible confusion, we sometimes omit the quotes.
Acknowledgments
Many students and colleagues have read drafts of this book and provided valuable comments. We thank Gilad Ben-Avi, Daniel Feinstein, and Israel Gutter for spotting several errors in earlier versions of this book and for suggesting useful improvements. Special thanks are due to Yael Sygal for not only suggesting numerous improvements, but also actively contributing two proofs to the text. The work of the first author was partially supported, over the years, by grants from the Israel Academy of Sciences and Arts (Israel Science Foundation), and by the Technion Vice-president’s Fund for the Promotion of Research at the Technion. The second author was partially supported by the Israel Science Foundation.
1 Introduction
Natural languages1 are among Nature’s most extraordinary phenomena. While humans acquire language naturally and use it with great ease, the formalization of language, which is the focus of research in linguistics, remains evasive. As in other sciences, attempts at formalization involve idealization: ignoring exceptions, defining fragments, and the like. In the second half of the twentieth century, the field of linguistics has undergone a revolution: The themes that are studied, the vocabulary with which they are expressed, and the methods and techniques for investigating them have changed dramatically. While the traditional aims of linguistic research have been the description of particular languages (both synchronically and diachronically), sometimes with respect to other, related languages, modern theoretical linguistics seeks the universal principles that underlie all natural languages; it is looking for structural generalizations that hold across languages, as well as across various phrase types in a single language, and it attempts to delimit the class of possible natural languages by formal means. The revolution in linguistics, which is attributed mainly to Noam Chomsky, has influenced the young field of computer science. With the onset of programming languages, research in computer science began to explore different kinds of languages: formal languages that are constructed as a product of concise, rigorous rules. The pioneering work of Chomsky provided the means for applying the results obtained in the study of natural languages to the investigation of formal languages. One of the earliest areas of study in computer science was human cognitive processes, in particular natural languages. This area of research is
1
Here and throughout the book we refer by ‘language’ to written language only, unrelated to any acoustic phenomenon associated with speech. In addition, we detach language from its actual use, in particular, in human communication.
today categorized under the term artificial intelligence (AI); the branch of AI that studies human languages is usually referred to as natural language processing (NLP). The main objective of NLP is to computationally simulate processes related to the human linguistic faculty. The ordinary instruments in such endeavors are heuristic techniques that aid in constructing practical applications. The outcomes of such research are computer systems that perform various tasks associated with language, such as question answering, text summarization, and categorization. The application to Internet search gave another boost to NLP. A different scientific field, which lies at the crossroads of computer science and linguistics, has obtained the name computational linguistics. While it is related to NLP, there are distinct differences between the two. Computational linguistics studies the structure of natural languages from a formal, mathematical and computational, point of view. It is concerned with all subfields of the traditional linguistic research: phonology (the theory of speech sounds), morphology (the structure of words), syntax (the structure of sentences), semantics (the theory of meaning), and pragmatics (the study of language use and its relation to the non-linguistic world). But it approaches these fields from a unique point of departure: It describes linguistic phenomena in a way that, at least in principle, is computationally implementable. As an example, consider the phenomenon of anaphoric reference. In virtually all natural languages, it is possible to refer, within discourse, to some entities that were mentioned earlier in the discourse through the use of pronouns. Consider, for example, the following English sentence: And he dreamed that there was a ladder set up on the earth, and the top of it reached to heaven; and the angels of God were ascending and descending on it! (Genesis 28:12)
The pronoun it (in both of its occurrences) refers back to the noun phrase a ladder. While most speakers of English would have no problem recognizing this fact, a formal explanation for it seems to require a substantial amount of knowledge. From a purely syntactic point of view, nothing prevents the pronoun from referring to other previously mentioned entities, such as the earth or heaven. Computational linguistic approaches to anaphora devise algorithms that relate pronouns occurring in texts with the entities to which they refer (this process is known as anaphora resolution). While some approaches – to this problem as well as to others – are purely analytic, others are based on probabilistic tools and the survey of online corpora of texts. In this book we view natural languages in a formal way; more specifically, we assume that there exists a set of formal, concise rules, mathematically
expressible, that characterizes the syntax of every natural language.2 When the grammars of natural languages are formal entities, they can be naturally subjected to the application of various paradigms and techniques from computer science. The rules that govern the syntax of natural languages, and in particular, the formal way to express them, are the focus of this book. 1.1 Syntax: the structure of natural languages It is a well-observed fact that natural languages have structure. Words in the sentences of any natural language are not strung together arbitrarily; rather, there are underlying rules that determine how words combine to form phrases, and phrases combine to yield sentences. Such rules are based on the observation that the context constrains, to a great extent, the possible words and phrases that can occur in it. As an example, consider the sentence quoted above: And he dreamed that there was a ladder set up on the earth
Observe that the prefix of the sentence determines its possible extension. For example, after reading And he dreamed that there was a, readers expect words from a certain category (nouns); whereas after And he dreamed that there was, one would expect (among other things, perhaps) phrases whose structure is different (noun phrases). Exercise 1.1. What kind of phrases can possibly follow And he dreamed that? Try to characterize them as best as you can. Syntax is the area of linguistics that assigns structure to utterances, thus determining their acceptability. It cannot, however, be viewed independently of other areas of linguistics. Syntax is an indispensable means for assigning meaning to utterances; most theories of semantics rely on syntax in that they define meanings compositionally. Thus, the meaning of a phrase is defined as a function of the meanings of its subparts. Furthermore, the principles of syntax are believed by many researchers to be responsible for the structure of words; that is, morphology is viewed by many as a subfield of syntax. Syntax also has an important influence on phonology: the structure of utterances occasionally affects phonological processes. While we do not discuss these phenomena in this book, they contribute to the importance of syntax as a central field in linguistics. 2
We are interested in a synchronic, rather than a diachronic, description of languages: We are concerned with the features of languages at a given point in time, ignoring historic changes.
Syntax, then, is concerned with the structure of natural languages. It does so by providing the means for specifying grammars. A grammar is a concise description of the structure of some language. Its function is multifaceted. First and foremost, it specifies the set of sentences in the language. Second, it assigns some structure to each of these sentences. These two tasks are discussed later in Section 1.5, and in more detail in Chapter 4. Finally, it can be used to inform computational algorithms that can then analyze sentences. Such algorithms, called parsers, are discussed in Chapter 6. For a grammar to be used for a computational application, it must be formally defined. The next section discusses grammatical formalisms, that is, formal languages for specifying grammars.
1.2 Linguistic formalisms Natural languages are natural phenomena, and linguistics can be viewed as part of the natural sciences: Just as physicists study the structure of the universe, so do linguists study the structure of languages. Just as physics formulates claims about the material world, linguistics is also an empirical science, formulating empirical claims about (one or more) languages that can be verified or falsified. Usually, such claims are validated by informants: native speakers, who pass judgments on the predictions of the theory; recently, online corpora of texts were used to achieve the same aim.3 And just as physicists need an underlying formalism with which to express their theories (in general, mathematics), so do linguists. A clear line should be drawn between the linguistic theories that are organized, coherent sets of generalizations regarding languages, on one hand, and the formalisms in which they are expressed, on the other hand. This book is dedicated to describing the underlying formalisms in which several contemporary linguistic theories, notably, lexical-functional grammar (LFG) and head-driven phrase structure grammar (HPSG) are expressed: unification grammars. Continuing the analogy to physics, this book should be viewed as an elementary textbook for current physics (i.e., linguistic) theories, focusing on the necessary underlying mathematics (e.g., differential equations) and its use in describing physical (linguistic) laws. Why should there be any mathematics involved with linguistic theories? Consider what is needed from a good linguistic theory: It must be capable of describing facts about the world (in this case, facts about natural languages), but it must also be able to make generalizations; obviously, a comprehensive 3
Unlike the natural sciences, linguistics “observables” are open to interpretation. For example, not all native speakers may agree on the grammaticality of some written utterance, and sometimes performance constraints can obscure competence judgments. Observing language can hardly be detached from the daily practice of its use as an actual communication means.
list of facts is impossible to come with, given the infinite nature of human languages. The language we have used thus far for describing (very roughly) the structure of sentences is English. Indeed, for many years the structure of natural languages was expressed in natural languages. The problem with using natural language to express theories is that it is informal. It is possible to give a characterization of phenomena using English, but to account for them formally, a more rigorous means is needed. Such tools are called linguistic formalisms. In particular, using mathematics to specify and reason about languages allows for proofs of claims about them. A linguistic formalism is a (formal) language, with which claims about (natural, but also formal) languages can be made. In general, one builds a (mathematical) model of a (fragment of a) natural language within such a formalism. Then, within the model, claims can be proved (deductively), forming predictions with empirical contents that can be confronted with actual facts. Furthermore, the models, when formulated in a computational formalism, enable machine implementation of linguistic analyses, for example, parsers. What is required from such a formalism? First and foremost, it should be formal; natural languages will not do. It must also be recursive, or in other words, must provide finite means for expressing infinite sets. It must be precise so that the subtleties of the theory can be easily and accurately expressed. And it must be expressive enough that the wealth of phenomena of which natural languages consist can be accounted for. Of course, the question of expressive power (the formal class of languages that can be defined with the formalism) cannot even be posed when a formalism is expressed in a natural language, rather than mathematically. The choice of a linguistic formalism carries with it linguistic consequences: It implies that certain generalizations and predictions are more central than others. As an example, consider the notion of verb valence, whereby verbs are subcategorized according to the number and type of arguments they expect to have (in simple terms, the distinction between intransitive, transitive, and ditransitive verbs). Expressing such a notion in a linguistic formalism has the immediate consequence that concatenation and constituent order become dominant factors (as opposed to, say, the length of words or their distance from the beginning of the sentence). When a formalism is based on phrase-structure rules, it presupposes the notion of phrase structure (as opposed to, say, dependencybased formalization). When a formalism is highly lexicalized, it highlights the importance of the lexicon in linguistic theory, perhaps at the expense of other components of the grammar. In this book we focus on unification-based formalisms, promoting the importance of feature structures and of phrase-structure rules based on feature
structures for linguistic expression. Some of the consequences of this choice include reliance on phrase structure (with concatenation as the sole stringcombination operation); centrality of the lexicon; very powerful and very general rules; and potentially very detailed analyses, due to the use of (deeply embedded) features and values. We advocate for the benefit of this type of formalism to linguistic theory in Chapter 5. 1.3 A gradual description of language fragments This book deals with formalisms for describing natural languages. The concept of natural languages (as opposed to artificial ones) is very hard to define, and we do not attempt to do so here. Instead, we informally characterize small fragments of certain languages, notably English, which we use to exemplify the material explained in the book, especially in Chapter 5. We start with E0 , which is a small fragment of English; E0 is then extended in different directions, forming broader-coverage subsets. Indeed, where such small fragments are concerned, it is rather simple to speak about similar sublanguages of other languages, and we will do so for a variety of languages, including French, Hebrew, Russian, and German. Although we do not present any formal account of the similarities among the language fragments, our intention is to account for similar phenomena. For example, when E0 accounts for local structures, involving the expression of basic predications (a verb as a relation, and its arguments), similar phenomena in, say, Hebrew are expressed in H0 , possibly by different means. One of the major issues discussed in this book is the quest for an adequate formal definition for natural languages. However, to account for small fragments, such as E0 , we can use an informal description. This is what we do in the material that follows. 1.3.1 Basic sentences E0 is a small fragment of English consisting of very simple sentences, constructed with only intransitive and transitive (but no ditransitive) verbs, common nouns, proper names, pronouns, and determiners. Typical sentences are A sheep drinks. Rachel herds the sheep. Jacob loves her.
Similar strings are not E0 - (and not English-) sentences4 : ∗Rachel feed the sheep ∗Rachel feeds herds the sheep 4
Recall that a string preceded by ‘∗’ is ungrammatical.
∗The shepherds feeds the sheep ∗Rachel feeds ∗Jacob loves she ∗Jacob loves Rachel she ∗Them herd the sheep
Of course, many strings are not E0 sentences, although they are perfectly grammatical English sentences: Rachel has seen Jacob.
However, this uses a tense that is outside the scope of our discussion. Rachel was loved by Jacob
is in the passive voice, which we do not deal with. Other English sentences that are outside the scope of E0 are covered by later fragments. All E0 sentences have two components, a subject, realized as a noun phrase, and a predicate, realized as a verb phrase. A noun phrase can be a proper name, such as Rachel, or a pronoun, such as they, or a common noun, possibly preceded by a determiner: the lamb or three sheep. A verb phrase consists of a verb, such as feed or sleeps, with a possible additional object, which is a noun phrase. Furthermore, there are constraints on the combination of phrases in E0 . We list three of them, informally: • The subject and the predicate must agree on number and person: If the subject
is a third person singular, so must the verb be. • Objects complement only – and all – the transitive verbs. • When a pronoun is used, it is in the nominative case if it is in the subject
position, and in the accusative case if it is an object. As can be seen, the examples of sentences given in this section obey all the restrictions, whereas each of the nonsentences violates at least one of them. A major part of modeling this fragment involves defining the means through which these constraints can be formulated and enforced. 1.3.2 Subcategorization In presenting linguistic examples, we extend E0 in several directions. While the formal definitions of the language fragments that are used to illustrate a presentation are given in the appropriate places in the text, we sketch these future extensions here. Our first concern is to refine E0 so that the valence of verbs is better accounted for. This means that the distinction among intransitive, transitive, and ditransitive verbs will be made more refined: Esubcat is a
fragment of English, based on E0 , in which verbs are classified into subclasses according to the complements they “require.” Recall that in E0 , transitive verbs occur with (noun phrase) objects, and intransitive verbs do not. In Esubcat transitive verbs can occur with different kinds of objects. For example, verbs such as eat, see, and love require a noun-phrase object; but verbs such as say and think can take a sentential object; verbs such as want, try and tend take an infinitival verb phrase; some verbs require prepositional phrases, sometimes with a specific preposition.5 Esubcat will also allow verbs to have more than one object: Verbs such as give and sell can occur with two complements. For example, the following sentences are in Esubcat : Laban gave Jacob his daughter. Jacob promised Laban to marry Leah. Laban persuaded Jacob to promise him to marry Leah.
Similar strings that violate this constraint are: ∗Rachel feeds Jacob the sheep ∗Jacob saw to marry Leah
The modeling problem here is to specify valence and enforce its saturation. 1.3.3 Control With the addition of infinitival complements in Esubcat , Econtrol can capture constraints of argument control in English. Informally, what we mean by control is the phenomenon by which certain verbs that require a sentential complement allow this complement to be incomplete: Usually, the subject of the complement is missing. The missing constituent is implicitly understood as one of the main verb’s other complements: either the subject or the object. For example, the verb promise takes two objects, one of which is an infinitival verb phrase. The understood subject of this verb phrase is the subject of promise. In the sentence Jacob promised Laban to work seven years,
it is the subject Jacob that is the understood subject of the verb work. On the other hand, in Laban persuaded Jacob to work seven years,
it is the object Jacob that is the understood subject of work. The difference lies in the main verb: Promise is said to be a subject control verb, whereas persuade is an object control verb. In Econtrol , phrases that contain infinitival complements are assigned a structure that reflects their intended interpretation. 5
We do not account for prepositional phrases in this fragment.
Here, the modeling problem is to identify the “missing” controlled part, distinguish between the two cases, and provide the means for “filling” the gap correctly. 1.3.4 Long-distance dependencies We now consider another extension of Esubcat , namely Eldd , typical sentences of which are [
[
.
[
In all these sentences (and, clearly, the sequence can be prolonged indefinitely), the transitive verb loves occurs without an explicit noun phrase in the object position. The symbol ‘ ’, called a gap, is a place holder, positioned at the alleged surface location of the “missing” object. An attempt to replace the gap with an explicit noun phrase results in ungrammaticality: (4) ∗The shepherd wondered whom Jacob loved Rachel.
However, despite the absence of the object of loves from its surface position, there is another element in the surface structure, namely whom, which is the “understood” object of loves. In some theories, it is considered a “dislocated” object (due to movement transformation). More abstractly, whom is referred to as the filler of the gap. It is important to notice that the sequence (1)–(3) (and its extensions) shows that there is no (theoretical, or principled) bound on the surface distance between a gap and its filler. This unboundedness motivated the use of the term long-distance dependencies, or unbounded dependencies, for such phenomena. Some comments are needed regarding long-distance dependencies. First, note that the gap need not be in the object position. Sentences (5)–(6) show the beginning of a similar chain of sentences, in which there is a gap in the subject position of an embedded clause: [
[
(5) Jacob wondered who loved Leah. (6) Jacob wondered who Laban believed
loved Leah.
Again, an explicit noun phrase filling the gap results in ungrammaticality: (7) ∗Jacob wondered who the shepherd loved Leah
The filler here is the “understood” subject. Also, note that more than one gap may be present in a sentence (and, hence, more than one filler), as shown in (8a,b):
Cambridge Books Online © Cambridge University Press, 2012
1 Introduction
.
[
(8a) This is the well which Jacob is likely to draw water from (8b) It was Leah that Jacob worked for without loving .
[
10
In some languages (e.g., Norwegian) there is no (principled) bound on the number of gaps that can occur in a single clause. Multiple gaps are outside the scope of our discussion. There are other fragments of English in which long-distance dependencies are manifested in other forms. One example, which we do not cover here, is topicalization, as shown in (9)–(10): [
[
(9) Rachel, Jacob loved (10) Rachel, every shepherd knew Jacob loved
Another example is the interrogative sentence, such as in (11)–(12): [
(11) Who did Jacob love ? (12) Who did Laban believe Jacob loved
[
?
Here again, the modeling problem involves identifying the “missing” constituent and providing the means for correlating the “gap” with the dislocated pronoun that “fills” it. 1.3.5 Relative clauses A different yet similar case of remote dependencies is the relative clause; these are clause that modify nouns, and typically an element is missing in the clause that is semantically “filled” by the noun being modified. Consider the following examples: (13) The lamb that Rachel loves (14) The lamb that loves Rachel
In (13), the noun lamb is modified by the clause that Rachel loves; note that the subcategorization constraints of loves are violated, since an accusative object is not explicit. Rather, the “understood” object of loves is the head noun lamb. In (14) a similar situation is exemplified, but it is the subject of loves, rather than its object, that is missing (and is identified with the head noun lamb). Relative clauses can be much more complicated than these two simple examples; the “missing” element can be embedded deeply within the clause, and in some cases, it is replaced by a pronoun. We only address the simple cases in which the head noun fills the position of either the subject or the direct object of the relative clause; this fragment of English, which is an extension of Esubcat , is called Erelcl . Modeling this case requires mechanisms very similar to those used in the modeling of long-distance dependencies.
Cambridge Books Online © Cambridge University Press, 2012
1.3 A gradual description of language fragments
11
1.3.6 Coordination Another extension of E0 , independent of the above, is the phenomenon of coordination which is accounted for in the language fragment Ecoord . Put simply, coordination is the combination of two constituents of the same category through a conjunctive word; the yield of the process is a constituent of the same category. In English, many categories can be coordinated: nouns, noun phrases, prepositional phrases, verbs, verb phrases, and even complete sentences. Some examples of coordination in English are: No man lift up his [hand] or [foot] in all the land of Egypt Jacob saw [Rachel] and [the sheep of Laban] Jacob [went on his journey] and [came to the land of the people of the east] Jacob [went near], and [rolled the stone from the well’s mouth], and [watered the flock of Laban his mother’s brother] every [speckled] and [spotted] sheep Leah was [tender eyed] but [not beautiful] [Leah had four sons], but [Rachel was barren] She said to Jacob, “[Give me children], or [I shall die]!”
Ecoord is an extension of E0 in which basic categories of E0 can be conjoined: nouns, noun phrases, verbs, verb phrases, and sentences. We will also touch upon the problem of nonconstituent coordination: the phenomenon by which two strings that are not considered to be complete constituents by ordinary criteria can be conjoined. The modeling problem here is to identify the constituents that can be coordinated and determine the properties of the coordinated phrase based on the properties of its components. 1.3.7 What is not included? While our language fragments account for several interesting phenomena in natural languages, it is of course impossible to provide an exhaustive list of such phenomena. Several other constructions are left outside of the discussion. We briefly survey a few notable examples here. An extension of the long distance dependency phenomena discussed above is known as pied piping. This is the situation in which a whole phrase, rather than a single wh-pronoun, is moved to the beginning of a clause (e.g., in interrogative sentences). In the following example, Which lamb is such a phrase: Which lamb does Rachel love?
Note that fronting the wh-pronoun only is ungrammatical: Which does Rachel love lamb?
Cambridge Books Online © Cambridge University Press, 2012
12
1 Introduction
A related case is called island constraints. Such constraints involve islands, or phrases from which it is impossible to extract components. For example, in Rachel likes the lamb with the brown fleece,
it is impossible to extract fleece and express it as a wh-pronoun, as in: ?
[
∗What does Rachel like the lamb with the brown
Languages other than English are invaluable sources of interesting linguistic constructions that we do not address here. A famous example involves Dutch, where the embedding of verb phrases (as in Jacob thought that Laban believed that Rachel loved him) can require complex agreement constraints, known as cross-serial dependencies, among all the verbs’ arguments. Such constraints were proven to be beyond the expressive power of context-free grammars (see the following section). In a different dimension, we do not discuss problems that lie on the interface between syntax and semantics, such as anaphora resolution, ellipsis, and the like; and we do not discuss natural language generation (although its counterpart, parsing, is the topic of much of Chapter 6).
1.4 Formal languages We described the language fragments in the previous section informally, using English. Formal language theory is a branch of computer science that deals with languages as mathematical objects. In this section, we provide a necessarily brief introduction to the basic concepts of this theory. As is usual in formal language theory, the basic objects constitute a fixed, finite set of letters (or terminals), Σ, called the alphabet. The size of Σ is denoted by |Σ|. Subscripted meta-variables σ range over elements of Σ. A string (word) is any sequence of letters, usually written as σ1 · · · σn . For example, for Σ = {0, 1, . . . , 9}, words are the usual decimal notations for natural numbers. Meta-variables u, v, w range over strings. The string w = σ1 · · · σn is said to be of length |w| = n ≥ 0. For example, for w = 1944, |w| = 4. For w = 001, |w| = 3. The unique empty string (whose length is 0) is denoted . Usually, a word of length 1 is identified with the corresponding letter. The set of all words over Σ is denoted Σ∗ . Note that for a nonempty Σ, Σ∗ is (countably) infinite. A binary concatenation operation on strings, ‘·’, is defined by setting (σ1 · · · σn ) · ) = σ1 · · · σn σ1 · · · σm . For example, suppose Σ = {0, 1}. Then 101 · (σ1 · · · σm 0011 = 1010011. Also, |w1 · w2 | = |w1 | + |w2 |. Note that concatenation is an associative binary operation, with as the identity element, so that u · (v · w) = (u · v) · w and · w = w · = w. A repetition (iteration) operator on words is
Cambridge Books Online © Cambridge University Press, 2012
1.4 Formal languages
13
defined as follows: for w ∈ Σ∗ , wn = w · . . . · w (n times). w0 = . When clear, we abbreviate w1 · w2 by w1 w2 . A formal language is an arbitrary set of strings, that is, an arbitrary subset of Σ∗ , and subscripted meta-variable L ranges over languages. Concatenation is lifted to an operation on languages by setting L1 · L2 = {w1 · w2 | w1 ∈ L1 , w2 ∈ L2 }. This induces lifting repetition to languages as well: Ln = {w1 · · · wn | wi ∈ L for 1 ≤ i ≤ n}, L0 = {}. A useful operation on languages is the Kleene closure: L∗ = n≥0 Ln . This motivates the notation Σ∗ for the set of all strings over Σ. Observe that the number of languages over a (nonempty) given alphabet is noncountably infinite. Exercise 1.2 (*). Give an example to show that concatenation is not commutative (i.e., does not satisfy w1 w2 = w2 w1 for every w1 , w2 ∈ Σ∗ ). Exercise 1.3 (*). Formulate a condition under which an equation of the form w1 · x = w2 , where x ranges over Σ∗ , has a solution. Show that if the equation is solvable, it has a unique solution. Exercise 1.4 (*). Give an inductive definition of the repetition operations w n , Ln . Exercise 1.5 (*). For Σ = ∅, what is Σ∗ ? Exercise 1.6 (*). Show that in general, Σ∗1 ∪ Σ∗2 = (Σ1 ∪ Σ2 )∗ . Given some language L ⊆ Σ∗ , one naturally looks for a general way of describing L. Intuitively, we are interested in some mechanism for describing languages, such that the denotation of the description can yield L for any given L. Such mechanisms are called grammars, and formal language theory provides means for proving that the language generated by a given grammar is indeed L. Often, it is possible to provide a definition of a language that is completely independent of any grammar that generates it. For example, when a language is finite, one can simply stipulate its members. Some infinite languages can also be defined without resorting to a grammar: for example, the language of all strings of even length, or the language of all palindromes, or the language whose words have a prime number of occurrences of the letter ‘a’. For all these languages, when a grammar is given one can prove whether it indeed generates exactly that language. However, there is an inherent problem in providing a general methodology for specifying languages. While in some cases this is easy and straight-forward
Cambridge Books Online © Cambridge University Press, 2012
14
1 Introduction
to do, in others it is impossible, and the grammar remains the only definition of the language.
1.5 Context-free grammars Among the most elementary formal mechanisms used in describing languages are context-free grammars (CFGs). Motivated by the concept of phrase structure in natural languages, CFGs naturally generalize phrase structures and can be used to describe languages (not necessarily natural ones) that are thus characterized. We sketch the details of the formalism in this section, and in Section 1.6, question its applicability to adequately describe the syntax of natural languages. We assume some preliminary knowledge of mathematics; refer to Appendix B for a short review of the notions used in this book. Definition 1.1 (Context-free grammars) A context-free grammar (CFG) is a four-tuple Σ, V, S, P , where: • Σ is a finite, nonempty set of terminals, the alphabet; • V is a finite, nonempty set of grammar variables (category names, or non-
terminal symbols), such that Σ ∩ V = ∅;
• S ∈ V is the start symbol; • P is a finite set of production rules, each of the form A → α, where A ∈ V
and α ∈ (V ∪ Σ)∗ .
For a rule A → α, A is called the rule’s head and α is its body. A form is a sequence of terminal and nonterminal symbols, element of (V ∪ Σ)∗ . We use α, β to range over forms. Throughout this chapter, we abbreviate the term ‘context-free grammars’ to just grammars. Meta-variables A, B and C range over V ; X ranges over (V ∪ Σ); α, β, γ range over forms; and G ranges over grammars. When grammars are exemplified, usually only the productions set P is displayed, and Σ and V are assumed to include only terminal and nonterminal symbols occurring in P . The start symbol is assumed to be the head of the first rule. Elements of Σ are usually lowercase, whereas nonterminals are uppercase. A rule of the form A → is called an -rule. A rule of the form A → B (with possibly A = B) is called a unit-rule. Example 1.1 depicts a CFG that is used as a running example in this chapter. When grammar rules have the same head, we sometimes depict the head only once, listing all the different bodies next to it, separated by vertical bars. The
Cambridge Books Online © Cambridge University Press, 2012
1.5 Context-free grammars
15
Example 1.1 A simple CFG. The following grammar Ge is used as a running example: S → Va S Vb S → Va → a Vb → b
example grammar would then be: S → Va S Vb | Va → a Vb → b A derivation relation is defined between forms, relative to a grammar G. Definition 1.2 (Derivation) α derives β in G (notated α ⇒G β), iff there exist A, α1 , α2 and γ in G such that A → γ ∈ P , α = α1 Aα2 and β = α1 γα2 . The k rule A → γ is said to be applicable to α. α ⇒G β if α derives β in k steps: α ⇒G α1 ⇒G α2 ⇒G . . . ⇒G αk , and αk = β. The reflexive-transitive closure ∗ ∗ k of ‘⇒G ’ is ‘⇒G ’: α ⇒G β if α ⇒G β for some k ≥ 0. A G-derivation is a sequence of forms α1 , . . . , αn , such that for every i, 1 ≤ i < n, αi ⇒G αi+1 . When G is clear from the context, the G-indexing on ‘⇒’ (and its derived relations) is omitted. Example 1.2 Derivation. Following is a Ge -derivation: S ⇒ Va SVb ⇒ Va Va SVb Vb ⇒ aVa SVb Vb ⇒ ∗ aVa SbVb ⇒ aVa Sbb ⇒ aVa bb ⇒ aabb. Hence S ⇒ aabb. A different Ge derivation of the same word is the following: S ⇒ Va SVb ⇒ aSVb ⇒ aVa SVb Vb ⇒ aaSVb Vb ⇒ aaVb Vb ⇒ aabVb ⇒ aabb. This derivation is leftmost: it always selects the leftmost nonterminal in the sentential form as the symbol to expand. ∗
Definition 1.3 (Sentential forms) If A ⇒G α, then α is (a phrase) of category ∗ A. A form α is a sentential form of a grammar G iff S ⇒G α, i.e., α is of category S. Definition 1.4 (Language) The (formal) language generated by a grammar G ∗ with respect to a category name (variable) A is LA (G) = {w | A ⇒ w}. The language generated by the grammar is L(G) = LS (G).
Cambridge Books Online © Cambridge University Press, 2012
16
1 Introduction
This provides a solution to the general problem of characterizing languages: To describe an arbitrary language L, one can provide a grammar G such that L(G) = L. A language that can be generated by some CFG is a context-free language, and the class of context-free languages is the set of languages, every member of which can be generated by some CFG. However, not all languages over Σ can be characterized by a CFG. If no CFG can generate a language L, L is said to be trans-context-free. Example 1.3 Language. L(Ge ) = {an bn | n ≥ 0}. Proof It is easy to see that LVa (Ge ) = {a} and LVb (Ge ) = {b}. We first prove that L(Ge ) ⊆ {an bn | n ≥ 0} by induction on the length of a derivation sequence n for words in L(Ge ). The induction hypothesis is that, if S ⇒ w, then n = 3k + 1 n and w = ak bk for some k ≥ 0. Suppose S ⇒ w. If n = 1, that is, S ⇒ w, then there must be a rule S → w in Ge , and the only possible rule is S → ; hence, w must be (= a0 b0 ). Assume that the hypothesis holds for every i < n and n ∗ that S ⇒ w. Then this derivation has the form S ⇒ α ⇒ w. The only applicable rule in the first step of this derivation is S → Va SVb , hence α = Va SVb . Thus, m ∗ ∗ there are w1 , w2 , w3 such that w = w1 w2 w3 , Va ⇒ w1 , S ⇒ w2 , Vb ⇒ w3 , where m m < n. By the induction hypothesis, applied to the shorter derivation (S ⇒ w2 ), k k m = 3k + 1 and w2 = a b . Va derives only a, Vb derives only b, each in one 1+m+1 3+m step, hence α ⇒ w, and therefore S ⇒ w, where w = aak bk b = ak+1 bk+1 . Indeed, 3 + m = 3 + (3k + 1) = 3(k + 1) + 1. Next, we prove that L(Ge) ⊇ {an bn | n ≥ 0} by induction on n. The induction 3n+1 1 hypothesis is that if w = an bn , then S ⇒ w. If n = 0, that is, w = , then S ⇒ by applying the rule S → . Suppose that the hypothesis holds for n − 1, and assume that w = an bn , where n > 0. Then w = aan−1 bn−1 b. By the induction 3(n−1)+1
⇒ an−1 bn−1 . Since S → Va SVb is a production and Va ⇒ hypothesis, S 3n+1 a and Vb ⇒ b, we add three more derivation steps to obtain S ⇒ an bn = w. Note that there are different orders for these derivation steps, but in any ordering exactly three steps are applied. By the two mutual inclusions, L(Ge ) = {an bn | n ≥ 0}.
The language L(Ge ) is infinite: It includes an infinite number of words. We hinted in Section 1.1 that it is possible to capture infinity with finite means; indeed, Ge is a finite grammar. To be able to produce infinitely many words with a finite number of rules, a grammar must be recursive: There must be
Cambridge Books Online © Cambridge University Press, 2012
1.5 Context-free grammars
17
at least one rule whose body contains a symbol, from which the head of the rule can be derived. Put formally, a grammar Σ, V, S, P is recursive if there exists a chain of rules p1 , . . . , pn ∈ P , such that for every 1 < i ≤ n, the head of pi+1 occurs in the body of pi , and the head of p1 occurs in the body of pn . In Ge , the recursion is simple: the chain of rules is of length 0, namely the rule S → Va S Vb is in itself recursive — the nonterminal symbol S occurs both in the head and in the body. Sometimes, recursion is triggered by a longer chain of rules, but its effects are similar. Exercise 1.7 (*). What is the language generated by the following grammar? S → a S a | b S b | | a | aa | b | bb In the sequel we abuse the term ‘category’ to refer to both the category name (e.g., A, B, etc.) and the collection of phrases derivable from this category (i.e., LA , LB , etc.) We extend the concept of a derivation from a single category to sequences ∗ ∗ thereof. Formally, if Ai ⇒ wi for 1 ≤ i ≤ k, we write A1 · · · Ak ⇒ w1 · · · wk . ∗
Exercise 1.8 (*). Show that if Xi ⇒ αi for 1 ≤ i ≤ n and X → X1 …Xn is a ∗ rule, then X ⇒ α1 · · · αn . A sentential form might be obtained by different derivation sequences of the same grammar. Example 1.2 depicted two different derivations for the word aabb. A natural equivalence relation over derivations equates derivations that differ only in the order in which rules are applied. The two derivations in Example 1.2 are thus equivalent because exactly the same rules were applied (to the same occurrences of nonterminals), only in a different order. Exercise 1.9 (*). Show that there is only one derivation for in Ge . Exercise 1.10. Show that any equivalence class of derivations in Ge is finite. Is this true for every grammar G? A natural representation for an equivalence class of derivation sequences is a derivation tree. This tree serves as the description of the syntactic structure that is attributed to the string by the grammar (See Example 1.4.) Definition 1.5 (Derivation tree) A derivation tree over a grammar G = Σ, V, S, P is a finite, labeled, ordered tree such that: • Every node is labeled by a symbol of V ∪ Σ ∪ {}. • The root is labeled by S. • Every internal node is labeled by an element of V .
Cambridge Books Online © Cambridge University Press, 2012
18
1 Introduction
• Leaves are labeled by elements of Σ ∪ {}. • A leaf labeled by has no sisters. • If an internal node is labeled by A ∈ V and its daughters are labeled (in
order) by X1 , . . . , Xn , then A → X1 · · · Xn ∈ P . If v is a node in a derivation tree (other than the root) then μ(v) denotes the mother node of v.
Example 1.4 Derivation tree. Following is a derivation tree for the word aabb according to Ge : S S Va Va S Vb Vb a
a
b
b
Definition 1.6 (Depth) The depth of a tree τ , denoted d(τ ), is the length of the longest path connecting the root of τ to a leaf.
Example 1.5 Depth. Refer back to Example 1.4. The depth of the tree is 3 (due to the central path in the tree). It is common in formal language theory to relate different grammars that generate the same language by an equivalence relation. Definition 1.7 (Grammar equivalence) Two grammars G1 and G2 (over the same alphabet Σ) are weakly equivalent (denoted G1 ≡ G2 ) iff L(G1 ) = L(G2 ). We refer to this relation as weak equivalence, as it only relates the generated languages. Equivalent grammars may attribute totally different syntactic structures to members of their common languages. This fact can be exemplified by considering the grammar Gf , which contains only two productions: S → aSb and S → . It is easy to show that Gf is equivalent to the example grammar Ge , as the languages they generate is identical: {an bn | n ≥ 0}. However, as
Example 1.6 demonstrates, each grammar assigns a different structure (i.e., a different derivation tree) to equal strings. See also Example 1.7.

Example 1.6 Equivalent grammars, different trees. Following are two different tree structures (in labeled-bracket notation) that are attributed to the string aabb by the grammars Ge and Gf, respectively:

[S [Va a] [S [Va a] [S ε] [Vb b]] [Vb b]]

[S a [S a [S ε] b] b]
Example 1.7 Structural ambiguity. Example 1.6 is somewhat trivial: The two different trees do not encode “interesting” differences in the structure of the string. As a more realistic example, consider a grammar, Garith, for simple arithmetic expressions over the symbols a, b and c:

S → a | b | c | S + S | S ∗ S | ( S )

This grammar generates strings of symbols from a, b, c in which any two symbols are separated by either ‘+’ or ‘∗’, that is, simple arithmetic expressions over three symbols and two operators. Expressions can (but do not have to) be enclosed in parentheses. An important observation with regard to this grammar is that it is ambiguous: There exist strings with which the grammar associates more than one structure. Consider the string a + b ∗ c. Two different trees (shown here in labeled-bracket notation) can be associated with this string by Garith:

[S [S a] + [S [S b] ∗ [S c]]]

[S [S [S a] + [S b]] ∗ [S c]]
The differences between the two trees of Example 1.7 are substantial. If the tree structure is to be interpreted as denoting the precedence order of the arithmetic operations, then one tree denotes a + (b ∗ c), while the other denotes (a + b) ∗ c.

Recall the discussion of constituents from Section 1.1. It is possible to define, for the (formal language) grammar Garith, an equivalent notion of constituents. Note that the grammar describes how expressions are formed from subexpressions by the two recursive rules that derive S. Given a derivation tree for some expression, every subexpression that is the yield of some subtree is a constituent. In the preceding example, the first tree induces a view of the subexpression b ∗ c as a constituent; the second tree does not, but rather defines a + b as a constituent.

When a single grammar assigns more than one structure to some string, the grammar is said to be ambiguous. If the structures assigned by Garith are to be interpreted as denoting the precedence of operators, the consequences of this ambiguity might be severe. Fortunately, for this simple language it is possible to construct an unambiguous equivalent grammar: Although the languages generated by both grammars are identical, the structures the grammars associate with strings are different. Such an unambiguous grammar must predefine a precedence order of the arithmetic operations. In Example 1.8, we define an equivalent grammar, G∗, where the ‘∗’ operator takes precedence over ‘+’, by considering two types of constituents: Subexpressions whose only operators are ‘∗’ (i.e., multiplications) are viewed as constituents of category T; “sums” of constituents of type T are constituents of type S.

Example 1.8 An unambiguous grammar for arithmetic expressions, G∗. The following is an unambiguous grammar for arithmetic expressions over the symbols a, b, c and the operators ‘∗’, ‘+’, where multiplication takes precedence over addition:

S → T + S | T
T → C ∗ T | C
C → a | b | c | ( S )

The grammar associates a single tree with the string a + b ∗ c:

[S [T [C a]] + [S [T [C b] ∗ [T [C c]]]]]
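To see how G∗ forces the intended precedence in practice, the following sketch (ours, not part of the book's development; all function names are invented for illustration) is a small recursive-descent parser that follows the three rules of G∗ literally and can therefore build only the single structure shown above.

# A minimal recursive-descent parser that follows G* literally:
#   S -> T '+' S | T      T -> C '*' T | C      C -> 'a' | 'b' | 'c' | '(' S ')'
# Each function returns (tree, remaining input); trees are nested tuples.
# Illustrative sketch only, not the book's formalism.

def parse_S(s):
    tree, rest = parse_T(s)
    if rest[:1] == '+':
        sub, rest = parse_S(rest[1:])
        return ('S', tree, '+', sub), rest
    return ('S', tree), rest

def parse_T(s):
    tree, rest = parse_C(s)
    if rest[:1] == '*':
        sub, rest = parse_T(rest[1:])
        return ('T', tree, '*', sub), rest
    return ('T', tree), rest

def parse_C(s):
    if s[:1] in ('a', 'b', 'c'):
        return ('C', s[0]), s[1:]
    if s[:1] == '(':
        tree, rest = parse_S(s[1:])
        assert rest[:1] == ')', 'unbalanced parentheses'
        return ('C', '(', tree, ')'), rest[1:]
    raise ValueError('unexpected input: ' + repr(s))

tree, rest = parse_S('a+b*c')
assert rest == ''
print(tree)   # the structure of Example 1.8: a + (b * c)

Because the multiplication rule is available only inside T, the subexpression b ∗ c is necessarily grouped before ‘+’ is considered, mirroring the single tree that G∗ assigns.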
Exercise 1.11 (*). Show an unambiguous grammar for Boolean expressions over the constants false and true, the variables a, b, c, and the operators not, and, or. The grammar must assign to Boolean expressions trees in which not precedes and, which precedes or.

Notice that the (weak) equivalence relation is stated in terms of the generated language. Consequently, equivalent grammars do not have to be described in the same formalism for them to be equivalent. We will later see how grammars specified in different formalisms can be compared.

It is convenient to divide grammar rules into two classes: one that contains only phrasal rules of the form A → α, where α ∈ V*, and another that contains only terminal rules of the form B → σ, where σ ∈ Σ. It turns out that every CFG is equivalent to some CFG of this form.

Definition 1.8 (Normal form) A grammar G is in phrasal/terminal normal form iff for every production A → α of G, either α ∈ V* or α ∈ Σ. Productions of the form A → σ are called terminal rules, and A is said to be a preterminal category, the lexical entry of σ. Productions of the form A → α, where α ∈ V*, are called phrasal rules. Furthermore, every category is either preterminal or phrasal, but not both. For a phrasal rule with α = A1 · · · An, w = w1 · · · wn, w ∈ LA(G), and wi ∈ LAi(G) for i = 1, . . . , n, we say that w is a phrase of category A and each wi is a subphrase (of w) of category Ai. A subphrase wi of w is also called a constituent of w.

By choosing to consider only normal-form grammars, the expressiveness of the CFG formalism is retained, as the following exercise suggests. Furthermore, the normal form seems to be suitable for the requirements of linguists, who prefer to distinguish between the preterminal and phrasal categories.

Exercise 1.12 (*). Prove that every CFG is equivalent to a CFG in phrasal/terminal normal form. Suggest an algorithm for transforming a general CFG to one in normal form.

Exercise 1.13. Prove that every CFG G such that ε ∉ L(G) is equivalent to a CFG that contains no ε-rules.

Exercise 1.14. Define a cycle of unit rules as a set of unit rules X1 → X2, X2 → X3, . . ., Xn−1 → Xn, Xn → X1 (for some n ≥ 1). Prove that every CFG is equivalent to a CFG that includes no cycles of unit rules.

Context-free grammars are a powerful tool, and many formal languages can be (weakly) characterized using CFGs. The class of (formal) languages for
which a CFG exists is called the class of context-free languages. However, not every formal language is context-free. For example, it can be proved that no context-free grammar G exists such that L(G) = Lww = {ww | w ∈ Σ*}. Without getting into formal proofs, the explanation of the observation that Lww is not context-free is that w can be arbitrarily long. A grammar for Lww must ‘remember’ in some way all the letters in w before it accounts for the next occurrence of w. CFGs can be thought of as mechanisms with memory, but this memory has a very particular form: that of a stack. Therefore, there exist CFGs that generate languages in which strings correspond naturally to last-in-first-out sequences, such as the language {a^n b^n | n ≥ 0} we saw above or the language {ww^r | w^r is w reversed}. Since Lww is not such a language, it can be proven that no CFG generates it.

Given some CFG G and some string w, it is often desirable to determine whether w ∈ L(G). This problem, known as the recognition problem, has been thoroughly investigated, and various algorithms for solving it have been devised. A related problem is that of parsing, where, in addition to recognition, a structure that the grammar assigns to the string is required, too. Most recognition algorithms can be easily modified for parsing. Again, we address this problem in more detail in Chapter 6. For the time being, suffice it to say that parsing with CFGs is possible and computationally efficient.
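The book treats recognition and parsing algorithms properly in Chapter 6. Purely as a rough illustration of why recognition with CFGs is computationally feasible, here is a sketch of the classical CYK algorithm, which assumes that the grammar is given in Chomsky normal form (every rule is either A → B C or A → σ). The encoding of rules and all names below are our own.

# CYK recognition for a grammar in Chomsky normal form:
# binary rules A -> B C and terminal rules A -> sigma.
# Illustrative sketch only; Chapter 6 of the book treats parsing properly.

def cyk_recognize(binary, terminal, start, w):
    n = len(w)
    if n == 0:
        return False                      # the empty word is ignored in this sketch
    # table[i][j] = set of categories deriving the substring w[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, sigma in enumerate(w):
        table[i][i + 1] = {a for (a, s) in terminal if s == sigma}
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (a, b, c) in binary:
                    if b in table[i][k] and c in table[k][j]:
                        table[i][j].add(a)
    return start in table[0][n]

# A CNF grammar for {a^n b^n | n >= 1}: S -> A X | A B, X -> S B, A -> a, B -> b
binary = [('S', 'A', 'X'), ('S', 'A', 'B'), ('X', 'S', 'B')]
terminal = [('A', 'a'), ('B', 'b')]
print(cyk_recognize(binary, terminal, 'S', 'aabb'))   # True
print(cyk_recognize(binary, terminal, 'S', 'aab'))    # False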
1.6 CFGs and natural languages

CFG is a suitable formalism for describing formal languages, especially programming languages. Is it also appropriate for describing natural languages? This question has two parts. First, one has to find out whether natural languages, in general, when viewed as formal languages, are context-free. This question was extensively discussed in the literature, and we survey some of the discussion below. A different question is whether CFGs provide the desired descriptive mechanism for the structure of natural language utterances. We discuss this question later in this section.

It was the scientific revolution begun by Chomsky that, in the 1950s, cleared the way for using formal devices to describe natural languages. Chomsky described four different formal mechanisms for doing this, which form a hierarchy in the sense that all the languages describable by type-(i + 1) grammars are also describable by type-i grammars (i has the values 0 to 2 here). The weakest mechanism, type-3 grammars, has the capacity to generate all and only the regular languages – these are the same languages that are recognizable by finite state automata. Type-2 grammars are exactly the class of
CFGs. Higher classes in the hierarchy are context-sensitive grammars and unrestricted rewriting systems, capable of generating all the recursively enumerable languages. Notice that all levels in the hierarchy include infinite languages; for example, the regular (and hence, type-3) language Σ* is infinite. However, the level of structural complexity increases as one goes up the hierarchy.

As Chomsky himself noted, type-3 grammars are inadequate for modeling natural languages, simply because natural languages are clearly beyond regular languages. To prove such a claim with respect to a particular language, say L, one can use a technique known as the pumping lemma. As it is not widely used in the context of natural languages, however, we do not describe it here. A more common technique for proving that a language is transregular (i.e., cannot be described by any regular grammar) is the following: Show a language, R, which is provably regular; show that the intersection of L and R (i.e., the set of strings that belong to both) forms a language that is known to be transregular. Since regular languages are known to be closed under intersection (i.e., the intersection of two regular languages is always a regular language), it follows that L is not regular. Example 1.9 demonstrates this technique.

Exactly the same techniques can be applied to prove that a given language is beyond the generative capacity of type-2 languages, that is, context-free languages. There exists a pumping lemma for context-free languages, and such languages are also closed under intersection with regular languages (as well as under homomorphism, like regular languages; but unlike regular languages, context-free languages are not closed under complementation: If L is context-free, it does not follow that the set of strings not in L, namely Σ* − L, is context-free).

The question whether natural languages fall inside the class of context-free languages has been addressed extensively in the linguistic literature, usually with a negative answer. It is beyond the scope of this introduction to list all the arguments that have been raised in this dispute, bearing data from such languages as English, German, and Dutch, as well as Bambara and Mohawk. We note only that today it is widely believed that there exist natural languages whose structure cannot be described by any CFG; hence, CFGs are not an adequate formalism to model natural languages. (See the discussion at the end of this chapter.) Example 1.10 sketches such a construction, known as cross-serial dependencies, in a particular language.

The initial belief of the appropriateness of a grammar formalism based (solely) on phrase structure stems from the observation made by traditional grammarians
Example 1.9 English is not a regular language. A famous example demonstrating that natural languages are not regular has to do with phenomena known as “center embedding.” The following is a grammatical English sentence: A white male hired another white male.
The subject, A white male, can be modified by the relative clause whom a white male hired: A white male – whom a white male hired – hired another white male.
Now, the subject of the relative clause can again be modified by the same relative clause, and in principle, there is no bound to the level of embedding allowed by English. It follows, then, that the language Ltrg is a subset of English:

Ltrg = { A white male (whom a white male)^n (hired)^n hired another white male | n > 0 }

Observe that Ltrg is the intersection of the natural language English with the regular set

Lreg = { A white male (whom a white male)* (hired)* hired another white male }

and since Ltrg is known to be transregular, so is English.
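As a small illustrative aside (not from the book), the construction of Example 1.9 can be mimicked mechanically: the regular set Lreg is an ordinary regular-expression pattern, while membership in Ltrg additionally requires counting and comparing the two repeated parts — exactly the kind of requirement no regular device can enforce. The helper below is purely hypothetical.

import re

# L_reg is regular: it can be written as a single pattern.
L_REG = re.compile(r'A white male (whom a white male )*(hired )*hired another white male')

def in_L_trg(sentence):
    """Membership in L_trg: the regular pattern plus the counting requirement n = n, n > 0."""
    if L_REG.fullmatch(sentence) is None:
        return False
    n_whom = sentence.count('whom a white male')
    n_hired = sentence.count('hired') - 1     # the final 'hired another ...' is not counted
    return n_whom > 0 and n_whom == n_hired

s = 'A white male whom a white male hired hired another white male'
print(in_L_trg(s))    # True: one level of center embedding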
Example 1.10 Cross-serial dependencies. A certain Swiss-German dialect that is spoken around Zurich allows constructions in which subordinate clauses can be embedded unboundedly deep within each other. This, coupled with the facts that this dialect marks case on nouns and that certain verbs assign particular cases to their arguments, results in constructions such as the following: De Jan säit, dass mer (d’chind)^k (em Hans)^l es huus haend wele (laa)^m (hälfe)^n aastriiche. Crucially, however, such constructions are grammatical if and only if both k = m and l = n. It can be proven, using the pumping lemma for context-free languages, that Swiss-German is therefore not context-free.
of the existence of “natural categories.” Different members of a natural category are “equivalent” in having what is known as “equal distribution”; namely,
they can replace each other in every context while still preserving syntactic well-formedness. The usual parts of speech (noun, verb, etc.) form such categories. Parts of speech, however, are only terminal categories, and the concept of “natural categories” is lifted to phrasal categories, based on these parts of speech. Thus, a category that is headed, in a sense that will not be explicated here, by a noun is called a noun phrase, and all noun phrases have an approximately equal distribution. In the same way, verb phrases are categories headed by verbs, and so on.

In the previous section we discussed the notion of weak grammar equivalence. Since grammars also assign structure to strings, it is desirable to compare not only the strings that are generated by a grammar, but also their assigned structure. Thus a stricter requirement for grammar equivalence can be based on their strong generative capacity, defining two grammars to be equivalent only if they generate the same language and assign the same structure to strings. Notice that while the concept of weak generative capacity is mathematically well understood, strong generative capacity is far less formal: It is not clear what the “correct” structures are that a grammar should associate with the strings it generates, and it is also unclear in what way the structures assigned by different formalisms can be compared. The discussion below, therefore, is more intuitive and less formal.

It is generally claimed that CFGs do not have strong generative capacity for assigning “correct” structures to natural language sentences. In what follows we shall try to draw intuition from E0, a small fragment of English (see its characterization in Section 1.3). Under the (natural) assumption that the lexicon is finite, the language E0 is actually finite (since it contains neither embedded clauses nor coordination). Therefore, the sentences of the language can be enumerated. Let s0, s1, . . . , sn be an exhaustive enumeration of all the sentences of E0. Then the following grammar generates E0:

S → s0 | s1 | · · · | sn

This grammar lists only sentences, without assigning any further structure to them; it is thus devoid of any strong generative capacity. This completely blocks any attempt to associate a compositional semantics with the grammar. Specifically, the grammar for E0 would be something like:
S → a sheep drinks
  | Rachel herds the sheep
  | Jacob loves her
  | the sheep love Rachel
  | ...
and indeed, no syntactic relationships are assigned by this grammar to the sentences it generates.

A better attempt to capture the syntax of E0 with a context-free framework is the (context-free) grammar G0 of Example 1.11. G0 contains five preterminal categories: D (determiners), N (nouns), V (verbs), Pron (pronouns) and PropN (proper names). These naturally correspond to the parts of speech categorization of words in English. Note that these categories (with the exception of pronouns) are viewed as open, that is, their exact extension is immaterial for the current discussion; hence the ellipsis in the bodies of the respective rules. The phrasal categories are S (sentences), VP (verb phrases) and NP (noun phrases). Thus, this grammar induces some natural notions of constituents, those strings that are assigned the above-mentioned categories by the grammar.

Example 1.11 A context-free grammar G0:

S → NP VP
VP → V | V NP
NP → D N | Pron | PropN
D → the, a, two, every, . . .
N → sheep, lamb, lambs, shepherd, water, . . .
V → sleep, sleeps, love, loves, feed, feeds, herd, herds, . . .
Pron → I, me, you, he, him, she, her, it, we, us, they, them
PropN → Rachel, Jacob, . . .
Keeping the observations regarding the properties of CFGs in mind, several problems of the example grammar become apparent. One of them is the problem of overgeneration: As we demonstrate in Example 1.12, the grammar generates many strings that are not E0 (and, hence, not English) sentences:

∗Rachel feed the sheep
∗The shepherds feeds the sheep
∗Rachel feeds
∗Jacob loves she
∗Them herd the sheep
Why is this grammar overgenerating? One reason is that the grammar ignores an important property of natural languages: terminal symbols of the grammar (i.e., words in a natural language)6 have properties, both syntactic and semantic,

6 Notice that words are the terminal symbols of natural languages, but are strings of terminal symbols in formal languages. In other words, natural language words are elements of the alphabet when such languages are viewed formally.
Example 1.12 Overgeneration. Following is a derivation tree (in labeled-bracket notation) for the string ∗the lambs sleeps they according to G0, indicating the fact that the grammar overgenerates:

[S [NP [D the] [N lambs]] [VP [V sleeps] [NP [Pron they]]]]
that affect their ability to combine with other words and form larger phrases. We elaborate on this issue in the next section. Consider the following examples, drawn from a variety of natural languages:

English: Nouns, as well as determiners, are marked for number (either “singular” or “plural”). Lambs, two are plural; lamb, a are singular.
English: Pronouns are marked for case (either “nominative,” “accusative,” or “genitive”). I, he are nominative; me, him are accusative; my, his are genitive.7
French: Nouns, as well as determiners, are marked for gender (“masculine” or “feminine”), in addition to their markedness for number. chienne, la (dog, the) are feminine; chat, le (cat, the) are masculine.
French: Verbs are marked for person (either “first,” “second,” or “third”). Mangeons (eat) is plural, first person, whereas mangez is second person.
Hebrew: Nouns and adjectives are marked for definiteness (“definite” or “indefinite”), in addition to their markedness for number and gender. ha-kaddur, ha-gadol (the-ball, the-big) are definite; kaddur, gadol (ball, big) are indefinite.
Russian: Nouns and adjectives are marked for case. Bolshoy, teatr (big, theater) are nominative; bolsho-mu, teatr-u (big, theater) are dative.

These languages, as well as many others, impose agreement restrictions on the formation of phrases. Agreement can be defined informally as a requirement

7 Genitive pronouns are outside the scope of E0.
for identity in the value of some feature on the agreeing components. Consider the following examples:

English: A determiner and a noun can combine to form a noun phrase only if they agree on number: two lambs is grammatical, ∗a lambs is not.
French: Determiners and nouns must agree on gender when forming a noun phrase: le chat (the cat) is grammatical, ∗la chat is not.
French: When a noun phrase and a verb combine to form a sentence, they must agree on number and person: nous mangeons (we eat) is grammatical, ∗nous mangez is not.
Hebrew: Nouns and adjectives must agree on definiteness when forming a noun phrase: ha-kaddurim ha-gdolim (the big balls) is grammatical, ∗kaddurim ha-gdolim is not.
Russian: Nouns and adjectives must agree on case when combined to form a noun phrase: bolshoy teatr, bolsho-mu teatr-u (big theater), but ∗bolshoy teatr-u, ∗bolsho-mu teatr.

As can be seen from the examples, the overgenerating grammar G0 does not account for the observations regarding agreement in English (and in E0).

Languages also impose control constraints on phrase formation. When two subphrases are combined, one of them might control the value of some feature in the other subphrase. In English, for example, the case of a pronoun must reflect its function in a sentence: Rachel feeds him is grammatical, ∗Rachel feeds he is not. When a noun phrase and a verb phrase are combined to form a sentence, the verb (which controls the subject NP) imposes the value nominative for the case feature of the NP. This constraint of E0 is also violated by G0.

Another way in which L(G0) deviates from E0 (and, hence, from English too) is by ignoring the subcategorization property, whereby a sentence with a transitive verb requires an object, while intransitive verbs prohibit objects. A better grammar would license the sentences:

the lambs sleep
Jacob loves Rachel
while rejecting:

∗the lambs sleep the sheep
∗Jacob loves
G0 does not reflect these restrictions on E0 . While it is possible to have a finer-tuned CFG to reflect such restrictions (and we present one later; see Example 5.5), as we explain in Chapter 5, this is not the most suitable way.
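As a rough preview of the direction the book takes — and emphatically not the formalism developed in the following chapters — the sketch below hand-codes the two kinds of restrictions just discussed: lexical entries are annotated with a number value and, for verbs, a subcategorization value, and a flat subject–verb–object string is accepted only if the values are compatible. The toy lexicon and all names are ours.

# A hand-rolled preview of feature checking (not the book's formalism).
# Lexical entries carry a 'num' value and, for verbs, a 'subcat' value.

LEXICON = {
    'the':    {'cat': 'D',  'num': None},          # unspecified number
    'lambs':  {'cat': 'N',  'num': 'pl'},
    'lamb':   {'cat': 'N',  'num': 'sg'},
    'sleep':  {'cat': 'V',  'num': 'pl', 'subcat': 'intrans'},
    'sleeps': {'cat': 'V',  'num': 'sg', 'subcat': 'intrans'},
    'loves':  {'cat': 'V',  'num': 'sg', 'subcat': 'trans'},
    'Jacob':  {'cat': 'NP', 'num': 'sg'},
    'Rachel': {'cat': 'NP', 'num': 'sg'},
}

def agree(x, y):
    return x is None or y is None or x == y

def sentence_ok(words):
    """Check a flat [subject..., verb, object...] string for agreement and subcategorization."""
    entries = [LEXICON[w] for w in words]
    verb_pos = next(i for i, e in enumerate(entries) if e['cat'] == 'V')
    verb = entries[verb_pos]
    subject_num = entries[verb_pos - 1]['num']     # number of the word heading the subject
    has_object = verb_pos < len(words) - 1
    return (agree(subject_num, verb['num'])
            and (verb['subcat'] == 'trans') == has_object)

print(sentence_ok(['the', 'lambs', 'sleep']))      # True
print(sentence_ok(['the', 'lambs', 'sleeps']))     # False: number clash
print(sentence_ok(['Jacob', 'loves']))             # False: missing object
print(sentence_ok(['Jacob', 'loves', 'Rachel']))   # True

Encoding such constraints directly in CFG categories quickly multiplies the number of rules; the feature structures introduced in Chapter 2 express the same information compactly and uniformly.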
1.7 Mildly context-sensitive languages

In the light of the deficiencies of context-free grammars in providing a plausible account of natural languages, a “mild,” or minor, extension of CFGs was proposed as more adequate for these purposes. Mildly context-sensitive languages were initially proposed as the correct language family in which natural languages reside, but they were introduced rather informally, in a way that does not precisely define a concrete set of languages. The original definition called for a class of languages that:

• contains all the context-free languages;
• includes, in addition, languages exhibiting some constructions that are known to be beyond context-free, but only those that are observed in natural languages. This includes cross-serial dependencies (constructions that can be mapped to languages such as {a^n b^m c^n d^m | m, n ≥ 0}); multiple agreement ({a^n b^n c^n | n > 0}); and reduplication ({ww | w ∈ Σ*});
• can be parsed in polynomial time (in the length of the input); and
• have linear growth; this means that when strings in the language are ordered by length, the gaps between two consecutive lengths cannot be arbitrarily large. In fact, each length can be obtained as a linear combination of a fixed number of lengths.

The motivation behind this definition was to characterize a class of languages that is likely to be the minimal class that includes all natural languages. Several linguistic formalisms followed that spelled out more precise definitions of mildly context-sensitive grammars, classes of grammars that generate the class of mildly context-sensitive languages. These include Tree-Adjoining Grammars (TAG), Head Grammars, Combinatory Categorial Grammars, and Linear Indexed Grammars. These four formalisms were developed independently, and they all use very different mechanisms to define classes of languages. Crucially, they were all motivated by the wish to provide an adequate formalism for natural languages. It is therefore a very surprising, yet extremely elegant and encouraging, result that all four formalisms were actually found to define precisely the same class of languages. While this class is accurately called the set of tree-adjoining languages, some refer to it as the class of mildly context-sensitive languages. Unfortunately, there is some evidence that some natural languages may not even be mildly context-sensitive. These include Dutch, Georgian, Chinese, and a few others. Furthermore, it is not clear that mildly context-sensitive grammars have the strong generative capacity necessary to account
for natural languages. In the next section we propose an extended formalism, motivated by the deficiencies observed in context-free grammars in Section 1.6.

1.8 Motivating an extended formalism

To conclude this chapter, we discuss some of the properties of context-free grammars that make them less attractive as a formalism for expressing the structure of natural languages. We also propose, in anticipation of the remainder of this book, alternative principles that are beneficial for an adequate formalism. These principles motivate the introduction of feature structures in Chapter 2 and the operation of unification in Chapter 3.

String combination operations In context-free grammars, concatenation is the only operation used to combine substrings into a larger string. The only way in which the grammar is allowed to manipulate strings is by concatenating them to each other (refer back to Definition 1.2 to observe this). This is not self-evident, and other formalisms may prefer to offer more string combination operations. Relevant operations that have been proposed in the literature include scrambling (interleaving phrases in a certain way) and wrapping (embedding one phrase within another). For example, wrapping allows the insertion of one constituent into another. Thus, if Rachel told a story is a phrase, and if it is wrapped around the phrase Jacob so that Rachel told Jacob a story is obtained, the constituent told a story becomes discontinuous. Note that in view of such phrases, the notion of constituency becomes questionable, at least requiring a redefinition. The formalisms considered in this book, however, all adhere to having concatenation as the only string operation.

Syntactic structures and descriptive relations In CFG, phrase structure is the only syntactic relationship usable as a descriptive component. In other words, the structure that a grammar induces on a sentence in its language is a derivation tree, and this tree only. Again, other syntactic relationships are conceivable. For example, Lexical-Functional Grammar (LFG) adopts an approach with two levels (or dimensions) of representation which the grammar interfaces. In addition to the phrase-structure tree, there is a level of representation making use of grammatical functions (generalizations of the traditional “subject,” “object,” etc.) which participates in imposing restrictions on well-formedness. One of the major devices that we put forward in this book for expressing structural relations and inducing them on strings is the notion of value sharing,
also known as reentrancy. Pieces of the structural descriptions of strings can be shared among several strings, thereby expressing some commonalities in the structure. This facilitates, inter alia, an elegant expression of gap-filler relationships.

The properties of terminal and nonterminal symbols The terminal symbols of a context-free grammar (words in natural languages) have no properties (except for their identity). Consequently, no information can be drawn from them, and there is no relation among different terminals: Two terminals can either be identical or different, but nothing in between. The same holds for nonterminal symbols (grammar variables): derivations, which refer to nonterminal symbols, can only refer to the identity of the symbols, never to their properties or structure. But terminals and nonterminals do have properties that determine well-formedness of their combinations. This is clearly evident from the requirements of agreement and subcategorization in E0, as described in the previous section, and there are more such examples. The formalism that we introduce in this book extends the notion of terminal and nonterminal symbols to entities that have internal structure, and can therefore better express both the properties of natural language constructions (lexical and grammatical alike) and the relations among them.

Lexicalism As a result of the atomic nature of terminal symbols, most of the information encoded in a context-free grammar lies in the production rules. In contrast, modern computational formalisms for expressing grammars adhere to an approach called lexicalism. According to lexicalist views, the main source of restrictions on well-formedness is the lexicon, as its entries can now be highly structured. This leaves open the role of grammar rules. In some theories (e.g., LFG and HPSG), these still have a role (though less central than in CFGs), and should be compatible with the lexicon. In other theories, such as categorial grammar (in its various forms), no language-specific grammar rules are left. Only a very small number of universal rules remain, controlling combinations of phrases. The lexicon becomes the sole source of distinction between languages. This approach is called radical lexicalism.

Beyond syntax While syntactic structure is a core component of the linguistic information carried by an utterance, several other layers of information exist which the grammar may want to express. In particular, it is natural to augment a grammar for a natural language with some representation of semantics. With context-free grammars, any attempt to extend the grammar with semantics
(attributing some meaning to elements of the formal language) requires extra means. There is no uniform way to include semantics with the syntax. As an alternative, the expressive power added to the formalisms to cope with the issues discussed above also allows a certain way of representing semantic information. However, a detailed description would require a certain familiarity with formal semantics, and we shall not dwell much on these issues, which are beyond the scope of this book.

Further reading

The basic notions underlying research in the syntax of natural languages can be found in any textbook on linguistics, such as Lyons (1968) or Akmajian et al. (1984). The notion of constituency and the importance of substitutivity as a criterion for constituency are usually attributed to Bloomfield (1933) but probably date back to the ancient grammarians. The use of formal mechanisms for specifying the structure of natural languages originated with Chomsky (1957). Other examples in this chapter, as well as in the rest of the book, are inspired by the Old Testament, in particular Genesis 28–29.

Context-free grammars and their mathematical theory are discussed in many formal language introductory texts, such as Aho and Ullman (1972), Harrison (1978), or Lewis and Papadimitriou (1981). The definitions of pumping lemmas (for regular languages as well as for context-free languages) can be found in introductory textbooks on formal languages, along with examples of their application.

The adequacy of CFGs for English was posed as an open question by Chomsky (1956). The question of whether natural languages can be characterized by context-free grammars has received much consideration in the literature. In a review of the debate, Pullum and Gazdar (1982) discuss many of the arguments for the non-context-freeness of natural languages, and come to the conclusion that none of them is both empirically and formally sound. They refute a variety of different claims that allegedly show that natural languages are trans-context-free, sometimes on purely formal (mathematical) grounds and sometimes on grounds of insufficient or misleading data. This conclusion motivated the construction of a grammatical formalism called Generalized Phrase-Structure Grammar (GPSG) (Gazdar et al., 1985), which was initially believed to be (weakly) equivalent in its generative capacity to CFGs, although it included mechanisms to facilitate grammar design and capture additional linguistic generalizations. Incidentally, it was later also proven that the full architecture of GPSG is not context-free equivalent (Ristad, 1990; Barton et al., 1987b, chapter 8).
The concept of weak versus strong generative capacity is discussed in Bresnan et al. (1982), where it is claimed that Dutch is not context-free. This claim is challenged by Manaster-Ramer (1987), who points to the limitations of strong generative-capacity arguments and states that “weak generative capacity has to be the ground where formal battles over linguistic models are won and lost.” He then continues by arguing that Dutch is not even weakly context-free. The same claim is raised by Shieber (1985) with respect to a dialect of Swiss-German spoken in the area of Zurich. A very good discussion of the generative capacity required for handling both the morphology and syntax of natural languages is given in Gazdar and Pullum (1985). The interested reader should consult Savitch et al. (1987) for a collection of many of the above-mentioned papers and others dedicated to a formal study of the complexity of natural languages. A different study of the computational complexity of natural languages is given by Barton et al. (1987b), where a variety of different sources for intractability are listed, and the complexity of a variety of linguistic formalisms is surveyed. Finally, Miller (1999) provides a rigorous and useful characterization of strong generative capacity, which is defined as the model-theoretic semantics of a linguistic formalism. This definition is applied to the analysis of a range of linguistic formalisms.

The class of mildly context-sensitive languages was informally defined by Joshi et al. (1975) as a class of languages that includes the context-free languages, allows some non-CF constructions that are known in natural languages, and guarantees polynomial parsing time. Several linguistic formalisms have been proposed as adequate, including TAGs (Joshi, 2003), Linear Indexed Grammars (Gazdar, 1988), Head Grammars (Pollard, 1984), and Combinatory Categorial Grammars (Steedman, 2000). In a seminal work, Vijay-Shanker and Weir (1994) prove that all four formalisms are weakly equivalent. They all generate the class of tree-adjoining languages, which we refer to here as mildly context-sensitive languages. For this class, recognition algorithms with time complexity O(n^6) are known (Vijay-Shanker and Weir, 1993; Satta, 1994). As a result of the weak equivalence of four independently developed (and linguistically motivated) extensions of CFG, this class is considered to be linguistically meaningful, a natural class of languages for characterizing natural languages. Evidence that some natural languages are not even mildly context-sensitive comes from Dutch (Manaster-Ramer, 1987; Groenink, 1997), Chinese (Radzinski, 1991), and Old Georgian (Michaelis and Kracht, 1997). The two most popular grammatical formalisms that offer string combination operators, other than concatenation, are Head Grammars (Pollard, 1984), which provides a wrapping operation, and TAG (Joshi et al., 1975; Joshi, 1987), whose adjoining operation is nonconcatenative.
2 Feature structures
Motivated by the violations of the context-free grammar G0, discussed in the previous chapter, we now extend the CFG formalism with additional mechanisms that will facilitate the expression of information that is missing in G0 in a uniform and compact way. The core idea is to incorporate into the grammar the properties of symbols in terms of which violations of G0 were stated. Properties are represented by means of feature structures. As we show in this chapter, feature structures provide a natural representation for the kind of linguistic information that grammars specify, a natural (partial) order on the amount of information stored in these representations, and an efficient operation for combining the information stored in two representations.

We begin this chapter with an overview of feature structures, motivating their use as a representation of linguistic information (Section 2.1). We then present four different views of these entities. We begin with feature graphs (Section 2.2), which are just a special case of ordinary (labeled, directed) graphs. The well-studied mathematical and computational properties of graphs make this view easy to understand and very suitable for computational implementation. This view, however, introduces a level of arbitrariness, which is expressed in the identities of the nodes. We therefore introduce two additional views. One, simply called feature structures (Section 2.3), is defined as equivalence classes of isomorphic feature graphs, abstracting over node identities. The other, called abstract feature structures (Section 2.4), uses sets rather than graphs and is easy to work with mathematically. Finally (Section 2.5), we define attribute-value matrices (AVMs). This is the view that is most common in the (computational) linguistic literature. As we show, AVMs stand in one-to-one correspondence with feature graphs, and it is instructive to think of AVMs as syntactic notions, denoting feature graphs. The relations between these two views are discussed in Section 2.6. Some other (nonlinguistic) uses of feature structures are discussed in Section 2.7.
2.1 Motivation

Words in natural languages have properties, and our first motivation in this chapter is to model these properties in the lexicon. We would like to associate with words, not just atomic symbols, as in CFGs, but rather structural information that reflects their properties, as in Example 2.1.

Example 2.1 A simple lexicon.

lamb:   [NUM: sg,  PERS: third]        lambs: [NUM: pl,  PERS: third]
I:      [NUM: sg,  PERS: first]        sheep: [NUM: [ ], PERS: third]
dreams: [NUM: sg,  PERS: third]
In Example 2.1, lexical items (of E0 ) are associated with feature structures. To depict feature structures graphically we use AVMs, which are popular in computational linguistics. Such matrices (which are fully defined in Section 2.5) list, within square brackets, a set of features (typeset in SMALLCAPS), along with their values (typeset in italics). Each ‘row’ in an AVM is a pair f : v, denoting that the feature f has the value v. The lexical entry for lamb, for example, states that NUM (number) is a feature of the word, valued sg (singular); PERS is another feature, whose value is third. This is in contrast to lambs, which bears the same value for the PERS feature, but a different value, pl (plural), for the NUM feature. The noun sheep in English (and, hence, in E0 ) can be either singular or plural; hence, the value of the NUM feature of sheep is unspecified; this is modeled by the empty feature structure. Feature structures map features into values, which are themselves feature structures. A special case of feature structures are atoms, which represent structureless values. For example, to deal with number (and impose its agreement), we use a feature NUM, and a set of atomic feature structures {sg, pl} as its values, representing singularity and plurality, respectively. When a value is not atomic, it is complex. A complex value is, recursively, a feature structure consisting of features and values, as shown in Example 2.2. Deciding how to group features is up to the grammar designer, and is intended to capture syntactic generalizations. If NUMBER and PERSON ‘go together’ in formulating restrictions, it is more appropriate to group them as in this example. Moreover, such a grouping might be beneficial when feature structures are
Example 2.2 A complex feature structure.

loves: [VTYPE: transitive, AGR: [NUM: sg, PERS: third]]
Here, loves has two features: VTYPE (verb type), which specifies it as a transitive verb, and AGR (agreement), whose value is a feature structure with two features, NUM and PERS , each having an atom as its value.
being modified. Feature structures are useful not only for describing static, fixed linguistic data, but also for recording changes in the data. For example, the lexical properties of sheep (in particular, its unspecified number) can be altered when this word occurs in a sentence; due to subject–verb agreement in English, the properties of the verb can constrain the correct number of sheep when it serves as a subject. Consequently, processes of derivation and parsing (the application of grammar rules) are able to manipulate feature structures to reflect such constraints. When the properties of some feature structure are changed, it is possible to change the value of only one feature, namely AGR, rather than specify two separate changes for each subfeature. We will see in Chapter 3 how the unification operation on feature structures can modify the values of features.

In Example 2.1, the lexical ambiguity of sheep is represented by an empty feature structure as the value of the NUM feature. This is interpreted as the value of this feature being unconstrained. However, it would have been useful to be able to state that the only possible values for this feature are, say, sg and pl. There are at least two different ways to specify such information: by either listing a set of values for the feature or restricting its value to a certain “type” of permissible values. We do not explore these possibilities here.

Words are not the only linguistic entities that have properties. Words are combined into phrases, and those also have properties, which can be modeled by feature:value pairs. For example, the noun phrase a sheep has the value sg for the NUM feature, while two sheep has the value pl for NUM. The AVM specifying the combined phrase is related to those specifying the component words; this is controlled by grammar rules, as described in Chapter 4. This example demonstrates the possibility that two different phrases (of the same category) have different values for some feature. Consequently, grammar nonterminals, too, must be decorated with features representing the endowment of phrases of this category with that feature. In the remainder of this chapter we define this
extension of the terminal and nonterminal symbols of CFGs, setting the stage for the extension of CFGs to unification grammars.
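Before turning to the formal development, it may help to see one possible (and deliberately naive) machine encoding of the AVMs of Section 2.1: nested dictionaries, with atoms as strings. The encoding and names below are ours, and — as the following sections make clear — it cannot express shared (reentrant) values, which is one reason the book moves to graphs.

# A naive encoding of AVMs as nested Python dictionaries.
# Atoms are strings; complex values are dictionaries; this encoding cannot
# express shared (reentrant) values, which is why a graph view is needed.

loves = {
    'VTYPE': 'transitive',
    'AGR': {'NUM': 'sg', 'PERS': 'third'},
}

sheep = {
    'NUM': {},            # the empty feature structure: number is unconstrained
    'PERS': 'third',
}

def val(fs, path):
    """Return the value found by following a sequence of features, if defined."""
    for feature in path:
        fs = fs[feature]
    return fs

print(val(loves, ['AGR', 'NUM']))    # 'sg'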
2.2 Feature graphs

The foregoing informal discussion of feature structures depicts them using an AVM representation, which is common in the linguistic literature. In this section, we begin the discussion of feature structures by defining the concept of feature graphs, using well-known concepts of graph theory. A graph view of feature structures facilitates computational processing because so many properties of graphs are well understood and because graphs lend themselves to efficient processing. We return to AVMs in Section 2.5 and discuss their correspondence with feature graphs in Section 2.6.

2.2.1 Definitions

Feature graphs are defined over a signature consisting of nonempty, finite, disjoint sets FEATS of features and ATOMS of atoms. Features are used to encode properties of (linguistic) objects, such as NUMBER and GENDER. Atoms are used for the (atomic) values of such features, such as plural and feminine. We use a convention of depicting features in SMALL CAPITALS and atoms in italics.

Definition 2.1 (Signature) A signature is a structure S = ⟨ATOMS, FEATS⟩, where ATOMS is a finite set of atoms and FEATS is a finite set of features.

We assume some fixed signature throughout this presentation. Meta-variables f, g (with or without subscripts or superscripts) range over features, and a, b, and so forth, over atoms. When clear from the context or when fixed, references to the signature are omitted. While much of the following discussion holds for arbitrary signatures, we usually assume that both FEATS and ATOMS are nonempty (and sometimes even assume that they include more than one element each).

A path (over FEATS) is a finite sequence of features, and the set PATHS = FEATS* is the collection of all paths. Meta-variables π, α (with or without subscripts) range over paths. ε is the empty path, denoted also by ‘⟨ ⟩’. The length of a path π is denoted |π|. For example, if FEATS = {A, B}, then PATHS includes ε, ⟨A⟩, ⟨B⟩, ⟨A, B⟩, ⟨A, B, A⟩, ⟨B, B⟩, ⟨B, A, B⟩, and so on.

Definition 2.2 (Feature graphs) A feature graph A = ⟨QA, q̄A, δA, θA⟩ is a finite, directed, connected, labeled graph consisting of a finite, nonempty set of
nodes QA (such that QA ∩ FEATS = QA ∩ ATOMS = ∅), a root q̄A ∈ QA, a partial function δA : QA × FEATS → QA specifying the arcs, such that every node q ∈ QA is accessible from q̄A, and a partial function marking some of the sinks, θA : QS → ATOMS, where QS = {q ∈ QA | δA(q, f)↑ for every f}.

Given a signature of features FEATS and atoms ATOMS, G(FEATS, ATOMS) denotes the set of all feature graphs over the signature. The arcs of a feature graph are thus labeled by features. The root is a designated node from which all other nodes are accessible (through δ); note that nothing prevents the root from having incoming arcs. Sink nodes (nodes with no outgoing edges) can be marked by an atom, but can also be unmarked. We use meta-variables A, B (with or without subscripts) to refer to feature graphs. We use Q, q̄, δ, θ to refer to constituents of feature graphs. When displaying feature graphs, the root is depicted as a gray-colored node, usually at the top or the left side of the graph. The identities of the nodes are arbitrary, and we use generic names such as q0, q1, etc. to refer to them. Example 2.3 depicts feature graphs. While they are used for the purpose of illustration only, this and subsequent examples make reference to features and atoms that are likely to

Example 2.3 Feature graphs. The graph displayed below is ⟨Q, q̄, δ, θ⟩, where Q = {q0, q1, q2, q3}, q̄ = q0, δ(q0, AGR) = q1, δ(q1, NUM) = q2, δ(q1, PERS) = q3, QS = {q2, q3}, θ(q2) = pl, θ(q3) = third.

q0 --AGR--> q1 --NUM--> q2 (pl)
            q1 --PERS--> q3 (third)
In the following graph, the leaves q2 and q3 bear no marking. In other words, the marking function θ is undefined for the two sinks in its domain.
q0 --AGR--> q1 --NUM--> q2
            q1 --PERS--> q3

The graph displayed above is ⟨Q, q̄, δ, θ⟩, where Q = {q0, q1, q2, q3}, q̄ = q0, δ(q0, AGR) = q1, δ(q1, NUM) = q2, δ(q1, PERS) = q3, QS = {q2, q3}, and θ is undefined for its entire domain.
be used in natural grammars for natural languages, and indeed, similar feature graphs are used for linguistic applications in Chapter 5. A feature graph is empty if it consists of a single, unmarked node with no arcs. A feature graph is atomic if it consists of a single, marked node with no arcs. In Example 2.4, A is empty and B is atomic. Example 2.4 Empty and atomic feature graphs. A, an empty feature graph:
q0
B, an atomic feature graph:
q0 pl
The concept of paths is natural when graphs are concerned. While a path is a purely syntactic notion (every sequence of features constitutes a path), interesting paths are those that can be interpreted as actual paths in some graph, leading from the root to some node. The definition of δ is therefore extended to paths: Given a feature graph A = ⟨QA, q̄A, δA, θA⟩, define δ̂A : QA × PATHS → QA as follows:

δ̂A(q, ε) = q
δ̂A(q, fπ) = δ̂A(δA(q, f), π)   (defined only if δA(q, f)↓)

Since for every node q ∈ QA and every feature f ∈ FEATS, δA(q, f) = δ̂A(q, ⟨f⟩), we identify δ̂ with δ in the future and use only the latter. When the index (A) is clear from the context, it is omitted. When δA(q, π) = q′, we say that π leads (in A) from q to q′.

Definition 2.3 (Paths) The paths of a feature graph A are Π(A) = {π ∈ PATHS | δA(q̄A, π)↓}.

In words, the paths of a feature graph A are all the paths in A that lead from its root q̄A to some node in QA. Note that this set may be infinite (we elaborate on this possibility below).

Exercise 2.1 (*). Is there a feature graph A such that Π(A) = ∅?

The following lemma states that if π is a path in some feature graph A, then all the prefixes of π (that is, all the paths that yield π when some other path is added to their ends) are also paths in A. This is a straightforward property of graphs in general.
Lemma 2.4 For every feature graph A = ⟨QA, q̄A, δA, θA⟩, a node q ∈ QA and paths α, β ∈ PATHS, if δA(q, αβ)↓ then δA(q, α)↓. Thus Π(A) is prefix-closed for every feature graph A.

Exercise 2.2. Prove Lemma 2.4.

Of particular interest are paths that lead from the root of a feature graph to some node in the graph. For such paths we define the notion of a value, which is the subgraph whose root is the node at the end of the path. It would have been possible to define as value the node itself, rather than the subgraph it induces; the choice is a matter of taste, as moving from one view of values to another is trivial.

Definition 2.5 (Path value) For a feature graph A = ⟨QA, q̄A, δA, θA⟩ and a path π ∈ Π(A), the value valA(π) of π in A is a feature graph B = ⟨QB, q̄B, δB, θB⟩, over the same signature as A, where:
• q̄B = δA(q̄A, π);
• QB = {q′ ∈ QA | for some π′, δA(q̄B, π′) = q′} (QB is the set of nodes reachable from q̄B);
• for every feature f and for every q′ ∈ QB, δB(q′, f) = δA(q′, f) (δB is the restriction of δA to QB);
• for every q′ ∈ QB, θB(q′) = θA(q′) (θB is the restriction of θA to QB).
Example 2.5 Paths. Consider the first feature graph, A, depicted in Example 2.3. Its paths are

Π(A) = {ε, ⟨AGR⟩, ⟨AGR, NUM⟩, ⟨AGR, PERS⟩}

The value of the path ⟨AGR⟩ in A is:

valA(⟨AGR⟩) = q1 --NUM--> q2 (pl)
              q1 --PERS--> q3 (third)

and the value of the path ⟨AGR, NUM⟩ in A is:

valA(⟨AGR, NUM⟩) = q2 (pl)

Note that, for example, the value of ⟨AGR, PERS, NUM⟩ in A is undefined.
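Definitions 2.2 and 2.5 translate almost directly into code. The following sketch (our own encoding and names, not the book's) represents δ as a dictionary over (node, feature) pairs and θ as a dictionary over sinks, using the first graph of Example 2.3 as data.

# A direct encoding of feature graphs: nodes are arbitrary hashable objects,
# delta is a dict over (node, feature) pairs, theta marks some sinks with atoms.

class FeatureGraph:
    def __init__(self, root, delta, theta):
        self.root, self.delta, self.theta = root, delta, theta

    def node_at(self, path, start=None):
        """delta extended to paths (the function written delta-hat in the text)."""
        q = self.root if start is None else start
        for f in path:
            if (q, f) not in self.delta:
                return None               # the path is not in Pi(A)
            q = self.delta[(q, f)]
        return q

    def val(self, path):
        """The value of a path: the subgraph rooted at the node the path leads to."""
        q = self.node_at(path)
        if q is None:
            raise ValueError('undefined path')
        return FeatureGraph(q, self.delta, self.theta)   # unreachable nodes are simply ignored

# The first graph of Example 2.3:
A = FeatureGraph(
    root='q0',
    delta={('q0', 'AGR'): 'q1', ('q1', 'NUM'): 'q2', ('q1', 'PERS'): 'q3'},
    theta={'q2': 'pl', 'q3': 'third'},
)
print(A.node_at(['AGR', 'NUM']))             # q2
print(A.theta[A.val(['AGR', 'NUM']).root])   # 'pl'
print(A.node_at(['AGR', 'PERS', 'NUM']))     # None: undefined, as in Example 2.5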
Exercise 2.3 (*). For an arbitrary feature graph A, what is valA(ε)?

Exercise 2.4 (*). Prove or refute: For every feature graph A and paths π1, π2, θA(δA(q̄A, π1)) = θA(δA(q̄A, π2)) if and only if valA(π1) = valA(π2).

Exercise 2.5. Prove that for every feature graph A and paths π1, π2, δA(q̄A, π1) = δA(q̄A, π2) iff valA(π1) = valA(π2).

Exercise 2.6 (*). Prove that if A′ = valA(π), then all the paths of A′ are suffixes of paths in A: For every π′ ∈ Π(A′), there exists some path α such that απ′ ∈ Π(A).

Exercise 2.7 (*). Show a feature graph A for which valA(π) = A for every π ∈ PATHS. Show that this feature graph is unique, up to the identities of the nodes.

Exercise 2.8. Prove that for every feature graph A and paths α, β ∈ PATHS, val_{valA(α)}(β) = valA(αβ) when both are defined.

The definition of path values raises the question of when two paths have equal values. We distinguish between paths which lead to one and the same node and those whose values are isomorphic (see Definition 2.8 in Section 2.2.2) but not identical. The former case is called reentrancy.

Definition 2.6 (Reentrancy) Let A = ⟨Q, q̄, δ, θ⟩ be a feature graph. Two paths π1, π2 ∈ Π(A) are reentrant in A, denoted π1 ⇝A π2, iff δ(q̄, π1) = δ(q̄, π2), implying valA(π1) = valA(π2). A feature graph A is reentrant iff there exist two distinct paths π1, π2 ∈ Π(A) such that π1 ⇝A π2.

Exercise 2.9 (*). Let A be a feature graph such that there exist a node q ∈ Q, q ≠ q̄, and two different paths π1, π2 ∈ Π(A) such that δ(q, π1) = δ(q, π2). Prove that A is reentrant.

The notion of reentrancy touches on the issue of the distinction between type identity and token identity. Two feature graphs are token identical if their components (i.e., their sets of nodes, roots, transition functions, and atom-marking functions) are identical. They are type identical if they are isomorphic, not necessarily requiring their nodes to be identical. We discuss feature-graph isomorphism in Section 2.2.2.

Reentrancy is also a crucial notion for the definition of the unification operation, which we discuss in the following chapter. The ability of a single subgraph to be shared as the value of several paths in some feature graph is paramount
Example 2.6 A reentrant feature graph. The following feature graph, A, is reentrant because δA(q0, ⟨AGR⟩) = δA(q0, ⟨SUBJ, AGR⟩) = q1:

q0 --AGR---> q1 --NUM--> q2 (pl)
             q1 --PERS--> q3 (third)
q0 --SUBJ--> q4 --AGR--> q1

The (single) value of the (different) paths ⟨AGR⟩ and ⟨SUBJ, AGR⟩ in A is:

q1 --NUM--> q2 (pl)
q1 --PERS--> q3 (third)
for expressing sophisticated syntactic relations in the structures that unification grammars induce on strings, as we show in Chapter 4.

Early feature-structure-based formalisms used to employ only acyclic feature graphs. However, modern ones usually allow (or even require) feature structures to be possibly cyclic. While the linguistic motivation for cyclic feature structures is limited, there is good practical motivation for allowing them: When implementing a system for manipulating feature graphs, it is usually easier to support cycles than to guarantee that all the graphs in a system are acyclic. The reason is that unification, which is the major operation defined on feature graphs, can yield a cyclic graph even when its operands are acyclic. See Exercise 3.14 (Page 94).

Definition 2.7 (Cycles) A feature graph A = ⟨QA, q̄A, δA, θA⟩ is cyclic if two paths π1, π2 ∈ Π(A), where π1 is a proper prefix of π2, are reentrant: π1 ⇝A π2. A is acyclic otherwise.

Note that cyclicity is a special case of reentrancy (every cyclic feature graph is reentrant, but not vice versa). A corollary of the definition is that when a feature graph is cyclic, it has at least one node q such that δ(q, α) = q for some nonempty path α. See Example 2.7.

Exercise 2.10 (*). Prove: A is acyclic iff Π(A) is finite.
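Both reentrancy and cyclicity are easy to test on the graph representation. The sketch below (ours; it reuses the dictionary encoding of δ from the earlier sketch) relies on the fact that, in a graph all of whose nodes are reachable from the root, reentrancy holds exactly when some node has two incoming arcs or the root has an incoming arc, while cyclicity is ordinary cycle detection.

# Reentrancy: some node is the target of two distinct paths from the root.
# With every node reachable from the root, this holds iff some node has two
# incoming arcs, or the root has an incoming arc.  Cyclicity: depth-first search.

def is_reentrant(root, delta):
    indegree = {}
    for (_, _), target in delta.items():
        indegree[target] = indegree.get(target, 0) + 1
    return indegree.get(root, 0) > 0 or any(d > 1 for d in indegree.values())

def is_cyclic(root, delta, visiting=None):
    visiting = set() if visiting is None else visiting
    if root in visiting:
        return True
    visiting.add(root)
    successors = [t for (q, _), t in delta.items() if q == root]
    result = any(is_cyclic(t, delta, visiting) for t in successors)
    visiting.discard(root)
    return result

# Example 2.6 (reentrant, acyclic) and Example 2.7 (cyclic):
d26 = {('q0', 'AGR'): 'q1', ('q0', 'SUBJ'): 'q4', ('q4', 'AGR'): 'q1',
       ('q1', 'NUM'): 'q2', ('q1', 'PERS'): 'q3'}
d27 = {('q0', 'F'): 'q1', ('q1', 'H'): 'q1', ('q1', 'G'): 'q2'}
print(is_reentrant('q0', d26), is_cyclic('q0', d26))   # True False
print(is_reentrant('q0', d27), is_cyclic('q0', d27))   # True True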
Example 2.7 A cyclic feature graph. Following is a cyclic feature graph, C:

q0 --F--> q1 --G--> q2 (a)
          q1 --H--> q1

The value of the path ⟨F⟩ in C, as well as the values of the (infinitely many) paths F H^n, for n ≥ 0, is the same feature graph:

q1 --G--> q2 (a)
q1 --H--> q1

The value of the path ⟨F, G⟩ in C, as well as the values of the (infinitely many) paths F H^n G, for n ≥ 0, is the same feature graph:

q2 (a)
2.2.2 Feature-graph subsumption

Since feature graphs are just a special case of directed, labeled graphs, we can adapt the well-defined notion of graph isomorphism to feature graphs. Informally, two graphs are isomorphic when they have the same structure; the identities of their nodes may differ without affecting the structure. In our case, we require also that the labels of sink nodes be identical in order for two graphs to be considered isomorphic.

Definition 2.8 (Feature-graph isomorphism) Two feature graphs A = ⟨QA, q̄A, δA, θA⟩ and B = ⟨QB, q̄B, δB, θB⟩ are isomorphic, denoted A ∼ B, iff there exists a one-to-one and onto mapping i : QA → QB, called an isomorphism, such that:
• i(q̄A) = q̄B;
• for all q1, q2 ∈ QA and f ∈ FEATS, δA(q1, f) = q2 iff δB(i(q1), f) = i(q2); and
• for all q ∈ QA, θA(q) = θB(i(q)) (either both are undefined, or both are defined and equal).
Exercise 2.11. Prove that '∼' is an equivalence relation.
Feature graphs are used for expressing (linguistic) information. Informally, when two feature graphs are isomorphic, they contain the same information. Specifically, if two graphs A and B differ in their structure, they encode different information and are hence not isomorphic. If they have exactly the same structures, but some (leaf) node in A is marked by a different atom than is its counterpart in B, they again provide different information and are therefore not isomorphic. However, if A and B only differ in the identities of their nodes, we view them as expressing exactly the same information. They may be technically different, but they are still isomorphic.
A weaker relation over feature graphs, one which plays a more central role in this book, is subsumption. Subsumption compares the amount of information encoded in feature graphs: If a graph A includes all the information that is included in B (and, possibly, additional information), we say that B subsumes A. We define the concept and explore its properties below.
Definition 2.9 (Subsumption) Let A1 = ⟨Q1, q̄1, δ1, θ1⟩ and A2 = ⟨Q2, q̄2, δ2, θ2⟩ be two feature graphs. A1 subsumes A2 (denoted by A1 ⊑ A2) iff there exists a total function h : Q1 → Q2, called a subsumption morphism, such that:
• h(q̄1) = q̄2;
• for every q ∈ Q1 and for every f such that δ1(q, f)↓, h(δ1(q, f)) = δ2(h(q), f);
• for every q ∈ Q1, if θ1(q)↓, then θ1(q) = θ2(h(q)).
If A1 ⊑ A2, then A1 is said to subsume, or be more general than, A2; A2 is subsumed by, or is more specific than, A1. The morphism h associates with every node in Q1 a node in Q2; if an arc labeled f connects q with q′, then such an arc connects h(q) with h(q′). In other words, δ and h commute: whenever δ1(q, f) = q′, also δ2(h(q), f) = h(q′). (In the accompanying diagram, δ-arcs are depicted using solid lines, whereas h-mappings are depicted using dashed lines.) In addition, if a node q ∈ Q1 is marked by an atom, then its image h(q) must be marked by the same atom (recall that only sinks can be so marked).
Note that if a sink in Q1 is not marked, there is no constraint on its image (in particular, it can be a nonsink). To illustrate the definition, Example 2.8 schematically depicts a morphism (an instance of the commutation property stated above), where dashed arrows indicate the function h.
Example 2.8 Subsumption morphism. This example schematically depicts two feature graphs, A1 and A2. The subsumption morphism h is indicated by dashed arrows, mapping nodes of A1 to nodes of A2: A1 contains nodes q̄, q, and q′, with an arc labeled F from q to q′; A2 contains their images h(q̄), h(q), and h(q′), with an arc labeled F from h(q) to h(q′).
It is possible to lift morphisms from nodes to feature graphs: for feature graphs A and B, h(A) is obtained by taking the nodes in B that are images (through h) of nodes in A, along with all the arcs that connect them. In this case, when h is a morphism from A to B, h(A) is a subgraph of B. It might be tempting to think that this subgraph is isomorphic to A, but this need not be the case, due to reentrancy.
Exercise 2.12 (*). Prove or refute: If A ⊑ B, then QA ⊆ QB.
Exercise 2.13 (*). Prove or refute: If A ⊑ B, then |QA| ≤ |QB|.
Exercise 2.14 (*). Prove or refute: If A ⊑ B, then |QB| ≤ |QA|.
Exercise 2.15 (*). Prove or refute: If A ⊑ B, then A has at least as many arcs as B.
Exercise 2.16 (*). Prove or refute: If A ⊑ B, then B has at least as many arcs as A.
The definition of subsumption only requires the existence of a subsumption morphism; by the definition, there could have been several different such morphisms, mapping the nodes of a feature structure to those of a feature structure it subsumes. However, this is not the case. When A ⊑ B, the subsumption morphism mapping QA to QB is unique. To prove this, we need the concept of the distance of some node in a feature graph from the root of the graph.
Example 2.9 Subsumption. This example depicts two feature graphs, A and B, where A ⊑ B. The mapping h from QA to QB connects the nodes of A to their images in B, as follows.
A: δA(q0A, AGR) = q1A, δA(q1A, NUM) = q2A, δA(q1A, PERS) = q3A, with θA(q3A) = third (q2A is an unmarked sink).
B: δB(q0B, AGR) = q1B, δB(q0B, SUBJ) = q4B, δB(q4B, AGR) = q1B, δB(q1B, NUM) = q2B, δB(q1B, PERS) = q3B, with θB(q2B) = pl and θB(q3B) = third.
The morphism h maps q0A to q0B, q1A to q1B, q2A to q2B, and q3A to q3B.
Indeed, B can – and does – have nodes that do not correspond to nodes in A: such a node is q4B in the example. In addition, while the sink q2A is not marked by an atom (that is, it is a variable), its image in B, q2B, is marked as pl. Notice that no subsumption morphism can be defined from QB to QA, since there is no node into which q4B can be mapped. In particular, it cannot be mapped to the root of A, since this would necessitate an arc from q0A to itself (as the root of A would be the image of both q4B and q0B). Trying to take h⁻¹ as an inverse subsumption morphism will fail, both because of q4B and because it would map q2B to q2A, violating the last clause of the subsumption relation (a marked sink must be mapped to a sink with the same mark). We conclude that B does not subsume A.
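The definition of subsumption can also be read operationally. The following Python sketch (illustrative only; the dictionary encoding and names are ours) attempts to construct the subsumption morphism h node by node, using the fact that h, if it exists, is forced by h(q̄A) = q̄B and h(δA(q, f)) = δB(h(q), f). On the graphs A and B of Example 2.9 it succeeds in one direction and fails in the other.

    def subsumes(root_a, delta_a, theta_a, root_b, delta_b, theta_b):
        """Return the subsumption morphism h (a dict) if A subsumes B, else None."""
        h = {root_a: root_b}
        agenda = [root_a]
        while agenda:
            q = agenda.pop()
            if q in theta_a and theta_b.get(h[q]) != theta_a[q]:
                return None                        # atom marks must be preserved
            for (q1, f), q2 in delta_a.items():
                if q1 != q:
                    continue
                if (h[q], f) not in delta_b:       # B must have a matching arc
                    return None
                target = delta_b[(h[q], f)]
                if q2 in h and h[q2] != target:    # h must be a function
                    return None
                if q2 not in h:
                    h[q2] = target
                    agenda.append(q2)
        return h

    # The graphs A and B of Example 2.9
    delta_A = {('q0A', 'AGR'): 'q1A', ('q1A', 'NUM'): 'q2A', ('q1A', 'PERS'): 'q3A'}
    theta_A = {'q3A': 'third'}
    delta_B = {('q0B', 'AGR'): 'q1B', ('q0B', 'SUBJ'): 'q4B', ('q4B', 'AGR'): 'q1B',
               ('q1B', 'NUM'): 'q2B', ('q1B', 'PERS'): 'q3B'}
    theta_B = {'q2B': 'pl', 'q3B': 'third'}
    assert subsumes('q0A', delta_A, theta_A, 'q0B', delta_B, theta_B) is not None
    assert subsumes('q0B', delta_B, theta_B, 'q0A', delta_A, theta_A) is None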
Example 2.10 Subsumption. This example depicts two feature graphs, A and B, where A ⊑ B. Note that while h(A) is indeed a subgraph of B, it has one fewer node (than A), and hence is not isomorphic to A.
A: δA(q0A, F) = q1A, δA(q1A, G) = q2A, δA(q1A, H) = q3A (all sinks unmarked).
B: δB(q0B, F) = q1B, δB(q0B, SUBJ) = q4B, δB(q4B, AGR) = q1B, δB(q1B, G) = q2B, δB(q1B, H) = q2B.
The morphism h maps q0A to q0B, q1A to q1B, and both q2A and q3A to q2B; hence h(A) has three nodes while A has four.
Definition 2.10 (Distance) Given a feature graph A = ⟨QA, q̄A, δA, θA⟩ and a node q ∈ QA, the distance of q from the root q̄A is defined as d(q) = min{|π| : δA(q̄A, π) = q}.
Notice that the minimum is computed on the lengths of the paths, which are natural numbers, and therefore the minimum always exists. Note also that all the nodes are, by definition, accessible from the root.
Theorem 2.11 If A ⊑ B and h : QA → QB is a subsumption morphism, then h is unique: if h′ is also a subsumption morphism from QA to QB, then for every q ∈ QA, h(q) = h′(q).
Proof Assume that h, h′ are both subsumption morphisms from QA to QB. We prove that h = h′ by induction on the structure of A (that is, on d(q)).
Base: for q̄A (d(q̄A) = 0): by definition of subsumption, h(q̄A) = q̄B and h′(q̄A) = q̄B, hence h(q̄A) = h′(q̄A).
Induction step: Assume that the hypothesis holds for all nodes q such that d(q) ≤ n, and let qn+1 be a node for which d(qn+1) = n + 1. Then there exists some node qn ∈ QA such that δA(qn, f) = qn+1 for some f, and d(qn) ≤ n (hence the induction hypothesis holds for qn). Then:
h(qn+1) = h(δA(qn, f))      since qn+1 = δA(qn, f)
= δB(h(qn), f)              definition of subsumption
= δB(h′(qn), f)             the induction hypothesis
= h′(δA(qn, f))             definition of subsumption
= h′(qn+1)                  since qn+1 = δA(qn, f)
Hence h(q) = h′(q) for every q ∈ QA, and therefore h = h′.
Given a feature structure, what modifications can be made to it in order for it to become more specific? Three different kinds of modifications are possible:
1. Adding arcs
2. Adding reentrancies
3. Marking unmarked sinks by some atom
We illustrate in Example 2.11 various instances of subsumption and show how all these instances fall into one or more of the categories listed above. These properties are then formalized by Corollary 2.16.
Example 2.11 Subsumption as an order on information. (The figure shows four pairs of simple feature structures, each pair related by subsumption and built from the features NUM, NUM1, NUM2, and PER and the atoms sg, pl, and third; the pairs illustrate, respectively, adding arcs, adding atomic marks, adding arcs, and adding reentrancies.)
Lemma 2.12 If A ⊑ B and h : QA → QB is a subsumption morphism, then for every path π such that δA(q̄A, π)↓, h(δA(q̄A, π)) = δB(q̄B, π).
Proof By induction on (the length of) π. For π = ε, h(δA(q̄A, ε)) = h(q̄A) = q̄B = δB(q̄B, ε). Assume that for all π up to length k < n the proposition holds. Let π′ = π · f be of length n, and assume that δA(q̄A, π′)↓. Then:
h(δA(q̄A, π′)) = h(δA(q̄A, π · f))      definition of π′
= h(δA(δA(q̄A, π), f))                  definition of δ
= δB(h(δA(q̄A, π)), f)                  definition of subsumption
= δB(δB(q̄B, π), f)                     induction hypothesis
= δB(q̄B, π · f)                        definition of δ
= δB(q̄B, π′)
Lemma 2.13 If A ⊑ B, then Π(A) ⊆ Π(B).
Proof If A ⊑ B through the subsumption morphism h : QA → QB and π ∈ Π(A), then by Lemma 2.12, h(δA(q̄A, π)) = δB(q̄B, π). Hence π ∈ Π(B).
Lemma 2.14 If A ⊑ B, then for each π ∈ Π(A), if θA(δA(q̄A, π))↓, then θB(δB(q̄B, π))↓ and θA(δA(q̄A, π)) = θB(δB(q̄B, π)).
Proof θA(δA(q̄A, π)) = θB(h(δA(q̄A, π))) (definition of subsumption) = θB(δB(q̄B, π)) (Lemma 2.12).
Lemma 2.15 If A ⊑ B and π1, π2 are reentrant in A, then π1, π2 are reentrant in B.
Proof If π1 and π2 are reentrant in A, then δA(q̄A, π1) = δA(q̄A, π2). By Lemma 2.12, applying the morphism h to both sides of the equation, this implies δB(q̄B, π1) = δB(q̄B, π2); hence π1 and π2 are reentrant in B.
Corollary 2.16 If A ⊑ B, then:
• Π(A) ⊆ Π(B);
• for each π ∈ Π(A), if θA(δA(q̄A, π))↓, then θB(δB(q̄B, π))↓ and θA(δA(q̄A, π)) = θB(δB(q̄B, π));
• for each π1, π2 ∈ Π(A), if π1 and π2 are reentrant in A, then they are reentrant in B (and, therefore, if A is reentrant/cyclic, then so is B).
We explore below some of the properties of the subsumption relation.
Theorem 2.17 If A is an atomic feature graph and A ⊑ B, then A ∼ B.
Proof Let A = ⟨{q0}, q0, δ, θ⟩ be an atomic feature graph, hence θA(q0)↓. Let B be such that A ⊑ B (through h) and let q1 = h(q0). Then θB(q1) = θA(q0) (from the definition of subsumption); in particular θB(q1)↓, and hence δB is undefined for its entire domain. Hence A ∼ B.
Theorem 2.18 Subsumption has a least element: There exists a feature graph A such that for every feature graph B, A ⊑ B.
Proof Consider the (empty) feature graph A = ⟨{q0}, q0, δ, θ⟩, where δ and θ are undefined for their entire domains. For every feature graph B, A ⊑ B by mapping (through h) the root q0 to the root of B, q̄B. The remaining clauses of Definition 2.9 hold vacuously.
Theorem 2.19 Subsumption is reflexive: For every feature graph A, A ⊑ A.
Proof Take h to be the identity function that maps every node in A to itself. Note that by Theorem 2.11, the identity is the only subsumption morphism embedding A in itself.
Theorem 2.20 Subsumption is transitive: If A ⊑ B and B ⊑ C, then A ⊑ C.
Proof If A ⊑ B, then there exists a subsumption morphism hA : QA → QB. Similarly, there exists a subsumption morphism hB : QB → QC. Since both hA and hB are total, their (functional) composition is a total function h = hB ∘ hA such that h : QA → QC. It is easy to see that h is a subsumption morphism:
• By definition of hA and hB, hA(q̄A) = q̄B and hB(q̄B) = q̄C, hence (hB ∘ hA)(q̄A) = q̄C.
• Suppose q ∈ QA and δA(q, f)↓. Then:
h(δA(q, f)) = hB(hA(δA(q, f)))      by definition of h
= hB(δB(hA(q), f))                  since A ⊑ B
= δC(hB(hA(q)), f)                  since B ⊑ C
= δC(h(q), f)                       by the definition of h
The proof of atom-marking preservation is left as an exercise.
Exercise 2.17. Complete the last part of the preceding proof.
Theorem 2.21 Subsumption is not antisymmetric: If A ⊑ B and B ⊑ A, it is not necessarily the case that A = B.
Proof Consider the feature graphs A = ⟨{q̄A}, q̄A, δ, θ⟩ and B = ⟨{q̄B}, q̄B, δ, θ⟩, where δ and θ are undefined for their entire domains, and where q̄A ≠ q̄B. Trivially, both A ⊑ B and B ⊑ A, but A ≠ B.
Exercise 2.18. Let A = ⟨QA, q̄A, δA, θA⟩ be a feature graph for which |QA| = 1, δA(q̄A, f)↑ for every f ∈ FEATS, and θA(q̄A)↑. Prove that A ⊑ B for every feature graph B.
Thus, feature-graph subsumption forms a partial pre-order on feature graphs. It is a pre-order since it is not antisymmetric; it is partial since there are feature graphs that are incomparable with respect to subsumption.
Example 2.12 Feature-graph subsumption is a partial relation. Following are three examples of incomparable feature graphs. In the top two examples, the feature graphs are incomparable because of the inconsistent atomic labels of the sink nodes; the bottom feature graphs are incomparable due to mismatching features leaving the roots. (The figure shows three pairs: an atomic graph marked sg next to one marked pl; a graph whose single NUM arc leads to a sink marked sg next to one whose NUM arc leads to a sink marked pl; and a graph with a single NUM arc next to one with a single PERS arc.)
There is a clear connection between feature-graph isomorphism and feature-graph subsumption, which we explicate in the following theorem.
Theorem 2.22 A ∼ B iff A ⊑ B and B ⊑ A.
Proof Clearly, if A ∼ B, then both A ⊑ B and B ⊑ A, since the isomorphism i is a subsumption morphism from QA to QB and i⁻¹ is a morphism from QB to QA. For the reverse direction, assume that A ⊑ B through a subsumption morphism h1 : QA → QB, and B ⊑ A through a subsumption morphism h2 : QB → QA.
1. First, we show that h1 is a one-to-one function. Assume towards a contradiction that it is not: Then there exist two nodes q1, q2 ∈ QA such that q1 ≠ q2 but h1(q1) = h1(q2). Let π1, π2 be such that δA(q̄A, π1) = q1 and δA(q̄A, π2) = q2. By Lemma 2.12, h1(q1) = δB(q̄B, π1) and h1(q2) = δB(q̄B, π2). Since h1(q1) = h1(q2), also δB(q̄B, π1) = δB(q̄B, π2). Now, h2 is also a subsumption morphism, and hence by Lemma 2.12, h2(δB(q̄B, π1)) = δA(q̄A, π1) = q1 and h2(δB(q̄B, π2)) = δA(q̄A, π2) = q2. Hence q1 = q2, in contradiction to the assumption. Hence h1 is one-to-one.
2. Next, we show that h1 is onto. Assume towards a contradiction that it is not: Then there exists q ∈ QB such that for no q′ ∈ QA, q = h1(q′). Let π be such that δB(q̄B, π) = q. By Lemma 2.12, h2(δB(q̄B, π)) = δA(q̄A, π); then, h1(δA(q̄A, π)) = δB(q̄B, π) = q. Hence there exists a node q′ (namely, δA(q̄A, π)) such that h1(q′) = q, in contradiction to the assumption. Hence h1 is onto.
Thus, h1 (and, similarly, h2) are one-to-one and onto, and hence are isomorphisms. Hence A ∼ B.
Exercise 2.19. Prove that if A ⊑ B through a subsumption morphism h1, and B ⊑ A through h2, then h1 = h2⁻¹.
Exercise 2.20 (*). Consider the following two feature graphs: A, a feature graph over two nodes q0 and q1 whose arcs are all labeled F; and B, a feature graph with a single node q2 and an arc labeled F from q2 to itself. Does A ⊑ B? Does B ⊑ A?
2.3 Feature structures
Feature graphs are a useful notation, but they are too discriminating. Usually, the importance of the identities of the nodes in a graph is inferior to the structure of the graph (including the labels on its nodes and arcs). It is therefore beneficial to collapse feature graphs that only differ in the identities of their nodes into an equivalence class. The definition of feature structures as equivalence classes of isomorphic feature graphs facilitates a view that emphasizes the structure and ignores the irrelevant information encoded in the nodes.
Definition 2.23 (Feature structures) Given a signature of features FEATS and atoms ATOMS, let FS = G(FEATS, ATOMS)/∼ be the collection of equivalence classes in G(FEATS, ATOMS) with respect to feature graph isomorphism. A feature structure is any member of FS. We use meta-variables fs to range over feature structures.
This definition calls for a natural lifting of some properties of feature graphs to feature structures, as the following theorem reveals.
Theorem 2.24 Let fs be a feature structure, and let A ∈ fs, B ∈ fs be two feature graphs in fs. Then:
• Π(A) = Π(B);
• for each π ∈ Π(A), θA(δA(q̄A, π))↓ iff θB(δB(q̄B, π))↓, and θA(δA(q̄A, π)) = θB(δB(q̄B, π));
• for each π1, π2 ∈ Π(A), π1 and π2 are reentrant in A iff they are reentrant in B (and, therefore, A is reentrant/cyclic iff B is reentrant/cyclic).
Proof Since A, B ∈ fs, they are isomorphic: A ∼ B. By Theorem 2.22, if A ∼ B, then A ⊑ B and B ⊑ A. By Corollary 2.16 (Page 49), the three conditions hold.
Exercise 2.21. Prove that if A ∼ B, then for each π ∈ Π(A), valA(π) ∼ valB(π).
Theorem 2.24 enables us to refer to the paths of feature structures even though they are not, strictly speaking, graphs.
Definition 2.25 (Paths) Let fs be a feature structure. Then the paths of fs are defined as Π(fs) = Π(A) for some A ∈ fs.
By Theorem 2.24, the definition is independent of the representative A. From here on, we will usually refer to feature structures through some feature graph representative, taking care that all definitions are representative independent. For example, we can lift the definition of reentrancy from feature graphs to feature structures in the natural way:
Definition 2.26 (Feature structure reentrancy) Two paths π1, π2 are reentrant in a feature structure fs if they are reentrant in some A ∈ fs. fs is reentrant if some two distinct paths π1 ≠ π2 are reentrant in fs.
By Theorem 2.24, the definition is independent of the representative A. Feature-structure cyclicity is defined in a similar way. As another example, we lift the definition of subsumption from feature graphs to feature structures:
Definition 2.27 (Feature structure subsumption) If fs1 and fs2 are feature structures, fs1 subsumes fs2, denoted fs1 ⊑̂ fs2, iff for some A ∈ fs1 and some B ∈ fs2, A ⊑ B.
Since feature structure subsumption is defined in terms of a representative, we must show that the definition is representative independent.
Lemma 2.28 The definition of feature structure subsumption is independent of the representative: If A ∼ A′ and B ∼ B′, then A ⊑ B iff A′ ⊑ B′.
Proof Assume that A ∼ A′ through an isomorphism iA : QA → QA′, and B ∼ B′ through an isomorphism iB : QB → QB′. If A ⊑ B, there exists a subsumption morphism h : QA → QB. Then h′ = iB ∘ h ∘ iA⁻¹ is a subsumption morphism mapping QA′ to QB′ (the proof is left as an exercise), and hence A′ ⊑ B′. The other direction (if A′ ⊑ B′, then A ⊑ B) is completely symmetric.
Exercise 2.22. Complete the proof of the above lemma: Prove that h′ is a subsumption morphism.
Corollary 2.29 If fsA and fsB are feature structures, fsA ⊑̂ fsB iff for every A ∈ fsA and every B ∈ fsB, A ⊑ B.
Like feature graph subsumption, feature-structure subsumption is reflexive and transitive; these properties can be easily established from their counterparts in the feature graph case. However, unlike feature graphs, feature structure subsumption is antisymmetric:
Theorem 2.30 If fs1 ⊑̂ fs2 and fs2 ⊑̂ fs1, then fs1 = fs2.
Proof Immediate from Theorem 2.22 and Corollary 2.29.
Therefore, subsumption is a partial order on feature structures. In the sequel we will sometimes use the '⊑' symbol to denote both feature graph and feature structure subsumption, when the type of the arguments of the relation is clear. The correspondences between feature graphs and feature structures are graphically depicted in Diagram 2.1, where the two types of objects are depicted each in a column, and the (horizontal, solid) arrows correspond to mappings from one view to another.
2.4 Abstract feature structures
Theorem 2.24 provides an important characterization of feature structures and, in particular, an observation on the properties of feature graphs that are retained in feature structures. By this theorem, two isomorphic feature graphs have the same sets of paths; they mark the ends of paths (via the function θ) in the same way; and they have the same reentrancies. What sets them apart is only the identity of their nodes, which is abstracted over in feature structures. It is therefore possible to characterize feature structures, not indirectly, as equivalence classes of feature graphs, but directly, through the three components which define them. In this section, we provide such a characterization, called abstract feature structures (AFSs). These are entities that, like the feature structures discussed
Diagram 2.1 Feature graphs and feature structures. (The diagram shows feature graphs in one column and feature structures in the other: A1 is a member of fs1 = [A1]∼, and A2 ∼ A2′ are members of fs2 = [A2]∼; the mapping [·]∼ takes a graph to its equivalence class, and the subsumption relations A1 ⊑ A2 and fs1 ⊑̂ fs2 are aligned across the two columns.)
A feature structure fs1 is the equivalence class of some feature graph A1 with respect to isomorphism. Conversely, a feature graph is a member of a feature structure. Hence, the mapping from feature structures to feature graphs is one-to-many. By Corollary 2.29, fs1 ⊑̂ fs2 iff for every A1 ∈ fs1 and every A2 ∈ fs2, A1 ⊑ A2.
in Section 2.3, abstract away from the identities of nodes in graphs. Unlike the feature structure view, which is still graph-based and therefore is easy to work with computationally, especially where the computational implementation of feature structure operations is concerned, abstract feature structures are more convenient to work with mathematically. As we will show, subsumption is reduced to just slightly more than set inclusion with this view. AFSs are not an arbitrary representation. We will show that they are only a notational variant of the feature structures introduced in Section 2.3. We will define two operations which convert a feature graph to an AFS and an AFS to a feature graph; and we will show that subsumption commutes with the two conversion operators. 2.4.1 Definitions As before, we assume a fixed signature consisting of (nonempty, finite, disjoint) sets F EATS of features and ATOMS of atoms. We start with pre- abstract feature structures, which consist of three components: a set Π of paths, corresponding to the paths defined in the intended feature graphs (taken as sequences of features); a partial function Θ that labels some of the paths (corresponding to the labeling
of some of the sinks in graphs); and an equivalence relation specifying what sets of paths lead to the same node in the intended graph, without an explicit specification of the node's identity. Abstract feature structures are pre-abstract feature structures with some additional constraints imposed on them, which guarantee that the specification indeed corresponds to some (equivalence class of a) concrete feature graph.
Definition 2.31 (Abstract feature structures) A pre-abstract feature structure (pre-AFS) is a triple ⟨Π, Θ, ≈⟩, where:
• Π ⊆ PATHS is a nonempty set of paths.
• Θ : Π → ATOMS is a partial function, assigning an atom to some of the paths.
• ≈ ⊆ Π × Π is a relation specifying reentrancy.
An abstract feature structure (AFS) is a pre-AFS ⟨Π, Θ, ≈⟩ for which the following requirements hold:
• Prefix-closure: If πα ∈ Π, then π ∈ Π (where π, α ∈ PATHS);
• Fusion-closure: If πα ∈ Π and π ≈ π′, then π′α ∈ Π and π′α ≈ πα;
• ≈ is an equivalence relation over Π × Π with a finite index (with [≈] the set of its equivalence classes), including at least the pair ⟨ε, ε⟩;
• Θ is defined only for maximal paths: If Θ(π)↓, then there exists no path πα ∈ Π such that α ≠ ε;
• Θ respects the equivalence: If π1 ≈ π2, then either Θ(π1)↑ and Θ(π2)↑, or both are defined and Θ(π1) = Θ(π2).
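For finite sets of paths (that is, for acyclic AFSs), the requirements of Definition 2.31 can be checked mechanically. The following Python sketch is illustrative only; the encoding of paths as tuples and of '≈' as a mapping from paths to class identifiers is our own.

    def is_afs(paths, theta, cls):
        """paths: set of path tuples; theta: dict path -> atom;
        cls: dict path -> class id (two paths are reentrant iff same class id)."""
        if () not in paths:                                 # the empty path
            return False
        for p in paths:                                     # prefix-closure
            if any(p[:i] not in paths for i in range(len(p))):
                return False
        for p in paths:                                     # fusion-closure
            for q in paths:
                if cls[p] == cls[q]:
                    for r in paths:
                        if r[:len(p)] == p:
                            ext = q + r[len(p):]
                            if ext not in paths or cls[ext] != cls[r]:
                                return False
        for p, a in theta.items():                          # Theta: maximal paths only,
            if any(r != p and r[:len(p)] == p for r in paths):
                return False
            if any(cls[q] == cls[p] and theta.get(q) != a for q in paths):
                return False                                # and respects '~='
        return True

    # A small acyclic example: paths eps, F, F G, F H, with F H marked b, no reentrancy
    paths = {(), ('F',), ('F', 'G'), ('F', 'H')}
    classes = {p: i for i, p in enumerate(sorted(paths))}
    assert is_afs(paths, {('F', 'H'): 'b'}, classes)

Representing '≈' by class identifiers makes it an equivalence relation of finite index by construction, so only the remaining conditions need explicit checks.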
We use meta-variables F to range over AFSs. The set of all AFSs over a given signature of features F EATS and atoms ATOMS is AF S(F EATS, ATOMS). For a pre-AFS to be a coherent representation of some graph, some additional conditions must hold. The first condition, prefix closure, ensures that if a path is present in an AFS, then all its prefixes are present as well; this is an obvious property of feature graphs. Prefix closure also ensures that the intended graph is rooted: Because the empty path is a prefix of every path, there exists a single node (the root) from which all others are accessible. The second condition also stems directly from a graph view of feature structures: If two paths lead to the same node, and one of them can be extended, then the other can be extended just as well. Fusion closure requires that if π and π are reentrant, and πα exists, then π α is also a path, and moreover, πα and π α are reentrant. The next condition requires that the reentrancy relation ‘≈’ indeed be an equivalence relation, and that the number of equivalence classes with respect to it be finite. Every equivalence class of ‘≈’ naturally corresponds to a node in
a graph, since it coincides with a maximal set of reentrant paths; since feature graphs have a finite number of nodes, the relation is required to be one of a finite index. Since the '≈' relation in AFSs always includes pairs of identical paths, it is important to note that when we say that an AFS is reentrant we mean that its '≈' relation includes at least one nontrivial (i.e., nonidentical) pair of paths. Finally, θ labels only (some of the) sinks; therefore, it is required that its abstraction Θ be defined only for maximal paths, those that cannot be further extended. The last condition requires that Θ respect the equivalence: Since there is only one node at the end of a set of reentrant paths, this node can only be labeled by one value, and the last requirement sees to it. Example 2.13 depicts an AFS.
Example 2.13 Abstract feature structure. An example AFS is F = ⟨Π, Θ, ≈⟩, where:
• Π = {ε} ∪ {F Gⁿ | n ≥ 0} ∪ {F Gⁿ H | n ≥ 0};
• Θ(π) = a if π ∈ {F Gⁿ H | n ≥ 0} and is undefined otherwise;
• π1 ≈ π2 if and only if π1 and π2 are members in the same set of the three sets that make up Π above.
Clearly, Π is prefix-closed. The case of fusion-closure takes a more involved proof, which we only sketch here. Assume that π ≈ π′ for some π ≠ π′; then either π, π′ ∈ {F Gⁿ | n ≥ 0} or π, π′ ∈ {F Gⁿ H | n ≥ 0}. Consider the former case. Then π = F Gᵐ and π′ = F Gᵏ, for m ≠ k. Now if α is such that πα ∈ Π, then necessarily α = Gʲ or α = Gʲ H. In either case, π′α ∈ Π and πα ≈ π′α. The latter case is similar. Note that the index of ≈ is 3, a finite number, and that Θ respects the equivalence.
It is easy to show pre-AFSs for which some of the conditions of Definition 2.31 do not hold. Example 2.14 lists a few cases.
Exercise 2.23. For each of the other requirements of Definition 2.31, provide a pre-AFS that fails to fulfill the requirement. Show that your pre-AFS does not correspond to any feature graph.
Exercise 2.24 (*). Show an AFS for which Θ(ε)↓.
Exercise 2.25 (*). Consider the feature graph depicted in Example 2.6 (Page 42). Articulate the pre-AFS that it implies, and prove that it satisfies all the requirements of Definition 2.31.
Example 2.14 Pre-AFSs that are not AFSs. The following pre-AFS is not prefix-closed:
• Π = {F, G};
• Θ(π) is undefined for every π ∈ Π;
• π1 ≈ π2 for every π1, π2 ∈ Π
because ε ∉ Π. In the following, ≈ is not an equivalence relation:
• Π = {ε, F, G};
• Θ(π) is undefined for every π ∈ Π;
• ε ≈ F, ε ≈ G, and π ≈ π for every π ∈ Π
because ≈ is not symmetric. Finally, the following pre-AFS, which intuitively corresponds to an infinite thread when viewed as a graph, is not an AFS because ≈ does not have a finite index:
• Π = {Fⁿ | n ≥ 0};
• Θ(π) is undefined for every π ∈ Π;
• π1 ≈ π2 iff π1 = π2.
2.4.2 Abstract feature structures and feature graphs
From the foregoing discussion it can be understood that there is a natural correspondence between feature graphs (which we referred to as concrete entities to distinguish them from abstract ones) and abstractions thereof. Indeed, there exists such a mapping between the two views. This mapping associates an AFS with every concrete graph; furthermore, it associates the same AFS to isomorphic graphs. In the other direction, there is a mapping which associates a concrete feature graph with every abstract one: this is a representative of the (infinite) set of all isomorphic feature graphs corresponding to the abstract AFS. By these mappings, AFSs are in one-to-one correspondence with feature structures.
Definition 2.32 (Abstraction) If A = ⟨Q, q̄, δ, θ⟩ is a feature graph, then Abs(A) = ⟨ΠA, ΘA, ≈A⟩ is defined by:
• ΠA = {π | δ(q̄, π)↓};
• ΘA(π) = θ(δ(q̄, π)) if θ(δ(q̄, π))↓, and is undefined otherwise;
• π1 ≈A π2 iff δ(q̄, π1) = δ(q̄, π2).
See Example 2.15.
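The following Python sketch (ours, and restricted to acyclic graphs, for which the set of paths is finite) computes Abs by enumerating the paths of a graph given as dictionaries, in the spirit of Definition 2.32. It is applied to the graph A of Example 2.15.

    def abs_graph(root, delta, theta):
        """Abs for acyclic graphs: delta maps (node, feature) -> node,
        theta maps node -> atom; returns (Pi, Theta, reentrancy pairs)."""
        paths = {(): root}                        # path tuple -> node reached
        frontier = [()]
        while frontier:
            p = frontier.pop()
            for (q, f), q2 in delta.items():
                if q == paths[p]:
                    paths[p + (f,)] = q2
                    frontier.append(p + (f,))
        Pi = set(paths)
        Theta = {p: theta[q] for p, q in paths.items() if q in theta}
        eq = {(p1, p2) for p1 in Pi for p2 in Pi if paths[p1] == paths[p2]}
        return Pi, Theta, eq

    # Graph A of Example 2.15: q0 -F-> q1, q1 -G-> q2, q1 -H-> q3, theta(q3) = b
    Pi, Theta, eq = abs_graph('q0', {('q0', 'F'): 'q1', ('q1', 'G'): 'q2',
                                     ('q1', 'H'): 'q3'}, {'q3': 'b'})
    assert Pi == {(), ('F',), ('F', 'G'), ('F', 'H')}
    assert Theta == {('F', 'H'): 'b'}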
Example 2.15 Abstraction. Let A be the feature graph with root q0A, arcs δ(q0A, F) = q1A, δ(q1A, G) = q2A, and δ(q1A, H) = q3A, and θ(q3A) = b. Its abstraction is Abs(A) = ⟨ΠA, ΘA, ≈A⟩, where:
• ΠA = {ε, F, F G, F H};
• ΘA(F H) = b, and ΘA is undefined elsewhere;
• π1 ≈A π2 iff π1 = π2.
Let B be the feature graph with root q0B, arcs δ(q0B, F) = q1B, δ(q0B, G) = q3B, δ(q3B, H) = q1B, δ(q1B, G) = q2B, and δ(q1B, H) = q2B, and θ(q2B) = b. Its abstraction is Abs(B) = ⟨ΠB, ΘB, ≈B⟩, where:
• ΠB = {ε, F, G, G H, F G, F H, G H G, G H H};
• ΘB(π) = b if π ∈ {F G, F H, G H G, G H H}, and is undefined otherwise;
• ≈B = {⟨π1, π2⟩ | π1 = π2, or π1, π2 ∈ {F, G H}, or π1, π2 ∈ {F G, F H, G H G, G H H}}.
Let C be the feature graph with root q0, arcs δ(q0, F) = q1, δ(q1, G) = q1, and δ(q1, H) = q2, and θ(q2) = a. Its abstraction is Abs(C) = ⟨Π, Θ, ≈⟩, where:
• Π = {ε} ∪ {F Gⁿ | n ≥ 0} ∪ {F Gⁿ H | n ≥ 0};
• Θ(π) = a if π ∈ {F Gⁿ H | n ≥ 0} and is undefined otherwise;
• π1 ≈ π2 if and only if π1 and π2 are members in the same set of the three sets which make up Π above.
Note that the index of ≈ is 3, the number of nodes in the graph.
Lemma 2.33 If A = Q, q¯, δ, θ is a feature graph, then Abs(A) = ΠA , ΘA , ≈A is an AFS. Proof 1. Π is prefix-closed: Π = {π | δ(¯ q , π)↓}. If πα ∈ Π, then δ(¯ q , πα)↓, and by Lemma 2.4, δ(¯ q , π)↓, too; q , π) = 2. Abs(A) is fusion-closed. Suppose that πα ∈ Π and π ≈ π . Then δ(¯ q , πα) = δ(δ(¯ q , π), α) (by the definition δ(¯ q , π ), by the definition of ≈. δ(¯ q , π α), which is of δ), which equals, by the assumption, δ(δ(¯ q , π ), α) = δ(¯ q , πα) = δ(¯ q , π α), therefore defined by assumption; hence π α ∈ Π, and δ(¯ πα ≈ π α; q , π1 ) = 3. ≈ is an equivalence relation with a finite index: π1 ≈ π2 iff δ(¯ δ(¯ q , π2 ), namely, iff π1 and π2 lead (from q¯) to the same node in A. Hence ≈ is an equivalence relation and since Q is finite, ≈ has a finite index. Also, since δ(¯ q , ) is always defined and equals itself, ≈ ; 4. Θ is defined only for maximal paths: Θ(π) = θ(δ(¯ q , π)) and θ is defined only for maximal paths; 5. Θ respects the equivalence: Θ(π) = θ(δ(¯ q , π)) and if π1 ≈ π2 , then q , π2 ), hence Θ(π1 ) = Θ(π2 ). δ(¯ q , π1 ) = δ(¯ Exercise 2.26. What is the abstraction of the feature graph Q, q¯, δ, θ , where q )↑? Show that all the requireQ = {¯ q}, δ(¯ q , F)↑ for every F ∈ F EATS and θ(¯ ments of Definition 2.31 hold. Exercise 2.27 (*). Show a feature graph whose abstraction is: • Π = {} ∪ {Fn | n ≥ 1} ∪ {Gn | n ≥ 1}; • ≈ = {(, )} ∪ {(Fi , F j ) | i, j ≥ 1} ∪ {(Gi , G j ) | i, j ≥ 1}; • Θ(π) is undefined for every π ∈ Π.
Theorem 2.34 For every two feature graphs A and B, if A ∼ B then Abs(A) = Abs(B). Proof Immediate from Theorem 2.24 (Page 52) and the definition of Abs. We now define the reverse mapping which associates a feature graph with an AFS. Definition 2.35 (Concretization) Let F = Π, Θ, ≈ be an AFS. The concretization of F , Conc(F ) = Q, q¯, δ, θ , is defined as follows: • Q = {[π]≈ | π ∈ Π};
• q̄ = [ε]≈;
• θ([π]≈) = Θ(π) for every node [π]≈;
• δ([π]≈, f) = [πf]≈ for every node [π]≈ and feature f, if πf ∈ Π, and is undefined otherwise.
Q is finite because '≈' is of finite index. Also, θ is representative-independent, since Θ respects the equivalence, as F is an AFS. Since F is fusion-closed, δ is representative-independent. Conc(F) is a concrete feature graph (its nodes are determined as equivalence classes of paths in F), but it is "generic" in the sense that it can be viewed as representing the equivalence class of isomorphic feature structures (see Theorem 2.38).
Exercise 2.28 (*). Let F = ⟨Π, Θ, ≈⟩ be an AFS for which Π = {ε}, ≈ = {⟨ε, ε⟩} and Θ(π)↑ for every π. What is Conc(F)?
Lemma 2.36 Let F = ⟨Π, Θ, ≈⟩ be an AFS, and let A = Conc(F) = ⟨Q, q̄, δ, θ⟩. Then for all π ∈ Π, if δ(q̄, π)↓ then δ(q̄, π) = [π]≈.
Proof By induction on π. For π = ε, δ(q̄, ε) = q̄ = [ε]≈. Assume that the proposition holds for all paths π up to length k < n, and let π′ = π · f be of length n. If δ(q̄, π′)↓, then:
δ(q̄, π′) = δ(q̄, π · f)      definition of π′
= δ(δ(q̄, π), f)             definition of δ
= δ([π]≈, f)                 induction hypothesis
= [π · f]≈                   definition of Conc
= [π′]≈                      definition of π′
Theorem 2.37 For every AFS F , Abs(Conc(F )) = F . Proof Let F1 = Π1 , Θ1 , ≈1 , F2 = Π2 , Θ2 , ≈2 = Abs(Conc(F1 )). By q , π)↓}. By Lemma 2.36, this is the definition of Abs, Π2 = {π | δConc(F1 ) (¯ q , π1 ) = Π1 . The argument for Θ is similar. Finally, π1 ≈2 π2 iff δConc(F1 ) (¯ q , π2 ), iff [π1 ]≈1 = [π2 ]≈1 iff π1 ≈1 π2 . δConc(F1 ) (¯ Theorem 2.38 For every feature graph A, Conc(Abs(A)) ∼ A. Proof Let A = QA , q¯A , δA , θA be a feature graph and B = Conc(Abs(A)) = QB , q¯B , δB , θB . First, A B: define h : QA → QB as h(q) = [π]≈ , where π qA , π) = q. h is well defined since if δA (¯ qA , π1 ) = δA (¯ qA , π2 ), is such that δA (¯ then [π1 ]≈ = [π2 ]≈ , by definition of abstraction. h is a subsumption morphism:
First, h(q̄A) = [ε]≈ = q̄B. Second, let q be such that δA(q̄A, π) = q, so that h(q) = [π]≈. Then:
h(δA(q, f)) = h(δA(q̄A, π · f))      definition of q
= [π · f]≈                           definition of h
= δB([π]≈, f)                        definition of Conc
= δB(h(q), f)                        definition of q
The argument for θ, namely, that θA(q) = θB(h(q)), is identical. To show that B ⊑ A, define a subsumption morphism h′ : QB → QA by setting h′([π]≈) = δA(q̄A, π). The proof that h′ is indeed a subsumption morphism is left as an exercise.
Exercise 2.29. Complete the proof of Theorem 2.38.
Theorem 2.39 For every two feature graphs A and B, if Abs(A) = Abs(B), then A ∼ B.
Proof If Abs(A) = Abs(B), then Conc(Abs(A)) = Conc(Abs(B)), and by Theorem 2.38, Conc(Abs(A)) ∼ A and Conc(Abs(B)) ∼ B; hence A ∼ B.
Corollary 2.40 Let A, B be feature graphs. Then Abs(A) = Abs(B) iff A ∼ B.
Proof From Theorems 2.34 and 2.39.
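The following Python sketch (ours, and only a sketch) mirrors Definition 2.35 for finite AFSs: the nodes of the concrete graph are the '≈'-classes of paths. Feeding its result back to the Abs sketch given earlier reproduces the original AFS, in the spirit of Theorem 2.37.

    def conc(Pi, Theta, eq):
        """Pi: finite set of path tuples; Theta: dict path -> atom;
        eq: set of reentrant pairs (an equivalence relation over Pi)."""
        def cls(p):                                  # a node is a class of paths
            return frozenset(q for q in Pi if (p, q) in eq)
        root = cls(())
        delta, theta = {}, {}
        for p in Pi:
            if p:                                    # arc from the class of the prefix
                delta[(cls(p[:-1]), p[-1])] = cls(p)
            if p in Theta:
                theta[cls(p)] = Theta[p]
        return root, delta, theta

    # An AFS with paths eps, F, G, where F and G are reentrant and both marked sg
    Pi = {(), ('F',), ('G',)}
    eq = {(p, p) for p in Pi} | {(('F',), ('G',)), (('G',), ('F',))}
    root, delta, theta = conc(Pi, {('F',): 'sg', ('G',): 'sg'}, eq)
    assert delta[(root, 'F')] == delta[(root, 'G')]   # the reentrancy is realized
    assert theta[delta[(root, 'F')]] == 'sg'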
It is Corollary 2.40 that establishes the observation that AFSs are really only notational variants of feature structures; the two views are in a one-to-one correspondence, and the easiest way to establish this correspondence is via concrete feature graphs.
2.4.3 AFS subsumption
Since AFSs are in one-to-one correspondence with feature structures, it is possible to define AFS subsumption via feature graph subsumption: Simply define that F1 subsumes F2 iff Conc(F1) ⊑ Conc(F2). However, the beauty of the AFS notation lies in the simplicity of certain operations. In particular, subsumption can be directly defined on AFSs, using not much more than set inclusion, and the above property is then a theorem.
Definition 2.41 (AFS subsumption) An AFS F1 = ⟨Π1, Θ1, ≈1⟩ subsumes an AFS F2 = ⟨Π2, Θ2, ≈2⟩, denoted F1 ⊑̂ F2, iff the following three conditions hold:
• Π1 ⊆ Π2 ; • ≈1 ⊆ ≈2 ; • if Θ1 (π)↓ then Θ2 (π)↓ and Θ1 (π) = Θ2 (π).
Namely, F1 is more general than F2 if and only if all the paths of F1 are also paths in F2; if a (maximal) path is labeled in F1, then it is labeled identically in F2; and every reentrancy in F1 is a reentrancy in F2. Of course, this definition is not arbitrary. It naturally corresponds to feature graph subsumption, as the following theorems show.
Theorem 2.42 For all feature graphs A, B, A ⊑ B iff Abs(A) ⊑̂ Abs(B).
Proof Let Abs(A) = ⟨ΠA, ΘA, ≈A⟩ and Abs(B) = ⟨ΠB, ΘB, ≈B⟩.
• Assume that A ⊑ B, that is, a subsumption morphism h : QA → QB exists.
1. If π ∈ ΠA, then (from the definition of Abs(A)) δA(q̄A, π)↓; that is, there exists a sequence q0, q1, . . . , qn of nodes and a sequence f1, . . . , fn of features such that for every i, 0 ≤ i < n, δA(qi, fi+1) = qi+1, q0 = q̄A, and π = f1 · · · fn. Because of the subsumption morphism, there exists a sequence of nodes h(q0), . . . , h(qn) such that δB(h(qi), fi+1) = h(qi+1) for every i, 0 ≤ i < n, and h(q0) = q̄B. Hence π ∈ ΠB.
2. Moreover, since A ⊑ B, for every node q, if θ(q)↓, then θ(h(q))↓ and θ(q) = θ(h(q)). In particular, if θ(qn)↓, then θ(qn) = θ(h(qn)), and thus ΘA(π) = ΘB(π).
3. Now suppose that two paths π1, π2 are reentrant in A. By the definition of subsumption, π1 and π2 are reentrant in B, too. Therefore ≈A ⊆ ≈B.
• Assume that Abs(A) ⊑̂ Abs(B). Construct a function h : QA → QB by setting, for each node q ∈ QA, h(q) = δB(q̄B, π), where π is such that δA(q̄A, π) = q. This is well-defined since if δA(q̄A, π) = δA(q̄A, π′) for π ≠ π′, also δB(q̄B, π) = δB(q̄B, π′) (since Abs(A) ⊑̂ Abs(B), in particular ≈A ⊆ ≈B). Trivially, h is total and h(q̄A) = q̄B. Also, if δA(q, f)↓, then h(δA(q, f)) = δB(h(q), f). As for θ, consider a path π leading from q̄A to q. Since Abs(A) ⊑̂ Abs(B), if ΘA(π)↓, then ΘB(π)↓ and they are equal; hence if θ(q)↓, then θ(h(q))↓ and they are equal. Hence h is a subsumption morphism.
Theorem 2.43 For all AFSs F1, F2, F1 ⊑̂ F2 iff Conc(F1) ⊑ Conc(F2).
Proof
• Assume that F1 ⊑̂ F2. By Theorem 2.37, Abs(Conc(F1)) = F1 and Abs(Conc(F2)) = F2. Hence Abs(Conc(F1)) ⊑̂ Abs(Conc(F2)). From Theorem 2.42, Conc(F1) ⊑ Conc(F2).
• If Conc(F1) ⊑ Conc(F2), then by Theorem 2.42, Abs(Conc(F1)) ⊑̂ Abs(Conc(F2)), and by Theorem 2.37, F1 ⊑̂ F2.
Example 2.16 AFS subsumption. Consider the feature graphs A and B of Example 2.15. Note that fs(A) ⊑̂ fs(B), similarly to Example 2.9 (Page 46). To see that Abs(A) ⊑̂ Abs(B), observe that all three conditions of Definition 2.41 hold.
Since feature structure subsumption is antisymmetric, and since AFSs are simply notational variants of feature structures, we obtain that AFS subsumption is antisymmetric, too. However, it is instructive to see a direct proof of this proposition, to illustrate the simplicity of the AFS view for mathematical reasoning.
Lemma 2.44 If F1 and F2 are AFSs such that F1 ⊑̂ F2 and F2 ⊑̂ F1, then F1 = F2.
Proof If F1 ⊑̂ F2 and F2 ⊑̂ F1, then:
Θ1 (π) = Θ2 (π); hence Θ1 (π)↓ iff Θ2 (π)↓, in which case Θ1 (π) = Θ2 (π).
Hence F1 = F2 .
Diagram 2.2 summarizes the relations among the three views of feature structures discussed so far. The solid arrows correspond to mappings between feature graphs and feature structures, as in Diagram 2.1; the dashed arrows correspond to mappings between feature structures and AFSs.
2.5 Attribute-value matrices We now return to attribute-value matrices (AVMs), which were informally introduced in Section 2.1. This is the view that we adopt for depicting feature structures (and the grammars based on them) in the bulk of this book, both
Cambridge Books Online © Cambridge University Press, 2012
2.5 Attribute-value matrices
65
Diagram 2.2 Feature graphs, feature structures, and AFSs Feature graph
Feature structure
[·]∼
fs1 = [A1 ]∼
A1 A2 ∼ A2
AFS
Abs
∈ ∈
ˆ fs2 = [A2 ]∼
F1 = Abs(A1 ) ˆ F2 = Abs(A2 )
Conc This diagram adds AFSs to Diagram 2.1. AFSs are in one-to-one correspondence with feature structures. This correspondence is established via feature graphs: By Corollary 2.40, Abs(A) = Abs(B) iff A ∼ B, that is, iff [A]∼ = [B]∼ . Also, AFS subsumption commutes with abstraction ˆ and concretion: For all feature graphs A, B, A B iff Abs(A)Abs(B) ˆ (Theorem 2.42); for all AFSs F1 , F2 , F1 F2 iff Conc(F1 ) Conc(F2 ) (Theorem 2.43).
because they are easy to present on paper and because of their centrality in existing literature. Like feature graphs, AVMs are defined over a signature of features and atoms, which we fix below. In addition, AVMs make use of variables, also called tags. Variables are used to encode the sharing of values, as will become clear. When AVMs are concerned, we follow the convention of the linguistic literature by which variables are natural numbers, depicted in boxes, for example, 3 . 2.5.1 Definitions As above, meta-variables f (with or without subscripts) range over features; a, b and so on, range over atoms; and X, Y , Z, over variables. Definition 2.45 (AVMs) Given a signature S, the set AVMS(S) of AVMs over S is the least set satisfying the following two clauses: 1. M = Xa ∈ AVMS (S) for any a ∈ ATOMS and X ∈ TAGS; M is said to be atomic and X is the tag of M , denoted tag(M ) = X; 2. M = X[f1 : M1 , . . . , fn : Mn ] ∈ AVMS(S) for n ≥ 0, X ∈ TAGS , f1 , . . . , fn ∈ F EATS and M1 , . . . , Mn ∈ AVMS (S), where fi = fj if i = j. M is said to
Cambridge Books Online © Cambridge University Press, 2012
66
2 Feature structures
be complex, and X is the tag of M , denoted tag(M ) = X. If n = 0, then M = X[] is an empty AVM.
Note that two AVMs that differ only in their tag are distinct: If X = Y , X · · · =
Y · · · . In particular, there is no unique empty AVM. Note also that the same variable can be used more than once in an AVM. Meta-variables M , with or without subscripts, range over AVMS; the parameter S is omitted when it is clear from the context. The domain of an AVM M , denoted dom(M ), is undefined when M is atomic, and {f1 , . . . , fn } when M is complex (hence, dom(M ) is empty for an empty AVM). The value of some feature f ∈ F EATS in M , denoted fval(M, f ), is defined if f = fi ∈ dom(M ), in which case it is Mi , and undefined otherwise. Example 2.17 AVMs Consider a signature consisting of ATOMS = {a} and F EATS = {F, G}. Then 2 M1 = 4 a is an AVM by the first clause of Definition 2.45; M2 = [ ] is an empty AVM by the second clause; M3 = 3 F : 4 a is an AVM by the second clause (using M1 as the value of F, so that fval(M3 , F) = M1 ), and M4 = 2
G: 3 F: 2
F: 4
a
F: 4
a
[]
is an AVM by the second clause, as is M5 =
4
G: 3 F: 2
[]
Definition 2.46 (Sub-AVMs) Given an AVM M , its sub-AVMs are SubAVM(M ), defined as: 1. SubAVM(Xa) = {Xa}; 2. SubAVM(X[f1 : M1 , . . . , fn : Mn ]) = X[f1 : M1 , . . . , fn : Mn ] ∪1≤i≤n SubAVM(Mi ). Definition 2.47 (Tags) Given an AVM M , its tags Tags(M ) are defined as: 1. Tags(Xa) = {X}; 2. Tags(X[f1 : M1 , . . . , fn : Mn ]) = X ∪1≤i≤n Tags(Mi ).
Cambridge Books Online © Cambridge University Press, 2012
2.5 Attribute-value matrices
67
Definition 2.48 (Tagset) The tagset of an AVM M and a tag X ∈ Tags(M ) is the set of sub-AVMs of M (including M itself) that are tagged by X: TagSet(M, X) = {M ∈ SubAVM(M ) | tag(M ) = X}. Example 2.18 AVMs 2 Consider M4 and M5 of Example 2.17 above; fval(M
4 , F) = [ ]. Similarly, fval(M5 , F) = 2 [ ], whereas fval(M5 , G ) = 3 F : 4 a . Observe that T ags(M4) = T ags(M 5 ) = { 2 , 3 , 4 }. Also, TagSet(M4 , 4 ) is { 4 a}, 2 TagSet(M4 , 3 ) is { 3 F : 4 a } and TagSet(M4 , 2 ) is {M
4 , []}. As for M5 , TagSet(M5 , 2 ) = { 2 [ ]}, TagSet(M5 , 3 ) = { 3 F : 4 a } and TagSet(M5 , 4 ) = {M5 , 4 a}. Trivially, tag(M4 ) = 2 and tag(M5 ) = 4 . As another example, consider the AVM
M6 = 1 F : 1 F : 1 F : 1 [ ] Here, T ags(M6 ) = { 1 }, and TagSet(M6 , 1 ) is:
{M6 , 1 F : 1 F : 1 [ ] , 1 F : 1 [ ] , 1 [ ]} Of course, tag(M6 ) = 1 .
Exercise 2.30 (*). Can you show an AVM M and a variable X such that TagSet(M, X) = ∅? Consider some AVM f1 : 2 M1 M= 1 f2 : 2 M2 where M1 = M2 . Both M1 and M2 are sub-AVMs of M , and both have the same tag, although they are different. In other words, the recursive definition of AVMs allows two different, contradicting AVMs to be in the TagSet of the same variable. To eliminate such cases, we define well-formed AVMs as follows: Definition 2.49 (Well-formed AVMs) An AVM M is well-formed iff for every variable X ∈ Tags(M ), TagSet(M, X) includes at most one nonempty AVM. Thus, in Example 2.17 M5 is not well-formed because TagSet(M5 , 4 ) includes two different nonempty AVMs; but M4 is well-formed because the only variable that occurs more than once, 2 , has only one nonempty AVM in its TagSet. Similarly, M6 is not well-formed, since TagSet(M6 , 1 ) has four members, only one of which is empty.
Henceforth, we only consider well-formed AVMs. This allows us to provide a concise interpretation of shared values in AVMs. We wish to make explicit the special role that multiple occurrences of the same variable in a single AVM play. To this end, we would like to say that the association of a variable X ∈ Tags(M ) in an AVM M , written assoc(M, X), is the AVM that is tagged by X. If, in a given AVM M , a variable X occurs exactly once, then assoc(M, X) is a single, unique value. If, however, X occurs more than once in M , special care is required. Recall that for well-formed AVMs, at most one of these multiple occurrences is associated with a nonempty AVM. Definition 2.50 (Variable association) For a variable X ∈ Tags(M ), the association of X in M , denoted assoc(M, X), is the single nonempty AVM in TagSet(M, X); if only X [ ] is a member of TagSet(M, X), then assoc(M, X) = X [ ]. Note that assoc assigns exactly one sub-AVM of M to each variable occurring in M , independently of the number of occurrences of the variable in M or the size of TagSet(M, X). Example 2.19 Variable association. Consider the well-formed AVM
G: 3 F: 4a 2 M= F: 2 []
Observe that assoc(M, 2 ) = M , assoc(M, 3 ) = 3 F : 4 a , and 2 have one and assoc(M, 4 ) = 4 a. The two occurrences of the variable
the same association. For M = 4 F : 4 [ ] , assoc(M , 4 ) = M .
Exercise 2.31 (*). Let
M= 4
G: 2
[] F: 2 []
Is M well-formed? If it is, what is assoc(M, 2 )? With this understanding of variables association, it is now possible to define paths in AVMs. Definition 2.51 (AVM paths.) Let M be an AVM. Let A RCS (M ) be defined as A RCS (M ) = {X, f, Y | X, Y ∈ T ags(M ), f ∈ dom(assoc(M, X)), and tag(fval(assoc(M, X), f )) = Y }. Let A RCS * be the extension of A RCS to paths, defined (recursively) by:
2.5 Attribute-value matrices
69
• for all X ∈ T ags(M ), X, , X ∈ A RCS *(M ); • if X, f, Y ∈ A RCS (M ), then X, f, Y ∈ A RCS *(M ); • if X, f, Y ∈ A RCS (M ) and Y, π, Z ∈ A RCS *(M ), then X, f · π, Z ∈
A RCS *(M ). The paths of M , denoted Π(M ), is the set {π | X = tag(M ) and for some variable Y ∈ T ags(M ), X, π, Y ∈ A RCS *(M )}. Observe that A RCS * is functional (because AVMs are functional): if X, π, Y ∈ A RCS *(M ) and X, π, Z ∈ A RCS *(M ) then Y = Z. It is a partial function since it is not the case that for all X and π there exists some Y , such that X, π, Y ∈ A RCS *(M ). See Example 2.20. Example 2.20 Paths. Consider again the AVM M= 2
G: 3 F: 2
F: 4
a
[]
Observe that A RCS (M ) = { 2 , G, 3 , 2 , F , 2 , 3 , F , 4 }. Therefore, A RCS *(M ) includes, in addition to the elements of A RCS (M ), also 2 , , 2 , 2 , GF , 4 and, due to the multiple occurrence of 2 , the infinitely many triples 2 , Fi · G , F , 4 for any i ≥ 0. Exercise 2.32 (*). Specify all the paths of the AVM M of Example 2.20. Exercise 2.33. Let
⎤
F1 : 7 a M= 3⎣ G1 : 9 a ⎦ . G: 2 G2 : 1 [ ] ⎡
F: 1
Specify A RCS (M ) and A RCS *(M ). The function fval can be naturally extended to paths. Definition 2.52 (Path values) The value of a path π in an AVM M , denoted pval(M, π), is assoc(M, Y ), where Y is such that tag(M ), π, Y ∈ A RCS *(M ). This is well defined since A RCS * is functional. Similarly, pval is partial since A RCS * is partial.
Cambridge Books Online © Cambridge University Press, 2012
70
2 Feature structures
Example 2.21 Path values. In the AVM M= 2
G: 3 F: 2
F: 4
a
,
[]
pval(M, ) = M ; pval(M, G ) = 3 F : 4 a ; and pval(M, F ) = pval(M, FF ) = pval(M, FFF ) = M ; pval(M, GG ) is undefined.
Exercise 2.34. Specify the values of all the paths of the AVM M of Example 2.21. When two different paths in some AVM M have the same value we say that they are reentrant. Definition 2.53 (Reentrancy) Two paths π1 and π2 are reentrant in an AVM M M if pval(M, π1 ) = pval(M, π2 ), denoted also π1 π2 . An AVM M is reentrant M
if there exist two distinct paths π1 , π2 such that π1 π2 . M
For example, in the AVM M of Example 2.21, F because pval(M, ) = pval(M, F ) = M . Definition 2.54 (Cyclic AVMs) An AVM M is cyclic if two paths π1 , π2 ∈ M Π(M ), where π1 is a proper subsequence of π2 , are reentrant: π1 π2 . M of Example 2.21 is therefore cyclic, for example, by the paths and F . Example 2.22 A reentrant AVM. The following AVM is reentrant but not cyclic: ⎡
⎤ pl PERS : 3 third ⎥ 0 ⎣ ⎦
SUBJ : 4 AGR : 1 ⎢ AGR : 1
NUM : 2
We now introduce three conventions regarding the depiction of AVMs, motivated by the fact that variables are used primarily to indicate value sharing. When an AVM is well formed (which we always assume in the sequel, unless explicitly stated otherwise), if a variable occurs more than once, then its value is explicated only once; where this value is explicated (i.e., next to which occurrence of the variable) is immaterial, and we return to this point in the discussion of renaming in Section 2.5.2. Recall that the definition of variable association
2.5 Attribute-value matrices
71
(Definition 2.50) abstracts over exactly this property of AVMs. In addition, variables that occur only once can be omitted. Finally, following the conventions of the linguistic literature, the empty AVM is sometimes omitted when it is associated with a variable. Note that the first of these conventions, namely, that the value associated with a variable is only listed once in an AVM, is in fact crucial in the case of cyclic AVMS. There is no finite representation of cyclic AVMs unless this convention is adopted. The other two, however, are merely presentational niceties; in particular, they have no effect on the formal properties of AVMs or their characteristics (e.g., assoc or A RCS ). See Example 2.23.
Example 2.23 Shorthand notation for AVMs. Consider the following feature structure: ⎡ 6
F: 3
[ ]
⎤
⎣G : 4 H : 3 a ⎦ . H: 2 []
Notice that it is well formed, since the only variable occurring more than once ( 3 ) is associated with a nonempty value (a) only once. We can remove the empty AVM and leave only one occurrence of the value explicit, and obtain: ⎡ 6
F: 3
⎤
⎣G : 4 H : 3 a ⎦ H: 2 []
Next, the tag 2 is associated with the empty AVM, which can be omitted: ⎡ 6
F: 3
⎤
⎣G : 4 H : 3 a ⎦ H: 2
Finally, the tags 4 and 6 occur only once, so they can be omitted: ⎡
F: 3
⎤
⎣G : H : 3 a ⎦ H: 2
72
2 Feature structures 2.5.2 AVM subsumption and renaming
We now define subsumption directly on AVMs. Compare Definition 2.55 with Definition 2.9 (Page 44). Definition 2.55 (AVM subsumption) Let M1 , M2 be AVMs over the same signature. M1 subsumes M2 , denoted M1 M2 , if there exists a total function h : T ags(M1) → T ags(M2), such that: 1. h(tag(M1 )) = tag(M2 ); 2. For every X, f, Y ∈ A RCS (M1 ), h(X), f, h(Y ) ∈ A RCS (M2 ); 3. For every X ∈ T ags(M1 ), if assoc(M1 , X) is atomic, then assoc(M2, h(X)) is atomic, with the same atom. Many of the properties of AVM subsumption are identical to those of feature graph subsumption; the reason is that there is a natural, arc-preserving mapping from AVMs to feature graphs, as we shall see in Section 2.6 below. For the sake of completeness, we repeat some of these properties below, without proofs. The proofs can either be constructed directly, similarly to the case of feature graphs; or established via the mappings between AVMs and feature graphs that we introduce in Section 2.6. Lemma 2.56 If M1 M2 through h and X, π, Y ∈ A RCS *(M1 ), then h(X), π, h(Y ) ∈ A RCS *(M2 ). M
1 Corollary 2.57 If M1 M2 , then Π(M1 ) ⊆ Π(M2 ) and if π1 π2 , then
M
2 π1 π2 .
Theorem 2.58 AVM subsumption is reflexive: For all M , M M . Theorem 2.59 AVM subsumption is transitive: If M1 M2 and M2 M3 , then M1 M3 . Theorem 2.60 AVM subsumption is antisymmetric: If M1 M2 and M2 M1 , M1 is not necessarily identical to M2 . When two AVMs are identical up to the variables that occur in them, one AVM can be obtained from the other by a systematic renaming of the variables. We say that the two AVMs are isomorphic. Exercise 2.35 (*). Consider M1 and M2 of Example 2.24. Prove that M1 M2 and M2 M1 . Of course, one must be careful renaming variables, especially when the same 2 F : 1 a , then variable may occur in both AVMs. For example, if M =
renaming 1 to 2 will result in M = 2 F : 2 a , which is not even well formed.
2.5 Attribute-value matrices
73
Example 2.24 Isomorphic AVMs. Let
M1 = 2
G: 3 F: 2
F: 4
a
M2 = 22
[]
G : 23 F : 24a F : 22 [ ]
.
Then M2 can be obtained from M1 by systematically replacing 22 for 2 , 23 for 2 and 24 for 4 .
The property of Exercise 2.35 is not accidental. When two AVMs are isomorphic, they subsume each other. Theorem 2.61 If M1 and M2 are isomorphic AVMs, then both M1 M2 and M2 M1 . Proof Assume that M1 and M2 are isomorphic, and let i : T ags(M1 ) → T ags(M2) be the mapping that establishes the isomorphism. Then i is a subsumption morphism establishing M1 M2 , and i−1 establishes M2 M1 . Another case of AVM equivalence is induced by the convention by which, if a variable occurs more than once in an AVM, then its value is explicated only once. A consequence of this convention is that two AVMs that differ only with respect to where the (single) value of some multiply occurring variable is explicated, subsume each other, as they induce the same set of A RCS . Example 2.25 AVM equivalence. The following AVMs differ only in the instance of 0 whose value is explicated:
⎤ pl AGR : ⎦ 3 M1 = 0 ⎣
PERS : third SUBJ : 4 AGR : 1 ⎡
1
⎡ M2 = 0 ⎣
AGR : 1 SUBJ
NUM : 2
: 4 AGR : 1
NUM : 2 PERS : 3
pl third
⎤ ⎦
Then M1 M2 and M2 M1 .
Theorem 2.62 Let M and M be two AVMs such that X ∈ TAGS(M ) ∩ TAGS(M ), and assume that X occurs twice in M and in M (i.e.,
74
2 Feature structures
|TagSet(M, X)| > 1 and |TagSet(M , X)| > 1). If M and M are identical up to the choice of which instance of X in them is explicated, then M M and M M . Proof Observe that A RCS (M ) = A RCS (M ). The identity function which maps each variable in T ags(M ) to itself is then a subsumption morphism. The following definition unifies the two properties: Definition 2.63 (Renaming) Let M1 and M2 be two AVMs. M2 is a renaming of M1 , denoted M1 M2 , iff M1 M2 and M2 M1 . Example 2.26 AVM renamings. The following two AVMs are renamings of each other:
⎤ : 2 pl ⎦ 3 M1 = 0 ⎣
PERS : third SUBJ : 4 AGR : 1 ⎡
AGR : 1
⎡ M2 = 10 ⎣
AGR : 11 SUBJ
NUM
: 14 AGR : 11
NUM : 12pl
⎤ ⎦
PERS : 13third
Exercise 2.36. Consider M1 and M2 of Example 2.26. Prove that M1 M2 and M2 M1 . 2.6 The correspondence between feature graphs and AVMs AVMs are the entities that the linguistic literature employs to depict feature structures; feature graphs are well-understood mathematical entities to which various results of graph theory can be applied. We now define the relationship between these two views. Section 2.6.1 discusses a mapping from AVMs to graphs, whereas Section 2.6.2 deals with the reverse direction. 2.6.1 From AVMs to feature graphs Definition 2.64 formalizes the correspondence between AVMs and feature graphs by presenting a mapping, φ, which embodies the relation between an AVM and its feature graph image. Informally, a given AVM M is mapped to a concrete graph whose nodes are the variables occurring in the AVM, Tags(M ). The root of the graph is the variable tagging the entire AVM, and the arcs are determined using the function val. Atomic AVMs are mapped to single nodes,
2.6 The correspondence between feature graphs and AVMs
75
labeled by the atom, with no outgoing arcs. Empty AVMs are mapped to a graph having just one node, bearing no label and having no outgoing features. Complex AVMs are mapped to graphs whose nodes, including the root, may have outgoing arcs, where the arcs’ labels correspond to features. Definition 2.64 (AVM to graph mapping) Let M be a well-formed AVM. The feature graph image of M is φ(M ) = Q, q¯, δ, θ , where: • Q = Tags(M ); • q¯ = tag(M ); • for all X ∈ Tags(M ) and f ∈ F EATS, δ(X, f ) = Y iff X, f, Y ∈ A RCS (M );
and • for all X ∈ Tags(M ) and a ∈ ATOMS, θ(X) = a iff assoc(M, X) is the
atomic AVM Xa, and is undefined otherwise. Note that if M1 and M2 are two AVMs which differ only in the order of the “rows” of feature–value pairs, they will be mapped by φ to exactly the same feature graph. See Example 2.27. To see that φ(M ) is a feature graph for every AVM M , observe first that the set Tags(M ) is never empty; even an empty AVM is associated with a tag. Therefore the set of nodes in φ(M ) is not empty. Also, every node in φ(M ) is accessible from the root q¯ since q¯ is the tag of M , and δ is defined by following M features in M . Reentrancies in the AVM are preserved in the graph: if π1 π2 , φ(M)
then also π1 π2 . See Examples 2.28 and 2.29. Exercise 2.37 (*). Let M = 0 F: 1
G: 2
a
H: 1
What is the graph image of M , φ(M )? Exercise 2.38. Show the graph image of the AVM M4 depicted in Example 2.17 (page 66). Lemma 2.65 If M is an AVM and A = φ(M ) is its feature-graph image, then for all X, Y ∈ T ags(M ) and π ∈ PATHS, X, π, Y ∈ A RCS *(M ) iff δA (X, π) = Y . Proof If π = , X, π, Y ∈ A RCS *(M ) iff X = Y (Definition 2.51) iff δA (X, π) = Y (Definition 2.64). By Definition 2.64, for all X, Y ∈ T ags(M )
76
2 Feature structures
Example 2.27 AVM to graph mapping. Let
⎤
: 1 F1 : 7 a M= 3⎣ G1 : 9 a ⎦ . G: 2 G2 : 1 [ ] ⎡
F
Observe that M is well formed since the only variable that occurs more than once in M , namely 1 , has only one nonempty AVM associated with it. Specifically, the associations of the variables of M are: Variable Association
1 1 F1 : 7 a G1 : 9 a 2 2 G : 1 [] ⎤
⎡ 2 F : 1 F1 : 7 a 3 3 ⎣ G1 : 9 a ⎦ 2 G: G2 : 1 [ ] 7 7a 9 9a The feature graph image of M is φ(M ) = Q, q¯, δ, θ where Q = { 3 , 1 , 7 , 2 , 9 }, q¯ = 3 , θ( 7 ) = θ( 9 ) = a (and θ is undefined elsewhere), and δ is given by: δ( 3 , F) = 1 , δ( 3 , G ) = 2 , δ( 1 , F1 ) = 7 , δ( 2 , G1 ) = 9 , δ( 2 , G2 ) = 1 and δ is undefined elsewhere. 1
F
φ(M ) =
G2
3 G
F1
G1
7
a
9
a
2
and f ∈ FEATS, δA(X, f) = Y iff ⟨X, f, Y⟩ ∈ ARCS(M). By induction on the length of π, for all paths π, δA(X, π) = Y iff ⟨X, π, Y⟩ ∈ ARCS*(M).

Corollary 2.66 If M is an AVM and A = φ(M) is its feature graph image, then Π(M) = Π(A), and for all π1, π2 ∈ PATHS, π1 and π2 are reentrant in M iff they are reentrant in φ(M).

The correspondence between AVMs and feature graphs, established via the mapping φ, relates also the subsumption relations across the two views.
Example 2.28 A reentrant feature graph. The reentrant AVM of Example 2.22 and its feature graph image are depicted below:

M = ⟨0⟩[ AGR: ⟨1⟩[NUM: ⟨2⟩pl, PERS: ⟨3⟩third], SUBJ: ⟨4⟩[AGR: ⟨1⟩] ].

In φ(M), the root ⟨0⟩ has an AGR-arc to ⟨1⟩ and a SUBJ-arc to ⟨4⟩; ⟨4⟩ has an AGR-arc back to ⟨1⟩; ⟨1⟩ has a NUM-arc to ⟨2⟩, labeled pl, and a PERS-arc to ⟨3⟩, labeled third.
Again, M is well formed; in particular, assoc(M, ⟨1⟩) is ⟨1⟩[NUM: ⟨2⟩pl, PERS: ⟨3⟩third].
Compare this graph to the feature graph of Example 2.6.
Example 2.29 AVM to graph mapping in the face of cycles. Let M be the (cyclic) AVM
M = ⟨3⟩[F: ⟨3⟩[ ]], where Tags(M) = {⟨3⟩}. Observe that M is well formed, as the only variable that occurs more than once in M, namely ⟨3⟩, has only one nonempty AVM associated with it: M itself. The graph φ(M) will therefore be ⟨Q, q̄, δ, θ⟩, where Q = {⟨3⟩}, q̄ = ⟨3⟩, θ(⟨3⟩) is undefined, and δ(⟨3⟩, F) = ⟨3⟩, with δ undefined elsewhere. This graph consists of the single node ⟨3⟩ with an F-arc from the node to itself.
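The mapping φ is straightforward to realize directly from Definition 2.64. The following Python sketch is our own illustration (not code from the book); it assumes an AVM is given by its root tag, its set of arcs ⟨X, f, Y⟩, and the atoms associated with tags of atomic sub-AVMs.

```python
def phi(root_tag, arcs, atoms):
    """AVM-to-graph mapping of Definition 2.64 (sketch).

    root_tag: the tag of the entire AVM (tag(M))
    arcs:     the triples (X, f, Y) of ARCS(M)
    atoms:    a dict mapping a tag X to the atom a whenever assoc(M, X) = <X>a
    Returns (Q, root, delta, theta), with delta and theta as dicts.
    """
    Q = {root_tag} | {x for x, _, _ in arcs} | {y for _, _, y in arcs}
    delta = {(x, f): y for x, f, y in arcs}
    theta = dict(atoms)
    return Q, root_tag, delta, theta

# The cyclic AVM of Example 2.29, M = <3>[F: <3>[]]:
Q, root, delta, theta = phi(3, arcs={(3, 'F', 3)}, atoms={})
assert Q == {3} and delta[(3, 'F')] == 3 and theta == {}
```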
Theorem 2.67 For all AVMs M1, M2, M1 ⊑ M2 iff φ(M1) ⊑ φ(M2).

Proof Assume that M1 ⊑ M2 via a morphism h. Due to the correspondence between ARCS and δ (Definition 2.64), h is also a subsumption morphism establishing φ(M1) ⊑ φ(M2). The reverse direction is identical.
Corollary 2.68 For all AVMs M1, M2, M1 ⊑ M2 and M2 ⊑ M1 iff φ(M1) ∼ φ(M2).

This concludes the first direction of the correspondence between AVMs and feature graphs.

2.6.2 From feature graphs to AVMs

For the reverse direction, we define a mapping, η, from feature graphs to AVMs. As above, there should be a correspondence between nodes in the graph and variables in the AVM. But note that although the nodes of a feature graph are part of the definition of the graph, AVMs are defined over a universal set of variables. We must therefore predefine a set of variables, called V below, for each AVM M, to serve as Tags(M). In addition, AVMs exhibit a degree of freedom that is not present in feature graphs; this is due to the fact that multiple occurrences of the same variable can be explicated along with any of the instances of the variable (refer back to the discussion of renaming and Example 2.24). To overcome this difficulty, we first introduce the notion of arborescence.

Definition 2.69 (Arborescence) Given a feature graph A = ⟨Q, q̄, δ, θ⟩, a tree τ = ⟨Q, E⟩, where E ⊆ δ, is an arborescence of A if τ is a minimum spanning directed tree of A, rooted in q̄.

Informally, an arborescence of a given feature graph is a tree consisting of the nodes of the graph and the minimum number of arcs required for defining some shortest possible path from the root to each of the nodes in the graph. Since feature graphs are connected and each node is accessible from the root, such a tree always exists, but it is not necessarily unique. A simple algorithm for producing an arborescence scans the graph from the root, in some order, and marks each node by the length of the shortest path from the root to that node, marking additionally the incoming arcs to the node that are parts of minimum-length paths. Then, for each node with in-degree greater than 1, only a single marked arc is retained. See Example 2.30.

Definition 2.70 (Feature graph to AVM mapping) Let A = ⟨Q, q̄, δ, θ⟩ be a feature graph and let τ = ⟨Q, E⟩ be an arborescence of A. Let V ⊆ TAGS be a set of |Q| variables and I : Q → V be a one-to-one mapping. For each node q ∈ Q, define MIτ(q) as:
• if δ(q, f)↑ for all f ∈ FEATS and θ(q)↑, then MIτ(q) = I(q)[ ];
• if δ(q, f)↑ for all f ∈ FEATS and θ(q) = a, then MIτ(q) = I(q)a;
Example 2.30 Arborescence. Let A be the following graph: the root q3 has an F-arc to q1 and a G-arc to q2; q1 has an F1-arc to q7 (labeled a); q2 has a G1-arc to q9 and a G2-arc to q1.

Then the following trees are arborescences of A: one retains the arcs F (q3 to q1), G (q3 to q2), F1 (q1 to q7), and G1 (q2 to q9), dropping G2; the other retains G, G2 (q2 to q1), F1, and G1, dropping F.
Note that both trees have exactly four arcs, and hence both are minimum spanning trees of a graph of five nodes.
• if δ(q, fi) = qi for 1 ≤ i ≤ n, where n is the out-degree of q, then

MIτ(q) = I(q)[ f1: α1, ..., fn: αn ],

where αi = MIτ(qi) if ⟨q, fi, qi⟩ ∈ E, and αi = I(qi) otherwise.

The AVM expression of A with respect to an arborescence τ is ηIτ(A) = MIτ(q̄). See Examples 2.31 and 2.32.

Recall that the function η, mapping a feature graph to an AVM, is dependent on τ, the arborescence chosen for the graph. When a given feature graph A has several different arborescences, it has several different AVM expressions. However, these expressions are not arbitrarily different; in fact, they are all renamings of each other.

Lemma 2.71 Let A = ⟨Q, q̄, δ, θ⟩ be a feature graph, and let τ1 = ⟨Q, E1⟩, τ2 = ⟨Q, E2⟩ be two arborescences of A. Let V1, V2 ⊆ TAGS be
Example 2.31 Feature graph to AVM mapping. Let A be the graph of Example 2.30 (root q3, with arcs F: q3 → q1, G: q3 → q2, F1: q1 → q7 where θ(q7) = a, G1: q2 → q9, and G2: q2 → q1), and let τ = ⟨Q, E⟩ be the arborescence of A that retains the arcs F, G, F1, and G1.
Since Q = {q1, q2, q3, q7, q9}, we select a set of five variables from TAGS, say, V = {⟨1⟩, ⟨2⟩, ⟨3⟩, ⟨7⟩, ⟨9⟩}. We define a one-to-one mapping I from Q to V; here, the function that maps qi to ⟨i⟩. To compute the AVM expression of A (with respect to τ and I), we start with the sinks of the graph: nodes with no outgoing edges. There are two such nodes in A, namely q7 and q9. By the definition, MIτ(q9) = I(q9)[ ] = ⟨9⟩[ ], and MIτ(q7) = I(q7)a = ⟨7⟩a. Then, MIτ(q1) = I(q1)[F1: MIτ(q7)] = ⟨1⟩[F1: ⟨7⟩a]. More interestingly,

MIτ(q2) = I(q2)[G1: MIτ(q9), G2: I(q1)] = ⟨2⟩[G1: ⟨9⟩[ ], G2: ⟨1⟩].

Note how the value of ⟨1⟩ is not explicated, as the arc ⟨q2, G2, q1⟩ is not included in τ. Finally,

M = MIτ(q3) = I(q3)[F: MIτ(q1), G: MIτ(q2)] = ⟨3⟩[ F: ⟨1⟩[F1: ⟨7⟩a], G: ⟨2⟩[G1: ⟨9⟩[ ], G2: ⟨1⟩] ].
Observe that the result is a well-formed AVM, and that the reentrancy in A is reflected in M. Had we chosen the other arborescence of A (refer back to Example 2.30), the resulting AVM would have been:

⟨3⟩[ F: ⟨1⟩, G: ⟨2⟩[G1: ⟨9⟩[ ], G2: ⟨1⟩[F1: ⟨7⟩a]] ].
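Both the arborescence construction and the mapping η can be sketched in a few lines of Python (again our own illustration, under assumed representations, not code from the book): a breadth-first scan from the root keeps the first arc reaching each node (Definition 2.69), and η then recurses over the kept arcs, emitting a bare tag for arcs that were dropped (Definition 2.70). AVMs are rendered as pairs of a tag and either an atom, an empty dict, a feature dict, or None for a bare tag occurrence.

```python
from collections import deque

def arborescence(root, delta):
    """Select arcs forming a minimum spanning directed tree rooted in root."""
    kept, seen, queue = set(), {root}, deque([root])
    while queue:
        q = queue.popleft()
        for (p, f), t in delta.items():
            if p == q and t not in seen:
                seen.add(t)
                kept.add((p, f, t))
                queue.append(t)
    return kept

def eta(root, delta, theta, tree, I=lambda q: q):
    """Graph-to-AVM mapping; recursion follows only tree arcs, so it
    terminates even for cyclic graphs."""
    def m(q):
        feats = {f: t for (p, f), t in delta.items() if p == q}
        if not feats:
            return (I(q), theta.get(q, {}))          # atomic or empty AVM
        return (I(q), {f: (m(t) if (q, f, t) in tree else (I(t), None))
                       for f, t in feats.items()})
    return m(root)

# The graph of Example 2.31 (q7 is the only labeled node):
delta = {('q3', 'F'): 'q1', ('q3', 'G'): 'q2', ('q1', 'F1'): 'q7',
         ('q2', 'G1'): 'q9', ('q2', 'G2'): 'q1'}
tree = arborescence('q3', delta)
avm = eta('q3', delta, {'q7': 'a'}, tree)
```

When several shortest paths exist, different scan orders keep different arcs; the resulting AVMs differ exactly in which occurrence of a shared tag is explicated, that is, they are renamings of each other, as Lemma 2.71 states.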
Example 2.32 Feature graph to AVM mapping in the face of cycles. Let A be the graph whose nodes are q0 (the root) and q1, with an F-arc from q0 to q1 and a G-arc from q1 back to q0; its unique arborescence τ retains only the F-arc. Define V = {⟨0⟩, ⟨1⟩} and let I map qi to ⟨i⟩. MIτ(q1) = ⟨1⟩[G: ⟨0⟩], and hence
M = MIτ (q0 ) = 0 F : MIτ (q1 ) = 0 F : 1 G : 0 . two sets of |Q| variables and I1 , I2 : Q → V be two one-to-one mapping. Then ηIτ11 (A) and ηIτ22 (A) are renamings of each other. Proof ηIτ11 (A) and ηIτ22 (A) have exactly the same structure, and they differ from each other only in certain feature values, denoted αi in Definition 2.70. Observe that each such αi is either MIτ (qi ) or I(qi ); but MIτ (qi ) is always tagged by I(qi), so the only thing that distinguishes between the different AVMs is the choice of which instance of the variable I(qi ) should explicate the full AVM MIτ (qi ). AVMs that only differ in the choice of variable explication are renamings of each other, by Theorem 2.62. Furthermore, this is why any choice of an arborescence guarantees that the resulting AVM is well formed. Lemma 2.72 If A = Q, q¯, δ, θ is a feature graph and M = ηIτ (A) is any one of its AVM expressions, then for all q1 , q2 ∈ Q and f ∈ F EATS, δA (q1 , f ) = q2 iff I(q1 ), f, I(q2 ) ∈ A RCS (M ). Proof Assume that δA (q1 , f ) = q2 . Then by the third clause of Definition 2.70, ⎡
MIτ(q1) = I(q1)[ f1: α1, ..., fn: αn ],

where αi = MIτ(qi) if ⟨q1, fi, qi⟩ ∈ E, and αi = I(qi) otherwise. In any of these cases (independently of the choice of τ and its arcs E), ⟨I(q1), f, I(q2)⟩ ∈ ARCS(M). For the reverse direction, assume that ⟨I(q1), f, I(q2)⟩ ∈ ARCS(M). By definition of ARCS, assoc(M, I(q1)) = M1 and tag(fval(M1, f)) = I(q2). By Definition 2.70, fval(M1, fi) = αi and tag(αi) = I(qi); therefore, tag(fval(M1, f)) = I(q2) implies δA(q1, f) = q2.

Corollary 2.73 If A is a feature graph and M = ηIτ(A) is any one of its AVM expressions, then:
• Π(A) = Π(M);
• for every path π, pval(M, π) is an atomic AVM with the atom a iff valA(π) is the graph ⟨{q̄}, q̄, δ, θ⟩ for some node q̄, where δ is undefined and θ(q̄) = a; and
• for every π1, π2, π1 and π2 are reentrant in A iff they are reentrant in M.
Proof By induction on the length of paths, using Lemma 2.72.

Corollary 2.74 If A is a feature graph and M1 = ηI1τ1(A), M2 = ηI2τ2(A) are two of its AVM expressions, then M1 and M2 are renamings of each other.
Proof Immediate from Corollary 2.73 and the definition of renaming (see Definitions 2.55 and 2.63).

Theorem 2.75 For all feature graphs A1 = ⟨Q1, q̄1, δ1, θ1⟩, A2 = ⟨Q2, q̄2, δ2, θ2⟩, A1 ⊑ A2 iff for all arborescences τ1, τ2 of A1 and A2, respectively, and mappings I1, I2, ηI1τ1(A1) ⊑ ηI2τ2(A2).

Proof Assume that A1 ⊑ A2. Then by Lemma 2.13, Π(A1) ⊆ Π(A2). By Lemma 2.15, if π1 and π2 are reentrant in A1, then they are reentrant in A2. By Corollary 2.73, this implies that for all arborescences τ1, τ2 of A1 and A2, respectively, and mappings I1, I2, Π(ηI1τ1(A1)) ⊆ Π(ηI2τ2(A2)), and that if π1 and π2 are reentrant in ηI1τ1(A1), then they are reentrant in ηI2τ2(A2). Similarly, if pval(ηI1τ1(A1), π) is an atomic AVM with the atom a, then θ1(δ1(q̄1, π)) = a, and by the assumption of subsumption, also necessarily θ2(δ2(q̄2, π)) = a; hence, pval(ηI2τ2(A2), π) is an atomic AVM with the atom a. Hence ηI1τ1(A1) ⊑ ηI2τ2(A2).

Now assume that for some arborescences τ1, τ2 of A1 and A2, respectively, and mappings I1, I2, ηI1τ1(A1) ⊑ ηI2τ2(A2). Then there exists a function h : Tags(ηI1τ1(A1)) → Tags(ηI2τ2(A2)) as in Theorem 2.67. Define a function h′ : Q1 → Q2 by setting h′(q) = I2⁻¹(h(I1(q))). The proof that h′ is a subsumption morphism is similar to the first part of the proof of Theorem 2.67.

The above theorem allows us to abuse notation and refer to "the" AVM expression of some feature graph A, denoted η(A), when the concrete identity of the AVM is irrelevant, as in the following corollary.

Corollary 2.76 For all feature graphs A1, A2, A1 ∼ A2 iff η(A1) and η(A2) are renamings of each other.

Exercise 2.39. Prove that for every AVM M, η(φ(M)) is a renaming of M.

Exercise 2.40. Prove or refute: for every feature graph A, φ(η(A)) = A.

We conclude with Diagram 2.3, which depicts the four views of feature structures.
Diagram 2.3 AVMs, feature graphs, feature structures, and AFSs. The diagram relates the four views: AVMs M1, M2 are mapped to feature graphs A1, A2 by φ and back by η (with an arborescence τ); feature graphs are related by isomorphism (A1 ∼ A2); feature structures are the equivalence classes fs1 = [A1]∼ and fs2 = [A2]∼ (so A1 ∈ fs1 and A2 ∈ fs2); and AFSs F1 = Abs(A1), F2 = Abs(A2) are related to feature graphs by Abs and Conc.
This diagram adds AVMs to Diagram 2.2. For every AVM M, φ(M) is a feature graph; for every feature graph A, an arborescence τ, and a mapping I, ηIτ(A) is an AVM, and if τ1 and τ2 are two arborescences of A, with mappings I1, I2, then ηI1τ1(A) and ηI2τ2(A) are renamings of each other (Corollary 2.74). For all AVMs M1, M2, M1 ⊑ M2 iff φ(M1) ⊑ φ(M2) (Theorem 2.67), and for all feature graphs A1, A2, A1 ⊑ A2 iff η(A1) ⊑ η(A2) (Theorem 2.75). If M is an AVM and A = φ(M) is its feature graph image, then the feature structure image of M is [A]∼, the equivalence class of A with respect to isomorphism. Conversely, if fs is a feature structure, then its AVM depictions are {η(A, τ) | A ∈ fs and τ is an arborescence of A}. Note that there are two degrees of freedom in depicting a feature structure as an AVM, determined by the choice of feature graph representative A and its arborescence τ.
2.7 Feature structures in a broader context Feature structures are utilized by many grammatical formalisms to encode different kinds of linguistic information. They serve in representing phonological, morphological, syntactic, and semantic knowledge. But the use of feature structures is not limited to computational linguistics; indeed, they are present in other areas of computer science as well. A somewhat degenerate form of feature structures is utilized by many programming languages: Records (as in Pascal, known as structures in C) are essentially feature structures where every field is indexed by a feature. There are some major differences between records and feature structures, though. First, the notion of sharing that is central to feature structures is less significant for records. The values of record fields are not necessarily other records – different data types can be freely used; hence, transfer of values is mediated through explicit assignments, not unifications. Other operations that are defined for the
type of values of record fields are usually allowed. For example, if a field is numeric, arithmetic operations can be applied to it. In relation to this, unification-based formalisms usually do not allow such a diversity of operations to apply to feature structures as programming languages allow to records. In particular, arithmetic operations are usually not applicable to feature structures’ values, while they are very natural to numeric records’ fields. Logic programming languages such as Prolog manipulate first-order terms (FOTs), which can be viewed as a special case of feature structures. This trend is even further emphasized in languages such as LOGIN or LIFE. Moreover, these languages make extensive use of unification, which is defined in slightly different terms in the context of FOTs. However, there are some important differences between feature structures and FOTs. First and foremost, FOTs are essentially trees, with possibly shared leaves, whereas feature structures allow reentrancies to occur in every level of the structure. Feature structures can be cyclic, in contrast to (ordinary) FOTs. FOTs use positional encoding of argument structures, with no features. Finally, two FOTs are unifiable only if they have the same functor and the same arity, while two feature structures might be unifiable even if they have a different number of features, as we shall see in the next chapter. Further reading A mathematical characterization of feature structures and a discussion of their properties can be found in, for example, Johnson (1988) or Shieber (1992). Abstract feature structures are due to Moshier and Rounds (1987) and are discussed in detail in Moshier (1988). The representation of graphs as sets of paths is inspired by works on the semantics of concurrent programming languages, and the notion of fusion-closure is due to Emerson (1983). Contemporary formalisms also utilize a variant of feature structures called typed feature structures (TFS). Typed feature structures were first defined by Aït-Kaci (1984), who introduced the notion of Ψ-terms, which are viewed as representing partial information about features and their values. This line of work continued in the LOGIN system (Aït-Kaci and Nasr, 1986) and several successors, all based on a partially ordered set of types (referred to as sorts (Aït-Kaci et al., 1993; Aït-Kaci, 1993; Aït-Kaci and Podelski, 1993). TFSs are directly defined also by Moshier (1988), where the notion of abstract feature structures is first defined. A view of TFSs as a generalization of FOTs is presented in Carpenter (1991). A logical formulation of TFSs, starting with the basic notions and ending with a complete characterization of TFS-based grammars and their languages, is given by Carpenter (1992).
3 Unification
The previous chapter presented four different views of feature structures, with several correspondences among them. For each of the views, a subsumption relation was defined in a natural way. In this chapter we define the operation of unification for the different views. The subsumption relation compares the information content of feature structures. Unification combines the information that is contained in two (compatible) feature structures. We use the term "unification" to refer to both the operation and its result. In the sequel, whenever two feature structures are related, they are assumed to be over the same signature.

The mathematical interpretation of "combining" two members of a partially ordered set is to take the least upper bound of the two operands with respect to the partial order; in our case, subsumption. Indeed, feature structure unification is exactly that. However, since subsumption is antisymmetric for feature structures and AFSs but not for feature graphs and AVMs, a unique least upper bound cannot be guaranteed for all four views. We begin with feature graphs and define unification for this view first, extending it to feature structures in Section 3.3. We then (Section 3.2) provide a constructive definition of feature graph unification and prove that it corresponds to the least upper bound definition in a natural way. We also provide in Section 3.4 an algorithm for computing the unification of two feature graphs. AVM unification can then be defined indirectly, using the correspondence between feature graphs and AVMs. We define unification directly for AFSs in Section 3.5. We conclude this chapter with a discussion of generalization, a dual operation to unification.

3.1 Feature structure unification

Definition 3.1 (Consistent feature structures) Two feature structures fs1 and fs2 are consistent if they have an upper bound (with respect to subsumption), and inconsistent otherwise.
Definition 3.2 (Feature structure unification) If fs1 and fs2 are consistent, their unification, denoted fs1 ⊔̂ fs2, is their least upper bound with respect to subsumption.

Example 3.1 depicts two feature graphs that are considered representatives of their respective equivalence classes. The two feature structures are consistent; indeed, the feature graph on the right, when viewed as a representative of its equivalence class, is an upper bound of the two unificands with respect to subsumption. Furthermore, it is also the least upper bound of the two unificands: It contains the minimum information that must be present in any consistent feature structure that is subsumed by the unificands.

Example 3.1 Unification. A graph whose root q0 has a single NUM-arc to q1 (labeled sg), unified (⊔̂) with a graph whose root q3 has a single PERS-arc to q5 (labeled 3rd), yields a graph whose root q6 has a NUM-arc to q7 (labeled sg) and a PERS-arc to q8 (labeled 3rd).
If two feature structures have an upper bound, they have a (unique) least upper bound. We do not prove this here; we will provide a constructive proof of this proposition in the following section, where a constructive definition of feature graph unification is provided (Definition 3.6). See Theorem 3.7. Rather than elaborate on the unification of feature structures, we move on to the view of concrete feature graphs, where unification will be exemplified and discussed in detail.

3.2 Feature-graph unification

While the definition of unification as least upper bound is useful mathematically, it does not tell us how to compute the unification of two given feature structures. To this end, we now provide a constructive definition in terms of feature graphs, which will later lead to an algorithm for computing unification. For reasons that will be clear presently, we require that the two feature graphs be node-disjoint.

Definition 3.3 Let A = ⟨QA, q̄A, δA, θA⟩ and B = ⟨QB, q̄B, δB, θB⟩ with QA ∩ QB = ∅ be two feature graphs. Let '≈u' be the least equivalence relation on QA ∪ QB such that:
• q̄A ≈u q̄B;
• for every q1, q2 ∈ QA ∪ QB and f ∈ FEATS, if q1 ≈u q2, (δA ∪ δB)(q1, f)↓ and (δA ∪ δB)(q2, f)↓, then (δA ∪ δB)(q1, f) ≈u (δA ∪ δB)(q2, f).
The '≈u' relation (see Example 3.2) partitions the nodes of QA ∪ QB into equivalence classes such that both roots are in the same class, and if some feature is defined for two nodes in one class, then the two nodes this feature leads to are also in one (possibly different) class. Clearly, the number of equivalence classes (called the index of ≈u) is finite. The requirement that QA and QB be disjoint is essential here: We would want two nodes to be in the same equivalence class with respect to '≈u' only if they comply with the above definition; if we allowed a nonempty intersection of nodes, '≈u' could have been a different relation.

Exercise 3.1. Prove that '≈u' is well defined, that is, that such a relation always uniquely exists.
Exercise 3.2. Show the relation ‘≈’ for the following feature graphs, as was done in Example 3.2. F
F
H
G
Exercise 3.3 (*). Show two feature graphs A and B over disjoint nodes, such that |QA ∪ QB| > 2, for which the index of ≈u is 1.

Exercise 3.4 (*). Can you show two feature graphs A and B over disjoint nodes, for which the number of equivalence classes of ≈u is |QA| + |QB|?

Exercise 3.5 (*). Prove or show a counterexample: for every two feature graphs (with disjoint nodes) A, B, [q̄A]≈u = {q̄A, q̄B}.

Exercise 3.6. Let A, B be feature graphs (with disjoint nodes) and qA ∈ QA, qB ∈ QB nodes such that qA ≈u qB. Prove that there exists a path π such that qA = δA(q̄A, π) and qB = δB(q̄B, π).

Definition 3.4 (Type-respecting relation) A binary relation '≈' over the nodes of two feature structures QA ∪ QB is said to be type respecting iff for every node q ∈ QA ∪ QB, if (θA ∪ θB)(q)↓ and (θA ∪ θB)(q) = a, then for every node q′ such that q ≈ q′, q′ is a sink and either (θA ∪ θB)(q′)↑ or (θA ∪ θB)(q′) = a.

When is '≈u' not type respecting? The above condition can hold for a node q ∈ QA ∪ QB only if (θA ∪ θB)(q)↓; that is, q must be a sink in either A or B
Example 3.2 The '≈u' relation. Let A and B be the following feature graphs. In A, the root q0A has an F-arc to q1A and a G-arc to q2A, and q1A has a NUM-arc to q2A, which is labeled sg. In B, the root q0B has an F-arc to q1B, and q1B has a PERS-arc to q2B, which is labeled 3rd.
The relation '≈u' can be computed as follows: First, q0A ≈u q0B because both are roots. Since the same feature, F, leaves two nodes that are related by ≈u, namely, the two roots, the nodes it leads to are also related, so we have q1A ≈u q1B. Since '≈u' is required to be a minimal (least) equivalence relation, and since there is no need to add more pairs to it (except for the trivial reflexive pairs), we obtain the following equivalence classes:
{q0A, q0B}, {q1A, q1B}, {q2A}, and {q2B}.
As another example, let A and B be the following feature graphs: A has a root q0A with an F-arc to q1A and a G-arc to q2A; B has a root q0B whose F-arc and G-arc both lead to the same node, q1B.
The first item of Definition 3.3 relates the two roots: q0A ≈ q0B . Then, due to u this pair, the second item of the definition relates q1A ≈ q1B , through the F-arc; u and q2A ≈ q1B , through the G-arc. Completing the obtained relation to a (least) equivalence relation, we get, in addition to the trivial reflexive and symmetric u u pairs, also q1A ≈ q2A , because both are related to q1B and ‘≈’ is required to be transitive.
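The closure required by Definition 3.3 is naturally computed with a union–find partition, as the following Python sketch illustrates (our own code, not the book's; δA and δB are assumed to be dictionaries from ⟨node, feature⟩ pairs to nodes, over disjoint node sets).

```python
def unification_classes(root_a, delta_a, root_b, delta_b):
    """Compute the equivalence classes of Definition 3.3."""
    parent = {}

    def find(q):
        parent.setdefault(q, q)
        while parent[q] != q:
            parent[q] = parent[parent[q]]    # path halving
            q = parent[q]
        return q

    def union(p, q):
        p, q = find(p), find(q)
        if p != q:
            parent[q] = p
        return p != q                        # True iff something was merged

    delta = {**delta_a, **delta_b}
    union(root_a, root_b)                    # the two roots are equivalent
    changed = True
    while changed:                           # close under shared features
        changed = False
        for (p, f), t1 in delta.items():
            for (q, g), t2 in delta.items():
                if f == g and find(p) == find(q) and union(t1, t2):
                    changed = True
    return find

# Hypothetical mini-example: both roots carry an F-arc, so the targets merge too.
find = unification_classes('a0', {('a0', 'F'): 'a1'},
                           'b0', {('b0', 'F'): 'b1'})
assert find('a1') == find('b1')
```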
(recall that only sinks can be marked by atoms). The type respecting condition requires that all nodes that are equivalent to q be sinks, either unmarked or marked by the same atom. Since this is the only requirement, the relation is not type respecting if it maps two nodes, one of which is a marked sink and the other of which is either a non-sink or a sink with a different label, to the same u equivalence class. As we presently show, a non-type-respecting ‘≈’ is the only source for unification failure. Exercise 3.7 (*). Show two node-disjoint feature graphs, A and B, such that u q¯A is a sink but q¯B is not, and for which ‘≈’ is type respecting. Lemma 3.5 If A and B have a common upper bound C, such that A C through the morphism hA and B C through the morphism hB , and if qA ∈ QA u and qB ∈ QB are such that qA ≈ qB , then hA (qA ) = hB (qB ). Proof Immediate from Lemma 2.12 (Page 48) and Exercise 3.6.
Definition 3.6 (Feature-graph unification) Let A and B be two feature graphs such that QA and QB are disjoint. The unification of A and B, denoted A B, u is defined only if ‘≈’ is type respecting, in which case it is the feature graph Q, q¯, δ, θ (over the given signature), where: • Q = {[q] u | q ∈ (QA ∪ QB )}; ≈ • q¯ = [¯ q1 ] u (= [¯ q2 ] u );
≈
• δ([q] u , f ) = ≈
• θ([q] u ) = ≈
≈
[q ] u if there exists q ∈ [q] u s. t. (δA ∪ δB )(q , f ) = q ≈ ≈ ↑ if (δA ∪ δB )(q , f )↑ for all q ∈ [q] u ≈
(θA ∪ θB )(q ) if there exists q ∈ [q] u s. t. (θA ∪ θB )(q )↓ ≈ ↑ if (θA ∪ θB )(q )↑ for all q ∈ [q] u ≈
u
If ≈ is not type respecting, A and B are inconsistent and their unification is undefined. u
In words, Q is the set of equivalence classes of Q1 ∪ Q2 with respect to ≈, and the root of the result is the equivalence class of the roots of the arguments. δ is defined for [q] u only if δA or δB are defined for some q ∈ [q] u ; the value it ≈ ≈ assigns to ([q] u , F) is the equivalence class of the value either δA or δB assigns ≈
u
to (q , F); this value is independent of q , by the definition of ‘≈’. Finally, since u ‘≈’ is type respecting, the definition of θ is independent of q . See Example 3.3. To see that the result of unification is indeed a feature graph, observe that Q, q¯, δ, θ is connected because both A and B are connected; it is finite since
Cambridge Books Online © Cambridge University Press, 2012
90
3 Unification
Example 3.3 Unification. u
Refer again to Example 3.2. First, note that ‘≈’ is type respecting: θ is defined for two nodes, namely q2A and q2B , but since these nodes are not in the same equivalence class, the requirement holds vacuously. Taking the equivalence u classes of ‘≈’ to be the nodes in the unification result, we obtain the following graph: F NUM
G
{q2A } sg
{q1A , q1B }
{q0A , q0B }
PERS
{q2B } 3rd
both A and B are (hence, the number of equivalence classes is finite); and θ u labels only sinks since ≈ is type respecting. Several properties of unification should be noted. First, unification is indeed an information combination operator, in the sense that the result of the unification is subsumed by both operands (Example 3.4). Unification is absorbing: If A B then A B = B (Example 3.5). Reentrancies play a special role in unification: They can cause two consistent values to coincide (Example 3.6). Example 3.4 Unification combines information. q0
NUM
q1 sg
=
q3 PERS
NUM
q6
PERS
q5 3rd
Exercise 3.8 (*). Compute: F
q0
F
q1
Cambridge Books Online © Cambridge University Press, 2012
q2
q7 sg q8 3rd
3.2 Feature-graph unification
91
Example 3.5 Unification is absorbing. q0
NUM
q1 sg
q3
NUM
PERS
q4 sg
=
q6
NUM
PERS
q5 3rd
q7 sg q8 3rd
Example 3.6 Unification with reentrancy. q0
SUBJ
OBJ
q6
SUBJ OBJ
qs qo
q9
NUM
PERS NUM
PERS
q1 sg
q3
SUBJ OBJ
q4
=
q2 3rd
q7 sg q8 3rd
We now return to the interpretation of unification as least upper bound with respect to subsumption. Recall that feature graph subsumption is not antisymmetric; hence, the uniqueness of a least upper bound cannot be guaranteed. However, we show in Theorem 3.7 that the result of unifying two feature graphs produces some minimal upper bound of the two unificands, if they are consistent. Furthermore, the result of the unification algorithm is unique up to isomorphism: If two feature graphs are least upper bounds of A and B, then they are isomorphic. Theorem 3.7 Let A and B be two feature graphs with disjoint nodes, and let u u ‘≈’ be as in Definition 3.3. If ≈ is not type respecting, then A and B do not have an upper bound (with respect to feature graph subsumption). If it is type respecting, then C = A B (Definition 3.6) is a minimal upper bound of A and B with respect to feature-graph subsumption. u
Proof Assume that A and B are inconsistent. Then ≈ is not type respecting, that is, there exist (without loss of generality) qA ∈ QA and qB ∈ QB such u that qA ≈ qB , θA (qA ) = a, and either qB is not a sink or θB (qB ) = a. Assume
Cambridge Books Online © Cambridge University Press, 2012
92
3 Unification
toward a contradiction that an upper bound C exists for A and B. Then fs(A) fs(C) through hA and B C through hB . By Lemma 3.5, hA (qA ) = hB (qB ). By definition of subsumption, hA (qA ) is a sink and θA (hA (qA )) = a since θA (qA ) = a. In the same way, either hB (qB ) is not a sink, or it is a sink with θB (qB ) = a. Each alternative contradicts with hA (qA ) = hB (qB ). Now assume that C = A B. First, we show that A C (B C is shown similarly). Let hA : QA → QC be the morphism hA (q) = [q] u for every q ∈ QA . ≈ We show that hA is a subsumption morphism. • hA (¯ qA ) = [¯ qA ] u (definition of hA ) = q¯C (definition of C) ≈ • Suppose δA (q, f )↓.
hA (δA (q, f )) = [δA (q, f )] u (definition of hA ) ≈ = δC ([q] u , f ) (definition of δC ) ≈ = δC (hA (q), f ) (definition of hA ) u
• Suppose θA (q)↓. Then, since ≈ is type-respecting by assumption, [q] u is a ≈
sink and θC ([q] u ) = θA (q). Hence, θA (q) = θC (hA (q)) by the definition of ≈ hA . Thus, hA is a subsumption morphism and hence A C and similarly B C. Next, we show that C is indeed a least upper bound of A and B. Let D be any upper bound of A and B. We have to show that C D. By assumption, there exist subsumption morphisms hAD : QA → QD , hBD : QB → QD . Let h = QA ∪ QB → QD be defined by hAD (q) if q ∈ QA h(q) = hBD (q) if q ∈ QB (h is well defined by the assumption that the nodes of A and B are disjoint). h
h
Define a relation ≈ ⊆ (QA ∪ QB )2 by q ≈ q iff h(q) = h(q ) (note that both h
h(q) and h(q ) are in QD ). Clearly, ≈ is an equivalence relation. Furthermore, if h
h
q ≈ q , then for every F ∈ F EATS, (δA ∪ δB )(q, F ) ≈ (δA ∪ δB )(q , F ) (if both are defined because both hAD and hBD are subsumption morphisms and commute h
with δA , δB , respectively. Similarly, if q ≈ q , then (θA ∪θB )(q) = (θA ∪θB )(q ) u (if both are defined) because both equal θD (h(q)) = θD (h(q )). Because ≈ is the least equivalence relation on QA ∪ QB satisfying those two properties, we h
u
u
get that ≈ is an extension of ≈, that is, for every q, q ∈ QA ∪ QB , if q ≈ q , h
then q ≈ q . ˆ ˆ : QC → QD by h([q] We now define the morphism h u ) = h(q) (in QD ); this is u
≈
representative independent, since q ≈ q implies, as mentioned, h(q) = h(q ).
Cambridge Books Online © Cambridge University Press, 2012
3.3 Feature structure unification revisited
93
ˆ qA ] u ) = h(¯ • h([¯ qA ) = q¯D , because hAD (¯ qA ) = hBD (¯ qB ) = q¯D , as hAD , hBD ≈
are subsumption morphisms. • Suppose δC ([q] u , f )↓. ≈
ˆ C ([q] u , f )) = h([(δ ˆ h(δ (definition of δC ) A ∪ δB )(q, f )] u ) ≈ ≈ = h((δA ∪ δB )(q, f )) (definition of ˆh) = δD (h(q), f ) (because h is a subsumption morphism and commutes with δ) ˆ = δD (h([q] (definition of ˆh) u ), f ) ≈
•
ΘC ([q] u ) = (ΘA ∪ ΘB )(q) (definition of ΘC ) ≈ = θ(h(q)) (since h is a subsumption morphism u and ≈ is type respecting) ˆ ˆ = θ(h([q] (definition of h) u )) ≈
ˆ is a subsumption morphism, and thus A B = C D; hence, C is a Hence, h least upper bound of A and B. By Theorem 2.11, this proof can be summarized by the commutativity of the following diagram: D ˆ h
hAD
hBD
C A
hA
hB
B
3.3 Feature structure unification revisited Theorem 3.7 connects feature-graph unification with feature-structure unificaˆ fs2 , simply compute A = A1 A2 , where tion. In order to compute fs = fs1 A1 ∈ fs1 and A2 ∈ fs2 , and take fs = [A]∼ . Theorem 3.8 For all feature graphs A1 , A2 , if A = A1 A2 then [A]∼ = ˆ [A2 ]∼ . [A1 ]∼ Proof Immediate from Theorem 3.7 and the definition of feature-structure subsumption. Since A = A1 A2 , then by Theorem 3.7, A1 A, A2 A, and for all A3 such that A1 A3 and A2 A3 , A A3 . By Definition 2.27,
Cambridge Books Online © Cambridge University Press, 2012
94
3 Unification
ˆ ∼ , [A2 ]∼ [A] ˆ ∼ , and for all feature-structures fs such that [A1 ]∼ fs ˆ [A1 ]∼ [A] ˆ [a]∼ fs. ˆ Hence, [A]∼ = [A1 ]∼ ˆ [A2 ]∼ . and [A2 ]∼ fs, The following exercises investigate some of the properties of feature-structure unification. We abuse the ‘’ and ‘’ notation in the following and use them for feature-structure subsumption and unification, respectively. Exercise 3.9 (*). Prove: Feature-structure unification is commutative: fsA fsB = fsB fsA . Exercise 3.10. Prove: Unification is monotone: If fsA fsB , then for every fsC , fsA fsC fsB fsC (if both exist). Exercise 3.11. Prove or refute: Unification is associative: fsA (fsB fsC ) = (fsA fsB ) fsC . Exercise 3.12. Let A = {fs1 , . . . , fsn } be a finite set of feature structures. Define A directly, such that A = fs1 · · · fsn . Exercise 3.13. Prove or refute: If fsA and fsB contain no reentrancies, then so does fsA fsB (if it is defined). Exercise 3.14 (*). Prove or refute: If fsA and fsB are acyclic, then so is fsA fsB (if it is defined). In the previous chapter we defined an alternative view to feature graphs, namely attribute-value matrices. While it could have been possible to define unification directly on AVMs, this is not necessary. Since we showed a one-to-one mapping between feature graphs and AVMs, established through the functions φ and η,unification is indirectly defined for AVMs by letting M1 M2 be η(φ(M1 )φ(M2 )). In the remainder of this book, we will sometimes use AVMs to depict feature graphs, implicitly taking advantage of this correspondence. 3.4 Unification as a computational process Unification, as we have defined it here, turns out to be very efficient to implement. Several algorithms for feature-structure unification have been proposed (see the discussion at the end of this chapter). In this section we present a simple algorithm, based directly on the definition, for unifying two feature graphs. The algorithm uses two operations, known as union and find, to manipulate equivalence classes.
Cambridge Books Online © Cambridge University Press, 2012
3.4 Unification as a computational process
95
Feature graphs are implemented using the following data structure: Each node q is a record (or structure) with the fields: label, specifying θ(q) (if defined); and feats, specifying δ, which is a list of pairs, consisting of features and pointers to nodes. A node q is a sink if and only if q.feats is empty, and only such nodes are labeled. If θ(q)↑, the field label is nil. The functions is_labeled and is_sink receive a node record and return true if and only if the node is labeled or has no outgoing edges, respectively. To implement the union-find algorithm, an additional field, class, is added to nodes. It is used to point to the equivalence class of the node. Upon initialization of the algorithm, for every node q, q.class points back to q, indicating that each node is a separate equivalence class. But this should not be always the case. During the computation, the class pointers are reset. Example 3.7 depicts the representation of an example feature graph. Example 3.7 Internal representation of a feature graph. F
q0A label : nil feats : F, G class :
G
q1A label : nil feats : H class :
H
q2A
sg
label : sg feats : class :
Assume that the find operation receives a node and returns a unique canonical representative of its equivalence class; assume that union receives two (representatives of) classes and merges them by setting the equivalence class of all members of the second class to that of the first. The unification algorithm is listed in Example 3.8, and its operation is demonstrated in Examples 3.9 and 3.10. The set S contains pairs of nodes that should be merged into one equivalence class. Initially, there is exactly one such pair, namely, the two roots. The main loop of the algorithm is executed as long as there are pairs of nodes to unify. First, a pair is selected from S and the equivalence classes of its members are found. If both nodes belong to the same equivalence class (this is expressed as identical representatives for the classes of both nodes), nothing has to be done: this pair is removed from S. If they do not belong to the same class yet, they are checked for compatibility. Recall that a labeled node cannot be unified neither with a nonsink nor with a node bearing a different label. In any of these cases, failure is reported. In any other case, the equivalence classes of the two nodes
Cambridge Books Online © Cambridge University Press, 2012
96
3 Unification
Example 3.8 Unification algorithm.
Input: Two feature graphs A and B
Output: If fs(A) and fs(B) are unifiable, a representative of fs(A) ⊔̂ fs(B); otherwise fail

S ← {⟨q̄A, q̄B⟩};
while S ≠ ∅ do
    select a pair ⟨q1, q2⟩ ∈ S;
    S ← S \ {⟨q1, q2⟩};
    q1 ← find(q1);
    q2 ← find(q2);
    if q1 ≠ q2 then
        if (is_labeled(q1) and not is_sink(q2))
           or (is_labeled(q2) and not is_sink(q1))
           or (is_labeled(q1) and is_labeled(q2) and q1.label ≠ q2.label) then
            fail
        else
            union(q1, q2);
            if (is_sink(q1) and is_sink(q2) and is_labeled(q2)) then
                q1.label ← q2.label;
            foreach ⟨f, q⟩ ∈ q2.feats do
                if there is some ⟨f, p⟩ ∈ q1.feats then
                    S ← S ∪ {⟨p, q⟩}
                else add ⟨f, q⟩ to q1.feats
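The listing translates almost line for line into executable code. The following Python rendering is our own sketch (not the book's implementation); nodes carry label, feats, and a union–find pointer, as described in Example 3.7 (the field is called klass below because class is a reserved word). Like the algorithm above, it is destructive.

```python
class Node:
    def __init__(self, label=None, feats=None):
        self.label = label          # theta(q), or None
        self.feats = feats or {}    # feature -> Node
        self.klass = self           # equivalence-class pointer

def find(q):
    while q.klass is not q:
        q.klass = q.klass.klass     # path halving
        q = q.klass
    return q

def unify(root_a, root_b):
    """Destructive feature-graph unification; raises ValueError on failure."""
    pairs = [(root_a, root_b)]
    while pairs:
        q1, q2 = pairs.pop()
        q1, q2 = find(q1), find(q2)
        if q1 is q2:
            continue
        if (q1.label and q2.feats) or (q2.label and q1.feats) or \
           (q1.label and q2.label and q1.label != q2.label):
            raise ValueError('unification failure')
        q2.klass = q1                                   # union
        if not q1.feats and q2.label:                   # both sinks, q2 labeled
            q1.label = q2.label
        for f, q in q2.feats.items():
            if f in q1.feats:
                pairs.append((q1.feats[f], q))
            else:
                q1.feats[f] = q
    return find(root_a)
```

The agenda (a collection of node pairs still to be merged) and the failure conditions mirror the listing above.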
are merged using union. If both nodes are sinks, the merged class receives the correct label, if there is one. Then, all the features of the two nodes are merged. If both nodes share a common feature, then the nodes these features point to must be unified, and hence they are added to S. Upon termination of the algorithm, the original inputs are modified. The result is obtained by considering the equivalence class of q¯1 as a root and computing the graph that is accessible from it. Such algorithms, which modify their inputs, are called destructive. Exercise 3.15 (*). Draw a graph representation of the unification result of Example 3.9. Lemma 3.9 The unification algorithm terminates.
Cambridge Books Online © Cambridge University Press, 2012
3.4 Unification as a computational process
97
Example 3.9 Unification algorithm. Assume that the algorithm is applied to the following graphs: n1
n2
n3
label : nil feats : F, G class :
label : nil feats : H class :
label : a feats : class :
n4 label : nil feats : F class :
The first step of the algorithm sets S to be {n1, n4 }. Since this is the only pair in S, it is selected in the next step. Since the class fields of all nodes are set to point to the node itself upon initialization, q1 and q2 of the algorithm are set to n1 and n4, respectively. None of them is labeled, so no failure is detected. The two nodes are unioned, which results in n4.class pointing to n1. Since n4 has a feature, f (pointing to itself), which occurs in n1, where its value is n2, the pair n2, n4 is added to S. Finally, the pair n1, n4 is removed from S. When the first iteration of the algorithm completes, S = {n1, n4 } and the graphs are: n1
n2
n3
label : nil feats : F, G class :
label : nil feats : H class :
label : a feats : class :
n4 label : nil feats : F class :
Proof The algorithm consists of one main loop, executed while S is not empty (the inner loop on the features of q2 is bounded by the number of features and
Cambridge Books Online © Cambridge University Press, 2012
98
3 Unification
Example 3.10 Unification algorithm (continued). Once more, S is a singleton and the pair n2, n4 is selected. The representatives for this pair are n2 and n1, which once again, are not labeled. This time, the classes of n2 and n1 are unioned, which results in n1.class pointing to n2. Going over the features of n1, the pairs f, n2 and g, n2 are added to n2. Finally, n2, n4 is removed from S. The resulting graphs are: n1
n2
n3
label : nil feats : F, G class :
label : nil feats : H, F, G class :
label : a feats : class :
n4 label : nil feats : F class :
Since S is now empty, the algorithm terminates.
hence finite). In every iteration a pair in S is selected; the body of the loop is only executed if the two nodes are not yet equivalent, in which case they become equivalent by the union operation. In any case, the pair is removed from S. So in every iteration, either a pair is removed from S and no pairs are added, or two nodes are unified. The number of pairs to unify is finite; hence, the body of the loop can only be executed a finite number of times, and the algorithm terminates. Lemma 3.10 The unification algorithm computes A B. Proof We suppress a formal proof of correctness but point to the following observations: Given that find and union indeed perform their tasks, the algorithm u simply computes the equivalence classes of nodes with respect to the ≈ relation, directly implementing Definition 3.6. In particular, note that the root of the result is the equivalence class of the roots of the operands; that a feature is added to an equivalence class if it is a feature of any of the members in this class; and that the labels on the equivalence classes are set according to the definition.
Cambridge Books Online © Cambridge University Press, 2012
3.5 AFS unification
99
What is the time complexity of the algorithm? Note that every iteration of the loop merges two (different) equivalence classes into one. If the inputs are feature graphs consisting of fewer than n nodes, then the number of equivalence classes in the result is bounded by 2n. Thus, union can be executed at most 2n times. There are two calls to find in each iteration, so the number of find operations is bounded by 4n. With the appropriate data structures for implementing equivalence classes it can be proven that O(n) operations of union and find can be done in (O(c(n) × n), where c(n) is the inverse Ackerman function, which can be considered a constant for realistic n-s. Therefore, the unification algorithm is quasi-linear. As noted above, the algorithm is destructive in the sense that the input feature graphs are modified. This might pose a problem: The inputs might be necessary for further uses; even worse, when the unification fails, the inputs might be lost. To overcome the problem, the inputs to unification must be copied before they are unified, and copying of graphs is an expensive operation (it has been claimed that in some systems for manipulating unification-based grammars using destructive unification algorithms, about half the time is spent on copying). As an alternative solution, there exist nondestructive unification algorithms whose theoretical complexity – and actual run time – is not worse than the algorithm we have presented. Because this is merely a matter of implementation, we do not present such an algorithm here, but we give some references for information on it at the end of this chapter.
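The "appropriate data structures" are the classical union–find optimizations (union by rank with path compression), which give the inverse-Ackermann bound cited above. A minimal, self-contained sketch (ours, not the book's):

```python
class UnionFind:
    def __init__(self):
        self.parent, self.rank = {}, {}

    def find(self, x):
        self.parent.setdefault(x, x)
        self.rank.setdefault(x, 0)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])   # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        if self.rank[rx] < self.rank[ry]:                # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return rx
```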
3.5 AFS unification We now move to the AFS view; we wish to define unification directly for AFSs. It is tempting to think that just as AFS subsumption is defined using set inclusion, AFS unification can be simply defined using set union, but the simple solution, namely, a pointwise set union of two abstract feature structures, will not do, simply because the result is not guaranteed to be an AFS at all. In fact, the union might not even be a pre-AFS: If the two AFSs are abstractions of inconsistent feature structures (for which unification fails), their pointwise union is not well defined. Consider two AFSs, F1 and F2 , where Π1 = Π2 = {}, ≈1 = ≈2 = {(, )}, Θ1 () = a, whereas Θ2 () = b. A pointwise union will yield a nonfunctional Θ. Even when the union yields a pre-AFS, however, it is not necessarily an AFS, as Example 3.11 demonstrates. Exercise 3.16 (*). Show two AFSs, F1 and F2 , such that a pointwise set union of F1 and F2 is a pre-AFS that is not fusion-closed.
Example 3.11 Consider the AFSs F1 , where Π1 = {, F , H }, ≈1 = {(, ), (F , F ), (H , H ), (F , H ), (H , F )} and F2 , with Π2 = {, F , G }, ≈2 = {(, ), (F , F ), (G , G ), (F , G ), (G , F )}. The concrete graphs are:
F
H
F
G
The union of the paths results in the set {, F , H , G }, and the union of the ≈ relations yields the set {(, ), (F , F ), (H , H ), (F , H ), (H , F ), (G , G ), (F , G ), (G , F )}. Still, this is not the expected result because the resulting relation is not an equivalence. In particular, it is not transitive since the pairs (G , H ) and (H , G ) are missing.
Exercise 3.17. Show two AFSs, F1 and F2 , such that a pointwise set union of F1 and F2 is a pre-AFS in which ≈ is an equivalence relation; however, Θ does not respect the equivalence. To solve the problems exemplified by Example 3.11 and the above exercises, a few precautions are employed. We define below three closure operations, each operating on a pre-AFS and extending it in some way. A pre-AFS F2 extending F1 if it either contains more paths, or more equivalences, or defines Θ for more paths. Definition 3.11 (Closure operations I) Let F = ΠF , ΘF , ≈F be a pre-AFS. Define Π0 = ΠF and ≈0 =≈F . Then, for all i ≥ 1 define: • Πi = Πi−1 ∪ {π α | there exist π, πα, π ∈ Πi−1 such that π ≈i−1 π }; • ≈i =≈i−1 ∪{(π α, πα) | there exist π, πα, π ∈ Πi−1 such that π ≈i−1 π };
Then Cl(F ) = i≥0 Πi , ΘF , i≥0 ≈i .
Theorem 3.12 If F is a pre-AFS, then Cl(F ) is the least fusion-closed pre-AFS that extends F . Proof First, we show that Cl(F ) is fusion-closed: Let π, π , πα ∈ ΠCl(F ) be such that (π, π ) ∈≈Cl(F ) . Then, there exists i ≥ 0 such that π, π , πα ∈ Πi and (π, π ) ∈≈i . Hence, π α ∈ Πi+1 and (π α, πα) ∈≈i+1 . Therefore π α ∈ ΠCl(F ) and (π α, πα) ∈≈Cl(F ) .
Cambridge Books Online © Cambridge University Press, 2012
3.5 AFS unification
101
Now, we need to show that Cl(F ) is the least fusion-closed pre-AFS that extends F . Let F be a pre-AFS that extends F and that is fusion-closed. We show by induction on i that for all i ≥ 0, Πi ⊆ ΠF and ≈i ⊆≈F . Base: for i = 0, Π0 = ΠF ⊆ ΠF and ≈0 =≈F ⊆≈F because F extends F . Assume that the hypothesis holds for all i such that i ≤ n. We first show that Πn ⊆ ΠF : Let π1 ∈ Πn . Then, π1 ∈ Πn−1 or π1 ∈ {π α | there exist π, πα, π ∈ Πn−1 such that π ≈n−1 π }. If π1 ∈ Πn−1 then by the induction hypothesis π1 ∈ ΠF . Otherwise, there exist π, πα, π ∈ Πn−1 such that π ≈n−1 π and π1 = π α. By the induction hypothesis, π, πα, π ∈ ΠF and π ≈F π . Since F is fusion-closed, π1 = π α ∈ ΠF . Next, we show that ≈n ⊆≈F : If π1 ≈n π2 , then π1 ≈n−1 π2 or (π1 , π2 ) ∈ {(π α, πα) | there exist π, πα, π ∈ Πn−1 such that π ≈n−1 π }. If π1 ≈n−1 π2 , then by the induction hypothesis π1 ≈F π2 . Otherwise, there exist π, πα, π ∈ Πn−1 such that π ≈n−1 π , π1 = π α and π2 = πα. By the induction hypothesis, π, πα, π ∈ ΠF and π ≈F π . Since F is fusion-closed, it follows that (π α, πα) = (π1 , π2 ) ∈≈F . Hence, the induction hypothesis holds for n. Lemma 3.13 Let F = ΠF , ΘF , ≈F be a pre-AFS such that ≈F is symmetric. If π1 , π2 ∈ ΠCl(F ) and π1 ≈Cl(F ) π2 , then there exist π1 , π2 ∈ ΠF and α ∈ P aths such that π1 = π1 α, π2 = π2 α and π1 ≈F π2 . Proof We prove by induction on i that for all i ≥ 0, if π1 , π2 ∈ Πi and π1 ≈i π2 , then there exist π1 , π2 ∈ ΠF and α ∈ P aths such that π1 = π1 α, π2 = π2 α and π1 ≈F π2 . Base: For i = 0, Π0 = ΠF and ≈0 =≈F . If π1 ≈F π2 , then by taking π1 = π1 , π2 = π2 and α = , we satisfy the claim. Assume that the hypothesis holds for all i such that i ≤ n. Let π1 , π2 ∈ Πn and π1 ≈n π2 . There are two possible cases: 1. π1 ≈n−1 π2 : Then, π1 , π2 ∈ Πn−1 and by the induction hypothesis the claim holds. 2. π1 ≈n−1 π2 : Then, there exist π2 , π2 α, π1 ∈ Πn−1 such that π2 ≈n−1 π1 , π1 = π1 α and π2 = π2 α. By the induction hypothesis, there exist π1 , π2 , β such that π1 = π1 β, π2 = π2 β and π1 ≈F π2 . Hence, we obtain that π1 = π1 α = π1 βα, π2 = π2 α = π2 βα and π1 ≈F π2 . Hence the induction hypothesis holds for n.
Theorem 3.14 If F is prefix-closed and ≈F is symmetric, then Cl(F ) is prefixclosed and ≈Cl(F ) is symmetric. Proof We first show that Cl(F ) is prefix-closed: We show by induction on i that for all i ≥ 0, Πi is prefix-closed:
Cambridge Books Online © Cambridge University Press, 2012
102
3 Unification
Base: For i = 0 Π0 = ΠF which is prefix-closed. Assume that the hypothesis holds for all i such that i ≤ n. Let πα ∈ Πn . Then, πα ∈ Πn−1 or πα ∈ {π α | there exist π, πα, π ∈ Πn−1 such that π ≈n−1 π }. If πα ∈ Πn−1 , then by the induction hypothesis, π ∈ Πn−1 ⊆ Πn . Otherwise, there exist π, π , π α ∈ Πn−1 such that π ≈n−1 π ; in particular, π ∈ Πn−1 ⊆ Πn . Hence, the induction hypothesis holds for n. We now show that ≈Cl(F ) is symmetric: Assume (π1 , π2 ) ∈≈Cl(F ) . From Lemma 3.13, it follows that π1 = π1 α, π2 = π2 α and π2 ≈F π1 . Since ≈F is symmetric, it follows that π1 ≈F π2 . Hence, π1 , π2 ∈ ΠCl(F ) and π1 ≈Cl(F ) π2 . Since Cl(F ) is fusion-closed, it follows that (π2 , π1 ) = (π2 α, π1 α) ∈≈Cl(F ) . Definition 3.15 (Closure operations II) Let F = ΠF , ΘF , ≈F be a pre-AFS. Then, Eq(F ) = ΠF , ΘF , ≈ , where ‘≈’ is the reflexive-transitive closure of ≈F . Theorem 3.16 If F is a pre-AFS in which ≈F is symmetric, then Eq(F ) is the least extension of F to a pre-AFS in which ≈ is an equivalence relation. Proof Follows immediately from the fact that given a symmetric relation R, the reflexive-transitive closure of R is known to be the least extension of R which is an equivalence relation. Theorem 3.17 If F is prefix-closed and fusion-closed, and ≈F is symmetric, then Eq(F ) is prefix-closed and fusion-closed. Proof Since ΠEq(F ) = ΠF , clearly, Eq(F ) is prefix-closed. We need to show that Eq(F ) is fusion-closed: Let π, π , πα ∈ ΠEq(F ) such that π ≈Eq(F ) π . Since ΠEq(F ) = ΠF , π, π , πα ∈ ΠF . ΠF is fusion-closed, and therefore π α ∈ ΠF = ΠEq(F ) . We also need to show that (π α, πα) ∈≈Eq(F ) . There are two possible cases: 1. (π, π ) ∈≈F : Since F is fusion-closed, (π α, πα) ∈≈F , and since ≈F ⊆≈Eq(F ) , (π α, πα) ∈≈Eq(F ) . / F : Then, (π, π ) is added to ≈F during the reflexive-transitive 2. (π, π ) ∈≈ closure. We again observe two possible cases: 1. π = π and then since ≈Eq(F ) is an equivalence relation and πα ∈ ΠEq(F ), (π α, πα) ∈≈Eq(F ) . 2. There exist π1 , ..., πn , all different, such that π = π1 , πn = π and (π, π1 ), (π1 , π2 ), ..., (πn−1 , πn ), (πn , π ) ∈≈F
Cambridge Books Online © Cambridge University Press, 2012
3.5 AFS unification
103
Since F is fusion-closed it follows that (πα, π1 α), (π1 α, π2 α), ..., (πn−1 α, πn α), (πn α, π α) ∈≈F Since ≈F ⊆≈Eq(F ) it follows that (πα, π1 α), (π1 α, π2 α), ..., (πn−1 α, πn α), (πn α, π α) ∈≈Eq(F ) Since ≈Eq(F ) is transitive, it follows that (πα, π α) ∈≈Eq(F ) , and since ≈Eq(F ) is symmetric, (π α, πα) ∈≈Eq(F ) . It is possible to define T y(F ) for the general case where F is a pre-AFS. However, this definition is complicated and requires an inductive definition (similar to the definition of Cl(F )). We only need to define T y(F ) for cases in which ≈F is an equivalence relation. Apparently, in these cases the definition is much simpler, and therefore we define it only for these cases. Definition 3.18 (Closure operations III) Let F = ΠF , ΘF , ≈F be a pre-AFS in which ≈F is an equivalence relation. Ty(F) is defined only if the following hold: 1. For all π1 , π2 ∈ ΠF such that π1 ≈F π2 , if ΘF (π1 )↓ and ΘF (π2 )↓ then ΘF (π1 ) = ΘF (π2 ). 2. For all π ∈ ΠF , if ΘF (π)↓, then there exists no path π α ∈ ΠF such that π ≈F π and α = . If T y(F ) is defined, then T y(F ) = ΠF , Θ, ≈F , where: ⎧ ΘF (π) ⎪ ⎪ ⎨ ΘF (π ) Θ(π) = ⎪ ⎪ ⎩ undefined
ΘF (π)↓ ΘF (π)↑ and there exists π ∈ ΠF such that π ≈F π and ΘF (π )↓ otherwise
Notice that if F is also fusion-closed, then the second condition that is required so that T y(F ) be defined can be simplified to requiring that for all π ∈ ΠF , if ΘF (π)↓, then there exists no path πα ∈ ΠF such that α = . Theorem 3.19 If F is a pre-AFS in which ≈F is an equivalence relation, T y(F ) is the least extension of F to a pre-AFS in which Θ respects the ≈ relation and is defined only for maximal paths. Proof Assume that T y(F ) is defined. Observe first that if T y(F ) is defined, then ΘT y(F ) is well defined. We first show that ΘT y(F ) respects the ≈T y(F )
Cambridge Books Online © Cambridge University Press, 2012
104
3 Unification
relation: Let π1 , π2 ∈ ΠT y(F ) be such that π1 ≈T y(F ) π2 . By the definition of T y(F ), π1 , π2 ∈ ΠF and π1 ≈F π2 . There are four possible cases: 1. ΘF (π1 )↓ and ΘF (π2 )↓: T y(F ) is defined, and therefore ΘF (π1 ) = ΘF (π2 ). Hence, ΘT y(F ) (π1 ) = ΘF (π1 ) = ΘF (π2 ) = ΘT y(F ) (π2 ). 2. ΘF (π1 )↓ and ΘF (π2 )↑: T y(F ) is defined, and ≈F is an equivalence relation, and therefore for all π ∈ ΠF such that π ≈F π2 and ΘF (π)↓, ΘF (π) = ΘF (π1 ). Hence, ΘT y(F ) (π2 ) = ΘF (π1 ) = ΘT y(F ) (π1 ). 3. ΘF (π1 )↑ and ΘF (π2 )↓: This case is symmetric to the previous case. 4. ΘF (π1 )↑ and ΘF (π2 )↑: If ΘT y(F ) (π1 )↓, then there exists π ∈ ΠF such that π ≈F π1 , ΘF (π)↓ and ΘT y(F ) (π1 ) = ΘF (π). Since ≈F is an equivalence relation, π ≈F π2 , and since T y(F ) is defined, it follows that ΘT y(F ) (π2 ) = ΘF (π) = ΘT y(F ) (π1 ). Otherwise, ΘT y(F ) (π1 )↑. Then, for all π ∈ ΠF such that π ≈ π1 , ΘF (π)↑. Since ≈F is an equivalence relation and T y(F ) is defined, for all π ∈ ΠF such that π ≈ π2 , ΘF (π)↑. Hence, both ΘT y(F ) (π1 ) and ΘT y(F ) (π1 ) are undefined. Next, we show that ΘT y(F ) is defined only for maximal paths: Let π ∈ ΠT y(F ) be such that ΘT y(F ) (π)↓, and let πα ∈ ΠT y(F ) . By the definition of T y(F ), π, πα ∈ ΠF . There are two possible cases: 1. ΘF (π)↓: Then, since π ≈F π (≈F is an equivalence relation) and since T y(F ) is defined, it follows that α = ; 2. ΘF (π)↑: Then, there exists π ∈ ΠF such that π ≈F π , ΘF (π )↓ and ΘT y(F ) (π) = ΘF (π ). Since T y(F ) is defined, it follows that α = . We need to show that T y(F ) is the least extension of F : Let F be a pre-AFS that extends F , such that ΘF respects the equivalence and is defined only for maximal paths. Let π ∈ ΠT y(F ) be such that ΘT y(F ) (π)↓. If ΘF (π)↓, then since F extends F , ΘF (π)↓ and ΘF (π) = ΘF (π) = ΘT y(F ) (π). Otherwise, there exists π ∈ ΠF such that π ≈F π and ΘF (π )↓. Since F extends F , π ≈F π and ΘF (π )↓ and ΘF (π ) = ΘF (π ). Since F respects the ≈F relation, ΘF (π) = ΘF (π ) = ΘF (π ) = ΘT y(F ) (π). Theorem 3.20 If F = Π, Θ, ≈ is prefix-closed and fusion-closed and ‘≈’ is an equivalence relation, then so is T y(F ). Proof Follows immediately from the fact that ΠT y(F ) = ΠF and ≈T y(F ) =≈F . Now the unification of two AFSs can be defined in terms of set union and the closure operations (see Examples 3.12 and 3.13):
Cambridge Books Online © Cambridge University Press, 2012
3.5 AFS unification
105
Definition 3.21 (AFS unification) The unification of two AFSs F1 = ⟨Π1, Θ1, ≈1⟩, F2 = ⟨Π2, Θ2, ≈2⟩, denoted F1 ⊔ F2, is Ty(Eq(Cl(F3))), where F3 = ⟨Π3, Θ3, ≈3⟩ and
• Π3 = Π1 ∪ Π2;
• ≈3 = ≈1 ∪ ≈2;
• Θ3(π) = Θ1(π) if Θ2(π)↑; Θ3(π) = Θ2(π) if Θ1(π)↑; Θ3(π) = Θ2(π) if Θ1(π) = Θ2(π); and Θ3(π) is undefined otherwise.
The unification is undefined if there exists a path π in Π1 ∩ Π2 such that Θ1(π)↓, Θ2(π)↓ and Θ1(π) ≠ Θ2(π), or if Ty(Eq(Cl(F3))) is undefined.
Depict F1 and F2 in the AVM view. Then compute F1 F2 . While AFS unification is defined in completely different terms from featurestructure unification, they are basically the same operation: AFS unification also produces the least upper bound of its operands with respect to AFS subsumption.
Cambridge Books Online © Cambridge University Press, 2012
106
3 Unification
Example 3.12 AFS unification. Let F1 = ⟨Π1, Θ1, ≈1⟩ and F2 = ⟨Π2, Θ2, ≈2⟩ be two AFSs, depicted in the original also as graphs: in F1, the root has an F-arc and a G-arc, and the G-value has an H-arc leading back to the F-value; in F2, the F-arc and the G-arc lead to the same node.
Π1 = {ε, F, G, G H}; ≈1 = {⟨ε, ε⟩, ⟨F, F⟩, ⟨G, G⟩, ⟨G H, G H⟩, ⟨F, G H⟩, ⟨G H, F⟩}; Θ1 is undefined always.
Π2 = {ε, F, G}; ≈2 = {⟨ε, ε⟩, ⟨F, F⟩, ⟨G, G⟩, ⟨F, G⟩, ⟨G, F⟩}; Θ2 is undefined always.
To compute F1 ⊔ F2, we first compute F3 = ⟨Π3, Θ3, ≈3⟩, where:
• Π3 = Π1 ∪ Π2 = {ε, F, G, G H};
• ≈3 = {⟨ε, ε⟩, ⟨F, F⟩, ⟨G, G⟩, ⟨G H, G H⟩, ⟨F, G⟩, ⟨G, F⟩, ⟨F, G H⟩, ⟨G H, F⟩};
• Θ3 is undefined always.
We now compute Cl(F3). Since F ≈3 G and G H ∈ Π3, also F H ∈ Π_{Cl(F3)} and F H ≈_{Cl(F3)} G H. Since F ≈3 G H and F H ∈ Π_{Cl(F3)}, also G H H ∈ Π_{Cl(F3)} and F ≈_{Cl(F3)} G H H. This process can go on infinitely, resulting in the set of paths G H⁺, all of which are equivalent to the path F. After we apply Eq to Cl(F3), the result is Eq(Cl(F3)) = ⟨Π4, Θ4, ≈4⟩, where:
• Π4 = {ε, F H^i, G H^j | i, j ≥ 0};
• ≈4 = {⟨π1, π2⟩ | π1 = π2, or π1 = F π3 and π2 = G π3 for some π3 ∈ H*};
• Θ4 is undefined always.
Note that there is no need to apply Ty, as the resulting Θ4 already (vacuously) respects the equivalence. Depicted as a graph, the result has a single non-root node, reached from the root by both an F-arc and a G-arc, and carrying an H-loop.
Theorem 3.23 If F1 and F2 are unifiable and F = F1 ⊔ F2, then F is the least upper bound of F1 and F2 with respect to AFS subsumption.

Proof If F1 and F2 are unifiable and F = F1 ⊔ F2, then F1 ⊔ F2 = Ty(Eq(Cl(F3))), where F3 = ⟨Π3, Θ3, ≈3⟩ (Definition 3.21). Since Π3 = Π1 ∪ Π2, Π1 ⊆ Π3; similarly, ≈1 ⊆ ≈3. As for Θ, Θ3(π) = Θ1(π) if the latter is
Example 3.13 AFS unification. Now, let F1 = ⟨Π1, Θ1, ≈1⟩ and F2 = ⟨Π2, Θ2, ≈2⟩ be the following two AFSs (as graphs, F1 has an F-arc leading to the atom a and a G-arc leading to the atom b, while in F2 the F-arc and the G-arc lead to the same unmarked node):
Π1 = {ε, F, G}; ≈1 = {⟨ε, ε⟩, ⟨F, F⟩, ⟨G, G⟩}; Θ1(F) = a and Θ1(G) = b.
Π2 = {ε, F, G}; ≈2 = {⟨ε, ε⟩, ⟨F, F⟩, ⟨G, G⟩, ⟨F, G⟩, ⟨G, F⟩}; Θ2 is undefined always.
Similarly to the previous example, we obtain that Eq(Cl(F3)) = ⟨Π4, Θ4, ≈4⟩, where:
• Π4 = {ε, F, G};
• ≈4 = {⟨π1, π2⟩ | π1 = π2, or π1 = F and π2 = G}.
This time, however, Θ4(F) = a and Θ4(G) = b. Since F ≈4 G, Ty(F4) is undefined, and hence F1 ⊔ F2 is undefined.
defined. In cases where Θ1(π) ≠ Θ2(π), the unification fails. Observe that the closure operations only extend F3, never remove paths or reentrancies. Hence, F1 ⊑̂ F (and, similarly, F2 ⊑̂ F). We now have to show that if F4 is such that F1 ⊑̂ F4 and F2 ⊑̂ F4, then F ⊑̂ F4. Since F1 ⊑̂ F4, Π1 ⊆ Π4, ≈1 ⊆ ≈4, and if Θ1(π)↓, then Θ4(π)↓ and Θ1(π) = Θ4(π). Since F is the least extension to an AFS for which these three conditions hold, we obtain that F ⊑̂ F4.

Theorem 3.24 The following diagram is commutative:
(Diagram: A1 and A2 are unified to A = A1 ⊔ A2; their abstractions Abs(A1) and Abs(A2) are unified to F = Abs(A) = Abs(A1) ⊔ Abs(A2).)
In other words, for all feature graphs A1, A2, if A = A1 ⊔ A2, then Abs(A) = Abs(A1) ⊔ Abs(A2).

Proof A direct corollary of Theorem 2.42.

Similarly, as a corollary of Theorem 2.43, the dual case also holds.

Theorem 3.25 For all AFSs F1, F2, if F = F1 ⊔ F2, then Conc(F) = Conc(F1) ⊔ Conc(F2).
3.6 Generalization

Unification is an information-combining operator: when two feature structures are compatible, that is, contain no contradicting information, their unification can be informally seen as a union of the information both structures encode. Sometimes, however, a dual operation is useful, analogous to the intersection of the information encoded in feature structures. This operation, which is much less frequently used in computational linguistics, is referred to as anti-unification, or generalization. This operation plays a major role in grammar learning, a topic not further considered here. In this section we define generalization and present an algorithm for computing it. We begin with feature graphs and then move to feature structures and AFSs.
Defined over pairs of feature structures, generalization (denoted ⊓̂) is the operation that returns the most specific (or least general) feature structure that is still more general than both arguments. In terms of the subsumption ordering, generalization is the greatest lower bound (glb) of two feature structures. It is important to note that unlike unification, generalization is always defined, and its computational procedure can never fail. For every pair of feature structures there exists a feature structure that is more general than both: In the most extreme case, pick the empty feature structure, which is more general than every other structure.

Definition 3.26 (Feature-structure generalization) The generalization (or anti-unification) of two feature structures fs1 and fs2, denoted fs1 ⊓̂ fs2, is the greatest lower bound of fs1 and fs2.

The generalization of fs1 and fs2 is the most specific feature structure that subsumes both fs1 and fs2. Note that it is unique: Two feature structures always have a lower bound (because there is a least element, the empty feature structure); and the glb is unique because feature-structure subsumption is antisymmetric. Example 3.14 exemplifies some properties of generalization (using the AVM notation for presentation). To compute the generalization, we move to feature graphs. Under this view, a glb of two graphs can only be unique up to isomorphism.

Definition 3.27 (Feature-graph generalization) The generalization of two feature graphs A and B, denoted A ⊓ B, is a feature graph C such that
• C ⊑ A;
• C ⊑ B;
• for every C′ such that C′ ⊑ A and C′ ⊑ B, C′ ⊑ C.
Example 3.14 Generalization. Following are several examples that shed light on different aspects of the generalization operation.
Generalization reduces information:
  [NUM: sg] ⊓̂ [PERS: third] = [ ]
Different atoms are inconsistent:
  [NUM: sg] ⊓̂ [NUM: pl] = [NUM: [ ]]
Generalization is restricting:
  [NUM: sg] ⊓̂ [NUM: sg, PERS: third] = [NUM: sg]
Empty feature structures are zero elements:
  [AGR: [NUM: sg]] ⊓̂ [ ] = [ ]
Reentrancies can be lost:
  [F: 1 [NUM: sg], G: 1 ] ⊓̂ [F: [NUM: sg], G: [NUM: sg]] = [F: [NUM: sg], G: [NUM: sg]]
See Example 3.15. In what follows we prove that, unlike unification, the generalization of any two feature graphs always exists, and provide an algorithm for computing it. Since we use feature graphs only as representatives of their equivalence classes, we can always select two graphs that are node-disjoint, as we do below.

Theorem 3.28 Let A = ⟨QA, q̄A, δA, θA⟩ and B = ⟨QB, q̄B, δB, θB⟩ be two feature graphs such that QA and QB are disjoint. Then C = A ⊓ B, where C = ⟨QC, q̄C, δC, θC⟩, is such that:
• QC = {q ∈ QA × QB | there exist q0, . . . , qn ∈ QA × QB, with qi = ⟨q_i^A, q_i^B⟩ for 0 ≤ i ≤ n, such that q0 = ⟨q̄A, q̄B⟩, qn = q, and for all 0 < i ≤ n there exist f ∈ FEATS and j < i such that q_i^A = δA(q_j^A, f) and q_i^B = δB(q_j^B, f)};
• q̄C = ⟨q̄A, q̄B⟩;
Example 3.15 Generalization. Let A and B be the following feature graphs (given as a figure in the original): in A, the root q1 has an F-arc to q2 and a G-arc to q4, and q2 has an H-arc to q3, which is marked a; in B, the root q5 has both an F-arc and a G-arc to q6, and q6 has an H-arc to q7, which is unmarked.
Computing C = A ⊓ B, we start with QC: This is obtained by starting with q0 = ⟨q̄A, q̄B⟩ = ⟨q1, q5⟩. Since an F-arc connects the root of A with q2 and the root of B with q6, the pair ⟨q2, q6⟩ is also in QC. For the same reason (due to the G-arc), we add ⟨q4, q6⟩. Finally, since an H-arc leaves q2 in A and q6 in B, and ⟨q2, q6⟩ ∈ QC, also ⟨q3, q7⟩ ∈ QC. Summing up, QC = {⟨q1, q5⟩, ⟨q2, q6⟩, ⟨q4, q6⟩, ⟨q3, q7⟩}.
Then, δC is induced by δA and δB: an F-arc connects ⟨q_i^A, q_i^B⟩ with ⟨q_j^A, q_j^B⟩ iff such an arc connects q_i^A with q_j^A in A and an F-arc connects q_i^B with q_j^B in B. In our example, the arcs are as specified here. Finally, θC is undefined for all q ∈ QC because no node in B is specified for θ. The resulting feature structure is the equivalence class of the graph whose root ⟨q1, q5⟩ has an F-arc to ⟨q2, q6⟩ and a G-arc to ⟨q4, q6⟩, with an H-arc from ⟨q2, q6⟩ to ⟨q3, q7⟩.
• δC(⟨qA, qB⟩, f) = ⟨δA(qA, f), δB(qB, f)⟩ if δA(qA, f)↓ and δB(qB, f)↓, and is undefined otherwise;
• θC(⟨qA, qB⟩) = θA(qA) if θA(qA) = θB(qB) (implying both are defined), and is undefined otherwise.

Proof First, we show that C is a well-defined feature graph. From the definition of QC and δC, QC is clearly finite and contains only reachable nodes. Let qC = ⟨qA, qB⟩ ∈ QC be such that θC(qC)↓. From the definition of θC, it follows that θA(qA)↓ and θB(qB)↓. Hence, for every f ∈ FEATS, δA(qA, f)↑ and δB(qB, f)↑. By the definition of δC, δC(qC, f)↑, too; hence, qC is indeed a sink in C.
Next, we show that C is a lower bound of A and B, that is, C ⊑ A and C ⊑ B. Let hA(⟨qA, qB⟩) = qA and hB(⟨qA, qB⟩) = qB. We show that both are subsumption morphisms.
1. hA(q̄C) = hA(⟨q̄A, q̄B⟩) = q̄A by the definitions of q̄C and hA. Similarly for hB.
2. Assume that δC(⟨qA, qB⟩, f)↓. By the definition of δC, both δA(qA, f)↓ and δB(qB, f)↓. Therefore: hA(δC(⟨qA, qB⟩, f)) = hA(⟨δA(qA, f), δB(qB, f)⟩) (definition of δC) = δA(qA, f) (definition of hA) = δA(hA(⟨qA, qB⟩), f) (definition of hA). Similarly for commuting with δB.
3. Assume that θC(⟨qA, qB⟩)↓. By the definition of θC, both θA(qA)↓ and θB(qB)↓. Then, θA(hA(⟨qA, qB⟩)) = θA(qA) and θB(hB(⟨qA, qB⟩)) = θB(qB), by the definitions of hA, hB.
Hence, both hA and hB are subsumption morphisms, establishing that C is a lower bound of A and B.
Finally, we show that C is the greatest lower bound of A and B. Consider any lower bound C′ of A and B. We show that C′ ⊑ C. Let h′A : QC′ → QA and h′B : QC′ → QB be the corresponding subsumption morphisms. Define h′ : QC′ → QC by h′(q) = ⟨h′A(q), h′B(q)⟩. We show that h′ is a subsumption morphism (from C′ to C).
1. h′(q̄C′) = ⟨h′A(q̄C′), h′B(q̄C′)⟩ (definition of h′) = ⟨q̄A, q̄B⟩ (since h′A, h′B are subsumption morphisms) = q̄C (definition of q̄C).
2. Assume δC′(q, f)↓. Then h′(δC′(q, f)) = ⟨h′A(δC′(q, f)), h′B(δC′(q, f))⟩ (definition of h′) = ⟨δA(h′A(q), f), δB(h′B(q), f)⟩ (since h′A, h′B are subsumption morphisms) = δC(⟨h′A(q), h′B(q)⟩, f) (definition of δC) = δC(h′(q), f) (definition of h′).
3. Assume that θC′(q)↓. By the definition of h′, θC(h′(q)) = θC(⟨h′A(q), h′B(q)⟩). As both h′A and h′B are subsumption morphisms, we have θA(h′A(q)) = θB(h′B(q)) = θC′(q). Hence, θC(h′(q)) = θC′(q).
Thus, C is indeed the greatest lower bound of A and B.

Exercise 3.19. Let A and B be the feature graphs depicted in the original figure (A has nodes q0A–q3A and B has nodes q0B–q4B, with arcs labeled AGR, SUBJ, NUM, and PERS and atoms pl and third). Compute their generalization, C = A ⊓ B. Specify explicitly QC, q̄C, δC, and θC, and draw the result as a graph.

Exercise 3.20. Prove or refute: Generalization is idempotent: For all A, A ⊓ A = A.
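The construction of Theorem 3.28 is effectively a product construction and can be computed directly. The following is a minimal sketch in Python; the dictionary-based encoding of feature graphs and the function name are our own assumptions, not the book's notation:

    def generalize(root_a, delta_a, theta_a, root_b, delta_b, theta_b):
        """Compute C = A ⊓ B following Theorem 3.28.

        delta_a / delta_b map (node, feature) to a node; theta_a / theta_b map
        sink nodes to atoms.  The nodes of C are pairs of nodes of A and B that
        are reachable by a common sequence of features.
        """
        root_c = (root_a, root_b)
        nodes_c, agenda = {root_c}, [root_c]
        delta_c, theta_c = {}, {}
        features = {f for (_, f) in delta_a} | {f for (_, f) in delta_b}
        while agenda:
            qa, qb = agenda.pop()
            if qa in theta_a and theta_a[qa] == theta_b.get(qb):
                theta_c[(qa, qb)] = theta_a[qa]
            for f in features:
                if (qa, f) in delta_a and (qb, f) in delta_b:
                    target = (delta_a[(qa, f)], delta_b[(qb, f)])
                    delta_c[((qa, qb), f)] = target
                    if target not in nodes_c:
                        nodes_c.add(target)
                        agenda.append(target)
        return nodes_c, root_c, delta_c, theta_c

    # The graphs of Example 3.15:
    delta_a = {('q1', 'F'): 'q2', ('q1', 'G'): 'q4', ('q2', 'H'): 'q3'}
    theta_a = {'q3': 'a'}
    delta_b = {('q5', 'F'): 'q6', ('q5', 'G'): 'q6', ('q6', 'H'): 'q7'}
    theta_b = {}
    print(generalize('q1', delta_a, theta_a, 'q5', delta_b, theta_b))

Run on the graphs of Example 3.15, this sketch yields exactly the four node pairs, the F-, G-, and H-arcs, and the everywhere-undefined θC computed there.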
Exercise 3.21. Prove or refute: Generalization is commutative: For all A, B, A ⊓ B = B ⊓ A.

Exercise 3.22. Prove or refute: Generalization is absorbing: For all A, B, if A ⊑ B then A ⊓ B = A.

Exercise 3.23. Prove or refute: Generalization is associative: For all A, B and C, A ⊓ (B ⊓ C) = (A ⊓ B) ⊓ C.

Exercise 3.24. Prove that if A ∼ A′ and B ∼ B′, then (A ⊓ B) ∼ (A′ ⊓ B′).

Generalization can be defined for AFSs using simple set operations as follows:

Definition 3.29 (AFS generalization) The generalization of two AFSs F1 and F2 is F3 = ⟨Π3, Θ3, ≈3⟩, where:
• Π3 = Π1 ∩ Π2;
• ≈3 = ≈1 ∩ ≈2;
• Θ3(π) = Θ2(π) if Θ1(π) = Θ2(π) (both are defined and equal), and is undefined otherwise.

Note that contrary to the definition of AFS unification, for AFS generalization the closure operations are not needed. This is a corollary of the following property, whose proof we leave as an exercise:

Lemma 3.30 If F1 and F2 are AFSs then their generalization is an AFS.

Exercise 3.25. Prove the above lemma.

Lemma 3.31 AFS generalization commutes with feature graph abstraction; if A and B are feature graphs and C is their generalization, then Abs(C) = Abs(A) ⊓ Abs(B).

Exercise 3.26. Prove the above lemma.
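Because no closure operations are needed, AFS generalization is particularly easy to compute. A minimal sketch, using the same hypothetical set/dictionary encoding of AFSs as in the earlier unification sketch:

    def generalize_afs(afs1, afs2):
        """AFS generalization (Definition 3.29): plain set intersection."""
        paths1, theta1, eq1 = afs1
        paths2, theta2, eq2 = afs2
        paths3 = paths1 & paths2
        eq3 = eq1 & eq2
        theta3 = {path: theta1[path]
                  for path in paths3
                  if path in theta1 and theta1.get(path) == theta2.get(path)}
        return paths3, theta3, eq3

Unlike build_f3 above, this sketch can never fail: conflicting atoms simply leave the corresponding path unmarked.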
Further reading

The unification operation was originally defined for first-order terms (FOTs) by Robinson (1965). A variety of algorithms for term unification have since been presented, of which the most popular is that of Martelli and Montanari (1982), which was extended to cyclic terms by Jaffar (1984).
An important difference between FOTs and feature structures has to do with the result of the unification operation. For FOTs, unification results in a substitution, which, when applied to both unificands, yields the same FOT. In feature structures, however, unification returns a new feature structure, as if the substitution has already been applied to one of the unificands. In fact, there is no clear analog to the notion of substitution in the domain of feature structures; defining such an analog would make an interesting research topic. The origin of graph unification is due to Kay (1985), while an adaptation of the general unification algorithms for feature structures was done by Aït-Kaci (1984) and Moshier (1988). The observation that unification-based systems spend most of their time copying is due to Karttunen and Kay (1985); a nondestructive unification algorithm is presented by Wroblewski (1987). The union-find algorithm (and a discussion of its complexity) can be found in Aho et al. (1974). The term “generalization” was coined by Shieber (1992), referring to the greatest lower bound of two feature structures. Bayer and Johnson (1995) use it to account for some coordination phenomena (see Section 5.9). Buszkowski and Penn (1990) discuss the use of generalization for grammar learning. Other than that, we are not aware of more proposals to use generalization for linguistic applications.
4 Unification grammars
Feature structures are the building blocks of unification grammars, as they serve as the counterpart of the terminal and nonterminal symbols in CFGs. However, in order to define grammars and derivations, one needs some extension of feature structures to sequences thereof. In this chapter we present multirooted feature structures that are aimed at capturing complex, ordered information and are used for representing rules and sentential forms of unification grammars; we motivate this extension in Section 4.1. In parallel to the exposition of feature structures in Chapter 2, we start by defining multirooted feature graphs (Section 4.2), a natural extension of feature graphs. We then abstract away from the identities of nodes in the graphs in two ways: by defining multirooted feature structures, which are equivalence classes of isomorphic multirooted feature graphs, and by defining abstract multirooted structures (Section 4.3). Finally, we define the concept of multi-AVMs (Section 4.4), which are an extension of AVMs, and show how they correspond to multirooted graphs. The crucial concept of unification in context is discussed in Section 4.5. We then utilize this machinery for defining unification grammars. We begin by defining (sentential) forms and grammar rules (Section 4.6). Then, we define the concept of derivation for unification grammars, providing a means for defining the languages generated by such grammars (Section 4.7). We explore derivation trees in Section 4.8. The move from context-free grammars to unification grammars is motivated by linguistic considerations (the need to provide better generalizations and more compact representations). But it carries with it a major formal, mathematical change: The expressive power of unification grammars is much greater than that of CFGs, and consequently computational processing with unification grammars has very different properties from processing with CFGs. These issues, however, are deferred to Chapter 6.
Throughout the chapter, the formal material is expressed in terms of abstract feature structures and abstract multirooted structures; this is done to simplify, as much as possible, the mathematical formulation of some of the definitions and proofs. In particular, this allows us to use unification in context, which is only defined for the AMRS view, as the vehicle for defining derivations. However, in the interest of readability, we depict AFSs and AMRSs as AVMs and multi-AVMs, respectively. The mappings among the various views can always be used to convert one representation to another.

4.1 Motivation

We introduced feature structures in Chapter 2 in order to extend CFGs to a more expressive formalism based on feature structures. A naïve attempt to augment context-free rules with feature structures could have been to add to each rule a sequence of feature structures, with an element for each element in the CF skeleton. However, there is a certain difficulty with this view: rules cannot be thought of simply as sequences of feature structures. The reason lies in possible reentrancies among elements of such sequences, or in other words, among different categories in a single rule. As a motivating example, consider a rule intending to account for agreement on number between the subject and the verb of English sentences:
  [CAT: s] → [CAT: np, NUM: 4 ] [CAT: vp, NUM: 4 ]
In this rule, the feature CATegory stands for the grammatical category of phrases, a counterpart of the nonterminal symbol of CFGs. Also, the NUM feature of the NP is shared with that of the VP. If we were to represent this rule by a sequence of three feature structures, this reentrancy would be lost. As we shall see, the difference between reentrant and copied values is crucial when rules are used for defining the language of a grammar or for parsing. The difficulty in extending feature structures to sequences thereof lies in the possible sharing of information among different elements of the intended sequence. This sharing takes different forms across the various views we discuss in this chapter. In the case of multi-AVMs, the scope of variables (i.e., tags) is extended from a single AVM to a sequence. In multirooted feature graphs this is expressed by the possibility of two paths, leaving two different roots, leading to the same node. Finally, in the case of abstract multirooted structures, the reentrancy relation must account for possible reentrancies across elements.
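The point about reentrant versus copied values can be made concrete with an entirely hypothetical dictionary representation: if the rule were stored as a plain sequence of three independent feature structures, instantiating the NUM value of the np element would not propagate to the vp element, whereas with a genuinely shared value it would.

    # Copied values: the two NUM slots are unrelated objects.
    np_copy = {'CAT': 'np', 'NUM': [None]}
    vp_copy = {'CAT': 'vp', 'NUM': [None]}

    # Shared (reentrant) value: both elements point at the same object.
    num = [None]
    np_shared = {'CAT': 'np', 'NUM': num}
    vp_shared = {'CAT': 'vp', 'NUM': num}

    np_copy['NUM'][0] = 'sg'
    np_shared['NUM'][0] = 'sg'
    print(vp_copy['NUM'][0])    # None -- the information was not shared
    print(vp_shared['NUM'][0])  # 'sg' -- instantiating the subject's NUM also
                                #         instantiates the verb's NUM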
How can this problem be solved? Basically, there are two methods for representing rules (and the sentential forms based upon them). One approach is to use (single) feature structures for representing "sequences" of feature structures. Dedicated features (e.g., 1, 2, . . .) are used to encode substructures of a feature structure, and the order among them. The main advantage of this approach is that the existing apparatus of feature structures suffices for representing rules as well. However, there are several drawbacks to this solution: The dedicated features are required to have special meaning; the set FEATS is required to be ordered; and, as there is no bound on the size of the right-hand side of rules, the number of such dedicated features that are needed is unbounded, in contradiction to our assumption that the set FEATS is finite.
A different solution to this problem can be based on the observation that feature structures can be used to represent lists. No additional mechanisms or definitions are required: A list can be simply represented as a feature structure having two features, named, say, FIRST and REST – just the way lists are represented in the programming language LISP (where the features are dubbed CAR and CDR, respectively). The value of the FIRST feature is the first element of the list; the value of REST is either the atom elist, denoting an empty list, or is itself a list. Of course, this representation assumes that the elements of the list are representable as feature structures. The list ⟨1, 2, 3⟩ (assuming a signature whose atoms include the numbers 1, 2, 3) can be represented as in Example 4.1.

Example 4.1 Feature structure encoding of a list.
  [FIRST: 1, REST: [FIRST: 2, REST: [FIRST: 3, REST: elist]]]
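A minimal sketch of this encoding (our own dictionary representation, not part of the formalism) and of path-based access to the i-th element:

    ELIST = 'elist'

    def encode_list(items):
        """Encode a Python list as nested FIRST/REST feature structures."""
        fs = ELIST
        for item in reversed(items):
            fs = {'FIRST': item, 'REST': fs}
        return fs

    def nth(fs, i):
        """Return the i-th element: the value of the path REST^(i-1) FIRST."""
        for _ in range(i - 1):
            fs = fs['REST']
        return fs['FIRST']

    fs = encode_list([1, 2, 3])   # the list of Example 4.1
    print(nth(fs, 2))             # 2, via the path <REST, FIRST>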
Notice that list traversal is analogous to computing the value of a path: accessing the i-th element of a list, represented as a feature structure, is equivalent to computing the value of the path REST^(i−1) FIRST. In Example 4.1, the second element of the list is the value of the path ⟨REST, FIRST⟩. For simplicity, we still depict list values in the standard mathematical notation, but they must always be taken to stand for an abbreviation of feature structures. A list representation of the motivating example rule is given in Example 4.2. However, similar problems arise with this list representation. Again, the features FIRST and REST acquire a special, irregular meaning. In addition, there
Example 4.2 Feature structure encoding of a list.
  [FIRST: [CAT: s],
   REST: [FIRST: [CAT: np, NUM: 6 ],
          REST: [FIRST: [CAT: vp, NUM: 6 ],
                 REST: elist]]]
is no "direct access" to the elements of a rule; fetching the third element, for instance, can only be done by following a path of length three. In this chapter we opt for a third solution; namely, defining dedicated mathematical entities that extend the feature structures introduced in Chapter 2. Special attention is paid to the issue of possible sharing across elements and the difficulties which may result, especially when an element of a "sequence" is removed or added.

4.2 Multirooted feature graphs

We first extend feature graphs, defined in Section 2.2, to multirooted feature graphs (MRGs). Multirooted feature graphs are defined over the same signature (FEATS and ATOMS), which is assumed to have fixed values in the following discussion.

Definition 4.1 (Multirooted feature graphs) A multirooted feature graph (MRG) is a pair ⟨R̄, G⟩, where G = ⟨Q, δ, θ⟩ is a finite, directed, labeled graph consisting of a nonempty, finite set Q of nodes (disjoint of FEATS and ATOMS), a partial function δ : Q × FEATS → Q specifying the arcs and a labeling function θ marking some of the sinks, and where R̄ is an ordered list of distinguished nodes in Q called roots. G is not necessarily connected, but the union of all the nodes reachable from all the roots in R̄ is required to yield exactly Q. The length of an MRG is the number of its roots, |R̄|. λ denotes the empty MRG, where Q = ∅.

A multirooted feature graph (Example 4.3) is a directed, not necessarily connected, labeled graph with a designated sequence of nodes called roots. It is a natural extension of feature graphs, the only difference being that the single root of a feature graph is extended here to a list in order to model the required structured information as described above. Meta-variables Ā range over MRGs,
and ⟨Q, δ, θ⟩ and R̄ – over their constituents. We shall not distinguish between an MRG of length 1 and a feature graph.
Example 4.3 Multirooted feature graphs. The following is an MRG in which the shaded nodes (ordered from left to right) constitute the list of roots, R̄: the roots are q1, q2, and q3; q1 has a CAT-arc to q4 (marked s); q2 has a CAT-arc to q5 (marked np) and an AGR-arc to q7; q3 has a CAT-arc to q6 (marked vp) and an AGR-arc to q7.
There are three elements to this MRG, rooted by q1, q2, and q3, in this order. The elements are ordered; in this example (and the following ones) they are depicted from left to right. An important observation is that nodes are shared among more than one "element." For example, q7 here is accessible from both the second and the third root. This sharing exemplifies that MRGs are not just plain sequences of feature graphs (or, more precisely, are sequences of not necessarily disjoint graphs).
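A possible (hypothetical) encoding of this MRG makes the sharing explicit: the roots are kept in an ordered list, while the arcs live in a single table shared by all roots, so that q7 is one and the same node reachable from two roots.

    # The MRG of Example 4.3, with delta as a single arc table shared by all roots.
    roots = ['q1', 'q2', 'q3']
    delta = {
        ('q1', 'CAT'): 'q4',
        ('q2', 'CAT'): 'q5', ('q2', 'AGR'): 'q7',
        ('q3', 'CAT'): 'q6', ('q3', 'AGR'): 'q7',
    }
    theta = {'q4': 's', 'q5': 'np', 'q6': 'vp'}

    def value(root_index, path):
        """Follow a path from the root with the given (1-based) index."""
        node = roots[root_index - 1]
        for feature in path:
            node = delta[(node, feature)]
        return node

    print(value(2, ['AGR']) == value(3, ['AGR']))  # True: the AGR values are shared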
¯ = ∅ iff Q = ∅. R Exercise 4.1 (*). Show that for every MRG A, A A Natural relations can be defined between MRGs and feature graphs. First, note ¯ then q¯i naturally induces = R, ¯ G is an MRG and q¯i is a root in R, that if A i = Qi , q¯i , δi , θi , where Qi is the set of nodes reachable a feature graph A| from q¯i , δi = δ|Qi (the restriction of δ to Qi ) and θi = θ|Qi (the restriction of θ to Qi ). be the MRG depicted in Example 4.3. What are Exercise 4.2 (*). Let A A|1 , A|2 , A|3 ? = R, ¯ G as an ordered sequence A1 , . . . , An One can view an MRG A i for 1 ≤ i ≤ n. of (not necessarily disjoint) feature graphs, where Ai = A| Note that such an ordered list of feature structures is not a sequence in the mathematical sense. There are a few indications for that; most obviously, observe that removing a node that is accessible from one root can result in
the node being removed from the graph that is accessible from some other root. Such modifications of elements of MRGs, including the removal of nodes, may occur during derivation (or its computational implementation, parsing) as we show in this chapter. Although MRGs are not element-disjoint sequences, it is possible to define substructures of them. The roots of an MRG form a sequence of nodes. Taking just a subsequence of the roots, and considering only the subgraph they induce (i.e., the nodes that are accessible from these roots), a notion of substructure is naturally obtained. = Definition 4.2 (Induced subgraphs). The subgraph of a nonempty MRG A ¯ G , induced by j, k and denoted A j...k , is defined only if 1 ≤ i ≤ j ≤ n, R, ¯ = qj , . . . , qk , G = Q , δ , θ ¯ , G where R in which case it is the MRG R and ¯ and some π; • Q = {q | δ(¯ q , π) = q} for some q¯ ∈ R • δ (q, f ) = δ(q, f ) for every q ∈ Q ; • θ (q) = θ(q) for every q ∈ Q . i for A i...i . As we identify a When the sequence is of length 1, we write A i = A| i. feature graph with an MRG of length 1, A be the MRG depicted in Example 4.3. What are Exercise 4.3 (*). Let A 1...2 2...3 1...3 A ,A ,A ? Exercise 4.4 (*). Show that for every f such that δ(q, f )↓, if q ∈ Q then δ(q, f ) ∈ Q , thus rendering the second item of Definition 4.2 well formed. Exercise 4.5. Define a more general kind of induced subgraphs, in which the nodes that induce the subgraph do not have to be a consecutive subsequence ¯ of R. As MRGs are a natural extension of feature graphs, many concepts defined for the latter can be extended to the former. First, we extend the transition function δ from single features to paths, as was done for feature graphs in Section 2.2.1 (and we abuse the notation δ for both the original function and its extension). We then define the set of paths of an MRG. Here, some caution is required: Because an MRG might have multiple roots, and because the same path might be defined for more than one of them, a path must specify not only a sequence of features, but also the root.
are Definition 4.3 (MRG paths) The paths of an MRG A = {i, π | π ∈ PATHS and δ(¯ Π(A) qi , π)↓} Next, the function val, associating a value with each path in a feature graph, is extended to MRGs. denoted by Definition 4.4 (Path value) The value of a path i, π in an MRG A, valA (i, π ), is defined if and only if δA (¯ qi , π)↓, in which case it is the feature (π). graph valA| i Note that the value of a path in an MRG is a (single-rooted) feature graph, not an MRG. In particular, valA (i, π ) may include nodes which are roots in but are not the root of the resulting feature graph. Clearly, an MRG may A have two paths i1 , π1 and i2 , π2 , where π1 = π2 even though i1 = i2 . See Example 4.4. Example 4.4 Path value. be the following MRG, where R ¯ = q0 , q1 , q2 : Let A q0
q1
F
q2
F
q3
F
q4
H
q5 H
H
G
q6 a
q7 b
Then valA (2, F ) is: q4 G
H
q6 a
q7 b
and valA (2, F H ) = valA (3, F H ) is: q7 b
Cambridge Books Online © Cambridge University Press, 2012
122
4 Unification grammars
an index i and paths π, α ∈ PATHS, if Exercise 4.6. Show that for an MRG A, valA (i, π )↓, then valval (i,π ) (α) = valA (i, πα ). Notice that the function A val is overloaded here. When its argument is α, its parameter is assumed to be a feature graph, and when the argument is a pair i, π , the parameter is an MRG. A
Two MRG paths are reentrant, denoted i, π1 j, π2 , if they share the qi , π1 ) = δA (¯ qj , π2 ). A multirooted feature graph is reentrant same value: δA (¯ if it has two distinct paths (possibly leaving different roots) that are reentrant. is cyclic if two paths i, π1 , i, π2 ∈ Π(A), where π1 is a proper An MRG A A
subsequence of π2 , are reentrant: i, π1 i, π2 . Here, the two paths must other have the same index i, although they may “pass through” elements of A than the i-th one.
Example 4.5 A cyclic MRG. = R, ¯ G , where R ¯ = q0 , q1 , q2 , is cyclic: The following MRG A q0
q1
F
q2 F
q3 H
G
F
q4
q5 H
H
q6
q7
G
= R, ¯ G be the MRG of Example 4.5. Show a nonempty Exercise 4.7 (*). Let A path α such that δA (q, α) = q for some q ∈ QA . is cyclic iff there exists a nonempty path Exercise 4.8. Prove: An MRG A α ∈ PATHS and a node q ∈ QA such that δ(q, α) = q. We now extend the notion of feature graph isomorphism (Definition 2.8, Page 43) to multirooted feature graphs.
1 = Definition 4.5 (Multirooted feature graph isomorphism) Two MRGs A 2 = R 1∼ ¯ 2 , G2 are isomorphic, denoted A 2 , iff they are ¯ 1 , G1 and A A R of the same length, n, and there exists a one-to-one mapping i : Q1 → Q2 , called an isomorphism, such that: • i(¯ q1j ) = q¯2j for all 1 ≤ j ≤ n; • for all q1 , q2 ∈ Q1 and f ∈ F EATS, δ1 (q1 , f ) = q2 iff δ2 (i(q1 ), f ) = i(q2 );
and • for all q ∈ Q1 , θ1 (q) = θ2 (i(q)) (either both are undefined, or both are defined
and equal). Exercise 4.9. Prove that ‘ ∼’ is an equivalence relation. i ∼ A 1∼ 2 iff for every i, 1 ≤ i ≤ |A 1|, A i . A Exercise 4.10 (*). Prove or refute: A 1 2 Similarly, subsumption is extended from feature graphs (Definition 2.9, Page 44) to MRGs in the natural way: = Definition 4.6 (Subsumption of multirooted feature graphs) An MRG A = R , if |R| ¯ G subsumes an MRG A ¯ , G , denoted A ¯ = |R ¯ | and A R, there exists a total function h : Q → Q such that: ¯ h(¯ • for every root q¯i ∈ R, qi ) = q¯i ; • for every q ∈ Q and f ∈ F EATS, if δ(q, f )↓, then h(δ(q, f )) = δ (h(q), f ); • for every q ∈ Q, if θ(q)↓, then θ(q) = θ (h(q)). The only difference from feature graph subsumption is that h is required to ¯ to its corresponding root in R ¯ . Notice that for two map each of the roots in R MRGs to be related by subsumption, they must be of the same length. See Examples 4.6 and 4.7. Example 4.6 MRG subsumption. We noted in Section 2.2.2 that feature graph subsumption can have three different effects: If A B, then B can have additional arcs, additional reentrancies or more marked atoms. The same holds for MRGs, with the observation that additional reentrancies can now occur among paths that originate at different roots: F
G
F
G
Cambridge Books Online © Cambridge University Press, 2012
124
4 Unification grammars
Example 4.7 MRG subsumption. be the following two MRGs: and A Let A
vp
np
sg
R
CA T
AG
A
NUM
PE
RS
3rd
AG
R
RS
3rd
CA T
vp
np
NU
M
np
PE
sg
CAT
NU
M
np
CA T
AGR
R
CA T
R
CAT
AG
A
AG
sg
PE
RS
AGR
3rd
but not A A A. Then A
and A of Example 4.7. Give a subsumption morExercise 4.11. Consider A , and prove that it is indeed a A phism h that supports the claim that A subsumption morphism. and A be two MRGs of the same length, n, such that Exercise 4.12. Let A AA . Prove: i A i . 1. For all i, 1 ≤ i ≤ n, A A
A
2. If i, π1 j, π2 , then i, π1 j, π2 . 1 |, A i1 A i2 , then Exercise 4.13 (*). Prove or refute: If for every i, 1 ≤ i ≤ |A 1 2. A A As was the case with feature graphs, MRG subsumption and isomorphism are naturally related, as the following exercise emphasizes.
1∼ 1 2 2 iff A 2 and A 1 . Use the proof of A A Exercise 4.14. Prove that A A Theorem 2.22 (Page 51) as a guide. Since MRG isomorphism is an equivalence relation, the notion of multirooted feature structures is well defined: Definition 4.7 (Multirooted feature structures) Given a signature of features EATS, ATOMS) be the set of all multirooted F EATS and atoms ATOMS, let G(F feature graphs over the signature. Let G|∼ be the collection of equivalence classes in G(F EATS, ATOMS) with respect to feature graph isomorphism. A multirooted feature structure (MRS) is a member of G|∼ . We use metavariables mrs to range over MRSs.
4.3 Abstract multirooted structures Similarly to the feature structure case, MRSs are entities that are easy to work with computationally, being equivalence classes of graphs, but they are sometimes awkward to treat mathematically. In this section we extend the notion of abstract feature structures introduced in Section 2.4 to the case of sequences; much of the rest of this chapter, and in particular, the definitions of grammars and their languages presented in Chapter 5, is presented in terms of abstract multirooted structures. We also define some concepts, such as concatenation, in terms of this view only, although they can be easily defined for the other views as well. Definition 4.8 (Abstract multirooted structures) A preabstract multirooted structure (pre-AMRS) is a quadruple σ = Ind, Π, Θ, ≈ , where • Ind ∈ N is the number of indices of σ, its length; • Π ⊆ {1, 2, . . . , Ind} × PATHS is a set of indexed paths, such that for each i,
1 ≤ i ≤ Ind, there exists some π ∈ PATHS with (i, π) ∈ Π;
• Θ : Π → ATOMS is a partial function, assigning an atom to some of the paths; • ≈ ⊆ Π × Π is a relation specifying reentrancy.
An abstract multirooted structure (AMRS) is a pre-AMRS σ for which the following requirements, naturally extending those of AFSs, hold: • Π is prefix-closed: If i, πα ∈ Π, then i, π ∈ Π; • σ is fusion-closed: If i, πα ∈ Π and i, π ≈ i , π , then i , πα ∈ Π and
i, πα ≈ i , π α ; • ≈ is an equivalence relation with a finite index (with [≈] the set of its equivalence classes) including at least the pairs {i, ≈ i, | 1 ≤ i ≤ Ind};
• Θ is defined only for maximal paths: If Θ(i, π )↓, then there exists no pair
i, πα ∈ Π such that α = ;
• Θ respects the equivalence: If i1 , π1 ≈ i2 , π2 , then Θ(i1 , π1 ) =
Θ(i2 , π2 ).
Meta-variables σ, ρ, and so on, range over AMRSs. We use λ to denote the empty AMRS, too, where Indλ = 0 and Πλ = ∅ (so that the length of λ is 0). Here, too, we do not distinguish between an AMRS of length 1 and an AFS. Observe that this is a natural extension of Definition 2.31 (page 56), the major difference being the extension of paths to include the index of the roots in which they originate; see Example 4.8.
Example 4.8 Abstract multirooted structures. An example of an AMRS is σ = Ind, Π, Θ, ≈ , where • Ind = 3 • Π = {1, , 1, CAT , 2, , 2, CAT , 2, AGR , 3, , 3, CAT , 3, AGR } • Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ(3, CAT ) = vp, Θ is undefined
elsewhere • ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 } ∪ {(2, AGR , 3, AGR )}.
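AMRSs of this kind lend themselves to a direct encoding as sets and dictionaries; the following sketch (a hypothetical Python representation, not the book's) spells out an AMRS with the same shape as the one above, with the trivial reentrancy pairs left implicit:

    # Three elements; the AGR values of the second and third elements are shared.
    # Indexed paths are pairs of a 1-based index and a tuple of feature names.
    amrs = {
        'Ind': 3,
        'Pi': {(1, ()), (1, ('CAT',)),
               (2, ()), (2, ('CAT',)), (2, ('AGR',)),
               (3, ()), (3, ('CAT',)), (3, ('AGR',))},
        'Theta': {(1, ('CAT',)): 's', (2, ('CAT',)): 'np', (3, ('CAT',)): 'vp'},
        'Eq': {((2, ('AGR',)), (3, ('AGR',)))},
    }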
Exercise 4.15. Prove that the above pre-AMRS is indeed an AMRS: show that it is prefix-closed and fusion-closed, that ≈ is an equivalence relation and that Θ respects the equivalence. Exercise 4.16 (*). Show an AMRS in which the set Π is infinite. We extend the notion of substructures, defined above for MRGs, to AMRSs. Definition 4.9 (AMRS substructures) Let σ = Indσ , Πσ , Θσ , ≈σ ; let j, k be such that 1 ≤ j ≤ k ≤ Indσ . The substructure of σ induced by j, k, denoted σ j..k , is an AMRS ρ = Indρ , Πρ , Θρ , ≈ρ such that • • • •
Indρ = k − j + 1; Πρ = {i − j + 1, π | i, π ∈ Πσ and j ≤ i ≤ k}; Θρ (i − j + 1, π ) = Θσ (i, π ) if j ≤ i ≤ k; ≈ρ = {(i1 − j + 1, π1 , i2 − j + 1, π2 ) | j ≤ i1 , i2 ≤ k and i1 , π1 ≈σ i2 , π2 }.
A substructure of σ is obtained by selecting a subsequence of the indices of σ and considering the structure they induce. Trivially, this structure is an AMRS. If |Indρ | = 1, then σ i..i can be identified with an AFS, denoted σ i .
Example 4.9 Substructure. Let σ = Ind, Π, Θ, ≈ be the AMRS of Example 4.8. Then its substructure induced by 2, 3, denoted σ 2..3 , is the AMRS Ind2..3 , Π2..3 , Θ2..3 , ≈2..3 , where • • • •
Ind2..3 = 2; Π2..3 = {1, , 1, CAT , 1, AGR , 2, , 2, CAT , 2, AGR }; Θ2..3 (1, CAT ) = np, Θ2..3 (2, CAT ) = vp, Θ2..3 is undefined elsewhere; ≈2..3 = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 } ∪ {(1, AGR , 2, AGR )}.
Exercise 4.17 (*). For σ of Example 4.8, show the substructure induced by {1, 2}. Exercise 4.18. Define a nonconsecutive notion of substructures: A substructure of an AMRS that is induced by a nonconsecutive subsequence of the indices. A counterpart to substructures, the notion of concatenation can be defined for AMRSs. Notice that by definition, concatenated AMRSs cannot share elements between them. Definition 4.10 (Concatenation) The concatenation of two AMRSs, σ = Indσ , Πσ , Θσ , ≈σ and ρ = Indρ , Πρ , Θρ , ≈ρ (denoted by σ · ρ) is an AMRS ξ = Indξ , Πξ , Θξ , ≈ξ such that: • Indξ = Indσ + Indρ ; • Πξ = Πσ ∪ {i + nσ , π | i, π ∈ Πρ };
Θσ (i, π ) if i ≤ nσ Θρ (i − nσ , π ) if i > nσ • ≈ξ = ≈σ ∪{(i1 + nσ , π1 , i2 + nσ , π2 ) | i1 , π1 ≈ρ i2 , π2 }. • Θξ (i, π ) =
When two AMRSs are concatenated, the result contains only reentrancies that existed in the original AMRSs; no new reentrancies are added. As usual, σ · λ = λ · σ = σ. An AMRS can be concatenated with itself, and of course, in the general case σ · σ = σ. AMRS concatenation is associative (i.e., σ · (ρ · τ ) = (σ · ρ) · τ ) but not commutative (that is, σ · ρ is not necessarily equal to ρ · σ). When σ and ρ are AFSs, we may write σ, ρ for σ · ρ.
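Concatenation amounts to shifting the indices of the second operand; a minimal sketch over the hypothetical AMRS encoding used above:

    def concatenate(amrs1, amrs2):
        """AMRS concatenation (Definition 4.10): shift the indices of the second
        operand by the length of the first; no new reentrancies are introduced."""
        n = amrs1['Ind']
        shift = lambda ip: (ip[0] + n, ip[1])
        return {
            'Ind': n + amrs2['Ind'],
            'Pi': amrs1['Pi'] | {shift(p) for p in amrs2['Pi']},
            'Theta': {**amrs1['Theta'],
                      **{shift(p): a for p, a in amrs2['Theta'].items()}},
            'Eq': amrs1['Eq'] | {(shift(p1), shift(p2)) for p1, p2 in amrs2['Eq']},
        }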
Example 4.10 AMRS concatenation. Let σ1 = Ind1 , Π1 , Θ1 , ≈1 , where: • • • •
Ind1 = 2; Π1 = {1, , 1, CAT , 2, , 2, CAT , 2, AGR }; Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ is undefined elsewhere; ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 }.
Let σ2 = Ind2 , Π2 , Θ2 , ≈2 , where: • • • •
Ind2 = 1; Π2 = {1, , 1, CAT , 1, AGR }; Θ(1, CAT ) = vp, Θ is undefined elsewhere; ≈ = {(1, π1 , 1, π2 ) | π1 = π2 }.
Then σ1 · σ2 = Ind, Π, Θ, ≈ , where: • Ind = 3; • Π = {1, , 1, CAT , 2, , 2, CAT , 2, AGR , 3, , 3, CAT , 3, AGR }; • Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ(3, CAT ) = vp, Θ is undefined
elsewhere; • ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 }.
Like AFSs, AMRSs can be related to concrete MRGs in a natural way. Compare the following definition to Definition 2.32 (Page 58). = R, ¯ G is an MRG, then Abs(A) = Definition 4.11 (MRG abstraction) If A Ind, Π, Θ, ≈ is defined by: • • • •
¯ Ind = |R|; Π = {i, π | δ(¯ qi , π)↓}; Θ(i, π ) = θ(δ(¯ qi , π)); i, π1 ≈ j, π2 iff δ(¯ qi , π1 ) = δ(¯ qj , π2 ).
is an AMRS. In particular, notice that for every i, It is easy to see that Abs(A) 1 ≤ i ≤ Ind, there exists a path π such that (i, π) ∈ Π since for every i, δ(¯ qi , )↓. Exercise 4.19. With respect to Example 4.11, prove that σ = Abs(A). Exercise 4.20. Extend the definition of the concretization function Conc (Definition 2.35 on Page 60), from AFSs to AMRSs.
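Definition 4.11 translates directly into code. A sketch, over the same hypothetical encodings as above and restricted to acyclic MRGs (a cyclic MRG has infinitely many paths, so its path set cannot be enumerated exhaustively):

    from itertools import combinations

    def abstract(roots, delta, theta):
        """Abs for an acyclic MRG: collect every indexed path, its atom (if any),
        and the reentrancies (pairs of indexed paths reaching the same node);
        trivial and symmetric reentrancy pairs are left implicit."""
        reached = {}  # (index, path) -> node reached
        def walk(node, index, path):
            reached[(index, path)] = node
            for (src, feature), target in delta.items():
                if src == node:
                    walk(target, index, path + (feature,))
        for i, root in enumerate(roots, start=1):
            walk(root, i, ())
        pi = set(reached)
        theta_abs = {p: theta[node] for p, node in reached.items() if node in theta}
        eq = {(p1, p2) for p1, p2 in combinations(reached, 2)
              if reached[p1] == reached[p2]}
        return {'Ind': len(roots), 'Pi': pi, 'Theta': theta_abs, 'Eq': eq}

Applied to the MRG of Example 4.3, this sketch yields the AMRS given in Example 4.11.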
Example 4.11 MRG to AMRS mapping. be the MRG depicted below: Let A q1 CAT
q2
q3
CAT
q4 s
CAT AGR
AGR
q5 np
q6 vp
q7 is the AMRS σ = Ind, Π, Θ, ≈ , where The abstract representation of A • Ind = 3; • Π = {1, , 1, CAT , 2, , 2, CAT , 2, AGR , 3, , 3, CAT , 3, AGR }; • Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ(3, CAT ) = vp, Θ is undefined
elsewhere; • ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 } ∪ {(2, AGR , 3, AGR )}.
be an MRG and A j...k its subgraph, induced by j, k. Exercise 4.21. Let A j...k j...k ). In other words, show that Show that σ = Abs(A Let σ = Abs(A). abstraction and substructure commute. We now extend the definition of AFS subsumption (Definition 2.41, Page 62) to AMRSs. Definition 4.12 (AMRS subsumption) An AMRS σ = Indσ , Πσ , Θσ , ≈σ subˆ iff Ind = Ind , Π ⊆ sumes an AMRS ρ = Ind , Π , Θ , ≈ , denoted σ ρ, ρ
ρ
ρ
ρ
σ
ρ
σ
Πρ , ≈σ ⊆≈ρ , and for every i, π ∈ Πσ , if Θσ (i, π )↓, then Θρ (i, π )↓ and Θσ (i, π ) = Θρ (i, π ). Exercise 4.22. Adapt the proof of Theorem 2.42 (Page 63) to the case of AMRSs 1, A 2, A 1 1 )Abs( 2 iff Abs(A 2 ). A ˆ and show that for two MRGs A A In the rest of this chapter, we overload the symbol ‘’ so that it denotes also subsumption of AMRSs.
4.4 Multi-AVMs We now extend the definition of AVMs (Definition 2.45, page 65) to multiAVMs. Definition 4.13 (Multi-AVMs) Given a signature S, a multi-AVM of length n ≥ 0 is a sequence M1 , . . . , Mn such that for each i, 1 ≤ i ≤ n, Mi is an AVM over the signature. range over multi-AVMs. The sub-AVMs of M are Meta-variables M )= SubAVM(M 1≤i≤n SubAVM(Mi ). Similarly to what we did for AVMs, as T ags(M). Note we define the set of tags occurring in a multi-AVM M = M1 , . . . , Mn , then T ags(M )= that if M 1≤i≤n T ags(Mi) (where the (includunion is not necessarily disjoint). Also, the set of sub-AVMs of M itself) that are tagged by the same variable X is TagSet(M , X). Here, ing M , X) = too, TagSet(M TagSet(Mi , X). We usually do not distinguish 1≤i≤n
between a multi-AVM of length 1 and an AVM. When depicting multi-AVMs graphically, we sometimes suppress the angular brackets that enclose the sequence of AVMs. Well-formedness and variable association are extended from AVMs to multiAVMs in the natural way (see also Example 4.12). is well formed iff Definition 4.14 (Well-formed multi-AVMs) A multi-AVM M for every variable X ∈ Tags(M ), TagSet(M , X) includes at most one nonempty AVM. , Definition 4.15 (Variable association) The association of a variable X in M denoted assoc(M , X), is the single nonempty AVM in TagSet(M , X); if all the , X) are empty, then assoc(M , X) = X [ ]. members of TagSet(M Note that the same variable can tag different sub-AVMs of different elements of Example 4.12). In other words, the scope in the sequence (e.g., 1 in M of variables is extended from single AVMs to multi-AVMs. This leads to an interpretation of variables (in multi-AVMs) that hampers the view of multiAVMs as sequences of AVMs. Recall that we interpret multiple occurrences of the same variable within a single AVM as denoting value sharing; hence, the definition of well-formed AVMs and the convention that when a variable occurs more than once in an AVM, its association can be stipulated next to any of its occurrences. As in the other views, when multi-AVMs are concerned, this convention implies that removing an element from a multi-AVM can affect other elements, in contradiction to the usual concept of sequences.
Example 4.12 Multi-AVMs. , whose length is 3: Consider the following multi-AVM M = M
2
F: 9
H: 1
[]
G: 7
a , 1 F: 8 H: 2 []
, 6 F: 5 H: 2 []
is Tags(M ) = { 1 , 2 , 5 , 6 , 7 , 8 , 9 }. Observe that The set of variables of M it is well formed because the variables that occur more than once ( 1 and 2 ) have only one nonempty AVM in their TagSet: , 1)= TagSet(M , 2)= TagSet(M Therefore,
G: 7
a 1 [ ], 1 F : 8 H: 2 [] 2
[ ], 2 F : 9 H : 1 [ ]
, 1)= 1 F: 8 G: 7a assoc(M H: 2 []
, 2)= 2 F: 9 H: 1 [] assoc(M
Example 4.13 Multi-AVMs. , whose length is 3: Consider the following multi-AVM M = M
F: 1
, 1
G: 2 H: 3
,
G: 2
H: 3
Observe first that tags are only used when they express reentrancy, as in the is in fact case of AVMs. Furthermore, observe that the second element of M the value of the feature F in the first element. Finally, note that the second and third elements are two copies of similar, yet not identical, AVMs.
of Example 4.12, show the three Exercise 4.23 (*). For the multi-AVM M . multi-AVMs obtained by removing each of the elements of M The sets A RCS and A RCS * are naturally extended from AVMs to multiAVMs; see Definition 2.51, page 68. Crucially, an arc can connect two tags that occur in different members of the multi-AVM.
Example 4.14 Multi-AVM arcs. of Example 4.12, the set of arcs includes: In the multi-AVM M ) { 2 , F , 9 , 1 , F , 8 , 8 , H, 2 } ⊂ A RCS (M ). Hence, in particular, 1 , FHF , 9 ∈ A RCS *(M
of Example 4.12. ) for M Exercise 4.24 (*). List all the members of A RCS (M Can you specify the members of A RCS *(M )? When defining the paths of a multi-AVM, some caution is required. For an AVM M , Π(M ) is defined as {π | X = tag(M ) and for some variable Y ∈ T ags(M ), X, π, Y ∈ A RCS *(M )}. In case of multi-AVMs, there are several elements from which X can be chosen. Hence, we define the set of multi-AVM paths relative to an additional parameter, the index of the element in the multi-AVM from which the path leaves. = M1 , . . . , Mn is a multi-AVM of Definition 4.16 (Multi-AVM paths) If M length n, then its paths are the set Π(M ) = {i, π | 1 ≤ i ≤ n, X = tag(Mi ), )}. If n = 0, ), X, π, Y ∈ A RCS *(M and for some variable Y ∈ T ags(M ) = ∅. Π(M
Example 4.15 Multi-AVM paths. of Example 4.12, the set of paths includes 2, FG but In the multi-AVM M ). not 1, FG . More interestingly, {i, FH k | k ≥ 0 and 1 ≤ i ≤ 3} ⊂ Π(M
of Example 4.12. Exercise 4.25. Specify all the paths of M With the extended definition of paths, it is easy to adapt the definition of path values (pval) from AVMs to multi-AVMs. Definition 4.17 (Path values) The value of a path i, π in a multi , denoted pval(M , i, π ), is assoc(M, Y ), where Y is such that AVM M ). tag(M ), π, Y ∈ A RCS *(M Of course, one path can have several values when it leaves different elements , i, π ) = pval(M , j, π ) if i = j. of a multi-AVM: in general, pval(M
Cambridge Books Online © Cambridge University Press, 2012
Example 4.16 Path values. of Example 4.12. Examples of path values include Consider again M , 1, F H F G ) = 7 a. Observe that in , 2, F G ) = 7 a and pval(M pval(M order to fully stipulate the value of some paths, one must combine sub-AVMs of more than one element of the multi-AVM. For example, , 2, F H ) = 1 F : 8 G : 7 a
pval(M H: 2 F: 9 H: 1 []
of Example 4.12, show pval(M , 1, ) and Exercise 4.26 (*). For M pval(M , 3, ). A multi-AVM is reentrant if it has two distinct paths that share the same value; these two paths may well be “rooted” in two different elements of the is cyclic if two paths i, π1 , i, π2 ∈ Π(M ), multi-AVM. An multi-AVM M M
where π1 is a proper subsequence of π2 , are reentrant: i, π1 i, π2 . Here, the two paths must have the same index i, although they may “pass through” other than the i-th one. elements of M Example 4.17 Multi-AVM reentrancy. of Example 4.12. It is reentrant since Consider again the multi-AVM M pval(1, FH ) = pval(2, ). Furthermore, it is cyclic since pval(1, FHFH ) = pval(1, ).
Since multi-AVMs are defined as sequences of AVMs (where the scope of variables is extended from a single AVM to the entire sequence), the notions of substructure and concatenation are implicitly defined for multi-AVMs. = M1 , . . . , Mn , any subset of the Mi ’s induces Given some multi-AVM M 1, M 2 , they can be cona substructure multi-AVM. Given two multi-AVMs M 1 catenated to form an multi-AVM whose length is the sum of the lengths of M 2 . One has to be careful not to introduce new reentrancies between paths and M 2 . The simplest way to guarantee this is by renaming one 1 and paths of M of M of them so that the two have disjoint variables, see Definition 4.37. Finally, we extend the definitions of subsumption (Definition 2.55, Page 72) and renaming (Definition 2.63, Page 74) from AVMs to multi-AVMs. See Example 4.18.
be two multi-AVMs of ,M Definition 4.18 (Multi-AVM subsumption) Let M , denoted subsumes M the same length n and over the same signature. M , if the following conditions hold: M M 1. For all i, 1 ≤ i ≤ n, Mi Mi . M
M
2. If i, π1 j, π2 , then i, π1 j, π2 . Example 4.18 Multi-AVM subsumption. and M be the following two multi-AVMs (of length 3), respectively: Let M 1
np
CAT :
np
AGR : 4
⎡
2
AGR : 4
1
CAT :
⎣ ⎡
2
⎣
CAT :
vp
AGR : 4 CAT :
vp
sg ⎦ PERS : 3rd NUM :
3
⎤
sg ⎦ AGR : 4 PERS : 3rd NUM :
⎡
⎤
⎣
3
CAT :
np
AGR : 6
CAT :
np
⎤
sg ⎦ PERS : 3rd NUM :
AGR : 4
M . Compare with Example 4.7. M , but not M Then M The second clause of Definition 4.18 may seem redundant: If for all i, 1 ≤ i ≤ n, Mi Mi , then in particular, all the reentrancies of Mi are all reentrancies in Mi ; why, then, is the second clause necessary? The answer lies in the possibility of reentrancies across elements in multi-AVMs. Such reentrancies are a “global” property of multi-AVMs; they are not reflected in any of the elements in isolation. For example, the tag 4 in Example 4.18 is not part or M in isolation; however, when the full of a reentrancy in any element of M multi-AVMs are concerned, the paths 2, AGR and 3, AGR are reentrant , which leads to the difference in subsumption between the , but not in M in M two multi-AVMs. In Section 2.5.2 we defined an equivalence relation on AVMs called renaming. Two AVMs are renamings of each other if they are either isomorphic (i.e., one can be obtained from the other by a systematic renaming of the variables), or they only differ in the location the values of multiply occurring variables are explicated. These two cases can naturally be extended to multi-AVMs; and again, we use the term ‘renaming’ to refer to either of them, as in the following definition. 2 be two multi-AVMs. M 2 is a 1 and M Definition 4.19 (Renaming) Let M 2 , iff M 1 1 2 1 , denoted M 2 and M 1. M M M renaming of M
We can now relate multi-AVMs to MRGs (and, subsequently, to MRSs) in a similar way to the relations between AVMs and feature graphs discussed in Section 2.6. We do not do it in so much detail here, as the basic ideas are similar to the simpler case of Section 2.6. As an example, we define a mapping from is related to an MRG ϕ(M ), obtained multi-AVMs to MRGs. A multi-AVM M in exactly the same way as φ(M ) is obtained from an AVM M (see from M Definition 2.64, page 75). = M1 , . . . , Mn be a Definition 4.20 (Multi-AVM to MRG mapping) Let M is ϕ(M ) = R, ¯ G , well-formed multi-AVM of length n. The MRG image of M ¯ q1 , . . . , q¯n and G = Q, δ, θ , where: with R = ¯ ); • Q = Tags(M • q¯i = tag(Mi ) for 1 ≤ i ≤ n; ) and f ∈ F EATS, δ(X, f ) = Y if X, f, Y ∈ A RCS (M ); • for all X ∈ Tags(M and ) and a ∈ ATOMS, θ(X) = a if assoc(M , X) is the atomic • for all X ∈ Tags(M AVM X(a), and is undefined otherwise. Example 4.19 Multi-AVM to multirooted feature graph mapping. ,M of Example 4.18 (Page 134), and the Refer back to the multi-AVMs M = ϕ(M ) and that A of Example 4.7 (Page 124). Observe that A MRGs A, A = ϕ(M ).
: Exercise 4.27 (*). Consider the following multi-AVM M 2
F:
3
H:
4
G:
1
[]
1
F:
6
G: 7
a H: 2 []
8
F:
9
G : 10
a H: 2 []
)? What is ϕ(M Similarly to feature graphs and AVMs, the mapping ϕ has some beneficial properties, which we cite here without proving. 2 be two multi-AVMs. Then: 1, M Theorem 4.21 Let M ) = Π(ϕ(M )); • Π(M M
ϕ(M)
• i, π1 j, π2 iff i, π1 j, π2 ; 1 1 )ϕ( 2 ). 2 iff ϕ(M M M • M
Example 4.20 Multi-AVM to multirooted feature graph mapping. : Consider the following multi-AVM M 2
F: 9
H: 1
[]
G: 7
a 1 F: 8 H: 2 []
6
F: 5
H: 2
[]
Observe that it is well formed, as the variables that occur more than once ( 1 is and 2 ) have only one nonempty occurrence each. The set of variables of M Tags(M ) = { 1 , 2 , 5 , 6 , 7 , 8 , 9 }, which will also be the set of nodes Q in ¯ is the sequence of variables tagging the AVM ). The sequence of roots R ϕ(M , namely 2 , 1 , 6 . The obtained graph is: elements of M 2
1
F
6 F
H 9
8
F 5
G
H 7
a
H
Exercise 4.28. Define an inverse mapping from multirooted feature graphs to multi-AVMs, along the lines of Definition 2.70. Stipulate the relations that this mapping preserves. We showed earlier how multi-AVMs correspond to MRGs and how MRGs correspond to abstract MRSs. Combining the two, multi-AVMs correspond to AMRSs in a natural way. A direct definition is given below (and demonstrated in Example 4.21). In the sequel we will many times depict AMRSs as multiAVMs: This representation is clearer to follow and more intuitive, and given a multi-AVM, the corresponding AMRS is uniquely determined by the following definition: = M1 , . . . , Mn Definition 4.22 (Multi-AVM to AMRS mapping) Let M is σ = Φ(M ) = be a multi-AVM of length n. The AMRS image of M Indσ , Πσ , Θσ , ≈σ , defined by:
• Indσ = n; ); • Πσ = Π(M , i, π) = X(a) for some variable X; • Θσ = a iff pval(M M
• i, π1 ≈σ j, π2 iff i, π1 j, π2 .
) is a pre-AMRS. In addition, Φ(M ) is prefix-closed because Clearly, Φ(M is; Θ is defined only for Π(M ) is prefix-closed; it is fusion-closed because M ; and Θ respects the equivalence maximal paths because this is the case for M ). Hence, Φ(M ) is an AMRS. by its definition (using reentrancy in M Example 4.21 Multi-AVM to AMRS mapping. be the multi-AVM: Let M
CAT :
s
CAT :
np
AGR : 1
CAT :
v
AGR : 1
is the AMRS σ = Ind, Π, Θ, ≈ , where The abstract representation of M • Ind = 3; • Π = {1, , 1, CAT , 2, , 2, CAT , 2, AGR , 3, , 3, CAT , 3, AGR }; • Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ(3, CAT ) = v, Θ is undefined
elsewhere; • ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 } ∪ {(2, AGR , 3, AGR )}.
, Φ(M ) = Abs(ϕ(M )). Exercise 4.29. Prove that for every multi-AVM M To conclude this section, we present a summary of the various views of multirooted structures in Diagram 4.1. This is a simple adaptation of Diagram 2.2 (Page 65) to the “sequence” case; the relations expressed by the arrows are natural extensions of their counterparts in Diagram 2.2, and many of them were can be proven similarly to the “simple” case. 4.5 Unification revisited In Chapter 3 we defined the unification operation for feature structures. We now extend the definition to multirooted structures; we define two variants of the operation, one which unifies two same-length structures and produces their least upper bound with respect to subsumption, and one, called unification in context, which combines the information in two feature structures, each of which may be an element in a larger structure. Intuitively, the two operations
Diagram 4.1 Multi-AVMs, MRGs, MRSs, and AMRSs multi-AVM
MRG
2 M 2 M
AMRS
Abs
[·]∼
ϕ 1 M
MRS
1 ]∼ mrs1 = [A
1 A
1) σ1 = Abs(A ˆ
∈
2 ∼ A 2 ∈ A
2 ]∼ mrs2 = [A Conc
2) σ2 = Abs(A
The mapping from MRGs to multi-AVMs is not defined directly, although it is a simple extension of the mapping η from feature graphs to AVMs. We also did not define a subsumption relation for MRSs, but one can easily be defined similarly to the case of feature structures.
can be graphically depicted as in Example 4.22. The first operation takes two same-length AMRSs and returns a single AMRS of the same length; unification in context, by contrast, takes two AMRSs (not necessarily of the same length) and, additionally, two indices that point to specific elements of the two operands. Those two elements are unified, and the result is two AMRSs whose lengths are identical to the lengths of the operands. Example 4.22 Two AMRS unification operations. [ ] [ ] [ ] ··· [ ] [ ] [ ] ···
[] []
[ ] [ ] [ ] ···
[]
Same-length AMRS unification
[] [] [] [] [] [] []
[]
Unification in context
Unlike Chapter 3, we focus on a single view here. We define unification for abstract multirooted structures only, although it is of course possible to define it in the other views. AMRSs were chosen because they are easy to
work with mathematically, using mostly set theoretical operations. To facilitate readability, we depict AMRSs as multi-AVMs below. These entities can be straightforwardly converted to AMRSs: see Definition 4.22. Definition 4.23 (AMRS unification) Let σ, ρ be AMRSs of the same length, n. The unification of σ and ρ, denoted σ ρ, is the least upper bound of σ and ρ with respect to AMRS subsumption, if it exists, and undefined otherwise. We defer a discussion of the uniqueness of the unification result to the end of the present section. Example 4.23 AMRS unification. Let ⎡ ⎤ CAT : n CAT : d CAT : v ⎣ ⎦ σ= NUM : 4 NUM : 4 NUM : 4 CASE : nom ⎤ ⎡ CAT : n CAT : d CAT : v ⎦ ⎣ ρ= NUM : pl NUM : pl NUM : pl CASE : [ ] Then:
σρ =
CAT : NUM
d : 4 pl
⎡ CAT : n ⎣ NUM : 4 CASE :
nom
⎤
⎦ CAT : v NUM : 4
We now focus on unification in context. We want to allow the unification of two abstract feature structures, each of which is part of an AMRS. We refer to this process as unification in context – while the two AFSs are basically unified according to the guidelines of Definition 3.21 (Page 104), their contexts – that is, the AMRSs they are parts of – are what unification is defined over. The input to the operation is a pair of AMRSs, with two indices pointing to the elements that are to be unified, and the output is a pair of AMRSs. While the operation basically unifies two AFSs, it has side effects that may influence the two contexts: the AMRSs of which the AFSs are constituents. First, the closure operations Cl and Eq (Definitions 3.11, 3.15 and 3.18, page 100) are naturally extended to AMRSs: If σ is a pre-AMRS, then Cl(σ) is the least extension of σ that is prefix- and fusion-closed, and Eq(σ) is the least extension of σ to a pre-AMRS in which ≈ is an equivalence relation, and T y(σ) is the least extension of σ in which Θ respects the ≈ relation. With this extension, unification in context is defined thus:
Definition 4.24 (Unification in context) Let σ, ρ be two AMRSs of lengths nσ, nρ, respectively. The unification of the i-th element in σ with the j-th element in ρ, denoted (σ, i) ⊔ (ρ, j), is defined only if i ≤ nσ and j ≤ nρ, in which case it is a pair of AMRSs, ⟨σ″, ρ″⟩ = ⟨Ty(Eq(Cl(σ′))), Ty(Eq(Cl(ρ′)))⟩, where σ′ and ρ′ are defined as follows:

Indσ′ = Indσ
Πσ′ = Πσ ∪ {⟨i, π⟩ | ⟨j, π⟩ ∈ Πρ}
≈σ′ = ≈σ ∪ {(⟨i, π₁⟩, ⟨i, π₂⟩) | ⟨j, π₁⟩ ≈ρ ⟨j, π₂⟩}
Θσ′(⟨k, π⟩) = Θσ(⟨k, π⟩)   if k ≠ i
Θσ′(⟨k, π⟩) = Θσ(⟨k, π⟩)   if k = i and Θσ(⟨i, π⟩)↓
Θσ′(⟨k, π⟩) = Θρ(⟨j, π⟩)   if k = i and Θρ(⟨j, π⟩)↓ and Θσ(⟨i, π⟩)↑
Θσ′(⟨k, π⟩) is undefined otherwise

Indρ′ = Indρ
Πρ′ = Πρ ∪ {⟨j, π⟩ | ⟨i, π⟩ ∈ Πσ}
≈ρ′ = ≈ρ ∪ {(⟨j, π₁⟩, ⟨j, π₂⟩) | ⟨i, π₁⟩ ≈σ ⟨i, π₂⟩}
Θρ′(⟨k, π⟩) = Θρ(⟨k, π⟩)   if k ≠ j
Θρ′(⟨k, π⟩) = Θρ(⟨k, π⟩)   if k = j and Θρ(⟨j, π⟩)↓
Θρ′(⟨k, π⟩) = Θσ(⟨i, π⟩)   if k = j and Θσ(⟨i, π⟩)↓ and Θρ(⟨j, π⟩)↑
Θρ′(⟨k, π⟩) is undefined otherwise

The unification is undefined if there exists a path π such that Θσ(⟨i, π⟩)↓ and Θρ(⟨j, π⟩)↓, but Θσ(⟨i, π⟩) ≠ Θρ(⟨j, π⟩); or if there exist paths π, α, where α ≠ ε, such that either Θσ(⟨i, π⟩)↓ but ⟨j, πα⟩ ∈ Πρ, or Θρ(⟨j, π⟩)↓ but ⟨i, πα⟩ ∈ Πσ.

Compare the above definition to Definition 3.21 (Page 104) and observe that the differences are minor. The unification returns two AMRSs, σ″ and ρ″, which are extensions (with respect to the closure operations Ty, Eq, and Cl) of σ′ and ρ′, respectively. How is σ′ obtained from σ? First, the length of σ′ is identical to that of σ because their indices are identical. Then, some paths might be added to the i-th element of σ′: Those are the paths that are defined for the j-th element of ρ. The same is true for reentrancies: If two paths defined for the j-th element of ρ are reentrant, these paths are reentrant in the i-th element of σ′, too. Finally, the marking of some paths, defined for the i-th element of σ, can be modified: This can only happen if the pair ⟨i, π⟩ is unmarked in σ, but the pair ⟨j, π⟩ is marked in ρ, in which case the marking of the latter is assigned to the former. Observe that ρ′ is obtained from ρ in a similar way. Also, the conditions for failure, where the result of the operation is undefined, are a natural extension of the same conditions for the case of AFSs. See Example 4.24.
Example 4.24 Unification in context. Consider the following multi-AVMs:

σ = ⟨ [CAT: np, NUM: ①, CASE: nom],  [CAT: v, NUM: ①] ⟩
ρ = ⟨ [CAT: np, NUM: ②, CASE: ③],  [CAT: d, NUM: ②],  [CAT: n, NUM: ②, CASE: ③] ⟩

Viewed as AMRSs, σ = ⟨Indσ, Πσ, Θσ, ≈σ⟩ and ρ = ⟨Indρ, Πρ, Θρ, ≈ρ⟩, where:

Ind:  σ: 2
      ρ: 3
Π:    σ: ⟨1, ε⟩, ⟨1, CAT⟩, ⟨1, NUM⟩, ⟨1, CASE⟩, ⟨2, ε⟩, ⟨2, CAT⟩, ⟨2, NUM⟩
      ρ: ⟨1, ε⟩, ⟨1, CAT⟩, ⟨1, NUM⟩, ⟨1, CASE⟩, ⟨2, ε⟩, ⟨2, CAT⟩, ⟨2, NUM⟩, ⟨3, ε⟩, ⟨3, CAT⟩, ⟨3, NUM⟩, ⟨3, CASE⟩
Θ:    σ: ⟨1, CAT⟩ → np, ⟨1, CASE⟩ → nom, ⟨2, CAT⟩ → v
      ρ: ⟨1, CAT⟩ → np, ⟨2, CAT⟩ → d, ⟨3, CAT⟩ → n
≈:    σ: (⟨1, NUM⟩, ⟨2, NUM⟩)
      ρ: (⟨1, NUM⟩, ⟨2, NUM⟩), (⟨1, NUM⟩, ⟨3, NUM⟩), (⟨2, NUM⟩, ⟨3, NUM⟩), (⟨1, CASE⟩, ⟨3, CASE⟩)

(Note that '≈' also includes the trivial pairs, (⟨i, π⟩, ⟨i, π⟩), which we do not list.) Now, (σ, 1) ⊔ (ρ, 1) = ⟨Ty(Eq(Cl(σ′))), Ty(Eq(Cl(ρ′)))⟩, where (given that i = j = 1 in Definition 4.24):

Indσ′ = Indσ = 2
Πσ′ = Πσ ∪ {⟨i, π⟩ | ⟨j, π⟩ ∈ Πρ} = Πσ ∪ {⟨1, ε⟩, ⟨1, CAT⟩, ⟨1, NUM⟩, ⟨1, CASE⟩} = Πσ
Θσ′ = Θσ
≈σ′ = ≈σ ∪ {(⟨i, π₁⟩, ⟨i, π₂⟩) | ⟨j, π₁⟩ ≈ρ ⟨j, π₂⟩} = ≈σ

Similarly, Indρ′ = Indρ = 3, Πρ′ = Πρ, and ≈ρ′ = ≈ρ. The interesting fact is that Θρ′ = Θρ ∪ {⟨1, CASE⟩ → nom} because Θσ(⟨1, CASE⟩) = nom and Θρ(⟨1, CASE⟩)↑. Computing the closure operations on these intermediate results, one obtains also that Θ of Ty(Eq(Cl(ρ′))) maps ⟨3, CASE⟩ to nom. Viewed as multi-AVMs again, this result is

σ″ = ⟨ [CAT: np, NUM: ①, CASE: nom],  [CAT: v, NUM: ①] ⟩ ,
ρ″ = ⟨ [CAT: np, NUM: ②, CASE: ③ nom],  [CAT: d, NUM: ②],  [CAT: n, NUM: ②, CASE: ③] ⟩ .

The unification does not affect σ, but it adds a marking for the case features in two elements of ρ.
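To make the bookkeeping of Definition 4.24 concrete, here is a small Python sketch of our own (it is not code from the book): an AMRS is represented as a 4-tuple of its length, its set of indexed paths, its marking function, and its reentrancy relation, with a path encoded as a tuple of feature names. The function computes only the intermediate pair σ′, ρ′ of the definition; the closure operations Cl, Eq, and Ty and the failure checks are omitted.

```python
# An AMRS is (ind, paths, theta, eq):
#   ind    - number of elements
#   paths  - set of (i, path) pairs, where path is a tuple of features
#   theta  - dict mapping some (i, path) pairs to atoms
#   eq     - set of ((i, p1), (i, p2)) pairs: the reentrancy relation

def unify_in_context_pre(sigma, rho, i, j):
    """Pre-closure step of unification in context: copy paths,
    reentrancies and markings of rho's j-th element into sigma's i-th
    element, and vice versa (Definition 4.24, before Cl/Eq/Ty)."""
    ind_s, paths_s, theta_s, eq_s = sigma
    ind_r, paths_r, theta_r, eq_r = rho

    paths_s2 = paths_s | {(i, p) for (k, p) in paths_r if k == j}
    eq_s2 = eq_s | {((i, p1), (i, p2))
                    for ((k1, p1), (k2, p2)) in eq_r if k1 == k2 == j}
    theta_s2 = dict(theta_s)
    for (k, p), atom in theta_r.items():
        if k == j and (i, p) not in theta_s2:
            theta_s2[(i, p)] = atom          # marking inherited from rho

    paths_r2 = paths_r | {(j, p) for (k, p) in paths_s if k == i}
    eq_r2 = eq_r | {((j, p1), (j, p2))
                    for ((k1, p1), (k2, p2)) in eq_s if k1 == k2 == i}
    theta_r2 = dict(theta_r)
    for (k, p), atom in theta_s.items():
        if k == i and (j, p) not in theta_r2:
            theta_r2[(j, p)] = atom          # marking inherited from sigma

    return (ind_s, paths_s2, theta_s2, eq_s2), (ind_r, paths_r2, theta_r2, eq_r2)
```

Applied to the σ and ρ of Example 4.24 with i = j = 1, the returned second structure acquires the marking (1, ('CASE',)) → 'nom', exactly the new piece of information noted in the example; propagating it to the third element is the job of the closure operations, which this sketch does not perform.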
Exercise 4.30. Show that σ″ and ρ″ defined above are AMRSs.

In Example 4.24, the unification results in changes to ρ, the second argument of the operation, but its first argument remains intact. This is not always the case. As Example 4.25 demonstrates, sometimes both unificands are affected by the unification.

Example 4.25 Unification in context. Let

σ = ⟨ [F: ① a, G: ② [ ]],  [H: ②] ⟩ ,
ρ = ⟨ [F: ③ [ ], G: ④ b],  [H: ③] ⟩ .

Unifying the first element in σ with the first element in ρ in the contexts of σ and ρ, we obtain (σ, 1) ⊔ (ρ, 1) = (σ″, ρ″):

σ″ = ⟨ [F: ① a, G: ② b],  [H: ②] ⟩ ,
ρ″ = ⟨ [F: ③ a, G: ④ b],  [H: ③] ⟩ .

Note that both operands of the unification are modified.
Exercise 4.31. Reconstruct Example 4.25 in terms of AMRSs (rather than multi-AVMs).

Unification in context is closely related to AFS unification. Informally, it is the unification of two abstract feature structures, and the only difference is that the context in which these two AFSs lie might be affected by the operation because of reentrancies. In particular, a formal connection to AFS unification can be shown.

Theorem 4.25 If ⟨σ″, ρ″⟩ = (σ, i) ⊔ (ρ, j), then σ″ⁱ = ρ″ʲ = σⁱ ⊔ ρʲ.

For the purpose of the following proof, we extend the definition of AMRS substructures to pre-AMRSs (the definition is exactly the same, but its domain is pre-AMRSs rather than AMRSs).

Proof Let ⟨σ″, ρ″⟩ = (σ, i) ⊔ (ρ, j) = ⟨Ty(Eq(Cl(σ₁))), Ty(Eq(Cl(ρ₁)))⟩, and let ξ be such that σⁱ ⊔ ρʲ = Ty(Eq(Cl(ξ))). By Definition 4.24, Πσ₁ = Πσ ∪ {⟨i, π⟩ | ⟨j, π⟩ ∈ Πρ} and Πρ₁ = Πρ ∪ {⟨j, π⟩ | ⟨i, π⟩ ∈ Πσ}. Therefore, by the definition of substructures, Πσ₁ⁱ = {π | ⟨i, π⟩ ∈ Πσ₁} = {π | ⟨i, π⟩ ∈ Πσ or ⟨j, π⟩ ∈ Πρ} = {π | ⟨j, π⟩ ∈ Πρ₁} = Πρ₁ʲ. Hence, Πσ₁ⁱ = Πρ₁ʲ.
By Definition 4.24, ≈σ₁ = ≈σ ∪ {(⟨i, π₁⟩, ⟨i, π₂⟩) | (⟨j, π₁⟩, ⟨j, π₂⟩) ∈ ≈ρ} and ≈ρ₁ = ≈ρ ∪ {(⟨j, π₁⟩, ⟨j, π₂⟩) | (⟨i, π₁⟩, ⟨i, π₂⟩) ∈ ≈σ}. Therefore, by the definition of substructures, ≈σ₁ⁱ = {(π₁, π₂) | (⟨i, π₁⟩, ⟨i, π₂⟩) ∈ ≈σ₁} = {(π₁, π₂) | (⟨i, π₁⟩, ⟨i, π₂⟩) ∈ ≈σ or (⟨j, π₁⟩, ⟨j, π₂⟩) ∈ ≈ρ} = {(π₁, π₂) | (⟨j, π₁⟩, ⟨j, π₂⟩) ∈ ≈ρ₁} = ≈ρ₁ʲ. Hence, ≈σ₁ⁱ = ≈ρ₁ʲ. Now, observe that:

Θσ₁ⁱ(π) = Θσ₁(⟨i, π⟩) = Θσ(⟨i, π⟩)   if Θσ(⟨i, π⟩)↓
Θσ₁ⁱ(π) = Θσ₁(⟨i, π⟩) = Θρ(⟨j, π⟩)   if Θρ(⟨j, π⟩)↓ and Θσ(⟨i, π⟩)↑
Θσ₁ⁱ(π) is undefined otherwise

and

Θρ₁ʲ(π) = Θρ₁(⟨j, π⟩) = Θρ(⟨j, π⟩)   if Θρ(⟨j, π⟩)↓
Θρ₁ʲ(π) = Θρ₁(⟨j, π⟩) = Θσ(⟨i, π⟩)   if Θρ(⟨j, π⟩)↑ and Θσ(⟨i, π⟩)↓
Θρ₁ʲ(π) is undefined otherwise

If the unification in context is defined, then if Θσ(⟨i, π⟩)↓ and Θρ(⟨j, π⟩)↓, then Θσ(⟨i, π⟩) = Θρ(⟨j, π⟩). Therefore, Θσ₁ⁱ = Θρ₁ʲ. Hence, so far we have obtained that σ₁ⁱ = ρ₁ʲ. Now, observe that Πξ = Πσⁱ ∪ Πρʲ = Πσ₁ⁱ = Πρ₁ʲ, ≈ξ = ≈σⁱ ∪ ≈ρʲ = ≈σ₁ⁱ = ≈ρ₁ʲ, and Θξ(π) = Θσ₁ⁱ(π) if the latter is defined. Hence, σ₁ⁱ = ρ₁ʲ = ξ. Observe that during the closure operations, each path, reentrancy, or path value is added to the i-th element in σ₁ iff it is added also to the j-th element in ρ₁ and iff it is added to ξ (the actual proof is suppressed). Hence, σ″ⁱ = ρ″ʲ = σⁱ ⊔ ρʲ.

A different characterization of unification in context relates it to the subsumption order on AFSs and AMRSs. It is not a least upper bound operation; this is a property of AMRS unification, which we defined above (Definition 4.23). But it does guarantee a certain kind of minimality, as the following theorem shows: If ⟨σ″, ρ″⟩ = (σ, i) ⊔ (ρ, j), then σ″ is the most general AMRS that is
subsumed by σ, and whose i-th element is subsumed by the j-th element of ρ; symmetrically, ρ″ is the most general AMRS that is subsumed by ρ and whose j-th element is subsumed by σⁱ. This is a characterization of unification in context because the reverse direction of the implication also holds.

Theorem 4.26 Let σ, ρ be two AMRSs and i, j be indexes such that i ≤ len(σ) and j ≤ len(ρ). Then, ⟨σ″, ρ″⟩ = (σ, i) ⊔ (ρ, j) iff σ″ = min⊑ {σ̂ | σ ⊑ σ̂ and ρʲ ⊑ σ̂ⁱ} and ρ″ = min⊑ {ρ̂ | ρ ⊑ ρ̂ and σⁱ ⊑ ρ̂ʲ}.
Proof Assume that σ , ρ exists and σ , ρ = (σ, i) (ρ, j) = T y(Eq(Cl(σ1 ))), T y(Eq(Cl(ρ1 ))) . ˆ i }. ˆ and ρj σ Let Aσ = min ˆ {σ | |σ σ
We begin by showing that σ ∈ Aσ : From Definition 4.24, it follows that Indσ1 = Indσ = Indσ , Πσ ⊆ Πσ1 , ≈σ ⊆≈σ1 and Θσ1 (k, π ) = Θσ (k, π ) if the latter is defined. Observe that the closure operations only extend σ1 ; they ˆ . never remove paths, reentrancies, or atom values. Hence, σ σ Πρj = {π | j, π ∈ Πρ }. By Definition 4.24, {i, π |j, π ∈ Πρ } ⊆ Πσ1 ⊆ Πσ and therefore Πρj ⊆ Πσi . ≈ρj = {(π1 π2 ) | (j, π1 , j, π2 ) ∈≈ρ }. By Definition 4.24, {(i, π1 , i, π2 ) | (j, π1 , j, π2 ) ∈≈ρ } ⊆≈σ1 ⊆≈σ and therefore ≈ρj ⊆≈σi . Similarly, Θσ1 (i, π ) = Θρ (j, π ) if the latter is defined and ˆ i . Therefore, therefore Θσi (π) = Θρj (π) if the latter is defined. Hence, ρj σ σ ∈ Aσ . ˆ (notice that since AMRS subsumption We now show that if ξ ∈ Aσ , then σ ξ ˆ and ρj ξ ˆ i. is antisymmetric, the minimum is unique). Let ξ ∈ Aσ . Then, σ ξ ˆ and therefore: Clearly, Ind = Ind = Ind . σ ξ, σ
σ
ξ
1. Πσ ⊆ Πξ , ≈σ ⊆≈ξ and Θξ (k, π ) = Θσ (k, π ) when the latter is defined. ˆ i , and therefore: ρj ξ 2. {π | j, π ∈ Πρ } ⊆ {π | i, π ∈ Πξ } 3. {(π1 , π2 ) | (j, π1 , j, π2 ) ∈≈ρ } ⊆ {(π1 , π2 ) | (i, π1 , i, π2 ) ∈≈ξ } 4. Θξ (i, π ) = Θρ (j, π ) when the latter is defined. From (2), it follows that: 5. {i, π | j, π ∈ Πρ } ⊆ Πξ . From (3), it follows that: 6. {(i, π1 , i, π2 ) | (j, π1 , j, π2 ) ∈≈σ } ⊆≈ξ . From 1, 4, 5, and 6 it follows that Πσ1 ⊆ Πξ , ≈σ1 ⊆≈ξ and Θξ (k, π ) = Θσ1 (k, π ) when the latter is defined. Since σ is the least extension to an ˆ In the same way, AMRS for which these conditions hold, we obtain that σ ξ. ˆ and σ i ρ ˆ j }. it can be shown that ρ = min ˆ {ρ | ρρ
Now, assume that (σ, i) (ρ, j) is undefined. We need to show that in that ˆ and ρj σ ˆ and σ i ρ ˆ i } and {ρ | ρρ ˆ j } are empty. We case {σ | |σ σ
ˆ and ρj σ ˆ i } = ∅: Assume toward a contradiction first show that {σ | |σ σ ˆ ˆ i }. Then, σ ξ ˆ i . Also, and ρj σ ˆ and ρj ξ that there exist ξ ∈ {σ | |σ σ (σ, i) (ρ, j) is undefined, and therefore either one of the following holds:
1. There exist paths i, π ∈ Πσ and j, π ∈ Πρ such that Θσ (i, π ) ↓, ˆ Θξ (i, π ) ↓ and Θρ (j, π ) ↓, and Θσ (i, π ) = Θρ (j, π ). Since σ ξ, ˆ i , it follows that Θξ (i, π ) = Θξi (π) = Θξ (i, π ) = Θσ (i, π ). Since ρj ξ
Θρj (π) = Θρ (j, π ) ↓. Hence, Θσ (i, π ) = Θξ (i, π ) = Θρ (j, π ), a contradiction. ˆ 2. T y(Eq(Cl(σ ))) is undefined (where σ is as is in Definition 4.24): σ ξ, and therefore Πσ ⊆ Πξ , ≈σ ⊆≈ξ and Θξ (k, π ) = Θσ (k, π ) if the latter is ˆ i , and therefore {i, π | j, π ∈ Πρ } ⊆ {i, π | i, π ∈ defined. Also, ρj ξ Πξ } ⊆ Πξ , {(i, π1 , i, π2 ) | (j, π1 , j, π2 ) ∈≈ρ } ⊆ {(i, π1 , i, π2 ) | (i, π1 , i, π2 ) ∈≈ξ } ⊆≈ξ , and Θξ (i, π ) = Θρ (j, π ) if the latter is defined. Hence, we obtain that Πσ ⊆ Πξ , ≈σ ⊆≈ξ , and Θξ (k, π ) = Θσ (k, π ) if the latter is defined. Also, ξ extends σ into a pre-AFS that is fusion-closed and in which the ≈ relation is an equivalence relation, but since Eq(Cl(σ )) is the least such extension, we obtain that ΠEq(Cl(σ )) ⊆ Πξ , ≈Eq(Cl(σ )) ⊆≈ξ and Θξ (k, π ) = ΘEq(Cl(σ )) (k, π ) if the latter is defined. Now, since T y(Eq(Cl(σ ))) is undefined, it follows that one of the following holds: a. There exist π1 , π2 ∈ ΠEq(Cl(σ )) such that π1 ≈Eq(Cl(σ )) π2 , ΘEq(Cl(σ )) (π1 ) ↓, ΘEq(Cl(σ )) (π2 ) ↓, and ΘEq(Cl(σ )) (π1 ) = ΘEq(Cl(σ )) (π2 ). Hence, π1 , π2 ∈ Πξ , and π1 ≈ξ π2 , Θξ (π1 )↓, Θξ (π2 )↓, and Θξ (π1 ) = Θξ (π2 ), a contradiction to the fact that ξ is an AMRS. b. There exist π, π ∈ ΠEq(Cl(σ )) and α ∈ P aths such that ΘEq(Cl(σ )) (π)↓, ΘEq(Cl(σ )) (π )↓, π α ∈ ΠEq(Cl(σ )) , π ≈Eq(Cl(σ )) π, and α = . Since Eq(Cl(σ )) is fusion-closed, πα ∈ ΠEq(Cl(σ )) . Hence, π, πα ∈ Πξ and Θξ (π)↓, a contradiction to the fact that ξ is an AMRS. To conclude this section, we return to the question of the uniqueness of AMRS unification (refer back to Definition 4.23, page 139). When we defined feature structure unification in Chapter 3, we defined it as a least upper bound; uniqueness followed from the algorithm that computes feature graph unification, presented in Section 3.4. Similarly, it is possible to define the unification of two multirooted graphs and prove that the result of this computational process is indeed analog to computing the least upper bound of two AMRSs (and,
subsequently, two AMRSs). The actual definition and proof are very technical and are therefore suppressed here. 4.6 Rules and grammars Like context-free grammars, unification grammars are defined over an alphabet. Since the grammars that are of most interest to us are of natural languages, and since sentences in natural languages are not just strings of symbols, but rather strings of words, we add to the signature an alphabet, a fixed set W ORDS of words (in addition to the fixed sets F EATS and ATOMS). Meta-variables wi , wj and so on, are used to refer to elements of W ORDS ; w, to refer to strings over W ORDS . We also adopt here the distinction (introduced in Definition 1.8) between phrasal and terminal rules. The former cannot have elements of WORDS in their bodies; the latter have only a single word as their body. We refer to the collection of terminal rules as the lexicon: It associates with terminals, members of W ORDS, (abstract) feature structures that are their categories. For every word wi ∈ W ORDS , the lexicon specifies a finite set of abstract feature structures L(wi ). If L(wi ) is a singleton, then wi is unambiguous, and if it is empty, then wi is not a member of the language defined by the lexicon. Thus, when dealing with grammars, terminal words can be ignored and only their categories must be considered. For example, derivations can yield only a sequence of feature structures, not actual strings of words. Recall that given a signature consisting of features F EATS and atoms ATOMS, AF S(F EATS, ATOMS) is the set of all abstract feature structures over the signature. Definition 4.27 (Lexicon) Given a signature of features F EATS and atoms ATOMS, and a set W ORDS of terminal symbols, a lexicon is a finite-range function L : W ORDS → 2AF S(F EATS,ATOMS) . When the lexicon ‘L’ is clear from context, we usually depict it in a rule format, specifying the relation between words and their lexical entries using arrows, as in Example 4.28. When words are ambiguous, this entails reduplication of information, because the words must be listed several times with their different lexical entries; but in the majority of the examples in this chapter and the following one, this yields a more concise representation. A consequence of choosing (abstract) feature structures, rather than concrete feature graphs, as categories of words is that the lexical entries of words are disjoint; that is, if F1 ∈ L(wi ) and F2 ∈ L(wj ) and F1 = F2 , then F1 and
Cambridge Books Online © Cambridge University Press, 2012
4.6 Rules and grammars
147
Example 4.26 Lexicon. Following is a lexicon L over a signature consisting of F EATS = { CAT, NUM, CASE }, ATOMS = {d, n, v, sg, pl}, and W ORDS = {two, sheep, sleep}: ⎧⎡ ⎤⎫ ⎨ CAT : n ⎬ d L(two) = L(sheep) = ⎣ NUM : [ ] ⎦ ⎭ ⎩ NUM : pl CASE : [ ] CAT : v L(sleep) = NUM : pl
CAT :
Example 4.27 Lexicon. As an alternative to the lexical entry of sheep in Example 4.26 above, the grammar writer may prefer the following lexical entry: ⎧⎡ ⎤ ⎡ ⎤⎫ CAT : n ⎬ ⎨ CAT : n L(sheep) = ⎣ NUM : sg ⎦ , ⎣ NUM : pl ⎦ ⎩ ⎭ CASE : [ ] CASE : [ ]
Example 4.28 Lexicon, rule-format. To depict the lexicon specification of Example 4.27, we usually use the following notation: ⎤ n sheep → ⎣ NUM : sg ⎦ CASE : [ ] ⎡
CAT :
⎤ n sheep → ⎣ NUM : pl ⎦ CASE : [ ] ⎡
CAT :
F2 do not share paths between them (even if i = j). This restriction does not imply that the lexical entries of related words (or, indeed, of one word) are not related; it only limits the scope of reentrancies to a single feature structure. An alternative approach, allowing reentrancies among related lexical entries, would necessitate an extension of the scope of reentrancies to the entire lexicon, and such a complication is not justified. Therefore, when a string of words w is given, it is possible to construct an AMRS σw for the lexical entries of the words in w, such that no two elements of σw share paths. Such an AMRS is simply the concatenation of the lexical entries
Cambridge Books Online © Cambridge University Press, 2012
148
4 Unification grammars
of the words in w. In general, there may be several such AMRSs because each word in w can have multiple elements in its category. The set of such AMRSs is the preterminals of w. Definition 4.28 (Preterminals) Let w = w1 . . . wn ∈ W ORDS + . P Tw (j, k) is defined iff 1 ≤ j, k ≤ n, in which case it is the set of AMRSs {Aj · Aj+1 · · · Ak | Ai ∈ L(wi ) for j ≤ i ≤ k}. If j > k (i.e., w = ), then P Tw (j, k) = {λ}. The subscript w is omitted when it is clear from context. Example 4.29 demonstrates the notion of preterminals. Example 4.29 Preterminals. Consider the string of words w = two sheep sleep and the lexicon of Example 4.26. There is exactly one element in P Tw (1, 3); this is the AMRS:
⎤ ⎡ CAT : n CAT : d ⎣ NUM : [ ] ⎦ CAT : d NUM : pl NUM : pl CASE : [ ]
Notice that there is no sharing of variables among different feature structures in this AMRS. Because AMRSs are depicted using multi-AVMs here, the variables in the above multi-AVM are chosen so that unintended reentrancies are avoided. Now, assume that the word sheep is represented, as in Example 4.27, as an ambiguous word: Its category contains two feature structures, namely, ⎧⎡ ⎤ ⎨ CAT : n L(sheep) = ⎣ NUM : sg ⎦ , ⎩ CASE : [ ]
⎤⎫ n ⎬ ⎣ NUM : pl ⎦ ⎭ CASE : [ ] ⎡
CAT :
Then P Tw (1, 3) has two members:
and
⎡ ⎤ CAT : n CAT : d ⎣ NUM : sg ⎦ CAT : d NUM : pl NUM : pl CASE : [ ]
⎡ ⎤ CAT : n d ⎣ CAT : d ⎦ NUM : pl NUM : pl NUM : pl CASE : [ ] CAT :
Cambridge Books Online © Cambridge University Press, 2012
4.6 Rules and grammars
149
Definition 4.29 (Rules) A (phrasal) rule is an AMRS of length n > 0 with a distinguished first element. If σ is a rule, then σ 1 is its head and σ 2..n is its body. We adopt a convention of depicting rules with an arrow (→) separating the head from the body. Since a rule is simply an AMRS, there can be reentrancies among its elements: both between the head and (some element of) the body and among elements in its body. Notice that the definition supports -rules, that is, rules with null bodies. See Example 4.30. Example 4.30 Rules as AMRSs. Because every AMRS can be interpreted as a rule, so can the AMRS depicted in Example 4.21: • Ind = 3; • Π = {1, , 1, CAT , 2, , 2, CAT , 2, AGR , 3, , 3, CAT , 3, AGR }; • Θ(1, CAT ) = s, Θ(2, CAT ) = np, Θ(3, CAT ) = v, Θ is undefined
elsewhere; • ≈ = {(i1 , π1 , i2 , π2 ) | i1 = i2 and π1 = π2 } ∪ {(2, AGR , 3, AGR )}.
whose multi-AVM view is:
CAT :
s
→
CAT :
np
AGR : 4
CAT :
v
AGR : 4
Rules can also propagate information between the mother and any of the daughters using reentrancies between paths originating in the head of the rule and paths originating from one of the body elements:
CAT :
s
SUBJ : 1
→
1
CAT :
np
AGR : 2
CAT :
v
AGR : 2
The rules of Example 4.30 employ feature structures that include the feature CAT, encoding the major parts-of-speech category of phrases. While this is useful and natural, it is by no means obligatory. Unification rules can encode such information in other ways (e.g., via a different feature or as a collection of features) or they may not encode it at all. In the general case, a unification rule is not required to have a context-free skeleton, a feature whose values constitute a context-free backbone that drives the derivation. Some unification-based
Cambridge Books Online © Cambridge University Press, 2012
150
4 Unification grammars
grammar theories do indeed maintain a context-free skeleton (LFG is a notable example), while others (like HPSG) do not. We introduce a shorthand notation in the presentation of grammars: When two rules have the same head, we list the head only once and separate the bodies of the different rules with ‘|’ (following the convention of context-free grammars). Note, however, that the scope of variables is still limited to a single rule, so that multiple occurrences of the same variable within the bodies of two different rules are unrelated. Additionally, we may use the same variable (e.g., 4 ) in several rules. It should be clear by now that these multiple uses are unrelated to each other because the scope of variables is limited to a single rule. Definition 4.30 (Unification grammars) A unification grammar (UG) G = (L, R, As ) over a signature ATOMS of atoms and F EATS of features consists of a lexicon L, a finite set of rules R, and a start symbol As that is an abstract feature structure.
Example 4.31 Gu , a unification grammar. ⎤ ⎡ CAT : np
CAT : v → ⎣ NUM : 4 ⎦ CAT : s NUM : 4 CASE : nom ⎡ ⎤ ⎤ ⎡ CAT : n CAT : np ⎣ NUM : 4 ⎦ → CAT : d ⎣ NUM : 4 ⎦ NUM : 4 CASE : 2 CASE : 2 ⎡ ⎤ ⎤ ⎡ CAT : np CAT : pron ⎣ NUM : 4 ⎦ → ⎣ NUM : 4 ⎦ CASE : 2 CASE : 2
v NUM : pl ⎡ ⎤ CAT : n lamb → ⎣ NUM : sg ⎦ CASE : [ ] ⎤ ⎡ CAT : pron she → ⎣ NUM : sg ⎦ CASE : nom CAT : d a → NUM : sg sleep →
CAT :
v NUM : sg ⎡ ⎤ CAT : n lambs → ⎣ NUM : pl ⎦ CASE : [ ] ⎤ ⎡ CAT : pron her → ⎣ NUM : sg ⎦ CASE : acc CAT : d two → NUM : pl sleeps →
Cambridge Books Online © Cambridge University Press, 2012
CAT :
4.7 Derivations
151
As an example of a unification grammar, consider Gu of Example 4.31, which we use in the remainder of this chapter to demonstrate several aspects of the theory. The linguistic motivation behind this grammar is discussed in detail in Chapter 5; meanwhile, consider it a formal example with no linguistic implications. Unless explicitly mentioned, the start symbol is assumed to be the head of the first rule in the grammar.
4.7 Derivations We define the language generated by unification grammars in a parallel way to the definition of languages generated by context-free grammars: First, we define derivations analogously to the context-free derivations (Definition 1.2). The reflexive transitive closure of the derivation relation is the basis for the definition of languages. For the following discussion, fix a particular grammar G = (L, R, As ). Derivation is a relation that holds between two forms, σ1 and σ2 , each of which is an AMRS. To define it formally, two concepts have to be taken care of: First, an element of σ1 has to be matched against the head of some grammar rule, ρ. Then, the body of ρ must replace the selected element in σ1 , thus producing σ2 . When forms are sequences of symbols (as is the case with context-free grammars), both matching and replacement are simple matters. However, with AMRSs, matching involves unification, and unification must be computed in context: that is, when the selected element of σ1 is unified with the head of ρ, other elements in σ1 or in ρ may be affected due to reentrancy. This possibility must be taken care of when replacing the selected element with the body of ρ. Since AMRSs (and, hence, forms) carry an explicit notion of indices, the definition is slightly more technical than in the case of context-free grammars. Definition 4.31 (Derivation) An AMRS σ1 of length k derives an AMRS σ2 (denoted σ1 ⇒ σ2 ) iff for some j ≤ k and some rule ρ ∈ R of length n, • (σ1 , j) (ρ, 1) = (σ1 , ρ ), and • σ2 is the replacement of the j-th element of σ1 with the body of ρ; namely, let
f (i) =
i i+n−2
if 1 ≤ i < j, if j < i ≤ k,
g(i) = i + j − 2 if 2 ≤ i ≤ n
then σ2 = Indσ2 , Πσ2 , Θσ2 , ≈σ2 , where: – Indσ2 = k + n − 2; i = f (i ) and i , π ∈ Πσ or 1 – i, π ∈ Πσ2 iff i = g(i ) and i , π ∈ Πρ
Cambridge Books Online © Cambridge University Press, 2012
152
4 Unification grammars
Θσ (i , π ) if i = f (i ) 1 Θρ (i , π ) if i = g(i ) – i1 , π1 ≈σ2 i2 , π2 if ◦ i1 = f (i1 ) and i2 = f (i2 ) and i1 , π1 ≈σ i2 , π2 ; or 1 ◦ i1 = g(i1 ) and i2 = g(i2 ) and i1 , π1 ≈ρ i2 , π2 ; or ◦ i1 = f (i1 ) and i2 = g(i2 ) and there exists π3 such that i1 , π1 ≈σ 1 j, π3 and 1, π3 ≈ρ i2 , π2 ; or ◦ i1 = g(i1 ) and i2 = f (i2 ) and there exists π3 such that i1 , π1 ≈ρ 1, π3 and j, π3 ≈σ i2 , π2 .
– Θσ2 (i, π ) =
1
∗
l
The reflexive transitive closure of ‘⇒’ is ‘⇒’. We write σ ⇒ ρ when σ derives ρ in l steps. The function f (i) maps indices in σ1 to corresponding indices in σ2 ; recall that j is the selected element of σ1 , to be replaced by the body of ρ . Thus, indices smaller than j remain intact in σ2 , whereas indices greater than j are mapped to elements following the body of ρ in σ2 . Since the body of ρ is of length n − 1, and since there are j − 1 elements preceding it, the j + 1 element of σ1 is mapped to the (j − 1) + (n − 1) + 1 position in σ2 . In general, the j + i element is mapped to the (j − 1) + (n − 1) + i = j + n − 2 + i element in σ2 . The function g(i) maps indices from the body of ρ to σ2 . The first index in the body is 2, and it is mapped to the j-th element of σ2 . In general, the i-th index in ρ is mapped to the j − 2 + i index in σ2 for 2 ≤ j ≤ n. Exercise 4.32 (*). Prove that if i = f (i ), then there exists no i = i such that i = f (i ), and if i = g(i ), then there exists no i = i such that i = g(i ). How is σ2 constructed? First, the number of its indices is k + n − 2, as explained above. Its paths are either paths in σ1 , excluding those that originate in the j-th element; or paths from ρ , excluding those that originate in the head. The Θ-markings of the paths are also those of σ1 and ρ , as appropriate. The interesting component, however, is the reentrancies in σ2 . Two paths are reentrant in σ2 if both are reentrant paths in σ1 or in ρ , of course. But in addition, they can be reentrant if one of them, π1 , is a path in σ1 , and the other, π2 , is a path in ρ . For this to be the case, there must be a path π3 in both the selected element in σ1 and the head of ρ , such that π3 is reentrant with π1 in σ1 , and also π3 is reentrant with π2 in ρ . This way, the application of the rule ρ to the form σ1 can propagate reentrancies involving the selected element to the result, σ2 . Finally, notice that the closure operations are applied to the result, σ2 , to guarantee that σ2 is indeed an AMRS.
Cambridge Books Online © Cambridge University Press, 2012
4.7 Derivations
153
Refer back to Example 4.24 (Page 141). Suppose that ⎡
⎤
CAT : v σ1 = ⎣ NUM : 1 ⎦ NUM : 1 CASE : nom CAT :
np
is a (sentential) form and that ⎡
CAT :
np
⎤
CAT :
d
ρ = ⎣ NUM : 2 ⎦ → NUM : 2 CASE : 3
⎤ ⎡ CAT : n ⎣ NUM : 2 ⎦ CASE
: 3
is a rule. Assume further that the selected element j in σ1 is the first one. Applying the rule ρ to the form σ1 , it is possible to construct a derivation σ1 ⇒ σ2 as follows: First, compute (σ1 , 1)(ρ, 1) = (σ1 , ρ ). By Example 4.24, we obtain: ⎡ ⎤ CAT : np CAT : v ⎣ ⎦ σ1 = NUM : 1 NUM : 1 CASE : nom ⎡
⎤
⎤ ⎡ CAT : n ⎦ CAT : d ⎣ NUM : 2 ⎦ ρ = ⎣ NUM : 2 NUM : 2 CASE : 3 nom CASE : 3 CAT :
np
Now, the first element of σ1 is replaced by the body of ρ . This operation results in a new AMRS, σ2 , of length 3: The first two elements are the body of ρ , and the last element is the remainder of σ1 , after its first element has been eliminated; that is, the last element of σ1 . The paths of σ2 are obtained from those elements of ρ and σ1 ; the marking function Θσ2 is also a simple combination of Θρ and Θσ . Furthermore, reentrancies in σ2 originate in reen1 trancies in ρ and σ1 . But the most important observation here is that some reentrancies are added because of the unification of the selected element in σ1 with ρ’s head. For purposes of illustration, assume that no such reentrancies were added. A simple replacement would have resulted in the following AMRS: ⎤ ⎡ CAT : n CAT : d ⎦ CAT : v ⎣ NUM : 2 NUM : 2 NUM : 1 CASE : 3 nom
Cambridge Books Online © Cambridge University Press, 2012
154
4 Unification grammars
Obviously, this is not the expected result! Since the path (1, NUM) in σ1 is reentrant with (2, NUM) (indicated by the tag 1 ), and since the path (1, NUM ) in the rule ρ is reentrant with the paths (2, NUM ) and (3, NUM ) (the tag 3 ), one would expect that the sharing between the NUM values of the noun phrase and the verb phrase in σ1 would manifest itself as a sharing between this feature’s values of the determiner, the noun and the verb phrase in σ2 . Indeed, this is what the last clause in the definition of derivation guarantees: As there exists a path π3 , in this case π3 = NUM , in the selected element of the form and in the head of the rule, such that π3 is reentrant with some path (i1 , π1 ) in σ1 and with some path (i2 , π2 ) in ρ , then the paths π1 and π2 are reentrant in σ2 , with the corresponding indices (as determined by the functions f and g). Therefore, the result is: σ2 =
CAT :
d
NUM
: 4
⎤
⎡ CAT : n ⎣ NUM : 4 CASE : 5
nom
⎦ CAT : v NUM : 4
The same derivation is described, in terms of AMRSs, in Example 4.32. A sequence of derivations is depicted in Example 4.33. Consider the form σ3 of Example 4.33, and one of the AMRSs in P Tw (1, 3) of Example 4.29: σ3 =
CAT :
d
NUM : 4
⎡ CAT : n ⎣ NUM : 4 CASE :
nom
⎤
⎦ CAT : v NUM : 4
⎡ ⎤ CAT : n CAT : d ⎣ NUM : pl ⎦ CAT : d σ= NUM : pl NUM : pl CASE : [ ]
The former contains information that is accumulated during derivations; informally, one might say that it is the sum of the information encoded in the start symbol of the grammar and the rules that are involved in the derivation. The latter is simply the information contained in the lexical entries of the words in w. The two AMRSs are not identical, of course. Furthermore, they are not even related by subsumption: σ3 σ because Θσ3 (2, CASE ) = nom while Θσ (2, CASE )↑. Similarly, σ σ3 because Θσ (3, NUM ) = pl while Θσ3 (3, NUM )↑. Nevertheless, the information in both forms is consistent. It is possible to unify each element in σ3 with the corresponding element of σ, and no unification will fail. This is where AMRS unification (Definition 4.23) comes in handy. The result of σ3 σ, which was established in Example 4.23,
Cambridge Books Online © Cambridge University Press, 2012
4.7 Derivations
155
Example 4.32 Derivation step in AMRS notation. If (σ1 , 1) (ρ, 1) = (σ1 , ρ ), then (ignoring the trivial pairs in ‘≈’): σ1
ρ
Π : 1, , 1, CAT , 1, NUM , 1, CASE , 2, , 2, CAT , 2, NUM
1, , 1, CAT , 1, NUM , 1, CASE , 2, , 2, CAT , 2, NUM , 3, , 3, CAT , 3, NUM , 3, CASE 1, CAT → np, 2, CAT → d, 3, CAT → n, 1, CASE → nom, 3, CASE → nom
Θ : 1, CAT → np, 1, CASE → nom, 2, CAT → v ≈: (1, NUM , 2, NUM ) (1, NUM , 2, NUM ), (1, NUM , 3, NUM ), (2, NUM , 3, NUM ), (1, CASE , 3, CASE ) Observe that |σ1 | = k = 2, |ρ| = n = 3, j = 1, g(2) = 1, g(3) = 2 and f (2) = 3. In other words, the first element of the resulting form is based on the second element of ρ (since g(2) = 1 the second element of the result is based on the third element of ρ (g(3) = 2) and the third element of the result is based on the second element of σ, since f (2) = 3. Then: Indσ = k + n − 2 = 2 + 3 − 2 = 3 2 Πσ = {1, π | 2, π ∈ Πρ }∪ 2 {2, π | 3, π ∈ Πρ }∪ {3, π | 2, π ∈ Πσ } = {1, , 1, CAT , 1, NUM , 2, , 2, CAT , 2, NUM , 2, CASE 3, , 3, CAT , 3, NUM } Θσ2 = {1, CAT → d, 2, CAT → n, 2, CASE → nom, 3, CAT → v} ≈σ = {(1, NUM , 2, NUM ), (1, NUM , 3, NUM ), 2 (2, NUM , 3, NUM )} Note that the last two pairs of ≈σ are added owing to the final two clauses of 2 Definition 4.31. (Page 139), is: σρ =
CAT : NUM
d : 4 pl
⎡ CAT : n ⎣ NUM : 4 CASE :
nom
⎤
⎦ CAT : v NUM : 4
Example 4.33 Derivation. Consider the grammar Gu (Example 4.31). A derivation with Gu can start with a form of length 1, consisting of
σ1 = CAT : s . The single element of this AMRS unifies with the head of the first rule in the grammar, trivially. Substitution is again trivial, and the next form in the derivation is the body of the first rule: ⎡
⎤
CAT : v ⎣ ⎦ σ2 = NUM : 1 NUM : 1 CASE : nom CAT :
np
This is exactly the AMRS σ1 of Example 4.32; since the rule ρ of that example is indeed in Gu , a derivable form from σ2 is: ⎤ ⎡ CAT : n CAT : d ⎣ NUM : 4 ⎦ CAT : v σ3 = NUM : 4 NUM : 4 CASE : nom
∗
Thus, we obtain σ1 ⇒ σ2 ⇒ σ3 , and hence σ1 ⇒ σ3 . Caveat: We use variables rather sloppily here. Recall that the scope of variables is limited to a single AMRS. Occurrences of the same variable in two distinct AMRSs do not indicate value sharing. We use the same tags across different AMRSs to point to the “percolation” of certain values along the derivation.
Exercise 4.33. Show a form and a rule such that a single derivation step applying the rule to some element of the form fails because some atomic value is found to be incompatible with a nonatomic feature structure. With this machinery, we are now in a position to define the language generated by an arbitrary unification grammar. Definition 4.32 (Language) The language of a unification grammar G is L(G) = {w ∈ W ORDS ∗ | w = w1 · · · wn , and there exist an AMRS σ such that ∗ As ⇒ σ and an AMRS ρ ∈ P Tw (1, n) such that σ ρ is defined}. Definition 4.34 does not rule out the possibility that the empty word, , is a member of L(G) for some grammar G. Recall from Definition 4.29 that a unification grammar rule is an AMRSs of length greater than 0, and in particular,
Example 4.34 Language. Consider the grammar Gu of Example 4.31 (page 150), and the string w = the sheep sleep. By Example 4.33, the form σ3 is derivable from the start symbol of the grammar. Since σ3 is unifiable with one of the members of P Tw (1, 3), as demonstrated above (see also Example 4.29), w ∈ L(Gu ).
AMRSs of length 1 are valid rules. When such rules take part in a derivation, they yield shorter forms; specifically, a form of length 0 can be derived, namely, σ = λ. In this case, P T (1, n) = P T (1, 0) = λ, and ∈ L(G). 4.8 Derivation trees To depict derivations graphically we extend the notion of derivation trees, defined for context-free grammars, to unification grammars. Informally, we would like a tree to be a structure whose elements are feature structures. However, care must be taken when the scope of reentrancies in a tree is concerned: In a tree of feature structures, what should the scope of variables be (in the AVM view), or path sharing (on a graph view), or reentrancy (on an AFS view)? For information to be shared among all nodes in a tree, this scope is extended to the entire tree. In the following definition, we reuse the concept of multirooted structures (more precisely, AMRSs) to represent trees. To impose a tree structure on AMRSs, we simply pair them with a tree whose nodes are integers, so that each node in the tree serves as an index into the AMRS. In this way, all the existing definitions that refer to AMRSs can be naturally used in reasoning about trees. (The alternative would have been to define a new mathematical entity corresponding to a tree whose nodes are feature structures, with the scope of reentrancies extended to the entire structure. This would have necessitated new definitions of derivation, and subsequently also unification in context, that would apply to trees rather than AMRSs.) Definition 4.33 (Unification trees) Given a signature S = ATOMS, F EATS , a unification tree over S is a pair σ, τ , where σ is an AMRS over S, of say, length l for some l ∈ N, and τ is a tree over the nodes {1, 2, . . . , l}. Informally, a unification is an ordered tree whose nodes are AVMs over S, where the scope of reentrancies is extended to the entire tree. Technically, however, it is a pairing of a tree over integers and an AMRS, where the integers serve as indices into the AMRS. A subtree is a particular node of the tree, along with all its descendants (and the edges connecting them).
Example 4.35 depicts a unification tree. When depicted as a tree of AVMs, a unification tree can be interpreted in various ways as a pair σ, τ , induced by different orders of the elements of σ. It is always possible, however, to rearrange σ so that its nodes are ordered in a canonical way, say, in a left-to-right, depth-first order. In the following discussion, we abuse the notation of sequences to denote a not necessarily consecutive substructure of an AMRS. In other words, we use σi , σj1 , . . . , σjk to refer to the AMRS that is induced by the indices i, j1 , . . . , jk in σ (see Exercise 4.18, Page 127). Unification derivation trees are built incrementally, in a way that is similar to a sequence of derivations. Of course, a derivation tree provides a means for recording a history of derivation steps. The following definition captures the relation between two trees, one of which is obtained from the other by a single derivation step. It is based on the definition of derivations (Definition 4.31, page 151) and, similarly, makes use of unification in context. Definition 4.34 (Tree extension) Let σ1 , τ1 be a unification tree over a signature S and nodes {1, . . . , l}, and let j be a leaf in τ1 (1 ≤ j ≤ l). A unification tree σ2 , τ2 over the signature S and nodes {1, . . . , l + n − 1} extends the tree σ1 , τ1 through an AMRS ρ of length n iff: • τ2 is obtained from τ1 by adding to the latter the nodes l+1, l +2, . . ., l +n−1
(in this order) as immediate daughters of the node j; and • (σ1 , j) (ρ, 1) = (σ1 , ρ ); and • σ2 is obtained by concatenating σ1 with ρ2..n , including the reentrancies
introduced by the unification. Formally, let g(i) = i − l + 1 for 2 ≤ i ≤ n; then σ2 = Indσ2 , Πσ2 , Θσ2 , ≈σ2 , where – Indσ2 = l + n − 2 or i ≤ l and i, π ∈ Πσ 1 – i, π ∈ Πσ2 iff i > l and g(i), π ∈ Πρ Θσ (i, π ) if i ≤ l 1 – Θσ2 (i, π ) = Θρ (g(i), π ) if i > l – i1 , π1 ≈σ2 i2 , π2 if ◦ i1 ≤ l and i2 ≤ l and i1 , π1 ≈σ i2 , π2 , or 1 ◦ i1 > l and i2 > l and i1 , π1 ≈ρ i2 , π2 , or ◦ i1 ≤ l and i2 > l and there exists π3 such that i1 , π1 ≈σ j, π3 and 1 1, π3 ≈ρ g(i2 ), π2 , or ◦ i1 > l and i2 ≤ l, and there exists π3 such that g(i1 ), π1 ≈ρ 1, π3 and j, π3 ≈σ i2 , π2 . 1
Cambridge Books Online © Cambridge University Press, 2012
4.8 Derivation trees
159
Example 4.35 Unification tree. Following is a unification tree depicted as a tree of AVMs. Note that the tag 4 reflects reentrancy between nodes at different “levels” of the tree.
⎡
CAT :
CAT :
s
⎤
np
⎣ NUM : 4 ⎦ CASE : 2 nom
CAT :
d
NUM : 4
⎡
CAT :
n
⎤
⎦ ⎣ NUM : 4 CASE : 2 nom
CAT :
v
NUM : 4
Formally, this tree is a pair σ, τ , where σ is an AMRS of length 5 and τ is a tree over {1, 2, 3, 4, 5}: ⎡ ⎤ ⎤ ⎡ CAT : n CAT : np CAT : d CAT : v ⎣ ⎦ ⎦ ⎣ σ = CAT : s NUM : 4 NUM : 4 NUM : 4 NUM : 4 CASE : 2 nom CASE : 2 nom
τ= 1 2 3
4 5
An alternative tree would be σ , τ : ⎡ ⎡ ⎤ ⎤ CAT : n CAT : np CAT : d ⎣ NUM : 4 ⎦ CAT : v ⎦ σ = CAT : s ⎣ NUM : 4 NUM : 4 NUM : 4 CASE : 2 nom CASE : 2 nom
τ= 1 2 4
5 3
In subsequent examples we will depict unification trees as AVM-based trees, rather than as pairs of AMRSs and trees over their indices.
Definition 4.35 (Unification derivation trees) A unification derivation tree induced by a unification grammar G = (R, As ) is a unification tree defined recursively as follows:
Cambridge Books Online © Cambridge University Press, 2012
160
4 Unification grammars
1. As , τ is a unification derivation tree, where τ is the tree consisting of the single node {1}; 2. if σ, τ is a unification derivation tree and σ , τ extends σ, τ through some rule ρ ∈ R, then σ , τ is also a unification derivation tree. Example 4.36 demonstrates tree extension. In the special case of -rules, the extended tree in fact has one fewer node than the tree it extends. Notice that Definition 4.35 induces a canonical order of the indices in a derivation tree. Example 4.36 Unification derivation trees. Consider the grammar Gu (Example 4.31 on page 150) and the derivation of Example 4.33 (Page 156). A unification derivation tree reflecting the derivation
can be built incrementally as follows. The start symbol of the grammar is CAT : s ; therefore, an initial derivation tree would be the start symbol itself. Then, by using the first grammar rule, the following tree is obtained:
CAT :
s
NUM : 4
⎡
CAT :
np
⎤
⎣ NUM : 4 ⎦ CASE : nom
CAT :
v
NUM : 4
Next, by applying the second grammar rule to the leftmost node above, the following tree is obtained:
CAT :
s
NUM : 4
⎡
⎤ np ⎣ NUM : 4 sg ⎦ CASE : nom
CAT :
CAT :
d
NUM : 4
⎡
CAT :
n
⎤
⎣ NUM : 4 ⎦ CASE : 2
CAT :
v
NUM
: 4
Here, too, values are shared across different levels of the tree. Observe in particular how the tag 4 is shared among all nodes of the tree, from the root to the frontier.
Cambridge Books Online © Cambridge University Press, 2012
4.8 Derivation trees
161
Exercise 4.34. Let σ1 , τ1 be a unification tree and let σ2 , τ2 be a tree which extends it. Show that σ2 , τ2 is indeed a unification tree (i.e., show that it complies with Definition 4.33). Exercise 4.35. Refer back to Example 4.35. Show that σ , τ is a unification derivation tree, whereas σ, τ is not. Derivation trees induced by unification grammars are different from their context-free counterparts in one important aspect. In the context-free case, a tree can be built incrementally, starting with the root and expanding a node in the frontier according to the grammar rules. If τ1, τ2 , . . . , τk is a sequence of contextfree trees built that way, then τi is a proper subtree of τi+1. Unification derivation trees are also built in this manner, by starting with the start symbol of the grammar as a root and expanding nodes in the frontier using grammar rules. However, because of extended scope of reentrancies, if σ1 , τ1 , σ2 , τ2 , . . . , σk , τk is a sequence of unification trees, then σi , τi is not necessarily a subtree of σi+1 , τi+1 : the latter might have more information that is expressed even in nodes existing in the former. Consider Example 4.36. Observe that the three nodes of σ2 all exist in σ3 , in exactly the same tree positions; however, the application of the second rule added the information that the NUM feature is valued sg. Through the variable 4 , this information is propagated in the entire tree, also affecting the values of the NUM feature in the np node and even in the root of the tree (recall that the actual instance of 4 in which its associated value is depicted is arbitrary). More formally, if the tree σ, τ extends (in the sense of Definition 4.34) the tree σ1 , τ1 , and if σ2 , τ2 is the subtree of σ, τ induced by τ1 , then σ1 and ˆ 2 . σ2 are not necessarily identical, but σ1 σ As in the context-free case, the frontier of unification derivation trees does not have to correspond to any lexical item. Of course, for trees to represent complete derivations, we are particularly interested in such trees whose frontier is unifiable with a sequence of preterminals. The following definition captures this notion. Definition 4.36 (Complete derivation trees) A unification derivation tree σ, τ is complete if the frontier of τ is j1 , . . . , jn and there exist a word w ∈ W ORDS ∗ of length n and an AMRS ρ ∈ P Tw (1, n) such that ρ σi , σj1 , . . . , σjn is defined. Note that there may be more than one qualifying AMRS in P Tw (1, n); the definition only requires one. Of course, different AMRSs in P Tw (1, n)
Cambridge Books Online © Cambridge University Press, 2012
162
4 Unification grammars
will correspond to different interpretations of the input string (resulting from ambiguous lexical entries of the words). Example 4.37 Complete derivation trees. Consider the grammar Gu of Example 4.31 (Page 150), and the string w = twolambssleep. The tree of Example 4.36 is complete. Its frontier is unifiable with the following AMRS:
⎤ ⎡ CAT : n d ⎣ CAT : v ∈ P Tw (1, 3) NUM : pl ⎦ NUM : pl NUM : pl CASE : 2 CAT :
Definition 4.36 only requires that the frontier of a tree be consistent with a sequence of preterminals. It is sometimes useful to depict a tree whose leaves already reflect the additional information obtained by actually unifying the frontier of a complete derivation tree with P Tw . We call such trees lexicalized, and we depict them with additional arcs connecting the preterminals to the terminals they dominate. It is easy to see that for every lexicalized tree σ, τ ˆ See there exists a complete derivation tree σ , τ such that τ = τ and σ σ. Example 4.38. Definition 4.37 (Lexicalized derivation trees) Let σ, τ be a complete derivation tree induced by a unification grammar G = (R, As ) and let w, ρ be as in Definition 4.36. A lexicalized derivation tree induced by G on w is the unification tree σ , τ , where σ is obtained from σ by unifying the frontier of σ with ρ. Exercise 4.36. Let G be a unification grammar and w ∈ W ORDS ∗ a word of length n. 1. Let σ, τ be a unification derivation tree induced by G. Let ρ be the frontier ∗ of the tree. Prove that As ⇒ ρ (hint: by induction on the structure of τ ). ∗ 2. Assume that As ⇒ ρ. Prove that there exists a unification derivation tree whose frontier is ρ (hint: by induction on the length of the derivation). Exercise 4.37 (*). Show a unification grammar G and a string w of length n such that w ∈ L(G) and there exist O(2n ) different derivation trees for w under G.
Cambridge Books Online © Cambridge University Press, 2012
Further reading
163
Example 4.38 Lexicalized derivation tree. Consider the following tree, induced by the grammar Gu of Example 4.31 (Page 150) on the string two lambs sleep:
⎡
CAT :
CAT :
s
⎤
np
⎣ NUM : 4 ⎦ CASE : 2 nom
CAT :
d
NUM : 4
two
pl
⎡
CAT :
n
⎤
⎦ ⎣ NUM : 4 2 CASE : nom sheep
CAT :
v
NUM : 4
sleep
Compare the tree to the tree of Example 4.36. Note the additional information contributed by the lexical entries of the words: the value of pl for the feature NUM that is associated with the variable 4 .
Further reading The extension of CFGs by adding features to lexical items and grammar categories dates back to the 1970s (Kay, 1979) and was incorporated into various linguistic theories, starting with functional grammars, later called unification grammars and then functional-unification grammars (Kay, 1983) and lexicalfunctional grammars (Kaplan and Bresnan, 1982). Later, it was the underlying formalism for GPSG (Gazdar et al., 1985). Modern linguistic theories that are based on or employ feature structures include LFG (Kaplan and Bresnan, 1982; Dalrymple et al., 1995; Butt et al., 1999), HPSG (Pollard and Sag, 1987, 1994; Copestake, 1999) and some variants of categorial grammars and tree-adjoining grammars (Vijay-Shanker and Joshi, 1991). Sells (1988) provides a linguistically motivated comparative survey of three theories, namely, Government and Binding, GPSG, and LFG. A clear introduction to contemporary phrase structure grammars, with an extensive description of both GPSG and HPSG, including their handling of various linguistic phenomena, is Borsley (1996). See also Sag and Wasow (1999), which introduces syntax from the point of view of unification grammars. Feature-structure-based grammars are extensively studied by Johnson (1988) and Shieber (1992), among others. The latter also explicitly represents
Cambridge Books Online © Cambridge University Press, 2012
multirooted structures as simple feature structures, encoding the constituents as the values of dedicated features (1, 2, 3, etc.) in the feature structure. Multirooted structures were used implicitly in many implementations of unification-based grammars but were only later defined formally (Sikkel, 1993, 1997) and explored mathematically (Wintner and Francez, 1995a,b; Wintner, 1997). The use of feature structures in unification formalisms is reminiscent of a simple extension to CFG that is widely used in describing programming languages: attribute grammars. In a certain sense, attribute grammars can be viewed as a degenerate form of unification grammars. Instead of unification as an information-combining operation, attribute grammars use explicit assignments. Thus, they inherently determine the direction of information flow within a derivation, unlike the declarative character of unification-based grammars. The attributes are usually taken from a small domain, and – more importantly – their appropriate values are usually scalars, taken from a finite (usually small) domain, which makes the formalism equivalent in expressive power to CFGs. Attribute grammars are used mostly for constructing compilers for programming languages because they enable the specification of (usually semantic) information that is involved in applying the phrase-structure rules, without effecting the efficiency of the parsing process. Attribute grammars were introduced by Knuth (1968) and were used extensively for constructing compilers for (mostly imperative) programming languages. For a summary of applications and systems, refer to Alblas and Melichar (1991).
5 Linguistic applications
We developed an elaborate theory of unification grammars, motivated by the failure of context-free grammars to capture some of the linguistic generalizations one would like to express with respect to natural languages. In this chapter, we put the theory to use by accounting for several of the phenomena that motivated the construction. Specifically, we account for all the language fragments discussed in Section 1.3. Much of the appeal of unification-based approaches to grammar stems from their ability to a account for linguistic phenomena in a concise way; in other words, unification grammars facilitate the expression of linguistic generalizations. This is mediated through two main mechanisms: First, the notion of grammatical category is expressed via feature structures, thereby allowing for complex categories as first-class citizens of the grammatical theory. Second, reentrancy provides a concise machinery for expressing “movement,” or more generally, relations that hold in a deeper level than a phrase-structure tree. Still, the formalism remains monostratal, without any transformations that yield a surface structure from some other structural representation. Complex categories are used to express similarities between utterances that are not identical. With atomic categories of the type employed by contextfree grammars, two categories can be either identical or different. With feature structures as categories, two categories can be identical along one axis but different along another. For example, two noun phrases can have the same agreement features, but different case values; two verb phrases can differ in their subcategorization requirements. We provide examples in this chapter. Reentrancies are used in unification grammars to express the sharing of information. In context-free grammars, the only relations that can be expressed over phrases are tree-structure relations: immediate dominance (the relation that holds between a node and its mother), its reflexive-transitive closure, linear
165
precedence (the order among sister nodes), and so forth. In unification grammars tree nodes can, additionally, share information across them. As we shall see below, this facilitates the expression of complex relations, such as the implicit subjects of embedded verb phrases or the relation between declarative sentences and their interrogative counterparts. We begin with a basic grammar (Section 5.1) and extend it to account for agreement (Section 5.2), case control (Section 5.3), and a simplified treatment of verb subcategorization (Section 5.4). We elaborate the treatment of subcategorization in Section 5.5 and account for long-distance dependencies in Section 5.6. Another type of long-distnce dependencies, the relative clause, is discussed in Section 5.7. Control phenomena are addressed in Section 5.8 and coordination in Section 5.9. We conclude the chapter with a summary of the linguistic generalizations that are facilitated by the grammar fragments we present (Section 5.10) and a discussion of contemporary linguistic formalisms that are based, in one way or another, on unification (Section 5.11). 5.1 A basic grammar Our departure point in this chapter is the context-free grammar G0 for E0 (Example 1.11, page 26), repeated in Example 5.1 for convenience. Example 5.1 A context-free grammar G0 . S VP NP D N V Pron PropN
→ → → → → → → →
NP VP V | V NP D N | Pron | PropN the, a, two, every, . . . sheep, lamb, lambs, shepherd, water . . . sleep, sleeps, love, loves, feed, feeds, herd, herds, . . . I, me, you, he, him, she, her, it, we, us, they, them Rachel, Jacob, . . .
Our first observation is that any context-free grammar is a special case of a unification grammar: the nonterminal symbols of the CFG can be modeled by atoms. Since two atomic feature structures are consistent if and only if they are identical, a unification grammar derivation where all feature structures are atomic is reduced to CFG derivation (assuming the normal form of Definition 1.8). A more general view of G0 as a unification grammar can encode the fact that the non-terminal symbols represent grammatical categories. This can be done using a single feature, for example, CAT, whose values are
Cambridge Books Online © Cambridge University Press, 2012
5.2 Imposing agreement
167
the nonterminals of G0 . We obtain the (unification) grammar G0 , depicted in Example 5.2. Example 5.3 depicts (lexicalized) derivation trees for two strings with this grammar. Compare them with the (context-free) derivation tree of Example 1.12 (Page 27). Example 5.2 G0 , a basic unification grammar. The following is a unification grammar, G0 , over a signature F EATS, ATOMS where F EATS = {CAT} and ATOMS = {s, np, vp, v, d, n, pron, propn}: 1 2 3 4 5, 6
s
CAT :
vp
CAT :
vp
CAT :
np
CAT :
np
sleep
→
love
→
feed
→
lamb
→
she
→
they
→
Rachel → a
CAT :
→ → → → →
np CAT : vp
CAT : v
CAT : v CAT : np
CAT : d CAT : n
CAT : pron | CAT : propn CAT :
CAT :
v
CAT :
v
CAT :
v
CAT :
n
CAT :
pron
CAT :
pron
CAT :
→
tell
→
feeds
→
lambs →
propn
→ CAT : d
give
her
→
them
→
Jacob
→
two
CAT :
v
CAT :
v
CAT :
v
CAT :
n
CAT :
pron
CAT :
pron
CAT :
propn
→ CAT : d
Note the absence of any reentrancies in this unification grammar.
Exercise 5.1 (*). Show a derivation for ∗Rachel sleep two lamb with G0 . Exercise 5.2. Prove that L(G0 ) = L(G0 ).
5.2 Imposing agreement Next, we address the issue of number agreement. Clearly, L(G0 ) includes obvious violations of agreement rules in E0 , as Example 5.3 demonstrates.
Cambridge Books Online © Cambridge University Press, 2012
168
5 Linguistic applications
Example 5.3 Derivation trees induced by G0 . The grammar G0 of Example 5.2 induces the following tree on the string the sheep love her:
CAT :
CAT :
d
np
CAT :
s
CAT :
n
CAT :
CAT :
v
the
sheep
vp
CAT :
CAT :
love
np
pron
her
Not surprisingly, an isomorphic derivation tree is induced by the grammar on the ungrammatical string ∗the lambs sleeps they:
CAT :
CAT :
d
np
CAT :
s
CAT :
n
CAT :
CAT :
v
the
lambs
vp
CAT :
CAT :
sleeps
np
pron
they
The categories involved in number agreement in E0 are determiners, nouns, verbs, noun phrases, and verb phrases. We therefore add the feature NUM to the signature, and specify it for all those categories. Furthermore, some rules are amended to reflect agreement on number; this is achieved by sharing the values of the NUM feature between two daughters of the same rule. Sometimes it is even necessary to share this value with the mother of the rule: For example, a noun phrase consisting of a determiner followed by a noun is singular if and only if both the determiner and the noun are. Note that the rules remain completely declarative. They do not specify whether the information flows from mother to daughters or vice versa. Gagr (Example 5.4) is a unification grammar that accounts for number agreement in E0 .
5.2 Imposing agreement
Example 5.4 Gagr , accounting for agreement on number.
CAT : np CAT : vp → 1 CAT : s NUM : 4 NUM : 4 CAT : vp CAT : v 2 → NUM : 4 NUM : 4
CAT : vp CAT : v 3 → CAT : np NUM : 4 NUM : 4 CAT : np CAT : d CAT : n 4 → NUM : 4 NUM : 4 NUM : 4 CAT : np CAT : pron CAT : propn 5, 6 → | NUM : 4 NUM : 4 NUM : 4 sleep
→
love
→
feed
→
lamb
→
she
→
they
→
v NUM : pl CAT :
v NUM : pl CAT :
v NUM : pl CAT :
→
tell
→
feeds
→
CAT :
pron NUM : sg pron NUM : pl CAT :
give
n NUM : sg
CAT :
lambs →
her
→
them
→
Jacob
→
two
→
propn NUM : sg CAT : d → NUM : sg
Rachel → a
CAT :
CAT :
v NUM : pl CAT :
v NUM : pl
v NUM : sg CAT : n NUM : pl CAT : pron NUM : sg CAT : pron NUM : pl CAT : propn NUM : sg CAT : d NUM : pl CAT :
Several things are worth observing with regard to Gagr . First, note that the third rule does not specify the number feature of the second daughter. This is because when a verb phrase is formed by combining a verb with a noun phrase, the agreement features of the noun phrase are immaterial, and in particular, they do not have to match those of the verb. The second daughter, CAT : np ,
is thus unifiable with noun phrases that can be singular or plural (or, for that matter, unspecified for number). Second, observe that the value of the NUM feature is shared by all the members of a noun phrase (e.g., in rule 4 these are the determiner, the head noun, and the mother noun phrase) and also by a verb phrase and its head verb, but it is not propagated to the sentence level (rule 1). This is a decision of the grammar designer: It is up to the linguist to determine which values of which features are propagated in the derivation tree, and the example grammar demonstrates one possible choice, namely, to "forget" the value of NUM at the sentence level. Finally, observe that it is possible to construct partial derivations for ungrammatical strings because information on number is specified only by the preterminals of the grammar. See Exercise 5.4.

Exercise 5.3 (*). Show a derivation tree for the shepherd feeds two lambs with Gagr.

Exercise 5.4. Construct a partial derivation for ∗Rachel sleep with Gagr, and show where it fails to be extensible to a complete derivation tree.

On linguistic generalizations

It must be noted that while Gagr is a unification grammar, the language it generates is context free. A context-free grammar G1 for exactly the same language is given in Example 5.5. The idea behind this grammar is simple: Observe that the only feature present in the rules of Gagr is NUM, and that this feature only has two valid values: sg and pl. This observation allows us to multiply out all the possible combinations of values for the NUM feature in all the rules in which it occurs. The grammar G1 has a few rules for each original rule in Gagr; these context-free rules simply account for all the (finitely many!) possible combinations of the NUM feature. However, this CFG is inferior to the unification grammar in some respects. First, the linguistic description is distorted. Information regarding NUMBER, which is determined by the words themselves, is encoded in G1 by the way they are derived (in other words, G1 accounts for lexical knowledge by means of phrase-structure rules). Second, several linguistic generalizations are lost. Consider, for example, the derivations of the sentences a lamb sleeps and two lambs sleep in both grammars. With the unification grammar, the tree for a lamb sleeps will be essentially identical to the tree for two lambs sleep. However, the context-free grammar induces two different trees; in particular, the internal nodes of these two trees have different categories. For example, while a notation such as NPpl, NPsg is suggestive of some relationship between these
Example 5.5 A context-free grammar G1.

S → Ssg | Spl
Ssg → NPsg VPsg
Spl → NPpl VPpl
NPsg → Dsg Nsg
NPpl → Dpl Npl
NPsg → Pronsg | PropNsg
NPpl → Pronpl | PropNpl
VPsg → Vsg
VPpl → Vpl
VPsg → Vsg NPsg | Vsg NPpl
VPpl → Vpl NPsg | Vpl NPpl
Dsg → a
Dpl → two
Nsg → lamb | sheep | · · ·
Npl → lambs | sheep | · · ·
Pronsg → she | her | · · ·
PropNsg → Rachel | Jacob | · · ·
Vsg → sleeps | · · ·
Vpl → sleep | · · ·
two category symbols (the common NP is "hidden" behind both), such a relation does not exist. We could rename NPpl to A and NPsg to B, for example, to show that, as far as the formalism is concerned, they are unrelated; the indexing via values is used for explanatory reasons only, to make the transformation from Gagr to G1 more transparent. Obviously, the unification grammar gives rise to many more similarities in the derivation (and the structure) of the two sentences than does the context-free grammar.

Exercise 5.5. Characterize the language that is generated by the grammar G1 (Example 5.5). Prove that L(G1) is equal to the language generated by Gagr of Example 5.4. Notice that the two grammars are specified in different formalisms.

Thus, one natural notion of "linguistic generalization" emerges from the above discussion: the ability to formulate a linguistic restriction by means of a single rule, instead of by a collection of "similar" rules, where the similarity is not supported by the underlying formalism. In this sense, Gagr captures the agreement generalization, while G1 does not, even though both grammars impose the required agreement. There is an intimate connection between generalization and partial description (or underspecification) of linguistic objects. We return to this issue after discussing the imposition of subcategorization consistency in Section 5.5.

It is worth mentioning here that multiplying out all the possible values of a particular feature and converting a unification grammar to an equivalent context-free grammar in this way is not always possible. We will see in Chapter 6 that unification grammars are strictly more expressive than CFGs, so that such a
conversion is mathematically impossible. Intuitively, the technique we applied in converting Gagr to the equivalent G1 is only possible when the number of possible values for each and every feature of the grammar is finite (since the number of nonterminal symbols of a CFG must be finite). So far, all the features we used had finite domains, and in many cases, finitely many values are perfectly sufficient. There are, however, cases in which one would naturally want potentially infinite domains for features. One example will be presented in Section 5.5, where the value of a feature could potentially be any feature structure representing a grammatical category. Other examples will be presented in the treatment of unbounded dependencies in Section 5.6.

Notice that we do not claim here that CFGs cannot, in general, assign linguistically motivated structures (trees) to natural language utterances; we have only pointed out the superiority of one unification grammar over one CFG generating the same language. As we noted in Section 1.6, claims regarding the strong generative capacity of linguistic formalisms are hard to formalize. For the purposes of the presentation in this book, it is sufficient to note that most contemporary linguistic theories resort to mechanisms whose formal complexity goes far beyond that of CFGs. We only reflect this trend, and we refrain from making a statement regarding its necessity or adequacy.

5.3 Imposing case control

In this section we suggest a solution for the problem of controlling the case of a noun phrase (refer back to Section 1.3, page 6, where this problem was introduced). As we did in the case of number agreement, we add a feature, CASE, to the feature structures associated with nominal categories: nouns, pronouns, proper names and noun phrases. What will the values of the CASE feature be? Obviously, the lexical entries of pronouns must specify their case, which is overt and explicit. We use the value nom for nominative case, whereas acc stands for the accusative case. As for proper names and nouns, their lexical entries are simply underspecified with respect to case. This means that the value of the CASE feature in those entries is an empty feature structure.

The decision to add a CASE feature to all the nominal categories, including those that are unspecified for case in English, such as N or PropN, is a design decision of the grammar writer. Although the case of certain noun phrases is not explicitly marked in English, it can be indicative of the phrase's function in a sentence. As we shall see in Example 5.8, when all noun phrases bear a CASE feature, and when the grammar rules manipulate the values of this feature correctly, the case of certain noun phrases is implicitly determined, although it is not lexically specified. Of course, an alternative would have been to specify only the case of words and phrases that are explicitly marked.
Once extended nominal categories are specified for case in the lexicon, the values of the CASE feature can be used in the grammar to impose case-control constraints. Simplifying somewhat, the case of subjects in English is nominative, and the case of direct objects is accusative. We must take care in the grammar that this constraint is imposed. First, to percolate the value of the CASE feature from the lexical entries to the category NP, the three rules whose mother category is a noun phrase are modified. Then, to impose the constraint, the first and fourth rules of Gagr have to be amended; the sentence-formation rule specifies a requirement for a nominative subject; the rule that combines a transitive verb with a direct object requires that the object be in the accusative case. The modified grammar, Gcase, is depicted in Examples 5.6 and 5.7.

Example 5.6 Gcase, accounting for case control (Rules).

1.    [CAT: s] → [CAT: np, NUM: 4, CASE: nom] [CAT: vp, NUM: 4]
2.    [CAT: vp, NUM: 4] → [CAT: v, NUM: 4]
3.    [CAT: vp, NUM: 4] → [CAT: v, NUM: 4] [CAT: np, NUM: 3, CASE: acc]
4.    [CAT: np, NUM: 4, CASE: 2] → [CAT: d, NUM: 4] [CAT: n, NUM: 4, CASE: 2]
5, 6. [CAT: np, NUM: 4, CASE: 2] → [CAT: pron, NUM: 4, CASE: 2] | [CAT: propn, NUM: 4, CASE: 2]
Example 5.8 depicts a derivation tree for the sentence the shepherds feed them. Note how the value of the variable 3 is “propagated” down the tree all the way to the noun shepherds, which is thus determined to be in the nominative case. Similarly, notice that the value associated with the variable 5 is compatible between two constraints: rule 4 binds it to acc, as does the lexical entry of the pronoun them.
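The role of underspecification here can be illustrated with a small sketch. The following Python fragment is ours and only illustrative: features that are simply left out of a dict play the role of the empty feature structure [ ].

def unify_flat(a, b):
    """Unify two flat feature structures (dicts of atomic values)."""
    out = dict(a)
    for feat, val in b.items():
        if feat in out and out[feat] != val:
            return None            # clash on an atomic value
        out[feat] = val
    return out

shepherds = {'CAT': 'n', 'NUM': 'pl'}                    # CASE underspecified
her       = {'CAT': 'pron', 'NUM': 'sg', 'CASE': 'acc'}  # CASE fixed to acc

subject_requirement = {'CASE': 'nom'}   # what the sentence-formation rule demands

print(unify_flat(shepherds, subject_requirement))  # succeeds: CASE becomes nom
print(unify_flat(her, subject_requirement))        # None: *Her feeds the sheep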
Example 5.7 Gcase, accounting for case control (Lexicon).

sleep  → [CAT: v, NUM: pl]                 sleeps → [CAT: v, NUM: sg]
feed   → [CAT: v, NUM: pl]                 feeds  → [CAT: v, NUM: sg]
lamb   → [CAT: n, NUM: sg, CASE: [ ]]      lambs  → [CAT: n, NUM: pl, CASE: [ ]]
she    → [CAT: pron, NUM: sg, CASE: nom]   her    → [CAT: pron, NUM: sg, CASE: acc]
they   → [CAT: pron, NUM: pl, CASE: nom]   them   → [CAT: pron, NUM: pl, CASE: acc]
Rachel → [CAT: propn, NUM: sg]             Jacob  → [CAT: propn, NUM: sg]
a      → [CAT: d, NUM: sg]                 two    → [CAT: d, NUM: pl]
Exercise 5.6 (*). Show a derivation induced by Gcase on the sentences She feeds the sheep. Jacob loves her.
Exercise 5.7. Explain why similar derivations fail with the strings

∗Her feeds the sheep
∗Jacob loves she
∗Jacob Rachel
Exercise 5.8. Specify a lexicon and grammar rules for imposing case control in a language that marks more nominal categories for case. Such languages include German, Latin, Russian, and Finnish.

5.4 Imposing subcategorization constraints

In this section we provide a naïve solution to the subcategorization problem. Recall that the problem originates from the observation that there exist
Example 5.8 Derivation tree with case control. The following is a derivation tree for the sentence the shepherds feed them. The scope of variables is the entire tree. Recall that when a variable occurs more than once within its scope, at most one of its occurrences is explicated. Here, we chose to list the atomic values next to the AVMs that "set" them. Thus, the values of the NUM features are determined in the lexicon, whereas the values of CASE are set by the rules. The actual location in the tree in which a variable is specified does not change in any way the information encoded in the tree.

[CAT: s
  [CAT: np, NUM: 4, CASE: 3 nom
    [CAT: d, NUM: 4  the]
    [CAT: n, NUM: 4 pl, CASE: 3  shepherds]]
  [CAT: vp, NUM: 4
    [CAT: v, NUM: 4  feed]
    [CAT: np, NUM: 2, CASE: 5 acc
      [CAT: pron, NUM: 2 pl, CASE: 5  them]]]]

This tree represents a derivation that starts with the initial symbol [CAT: s] and ends with a multi-AVM σ, where:

σ =  the: [NUM: 4]   shepherds: [NUM: 4, CASE: nom]   feed: [NUM: 4]   them: [NUM: 2, CASE: acc]

This multi-AVM is unifiable with (but not identical to!) the sequence of lexical entries of the words in the sentence, which is:

the: [NUM: [ ]]   shepherds: [NUM: pl, CASE: [ ]]   feed: [NUM: pl]   them: [NUM: pl, CASE: acc]
Hence, the sentence is in the language generated by the grammar.
several classes of verbs in natural languages; a simplified classification for English (and E0) can be the following: intransitive verbs (with no object), that is, sleep, walk, run, laugh, . . .; and transitive verbs (with a nominal object), that is, feed, love, eat, . . . . Intransitive verbs prohibit objects; transitive verbs require one. A fuller and more general treatment of subcategorization is given in Section 5.5.

To impose subcategorization constraints on the grammar for E0 (refer back to Gagr of Example 5.4, page 169), two steps are employed. First, the lexical entries of verbs are extended such that their subcategorization is specified. The fixed set FEATS of features is extended to also include SUBCAT, and ATOMS is extended to also include trans and intrans. Second, the rules that involve verbs and verb phrases are extended. Each such rule now checks the value of the SUBCAT feature of its main verb and permits the combination of other constituents according to this value: The first rule accounts for intransitive verbs, and prohibits objects; the second rule allows one object to be combined with transitive verbs. The modified grammar, Gsubcat, is listed in Examples 5.9 and 5.10.

Exercise 5.9 (*). Show a derivation tree for she feeds the sheep with Gsubcat. Explain why a derivation of *she sleeps the sheep with Gsubcat would fail.

Example 5.9 Gsubcat, a naïve account of verb subcategorization (Rules).

1.    [CAT: s] → [CAT: np, NUM: 4, CASE: nom] [CAT: vp, NUM: 4]
2.    [CAT: vp, NUM: 4] → [CAT: v, NUM: 4, SUBCAT: intrans]
3.    [CAT: vp, NUM: 4] → [CAT: v, NUM: 4, SUBCAT: trans] [CAT: np, CASE: acc]
4.    [CAT: np, NUM: 4, CASE: 2] → [CAT: d, NUM: 4] [CAT: n, NUM: 4, CASE: 2]
5, 6. [CAT: np, NUM: 4, CASE: 2] → [CAT: pron, NUM: 4, CASE: 2] | [CAT: propn, NUM: 4, CASE: 2]
Example 5.10 Gsubcat, a naïve account of verb subcategorization (Lexicon).

sleep  → [CAT: v, NUM: pl, SUBCAT: intrans]   sleeps → [CAT: v, NUM: sg, SUBCAT: intrans]
feed   → [CAT: v, NUM: pl, SUBCAT: trans]     feeds  → [CAT: v, NUM: sg, SUBCAT: trans]
lamb   → [CAT: n, NUM: sg, CASE: [ ]]         lambs  → [CAT: n, NUM: pl, CASE: [ ]]
she    → [CAT: pron, NUM: sg, CASE: nom]      her    → [CAT: pron, NUM: sg, CASE: acc]
they   → [CAT: pron, NUM: pl, CASE: nom]      them   → [CAT: pron, NUM: pl, CASE: acc]
Rachel → [CAT: propn, NUM: sg]                Jacob  → [CAT: propn, NUM: sg]
a      → [CAT: d, NUM: sg]                    two    → [CAT: d, NUM: pl]
This solution is, as noted, rather naïve. We discuss its limitations and suggest a better, more general solution in Section 5.5. One particular deficiency of this solution can be realized already at this stage. It misses an important generalization over verbs; namely, they agree with their subjects irrespective of their subcategorization requirements. Thus, in the grammar above, this agreement has to be enforced twice, once for each verb type. One feels that it ought to be possible to state the agreement requirement only once. Note, however, that subcategorization of verbs can be extended from two subcategories (intransitive and transitive) to more classes, each indicating the number and type of the verb arguments, as we demonstrate in the next section. Of course, such an extension requires a dedicated rule to generate verb phrases of each possible combination of a verb and its complements. The solution that we propose in the following section does capture linguistic generalizations more elegantly.
5.5 Subcategorization lists
The previous section presented a naïve solution to the problem of verb subcategorization. We now account for the subcategorization data in a more general and elegant way, extending the coverage of our grammar from the smallest fragment E0 to the fragment Esubcat. Recall that in Esubcat different verbs subcategorize for different kinds of complements: noun phrases, infinitival verb phrases, sentences, and so on. Also, some verbs require more than one complement. The idea behind the solution is to store in the lexical entry of each verb, not an atomic feature indicating its subcategory, but rather a list of categories, indicating the verb's appropriate complements (recall that feature structures can be used to represent lists, as shown in Section 4.1). Each verbal lexical entry contains a SUBCAT feature. Its value is a list, encoding the categories of the verb's complements. Example 5.11 lists the lexical entries of some verbs (lists are depicted using the standard ⟨· · ·⟩ notation, but they are, of course, represented as feature structures, using the encoding introduced in Section 4.1).

Example 5.11 Lexical entries of some verbs using subcategorization lists.

sleep → [CAT: v, SUBCAT: elist, NUM: pl]
love  → [CAT: v, SUBCAT: ⟨[CAT: np]⟩, NUM: pl]
give  → [CAT: v, SUBCAT: ⟨[CAT: np], [CAT: np]⟩, NUM: pl]
tell  → [CAT: v, SUBCAT: ⟨[CAT: np], [CAT: s]⟩, NUM: pl]
As we shall presently see, the ability of a verb to combine with its arguments is regulated by the value of the verb's SUBCAT feature. The verb sleep is intransitive. Anticipating the combination rules that add complements to a
verb based on the verb's subcategorization list, intransitivity is reflected by an empty subcategorization list; similarly, love subcategorizes for a single noun phrase; give has two complements, both noun phrases; and tell expects a noun phrase and a sentence. Linguistically, this can be seen as if the verbs select the kinds of phrases that can serve as their complements.

In our former account for verb subcategorization (the grammar Gsubcat of Example 5.9), we have used the atomic valued feature SUBCAT, whose values indicated types of verbs, to denote what complements each verb subcategorizes for. Using a list-valued SUBCAT, the lexical entries of verbs explicitly list their complements. This gives rise to better linguistic generalizations, as we presently show. It also exemplifies a current trend in linguistic theories, namely, lexicalism (see Section 1.8). With a list representation of subcategorization, not only does the verb have the ability to "rule out" certain complements, as is the case when subcategorization is represented by atoms; rather, verbs can "select" their complements directly, by listing a specification for the complement in their lexical entries. Furthermore, such an approach enables an account of partially saturated verb phrases, that is, verb phrases in which some of the complements are realized but others are not. In fact, such verb phrases have representations that are very similar to those of verbs with a smaller number of complements. For example, the feature structure associated with herd the sheep has an empty SUBCAT list, just like the feature structure that is associated with the intransitive verb sleep. This corresponds well to the combinatorial properties of the two phrases.

Whenever lists are used to encode grammatical information, the issue of the order of the elements in the list has to be addressed. In the preceding example we assumed the default ordering of verb complements in English. Indeed, this is an oversimplification, even for English, and for many natural languages whose constituent order is less strict than in English, such a solution might not be appropriate at all. One might suggest using sets rather than lists, but sets give rise to severe computational problems that lie outside the scope of this book. For the purpose of the following description, we assume that the complement order in English is fixed.

The grammar rules must be modified to reflect the additional wealth of information in the lexical entries. In fact, this wealth can lead to a dramatic reduction in the number of grammar rules necessary for handling verbs. Example 5.12 lists a grammar fragment that deals with the combination of verb phrases and their complements. This example requires careful consideration. The first rule states that a sentence can be formed by combining a noun phrase with a verb phrase provided
Example 5.12 VP rules using subcategorization lists.

[CAT: s] → [CAT: np] [CAT: v, SUBCAT: elist]

[CAT: v, SUBCAT: 2] → [CAT: v, SUBCAT: [FIRST: [CAT: 4], REST: 2]]  [CAT: 4]
that the SUBCAT list of the verb phrase is empty.1 In this case the verb phrase is said to be saturated, as all its complements have been consumed. Notice that the subject receives a unique status. Because it is (at least in Esubcat) the only verb complement that precedes the verb, it is handled by the first rule; all the other verb complements, which are postverbal, are handled by the second rule.

The second rule unifies the treatment of all the different verbs' subcategories. It states that a verb phrase with a subcategorization list containing X as a first element can be combined with a complement of category X; this is exactly what the presence of 4 in the subcategorization list means. Moreover, the rule states that the subcategorization list of the mother verb phrase (the head of the rule) is the list containing all but the first element of the daughter VP. In the simple case when the daughter VP has a subcategorization list containing one element, what the mother has is simply elist. This rule can be recursively applied, each time consuming one element of the list, until an empty list is obtained. Then the first rule can be applied and a sentence is formed. This is how the order of the elements on the list determines the grammatical ordering of the verb's complements: the recursive applications of this rule only permit complements ordered according to the original subcategorization list of the verb.

Example 5.13 lists a derivation tree for the sentence Rachel gave the sheep water according to this grammar (when the internal structure of some subtree is irrelevant, its root is connected directly with the string at its yield, which is circled). A slightly more complex example, with sentential complements, is depicted in Example 5.14 (the feature SUBCAT is shortened to SC).

1. Other properties (e.g., agreement features) of the constituents, as well as constraints on them, are suppressed for simplicity. We also ignore the difference between verbs and verb phrases in this example.
Example 5.13 A derivation tree. In this example, pay special attention to the values of the SUBCAT feature (shortened to SC in the tree) along the branch of the tree whose CAT values are v. See how the subcategorization lists of the three verbal projections shrink as each object "consumes" one element of the list. (We assume that the noun water is added to the lexicon.) The tree for Rachel gave the sheep water, shown as a labeled bracketing:

[CAT: s
  [CAT: np  Rachel]
  [CAT: v, SC: ⟨ ⟩
    [CAT: v, SC: ⟨[CAT: 2]⟩
      [CAT: v, SC: ⟨[CAT: 1], [CAT: 2]⟩  gave]
      [CAT: 1 np  the sheep]]
    [CAT: 2 np  water]]]
Exercise 5.10. Explain why the two derivation trees depicted in Examples 5.13 and 5.14 are inappropriate for the strings ∗Rachel told the sheep water and ∗Jacob gave Laban he loved Rachel, respectively. Indicate exactly where unification will fail.

Exercise 5.11. Show a derivation tree for Jacob told Laban Rachel told him she loved him.
In the above grammar, categories on subcategorization lists are represented as atomic symbols (or variables). Naturally, this is a simplification; the method outlined here can be used with more complex encodings of categories. In other words, the specification of categories in a subcategorization list can include all the constraints that the verb imposes on its complements. For example, the lexical entry of the German verb geben (to give) can state that the first complement must be in the dative case, whereas the second must be accusative. This will allow verb phrases such as the first one in Example 5.15, while eliminating the last two.
Example 5.14 A derivation tree. The tree for Jacob told Laban he loved Rachel (SUBCAT shortened to SC):

[CAT: s
  [CAT: np  Jacob]
  [CAT: v, SC: ⟨ ⟩
    [CAT: v, SC: ⟨[CAT: 2]⟩
      [CAT: v, SC: ⟨[CAT: 3], [CAT: 2]⟩  told]
      [CAT: 3 np  Laban]]
    [CAT: 2 s
      [CAT: np  he]
      [CAT: v, SC: ⟨ ⟩
        [CAT: v, SC: ⟨[CAT: 1]⟩  loved]
        [CAT: 1 np  Rachel]]]]]
Example 5.15 Subcategorization imposes case constraints.

Ich gebe dem      Hund den      Knochen
I   give the(dat) dog  the(acc) bone
'I give the dog the bone'

∗Ich gebe den      Hund den      Knochen
 I   give the(acc) dog  the(acc) bone

∗Ich gebe dem      Hund dem      Knochen
 I   give the(dat) dog  the(dat) bone
The lexical entry of gebe, then, could be:

L(gebe) = [CAT: v, NUM: sg, SUBCAT: ⟨[CAT: np, CASE: dat], [CAT: np, CASE: acc]⟩]
In order to account for subcategorization of complex information (rather than of atomic category symbols), the VP rule that manipulates subcategorization lists has to be slightly modified. The revised rule reflects the fact that the subcategorized information is not the value of the CAT feature, but rather the entire verb complement (compare the following rule to Example 5.12):

[CAT: v, SUBCAT: 2] → [CAT: v, SUBCAT: [FIRST: 3, REST: 2]]  3 [ ]
The model of subcategorization implied by this rule does not necessarily mean that all the features of the subcategorized complement must be specified in the verb’s lexical entry. While the rule does share the entire feature structure of the complement with the (first element on the list) value of SUBCAT, notice that the verb can specify partial information, relating only to the subcategorization requirements, and underspecify other features. For example, the lexical entry of gebe above lists the category and the case of the subcategorized complement, but says nothing about its number or gender, which are irrelevant for subcategorization. Exercise 5.12 (*). Show a derivation tree for Rachel gave the sheep water using the above rule. Compare the tree to Example 5.13.
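A small sketch may help make the list-consuming rule concrete. The following Python fragment is an illustration under the same dict encoding as above, not the book's implementation; the exact-match test compatible is a simplification of unification, and the gebe-like entry is the one discussed above.

def compatible(spec, phrase):
    """True if every feature the verb specifies is matched by the phrase."""
    return all(phrase.get(f) == v for f, v in spec.items())

def consume(verbal, complement):
    """Combine a verbal projection with its next complement, if it fits.

    The mother's SUBCAT is the REST of the daughter's SUBCAT list.
    """
    subcat = verbal['SUBCAT']
    if subcat == 'elist' or not compatible(subcat['FIRST'], complement):
        return None
    return {'CAT': 'v', 'SUBCAT': subcat['REST']}

# A gebe-style entry: first complement dative, second accusative.
gebe = {'CAT': 'v',
        'SUBCAT': {'FIRST': {'CAT': 'np', 'CASE': 'dat'},
                   'REST': {'FIRST': {'CAT': 'np', 'CASE': 'acc'},
                            'REST': 'elist'}}}

vp = consume(gebe, {'CAT': 'np', 'CASE': 'dat'})    # "gebe dem Hund"
vp = consume(vp, {'CAT': 'np', 'CASE': 'acc'})      # "... den Knochen"
print(vp)                                           # SUBCAT is now elist
print(consume(gebe, {'CAT': 'np', 'CASE': 'acc'}))  # None: wrong first complement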
To complete the discussion, we present in Examples 5.16 and 5.17 a grammar for Esubcat, G3. Note how the number of rules is reduced (compared to the grammar Gsubcat of Example 5.9, page 176), and how the complexity of the lexical entries, in particular the verbs, is increased. This will be a recurring trade-off in the linguistic theories discussed in later chapters.

Example 5.16 G3, a complete Esubcat-grammar (Rules).

[CAT: s] → [CAT: np, NUM: 4, CASE: nom] [CAT: v, NUM: 4, SUBCAT: elist]

[CAT: v, NUM: 4, SUBCAT: 2] → [CAT: v, NUM: 4, SUBCAT: [FIRST: 3, REST: 2]]  3 [ ]

[CAT: np, NUM: 4, CASE: 2] → [CAT: d, NUM: 4] [CAT: n, NUM: 4, CASE: 2]

[CAT: np, NUM: 4, CASE: 2] → [CAT: pron, NUM: 4, CASE: 2] | [CAT: propn, NUM: 4, CASE: 2]
Example 5.17 G3, a complete Esubcat-grammar (Lexicon).

sleep → [CAT: v, SUBCAT: elist, NUM: pl]
love  → [CAT: v, SUBCAT: ⟨[CAT: np, CASE: acc]⟩, NUM: pl]
give  → [CAT: v, SUBCAT: ⟨[CAT: np, CASE: acc], [CAT: np]⟩, NUM: pl]
tell  → [CAT: v, SUBCAT: ⟨[CAT: np, CASE: acc], [CAT: s]⟩, NUM: pl]
Example 5.17 (cont.)

lamb   → [CAT: n, NUM: sg, CASE: 2]        lambs → [CAT: n, NUM: pl, CASE: 2]
she    → [CAT: pron, NUM: sg, CASE: nom]   her   → [CAT: pron, NUM: sg, CASE: acc]
Rachel → [CAT: propn, NUM: sg]             Jacob → [CAT: propn, NUM: sg]
a      → [CAT: d, NUM: sg]                 two   → [CAT: d, NUM: pl]
CAT :
5.6 Long-distance dependencies
Encoding grammatical categories as feature structures, as we have done above, is very useful in the treatment of another phenomenon that is not handled by our E0 grammar, namely, unbounded dependencies, which are included in the grammar fragment Eldd. Recall from Section 1.3.4 (page 9) that such phenomena involve a "missing" constituent that is realized outside the clause from which it is missing, as in The shepherd wondered whom Jacob loved __. Note that a different kind of long-distance dependency is exhibited by relative clauses, as in The shepherd whom Jacob loved __ smiles. Such dependencies call for a different treatment and they induce different rules; the following discussion does not refer to such phenomena, but see Section 5.7.

Phrases such as whom Jacob loved __ or who __ loved Rachel are instances of a category that we haven't discussed yet. They are basically sentences, with a constituent which is "moved" from its default position and realized as a wh-pronoun in front of the phrase. We represent such phrases by using s, the same category we used for sentences, but to distinguish them from (saturated) declarative sentences we add a feature, QUE, to the category (see a discussion of such subcategories of sentences in the material that follows). The value of QUE will be the atom '+' in sentences with an interrogative pronoun realizing a transposed constituent. We also add a lexical entry for the pronoun whom:

whom → [CAT: pron, CASE: acc, QUE: +]
Finally, we update the rule that derives pronouns so that it propagates the value of QUE from the lexicon to higher projections of the pronoun:

[CAT: np, NUM: 1, CASE: 3, QUE: 5] → [CAT: pron, NUM: 1, CASE: 3, QUE: 5]
We now propose an extension of G3 (Example 5.16) that can handle long-distance dependencies of the kind exemplified above. The idea is to allow partial phrases, such as Jacob loved __, to be derived from a category that is similar to the category of the full phrase, in this case Jacob loved Rachel, but to signal in some way that a constituent, in this case a noun phrase, is missing. We use a dedicated feature whose value records the missing constituent; for historical reasons, we use the name "slash" for this feature. We thus extend G3 with two additional rules, based on the first two rules of G3:

(3) [CAT: s, SLASH: 4] → [CAT: np, NUM: 1, CASE: nom] [CAT: v, NUM: 1, SUBCAT: elist, SLASH: 4]

(4) [CAT: v, NUM: 1, SUBCAT: 2, SLASH: 4] → [CAT: v, NUM: 1, SUBCAT: [FIRST: 4, REST: 2]]
Compare these rules to the first two rules of G3. We add a feature, SLASH, to verb phrases and sentences. The value of this feature is a feature structure encoding categorial information: In rule (4), the value of SLASH on the left-hand side element of the rule is set to the "transposed" complement of the verb. This means that the verb can be "promoted" to a verb phrase, that is, be relieved of the requirement to be combined with a complement, and information about the complement that is missing is recorded as the value of SLASH. Rule (3) simply propagates the value of SLASH from a verb phrase to a sentence. With the two additional rules, it is possible to derive partial phrases, such as Jacob loved __, as depicted in Example 5.18. Note that we have now defined subcategories of the base category 'sentence': feature structures whose CAT feature is valued 's' are now further classified according to the value of their QUE and SLASH features. QUE is '+' in (and only in) interrogative sentences; SLASH is nonempty in case (and only in case) the sentence is missing a constituent. Compare this notion of subcategories
Example 5.18 A derivation tree for Jacob loved __.

[CAT: s, SLASH: 4
  [CAT: np, NUM: 1, CASE: 2
    [CAT: propn, NUM: 1 sg, CASE: 2 nom  Jacob]]
  [CAT: v, NUM: 1, SLASH: 4, SUBCAT: 8
    [CAT: v, NUM: 1, SUBCAT: [FIRST: 4 [CAT: np, CASE: acc], REST: 8 elist]  loved]]]
with the case of verbs, where a dedicated feature, SUBCAT, is used for the more subtle classification.

Now that partial phrases can be derived, with a record of their "missing" constituent, all that is needed is a rule for creating "complete" sentences by combining the missing category with a "slashed" sentence. The following rule is defined in a general way. It does not commit as to the category of the dislocated element; it simply combines any category with a sentence in which this very same category is missing (as indicated by the value of SLASH), provided that this category is marked as 'QUE +'. The value of QUE is propagated to the mother to indicate that the sentence is interrogative rather than declarative:

(5) [CAT: s, QUE: 5] → 4 [QUE: 5 +]  [CAT: s, SLASH: 4]
Thus, a complete derivation for whom Jacob loved __ is depicted in Example 5.19. Note how the noun phrase dominating whom is reentrant with the value of SLASH in the partial sentence.

At first sight, rule (5) might seem to be extremely overgenerating: it permits the combination of any category with a sentence missing that category, and thus could, in principle, allow the combination of, for example, a verb with a
Example 5.19 A derivation tree for whom Jacob loved __.

[CAT: s, QUE: 5
  4 [CAT: np, CASE: 3, QUE: 5
      [CAT: pron, CASE: 3 acc, QUE: 5 +  whom]]
  [CAT: s, SLASH: 4
    [CAT: np, NUM: 1, CASE: 2
      [CAT: propn, NUM: 1 sg, CASE: 2 nom  Jacob]]
    [CAT: v, NUM: 1, SLASH: 4, SUBCAT: elist
      [CAT: v, NUM: 1, SUBCAT: ⟨ 4 ⟩  loved]]]]
sentence missing its verb. However, since such a phenomenon is not exhibited in the language fragment we are accounting for here, we must prevent such a combination. This is done by ensuring that no rule loads a verbal category into the SLASH feature of a sentence. Since in our fragment the only rule that loads the SLASH feature is rule (4), and what is loaded into the SLASH feature of the mother is the first element on the SUBCAT list of the daughter, and since in our fragment there are no verbs in subcategorization lists, this is guaranteed. This is a good example of the intricate and complex nature of larger-scale unification grammars, where information from many sources (several rules and lexical entries) is interacting during derivations.

In order to derive the full sentence Rachel wondered whom Jacob loved __ we first need a lexical entry for the verb wondered. It is a verb, so its CATegory is v, and because it subcategorizes for an interrogative sentence, its SUBCATegory is a list of a single member, a sentence whose QUE feature is '+':

wondered → [CAT: v, NUM: [ ], SUBCAT: ⟨[CAT: s, QUE: +]⟩]
Example 5.20 A derivation tree for Rachel wondered whom Jacob loved __.

[CAT: s
  [CAT: np, NUM: 3, CASE: 4 nom
    [CAT: propn, NUM: 3 sg, CASE: 4  Rachel]]
  [CAT: v, NUM: 3, SUBCAT: elist
    [CAT: v, NUM: 3, SUBCAT: ⟨ 1 ⟩  wondered]
    1 [CAT: s, QUE: +  whom Jacob loved __]]]
The derivation tree for the entire sentence is thus as depicted in Example 5.20.

In Example 5.19, the filler of the gap (i.e., the phrase that is "dislocated") is realized immediately to the left of the clause in which the gap occurs. Of course, this need not always be the case. By their very nature, unbounded dependencies can hold across several clause boundaries. Typical examples are:

The shepherd wondered whom Jacob loved __.
The shepherd wondered whom Laban thought Jacob loved __.
The shepherd wondered whom Laban thought Leah claimed Jacob loved __.

Also, the dislocated constituent does not have to be an object:

The shepherd wondered whom __ loved Rachel.
The shepherd wondered whom Laban thought __ loved Rachel.
The shepherd wondered whom Laban thought Leah claimed __ loved Rachel.
The solution we proposed for the simple case of unbounded dependencies can be easily extended to the more complex examples. The solution amounts to three components: a slash introduction rule, slash propagation rules, and a gap filler rule. In the simple solution, slashes are introduced by rule (4) whenever a verb phrase is lacking its object. Rule (3) propagates the value of SLASH from a verb phrase to a sentence, and rule (5) "consumes" the slash by combining a filler with a "slashed" sentence. In order to account for filler-gap relations that hold across several clauses, all that needs to be done is to add more slash propagation rules. For example, in The shepherd wondered whom Laban thought Jacob loved __, the slash is introduced by the verb phrase loved __, and is propagated to the sentence
Jacob loved __ by rule (3). This sentence is the object of the verb thought; therefore, we need a rule that propagates the value of SLASH from a sentential object to the verb phrase of which it is an object. Such a rule is (6):

(6) [CAT: v, NUM: 1, SUBCAT: 12, SLASH: 4] → [CAT: v, NUM: 1, SUBCAT: [FIRST: 8, REST: 12]]  8 [SLASH: 4]

Then, the slash is propagated from the verb phrase thought Jacob loved __ to the sentence Laban thought Jacob loved __. Rule (7) handles this case:

(7) [CAT: s, SLASH: 4] → [CAT: np, NUM: 5, CASE: nom] [CAT: v, NUM: 5, SLASH: 4, SUBCAT: elist]
With these additional rules, it is possible to derive the more complex cases, as depicted in Example 5.21 (again, SUBCAT is shortened to SC).
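The interplay of the three components can also be sketched in code. The functions below are only illustrative (the dict encoding again ignores agreement and reentrancy, and the names are ours); they mimic rules (4), (3), and (5) for the simple, single-clause case.

def introduce_slash(verb):
    """Rule (4): promote a verb whose complement is missing; record it."""
    subcat = verb['SUBCAT']
    return {'CAT': 'v', 'SUBCAT': subcat['REST'], 'SLASH': subcat['FIRST']}

def propagate_slash(subject_np, vp):
    """Rule (3): a gapped sentence from a subject and a saturated, slashed VP."""
    if vp['SUBCAT'] != 'elist':
        return None
    return {'CAT': 's', 'SLASH': vp['SLASH']}

def fill_gap(filler, gapped_s):
    """Rule (5): combine a QUE-marked filler with a sentence missing it."""
    if filler.get('QUE') != '+' or filler['CAT'] != gapped_s['SLASH']['CAT']:
        return None
    return {'CAT': 's', 'QUE': '+'}

loved = {'CAT': 'v', 'SUBCAT': {'FIRST': {'CAT': 'np'}, 'REST': 'elist'}}
vp    = introduce_slash(loved)                      # loved __
s_gap = propagate_slash({'CAT': 'np'}, vp)          # Jacob loved __
whom  = {'CAT': 'np', 'CASE': 'acc', 'QUE': '+'}    # np projected from whom
print(fill_gap(whom, s_gap))                        # whom Jacob loved __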
Exercise 5.13. Show a derivation tree for The shepherd wondered whom Laban thought Leah claimed Jacob loved __.
Finally, to account for gaps in the subject position, all that is needed is an additional slash-introduction rule. In this case, the sentence-formation rule (1) is duplicated, and the first element of the right-hand side is omitted; this omission is recorded in the value of SLASH in the mother:

(8) [CAT: s, SLASH: [CAT: np, NUM: 1, CASE: nom]] → [CAT: v, NUM: 1, SUBCAT: elist]
With this new rule, a tree for who __ loved Rachel is depicted in Example 5.22. Note that the lexical entry of the interrogative pronoun who is similar to that of whom, the only difference being that the CASE feature of the former is nom, rather than acc.

Exercise 5.14. Show a derivation tree for The shepherd wondered whom Laban thought __ loved Rachel.

Exercise 5.15. Explain why a derivation for ∗whom __ loved Rachel would fail.
Example 5.21 A derivation tree for whom Laban thought Jacob loved __. The SLASH value (the tag 4, the missing noun phrase) is introduced at the verb phrase loved __, propagated to the embedded sentence Jacob loved __ by rule (3), to the verb phrase thought Jacob loved __ by rule (6), and to the sentence Laban thought Jacob loved __ by rule (7); it is finally discharged at the topmost sentence by rule (5), which combines it with the filler whom (QUE: +).
5.7 Relative clauses
In the previous section we described in detail how unification grammars utilize the mechanism of reentrancy to account for long-distance dependencies. In this section we use the very same mechanism to account for relative clauses,
Example 5.22 A derivation tree for who __ loved Rachel.

[CAT: s, QUE: 6
  4 [CAT: np, CASE: 3 nom, QUE: 6
      [CAT: pron, CASE: 3 nom, QUE: 6  who]]
  [CAT: s, NUM: 1, SLASH: 4
    [CAT: v, NUM: 1, SUBCAT: elist
      [CAT: v, NUM: 1, SUBCAT: ⟨ 8 ⟩  loved]
      8 [CAT: np, CASE: 2
          [CAT: propn, NUM: sg, CASE: 2 acc  Rachel]]]]]
a similar case of remote dependencies. Here, too, reentrancies are the main representational mechanism, relating the “missing” complement of the verb with the noun (the head of the relative clause) that “fills its place.” Recall from Section 1.3 that the language fragment Erelcl includes relative clauses in which the head noun fills the function of either the subject or the direct object of the clause. Again, our starting point is the grammar G3 (Example 5.16). As we did in the case of long-distance dependencies, we begin by adding rules that allow sentences in which an element (here, either the subject or the direct object) are missing. We do so by duplicating the basic sentence-formation rule; to distinguish between “full” sentences and ones in which an element is missing, we use a special category for the latter, namely s . Also, we record the missing element as the value of the SLASH feature in s . Consider Example 5.23. The first rule is the sentence-formation rule of G3 . The second rule accounts for gaps in the subject position. It allows “sentences” (in fact, phrases of category s’) consisting of verb phrase only; the “missing”
subject is recorded in the SLASH feature of the s′. Similarly, the third rule creates "sentences" from a noun-phrase subject followed by a verb phrase that still has an object on its subcategorization list. This object is duly recorded on the SLASH feature of the mother.

Example 5.23 Relative clauses: creating "gapped" sentences.

[CAT: s] → [CAT: np, NUM: 4, CASE: nom] [CAT: v, NUM: 4, SUBCAT: elist]

[CAT: s′, SLASH: [CAT: np, NUM: 4, CASE: nom]] → [CAT: v, NUM: 4, SUBCAT: elist]

[CAT: s′, SLASH: 7] → [CAT: np, NUM: 4, CASE: nom] [CAT: v, NUM: 4, SUBCAT: [FIRST: 7, REST: elist]]
Next, we need a rule to create relative clauses. We introduce the relativizer that, whose category is rel with no additional features (other relativizers, such as who, can be added in the same way). The simple rule that combines a relativizer with a gapped sentence is depicted in Example 5.24; the result of the combination is a relative clause, whose category is relcl. Observe that the value of SLASH is propagated from the s′ to the relcl. Finally, we add relative clauses as modifiers of nouns in the second rule in Example 5.24. Based on the rule that creates noun phrases by combining a determiner with a noun, this rule allows an optional relative clause following the noun. To represent the fact that the head noun fills the gap in the relative clause, the rule shares the value of SLASH in the relative clause with the entire feature structure of the noun phrase.

To demonstrate the functionality of the grammar, Examples 5.25 and 5.26 depict derivation trees for the noun phrases a lamb that loved Rachel and a lamb that Rachel loved, respectively. Observe how reentrancy (the tag 7) is used to indicate the function of the head noun in the relative clause. Finally, it is worth noting that the solution advocated in Examples 5.25 and 5.26 is different from analyses of the same phenomena that were common in the linguistic literature and that resorted to empty categories, or traces.
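The reentrancy that relates the head noun phrase to the gap can be made concrete with a short sketch. In the following Python fragment (illustrative only; the dict encoding and function names are ours), one and the same object serves both as the head noun phrase and as the SLASH value inside the relative clause, mirroring the tag 7 of Examples 5.25 and 5.26.

def subject_gap_sentence(missing_subject, vp):
    """Second rule of Example 5.23: an s' made of a VP only, subject missing."""
    return {'CAT': "s'", 'SLASH': missing_subject}

def relative_clause(relativizer, gapped_s):
    """First rule of Example 5.24: relativizer + gapped sentence -> relcl."""
    return {'CAT': 'relcl', 'SLASH': gapped_s['SLASH']}

# The head noun phrase "a lamb"; the very same object ends up under SLASH.
head_np = {'CAT': 'np', 'NUM': 'sg', 'CASE': 'nom'}
clause = relative_clause({'CAT': 'rel'},                  # "that"
                         subject_gap_sentence(head_np,    # gap = head np
                                              {'CAT': 'v', 'SUBCAT': 'elist'}))

print(clause['SLASH'] is head_np)   # True: token identity, as with the tag 7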
Example 5.24 Relative clauses: filling the gap.

[CAT: relcl, SLASH: 7] → [CAT: rel] [CAT: s′, SLASH: 7]

7 [CAT: np, NUM: 4, CASE: 2] → [CAT: d, NUM: 4] [CAT: n, NUM: 4, CASE: 2] [CAT: relcl, SLASH: 7]

that → [CAT: rel]
Example 5.25 Derivation tree of a lamb that loved Rachel. The mother noun phrase (tagged 7, nominative) dominates the determiner a, the noun lamb, and a relative clause [CAT: relcl, SLASH: 7]; within it, the relativizer that combines with a gapped sentence [CAT: s′, SLASH: 7] that consists of the verb phrase loved Rachel only, so the head noun phrase fills the subject gap.
In those analyses, the implicit subject of loved in a lamb that loved Rachel is realized as a node in the derivation tree, whose only daughter is the empty word, ε. It is of course possible to implement such an analysis with our grammar.
Example 5.26 Derivation tree of a lamb that Rachel loved. The mother noun phrase (tagged 7, accusative) dominates the determiner a, the noun lamb, and a relative clause [CAT: relcl, SLASH: 7]; within it, the relativizer that combines with a gapped sentence [CAT: s′, SLASH: 7] consisting of the subject Rachel and a verb phrase whose object (the first element on the SUBCAT list of loved) is the head noun phrase itself.
Refer back to the grammar of relative clauses listed in Example 5.23. The second rule of that grammar, which creates sentences with a "gap" in the subject position, can be replaced by the two rules listed in Example 5.27. With these rules, a derivation tree for a lamb that loved Rachel is given in Example 5.28 (cf. the tree of Example 5.25).

Example 5.27 Relative clauses with empty categories.

[CAT: s′, SLASH: 1] → 1 [CAT: np, NUM: 4, CASE: nom] [CAT: v, NUM: 4, SUBCAT: elist]

[CAT: np, NUM: 4, CASE: nom] → ε
Example 5.28 An analysis of relative clauses with empty categories. In the structure induced by the modified grammar on the string a lamb that loved Rachel, the mother noun phrase (tagged 7) dominates the determiner a, the noun lamb, and a relative clause [CAT: relcl, SLASH: 7]; within the relative clause, the gapped sentence [CAT: s′, SLASH: 7] now contains an explicit noun-phrase node, tagged 7, that dominates the empty string ε and precedes the verb phrase loved Rachel.
In the above tree, an explicit noun phrase (the tag 7 ) dominates an empty string. This is a totally different tree from the one in Example 5.25; it may serve to explicate theoretical views regarding the (cognitive) reality of traces, views that are not supported by the tree in Example 5.25.
Exercise 5.16. Extend the relative-clause grammar such that the following noun phrases are also accounted for:

A lamb that Rachel thought Jacob loved
A lamb that Rachel hoped Laban thought Jacob loved
Exercise 5.17. Observe that because of the propagation of the (value of the) SLASH feature, a noun phrase that includes a relative clause is inherently marked
for case; this case is determined by the function of the "missing" noun in the relative clause. When such a noun phrase is embedded in a full sentence, it may cause a problem. Explain what the problem is and suggest a way to solve it.

5.8 Subject and object control

In this section we deal with the phenomena of subject control and object control, which are part of the language fragment Econtrol. As a reminder, such phenomena capture the differences between the "understood" subjects of the infinitive verb phrase to work seven years in the following sentences:

Jacob promised Laban to work seven years.
Laban persuaded Jacob to work seven years.
In the first example, the implicit subject of work is Jacob, the subject of the matrix verb promised; whereas in the second example, the subject of work is understood to be Jacob, which is the object of the matrix verb persuaded. These differences have to be reflected in the structure that a grammar fragment for Econtrol assigns to the two sentences (which have an isomorphic phrase structure). Unification grammars and, in particular, internalized categories can assign appropriate structures very elegantly.

The key observation in the solution is that the differences between the two example sentences stem from differences in the matrix verbs. Promise is a subject-control verb; it subcategorizes for two objects, a noun phrase and an infinitival verb phrase, and the implicit subject of the verb phrase is the subject of promise itself. On the other hand, persuade is object control: it also subcategorizes for two objects, a noun phrase and an infinitival verb phrase, but the implicit subject of the latter is the object of persuade itself. In short, subject control or object control is a lexical feature of verbs of this kind.

Exercise 5.18 (*). For each of the following verbs, determine whether it is subject control, object control or both: allow, ask, beg, demand, order, permit, remind.

Our departure point is the grammar G3 of Example 5.16. We modify it by adding a SUBJ feature to verb phrases, whose value is a feature structure associated with the phrase that serves as the verb's subject. A grammar fragment, G4, implementing this modified representation, is listed in Examples 5.29 and 5.30. The next step is to account for infinitival verb phrases. This can be easily done by adding a new feature, VFORM, to verbal projections. The values of this feature can represent the form of the verb: As a minimal assumption, let us use
Example 5.29 G4: explicit SUBJ values (Rules).

[CAT: s] → 1 [CAT: np, CASE: nom, NUM: 7] [CAT: v, NUM: 7, SUBCAT: elist, SUBJ: 1]

[CAT: v, NUM: 7, SUBCAT: 4, SUBJ: 1] → [CAT: v, NUM: 7, SUBCAT: [FIRST: 2, REST: 4], SUBJ: 1]  2 [ ]

[CAT: np, NUM: 7, CASE: 6] → [CAT: d, NUM: 7] [CAT: n, NUM: 7, CASE: 6]

[CAT: np, NUM: 7, CASE: 6] → [CAT: pron, NUM: 7, CASE: 6] | [CAT: propn, NUM: 7, CASE: 6]
two values, fin for finite verbs and inf for infinitival ones (other values might distinguish between bare infinitives and to-infinitives, or indicate the tense of finite verbs). We thus augment the lexicon of G4 with the following entry (assuming that work is intransitive; we also assume for the sake of simplicity that to work is a single lexical item):

to work → [CAT: v, VFORM: inf, SUBCAT: elist, SUBJ: [CAT: np]]
Finally, we must show the lexical entries of verbs such as promise or persuade. Consider promised first; it has two complements: an accusative noun phrase object and an infinitive verb phrase object. Therefore, its SUBCAT list will contain exactly two elements. The first element is simply an accusative noun phrase; the second element is more complicated. It is an infinitive verb phrase, which raises the question: what is the value of the SUBJ feature of this infinitive verb phrase (which is itself an element on the SUBCAT list of promised)? Because we know that the implied subject of the infinite verb phrase is, in fact, the subject of
Example 5.30 G4: explicit SUBJ values (Lexicon).

sleep → [CAT: v, SUBCAT: elist, SUBJ: [CAT: np, CASE: nom], NUM: pl]
love  → [CAT: v, SUBCAT: ⟨[CAT: np, CASE: acc]⟩, SUBJ: [CAT: np, CASE: nom], NUM: pl]
give  → [CAT: v, SUBCAT: ⟨[CAT: np, CASE: acc], [CAT: np]⟩, SUBJ: [CAT: np, CASE: nom], NUM: pl]

lamb   → [CAT: n, NUM: sg, CASE: 6]        lambs → [CAT: n, NUM: pl, CASE: 6]
she    → [CAT: pron, NUM: sg, CASE: nom]   her   → [CAT: pron, NUM: sg, CASE: acc]
Rachel → [CAT: propn, NUM: sg]             Jacob → [CAT: propn, NUM: sg]
a      → [CAT: d, NUM: sg]                 two   → [CAT: d, NUM: pl]
the matrix verb, we can simply specify it in the lexical entry of promised using a reentrancy: the value of the SUBJ feature of promised, which is the subject of the matrix verb, will be token-identical to the subject of the second element in the SUBCAT list of promised. This rather complex lexical entry is depicted as
follows:

promised → [CAT: v, VFORM: fin, NUM: [ ],
            SUBCAT: ⟨[CAT: np, CASE: acc], [CAT: v, VFORM: inf, SUBJ: 1]⟩,
            SUBJ: 1 [CAT: np, CASE: nom]]
With this lexical entry, Example 5.31 depicts a derivation tree of the sentence Jacob promised Laban to work (ignoring agreement for brevity). Note how the subject of to work is identified with the subject of the matrix verb and all its projections (through the reentrancy tag 1). Again, SUBCAT is shortened to SC.

Example 5.31 A derivation tree for Jacob promised Laban to work.

[CAT: s
  1 [CAT: np, CASE: 6 nom
      [CAT: propn, CASE: 6  Jacob]]
  [CAT: v, VFORM: fin, SUBJ: 1, SC: elist
    [CAT: v, VFORM: fin, SUBJ: 1, SC: ⟨ 3 ⟩
      [CAT: v, VFORM: fin, SUBJ: 1, SC: ⟨ 2, 3 ⟩  promised]
      2 [CAT: np, CASE: 7 acc
          [CAT: propn, CASE: 7  Laban]]]
    3 [CAT: v, VFORM: inf, SUBJ: 1  to work]]]
The differences between subject control verbs such as promised and object control verbs such as persuaded can be accounted for by a simple change in the lexical entries of the verbs. The only difference between the lexical entries of promised and persuaded is that in the latter, the SUBJ value of the infinitival verb phrase is reentrant with the first element on the SUBCAT list of the matrix verb, rather than with its SUBJ value:

persuaded → [CAT: v, VFORM: fin, NUM: [ ],
             SUBCAT: ⟨ 1 [CAT: np, CASE: acc], [CAT: v, VFORM: inf, SUBJ: 1]⟩,
             SUBJ: [CAT: np, CASE: nom]]

Exercise 5.19. Show a derivation tree for the sentence Laban persuaded Jacob to work.
Note that while the phrase structures of the trees assigned to the sentences

Jacob promised Laban to work seven years.
Laban persuaded Jacob to work seven years.

by our grammar are isomorphic, the feature structures associated with nodes in these trees are significantly different. In other words, the use of unification grammars in this case enables us to overcome one of the major limitations of context-free grammars, namely, the fact that the only structure assigned to strings is their phrase structure (tree).

Exercise 5.20. Infinite verb phrases subcategorize for a subject, which is not realized explicitly. Explain why the nonsentence ∗Jacob promised Laban Jacob to work cannot be generated with the grammar described above.

Exercise 5.21 (*). While embedded infinitival verb phrases cannot be combined with explicit subjects, the above grammar allows nonsentences in which the matrix verb is infinite, such as ∗Jacob to work. What can be done to eliminate such strings?
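The lexical nature of the subject/object control distinction can be illustrated with a short sketch in which shared Python objects play the role of reentrancies. The function and feature names are ours and only the sharing pattern matters.

def control_verb(kind):
    """Build a schematic lexical entry for a control verb (illustrative)."""
    subject   = {'CAT': 'np', 'CASE': 'nom'}    # the matrix subject
    np_object = {'CAT': 'np', 'CASE': 'acc'}    # the matrix object
    inf_vp    = {'CAT': 'v', 'VFORM': 'inf'}
    # The understood subject of the infinitival VP is *the same object* as
    # either the matrix subject (promise) or the matrix object (persuade).
    inf_vp['SUBJ'] = subject if kind == 'subject' else np_object
    return {'CAT': 'v', 'SUBJ': subject, 'SUBCAT': [np_object, inf_vp]}

promised  = control_verb('subject')
persuaded = control_verb('object')

# The infinitival complement's SUBJ is token-identical to ...
print(promised['SUBCAT'][1]['SUBJ'] is promised['SUBJ'])          # True: the subject
print(persuaded['SUBCAT'][1]['SUBJ'] is persuaded['SUBCAT'][0])   # True: the object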
5.9 Constituent coordination

Another area in a grammar in which internalizing categories can be of much help is the treatment of coordination. Many languages exhibit a phenomenon
by which constituents of the same category can be conjoined to form a constituent of this category. Some examples from English were given in Section 1.3 and are repeated here (the conjoined phrases in each example are enclosed in brackets):

N: No man lift up his [hand] or [foot] in all the land of Egypt
NP: Jacob saw [Rachel] and [the sheep of Laban]
VP: Jacob [went on his journey] and [came to the land of the people of the east]
VP: Jacob [went near], and [rolled the stone from the well's mouth], and [watered the flock of Laban his mother's brother].
ADJ: every [speckled] and [spotted] sheep
ADJP: Leah was [tender eyed] but [not beautiful]
S: [Leah had four sons], but [Rachel was barren]
S: She said to Jacob, "[Give me children], or [I shall die]!"

Coordination is not covered by our basic fragment E0 of English; to this end we extend the fragment, referring to it as Ecoord. The lexicon of a grammar for Ecoord is extended by a closed class of conjunction words; categorized under Conj, this class includes the words and, or, and but and perhaps a few others (Ecoord contains only these three). For simplicity of the exposition, we assume that in Ecoord, every category of E0 can be conjoined. We also assume – simplifying a little – that the same conjunctions (e.g., and, or, but) are possible for all the categories (this is a simplification because, for example, but cannot be used to conjoin nouns).

How can the data of Ecoord be accounted for in a context-free framework? To implement the first assumption, namely, that every category can be coordinated, a special rule is required for every category:

S → S Conj S
NP → NP Conj NP
VP → VP Conj VP
...
Conj → and, or, but, . . .

The last rule is the implementation of our second assumption, namely, that the same conjunctions can be used with all categories. In contrast, when generalized categories are used, a single production is sufficient:
[CAT: 1] → [CAT: 1] [CAT: conj] [CAT: 1]
This rule states that a phrase of any category can be obtained by combining two phrases of the same category by putting a conjunction between them. The values that the variable 1 might be instantiated to are the possible values of the feature CAT: This is the implementation of our first assumption, namely that phrases of every category can be conjoined. Let Gc be the grammar G3 of Example 5.16 (page 184), augmented by the above rule for coordination (the signature is augmented such that conj is an atom). Gc is a complete grammar for our language fragment Ecoord . Example 5.32 shows a derivation tree for the coordinated verb phrase rolled the stone and watered the sheep, according to Gc (augmented by the necessary lexical entries). Example 5.32 Coordination.
[Derivation tree, rendered schematically. The root is a verb phrase [CAT: 1 v, NUM: [ ], SC: elist]; by the coordination rule it dominates two conjunct VPs that share the same category (tag 1) and have empty subcategorization (SC) lists, joined by a conjunction [CAT: conj]. Within each conjunct, the verb ([CAT: v, NUM: [ ], SC: ⟨ 2 ⟩] and [CAT: v, NUM: [ ], SC: ⟨ 3 ⟩], respectively) combines with its NP object ( 2 [CAT: np, NUM: sg] and 3 [CAT: np, NUM: [ ]]). Leaves: rolled – the stone – and – watered – the sheep.]
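To make the effect of this single rule concrete, the following is a minimal Python sketch (not part of the formalism, and not the book's implementation) in which categories are atomic values and the rule merely requires the two conjuncts' CAT values to unify, which for atoms reduces to an equality test. All function and variable names are illustrative.

# A sketch of the rule [CAT: 1] -> [CAT: 1] [CAT: conj] [CAT: 1]:
# unification of atomic CAT values reduces to an equality test.
def unify_atoms(x, y):
    # Unification of two atoms succeeds iff they are equal.
    return x if x == y else None

def coordinate(left, conj, right):
    if conj.get("CAT") != "conj":
        return None
    shared = unify_atoms(left.get("CAT"), right.get("CAT"))
    if shared is None:
        return None          # the conjuncts disagree on their category
    return {"CAT": shared}   # the mother inherits the shared category

# The same rule covers VP, NP, S, ... coordination:
print(coordinate({"CAT": "v"}, {"CAT": "conj"}, {"CAT": "v"}))    # {'CAT': 'v'}
print(coordinate({"CAT": "np"}, {"CAT": "conj"}, {"CAT": "np"}))  # {'CAT': 'np'}
print(coordinate({"CAT": "np"}, {"CAT": "conj"}, {"CAT": "v"}))   # None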
Exercise 5.22. Show a derivation for the sentence Jacob loved Rachel but married her and Leah using the grammar Gc (augmented by the necessary lexical entries). It must be noted here that the above solution oversimplifies. First, it allows coordination not only of E0 categories, but also of Ecoord categories; in particular, it allows conjunctions to be coordinated as well. This problem is related to the (oversimplifying) assumption made above, namely, that every category can
be coordinated. Certainly there are natural languages in which some categories cannot. For example, in English, conjunctions themselves cannot be conjoined (in most cases). We view this mainly as a technical problem that can easily be solved by introducing a classification of categories to those that can and those that cannot be conjoined. Such a classification can be imposed, for example, by adding a binary feature, say CONJABLE, to the category feature structure, and modifying the coordination rule accordingly:
[CAT: 1, CONJABLE: −] → [CAT: 1, CONJABLE: +] [CAT: conj] [CAT: 1, CONJABLE: +]
Exercise 5.23. Propose a solution that will enable the coordination of more than two constituents.

There are at least two major problems concerning coordination in natural languages that the foregoing analysis does not address. Indeed, these problems are acknowledged in many linguistic theories and analyses, and they are not easily solved in any of them. One problem has to do with the properties of conjoined phrases, especially of noun phrases; the other is the ability to conjoin unlike categories and nonconstituents.

Properties of conjoined constituents

We start the discussion by considering coordination of verb phrases, such as the previously demonstrated Jacob [went on his journey] and [came to the land of the people of the east]. Both constituents in this example form the predicate of the sentence, implying that they have to agree with the subject, Jacob. Therefore, both constituents must agree on person and number; furthermore, whatever number and person the constituents are, these will be the number and person of the conjoined constituent:

Rachel sleeps and smiles.
∗Rachel sleep and smiles
∗Rachel sleeps and smile
Now, consider applying the coordination rule suggested above to the category of noun phrases (Example 5.33). Naturally, feature structures representing NPs encode the properties of the phrases, such as agreement properties. Assume for the sake of simplicity that NPs have the following features: NUMBER, which can be sg or pl; PERSON, which can be 1, 2, or 3; and GENDER, which can be masc or fem. What are the values of these features in a coordinated NP?
Example 5.33 NP coordination.
[Derivation tree, rendered schematically. The coordinated NP you and a lamb is dominated by [CAT: np, NUM: ??, PERS: ??, GEN: ??], whose agreement values are left open. The first conjunct is the pronoun you, [CAT: pron, NUM: 4, PERS: 2 second, GEN: 8]; the second conjunct is the NP a lamb, built from a determiner [CAT: d, NUM: 6] and a noun [CAT: n, NUM: 6 sg, PERS: 3 third, GEN: 7]; the conjuncts are joined by [CAT: conj]. Leaves: you – and – a – lamb.]
The NUMBER feature is, at least for and coordination, valued pl. The PERSON feature is more tricky: in many languages that explicitly mark differences between first, second, and third persons,2 the value for this feature in a coordinated phrase is the minimum (assuming the natural order: 'first' < 'second' < 'third') of the values of the feature in any of the conjuncts. As for GENDER, this is very much language dependent. In Hebrew, for example, the value is masc unless both conjuncts are valued fem. Obviously, the rule suggested above does not account for these data. For instance, it combines phrases with an identical CAT feature value, for example, n, but does not articulate the values of the other features, such as NUMBER. There is no way to avoid specifying a particular set of rules for NP coordination. Unfortunately, this is not only a property of NPs; other categories may involve such idiosyncratic constructions.
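The resolution facts just described can be restated procedurally. The following Python sketch is purely illustrative (the name coordinate_np, the feature names, and the example values are assumptions made for this example, not part of the grammar); it merely encodes the data: plural number, the minimum person, and a language-dependent gender rule (the Hebrew rule is shown).

def coordinate_np(np1, np2, language="hebrew"):
    # and-coordination of two NPs: pl number, minimal person (1 < 2 < 3),
    # gender resolved by a language-specific rule.
    person = min(np1["PERSON"], np2["PERSON"])
    if language == "hebrew":
        gender = "fem" if np1["GENDER"] == np2["GENDER"] == "fem" else "masc"
    else:
        gender = None   # left unresolved for languages not covered here
    return {"NUMBER": "pl", "PERSON": person, "GENDER": gender}

you  = {"NUMBER": "sg", "PERSON": 2, "GENDER": "masc"}
lamb = {"NUMBER": "sg", "PERSON": 3, "GENDER": "fem"}
print(coordinate_np(you, lamb))
# {'NUMBER': 'pl', 'PERSON': 2, 'GENDER': 'masc'}

A rule of this kind cannot be stated with unification alone, which is exactly the limitation discussed in the text.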
Exercise 5.24. Formulate a unification grammar that accounts for NP coordination in Ecoord according to the English data. The grammar must correctly cover all combinations of number and person in the conjoined phrases.

While the problems of NP coordination are acknowledged by many linguistic theories, including some that are totally unrelated to unification, these problems indicate a major limitation of approaches that adhere to the unification
French and Hebrew are such languages.
operation as the sole operation manipulating linguistic objects in a grammar. Intuitively, unification is an operation that combines consistent information; it fails in the presence of inconsistencies. However, such inconsistencies are common in natural languages, as the preceding examples elucidate. To tackle such data in an elegant, general way, unification, as powerful an operation as it is, might not be sufficient. Interestingly, the problems raised by coordination may indicate not only the limitations of the unification operation, but also the inability of feature-structure-based approaches to tackle complex agreement phenomena. To overcome the problems associated with unification as a sole operation on feature structures, it has been suggested to use instead, for coordination, the inverse operation, generalization (Section 3.6). We first explain how generalization can help in simple cases of coordination; then, we show that even generalization is limited in the face of more complex data, indicating that the problem might be a consequence of deeper deficiencies of feature-structure-based grammars.

Coordination of unlikes

Consider the following English data:

Joseph became wealthy
Joseph became a minister
Joseph became [wealthy and a minister]
Joseph grew wealthy
∗Joseph grew a minister
∗Joseph grew [wealthy and a minister]
The verb became can subcategorize for either an adjectival phrase (e.g., wealthy) or a noun phrase (a minister), or a coordination of the two. The verb grew, on the other hand, requires an adjectival phrase, but not a noun phrase, and therefore not a coordination of the two. These data are easy to account for in a unification-based framework that includes the possibility of specifying generalization instead of unification in certain cases. We represent the base category as a complex feature structure having two features: V and N. Now, the subcategorization requirement of the verb became can be stated as a complement with N + as its category, thus accepting both adjectives and nouns. The rule for coordination assigns to the coordinated phrase the generalization of the feature structures of both conjuncts. This rule can be phrased as:
[CAT: 1 ⊓ 2] → [CAT: 1] [CAT: conj] [CAT: 2]

where '⊓' is the generalization operator. Consequently, the feature structure associated with wealthy and a minister has N+ as its category. A possible derivation tree of Joseph became wealthy and a minister is presented in Example 5.34.
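As an aside, generalization of two flat feature structures can be pictured very simply: the result keeps exactly the feature–value pairs on which both arguments agree. The Python sketch below is illustrative only (the function name generalize and the dict encoding are assumptions, not the book's definitions); Example 5.34 then shows the corresponding derivation tree.

def generalize(f, g):
    # Greatest lower bound of two flat feature structures: keep only the
    # feature-value pairs shared by both arguments.
    return {feat: val for feat, val in f.items() if g.get(feat) == val}

adjective = {"V": "+", "N": "+"}   # category of 'wealthy'
noun      = {"V": "-", "N": "+"}   # category of 'a minister'
print(generalize(adjective, noun))  # {'N': '+'} -- acceptable to 'became'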
Example 5.34 Coordination of unlikes.
[Derivation tree, rendered schematically. The verb became carries [CAT: [V: +], SUBCAT: [N: +]]. Its complement wealthy and a minister is the coordination of wealthy, of category [V: +, N: +], with a minister, of category [V: −, N: +], joined by [CAT: conj]; by the rule above, the coordinated phrase receives the generalization of the two categories, [N: +], which satisfies the verb's subcategorization requirement. Leaves: became – wealthy – and – a minister.]
Notice how the generalization of adjectives (specified as V+, N+) and nouns (V−, N+) yields N+, thus complying with the subcategorization requirement of the verb. However, the situation becomes more complicated when verbs, too, are conjoined:

∗Joseph [grew and remained] [wealthy and a minister]
While this example is ungrammatical, it is obtainable by the same grammar (incorporating generalization instead of unification in the case of coordination). The subcategorization requirements of the conjoined verb phrase grew and remained, obtained as a generalization of the subcategorization of both verbs, are simply N+, which is compatible with the category of wealthy and a minister, as can be seen in Example 5.35 (where, to save space, the features CAT, SUBCAT are shortened to C and SC, respectively, and the value conj is shortened to c). The consequence is that even with substitutes for unification, coordination is tricky to account for in a feature-structure-based framework. However, we are unaware of any general solutions for these and similar problems in any other framework yet.

Nonconstituent coordination

Consider the following English sentences:

Rachel gave the sheep [grass] and [water].
Rachel gave [the sheep grass] and [the lambs water].
Rachel [kissed] and Jacob [hugged] Binyamin.
In the first sentence two noun phrases are conjoined, but determining the category of the combined phrases in the other two sentences is not so easy.
Example 5.35 Coordination of unlikes.
[Derivation tree, rendered schematically; C and SC abbreviate CAT and SUBCAT, and c abbreviates conj. The conjoined verb grew and remained carries [C: [V: +], SC: [N: +]]: the generalization of the subcategorization requirements of grew ([SC: [V: +, N: +]]) and remained ([SC: [N: +]]). This requirement is compatible with the category [N: +] of the coordinated complement wealthy and a minister (analyzed as in Example 5.34), so the ungrammatical string is derived. Leaves: grew – and – remained – wealthy – and – a minister.]
In fact, the lambs water is not considered a constituent by common linguistic theories. Still, the sentence is perfectly grammatical. Consequently, the analysis suggested above for constituent coordination does not capture cases of non-constituent coordination. The correct way of handling such cases in this framework is still an unsettled issue. A plausible account of complicated coordination phenomena is provided by categorial grammars.

5.10 Unification grammars and linguistic generalizations

This chapter presented several examples of unification grammars that account for diverse phenomena in the syntax of natural languages: agreement, case assignment, verb-argument structure, long-distance dependencies, and coordination. The common theme of the treatment of all these phenomena is the reliance on feature structures as the only mechanism for representing linguistic information; and reentrancy as the sole mechanism for encoding dependencies between constituents. Compared with context-free grammars, unification grammars provide much better means for expressing linguistic generalizations. Consider, for example, the treatment of verb subcategorization (Section 5.5). A single rule accounts for the combination of a verb with all its possible arguments. An equivalent CFG would necessitate as many rules as there are subcategorization patterns. The assignment of case to the arguments of verbs in a CFG has to be specified for each subcategorization pattern, whereas in unification grammars verb-specific information is deferred to the lexicon, and the single rule remains general.
A similar, more extreme example of linguistic generalizations facilitated by unification grammars is coordination (Section 5.9), where a single unification rule was stipulated when several CFG rules would have been needed. Unification grammars also provide much more informative structures than CFGs. Derivation trees can express linguistic information in several ways: The structure of the tree itself is as informative as the CFG tree, but since the nodes of trees are now complex entities, feature structures, more expressive power is possible. In particular, two similar (but not identical) sentences can have the same structural trees while still being different in the contents of some of the nodes. For example, our agreement grammar (Section 5.1) induces two structurally similar trees on sentences that differ in the number of their verb; the distinction is reflected in the contents of the feature structure nodes of those trees. Finally, unification grammars provide a very powerful tool for expressing what other linguistic theories would call "movement." The relations between gaps and fillers, for example, or between dislocated constituents and their "deep structure" position (Sections 5.6 and 5.7) can be expressed by value sharing (reentrancy). This is a mechanism that is altogether missing in CFGs.

5.11 Unification-based linguistic formalisms

We present unification grammars as a formalism for natural languages that is, on one hand, highly expressive, enabling the grammar designer to express generalizations that may otherwise be quite hard to specify, and on the other hand, computationally efficient, admitting effective parsers (Chapter 6). Indeed, since their inception in the 1980s, unification grammars have become a dominant paradigm in computational linguistics and natural language processing. Several linguistic theories are based on feature structures, with unification being the main or even only operation permitted by the formalism. We briefly survey some of these theories in this section, as a demonstration of the utility (and the extent of use) of unification grammars to linguistic theory. We begin with lexical functional grammar. LFG is a lexical theory (see Section 1.8); this means that the lexicon associates words with a wealth of information at various levels of linguistic knowledge: phonological, morphological, syntactic, and semantic. LFG grammars exhibit two separate levels of syntactic representation: c-structure and f-structure. The former encodes information regarding the constituent structure, in much the same way as derivation trees encode such information in context-free grammars. The latter is a functional structure, used to hold information about functional relations, and is encoded
using feature structures. In addition, there is a mapping φ relating nodes in the c-structure with substructures of the f-structure. In several linguistic theories, most notably Government and Binding (GB), grammatical functions are defined configurationally. A subject in English, for example, is a noun phrase (in more recent formulations of the theory, a determiner phrase) located (approximately) under an S-node in a certain level of representation. In other languages, though, this characterization of subjecthood need not hold. In LFG, on the other hand, grammatical functions are taken to be primitive; they cannot be defined in terms of other theoretical entities. In spite of the central role that grammatical functions fill in LFG, there is no explicit discussion of criteria for their choice. Whereas for simple sentences, for example, those in E0, one can make use of traditional functions such as "subject" and "object," how to choose grammatical functions for more complicated structures is less obvious; in general it is left to the discretion of the grammar writer. For simple languages, one role of f-structures is the isolation and explicit representation of the predicate-argument structure, allowing for some crosslinguistic generalizations. For example, in E0-level sentences in languages with different word (or constituent) order, the f-structure corresponding to a sentence with a transitive verb is uniformly:

[ PRED: ...
  SUBJ: ...
  OBJ: ... ]
although the c-structures, of course, differ. The different mappings from the different c-structures to the single f-structure will reflect the type of the language: SVO, SOV, and so on. This approach also allows the abstraction by which grammatical functions have no configurational characterization, and are, for example, identified via explicit case markings. Thus, grammatical functions are represented as feature names in an AVM representing an f-structure. Other features in f-structures represent morphosyntactic information, for example, agreement features. The syntactic component of the grammar supports a natural interface to semantic interpretation via further projection mappings to other levels of representation. Since its introduction in 1982, LFG has acquired a significant position in the linguistic and computational linguistic communities. Several implementations of grammar development environments exist for the formalism, the most popular one being XLE. A large number of grammars have been developed for a variety of languages, including English, French, German, Chinese, Japanese,
Urdu, and many others. Current work in LFG addresses both theoretical linguistic questions and practical grammar engineering explorations. In many ways, head-driven phrase structure grammar is a competing theory to LFG. While both rely on feature structures and unification, two significant aspects distinguish between them. First, HPSG is based on typed feature structures. This is an extension of the feature structures we present in this book, where each structure is associated with a type, drawn from a partially ordered set. Enjoying all the benefits of typed systems in programming languages, typed feature structures facilitate a more concise specification of linguistic information with an additional dimension in which generalizations can be expressed, more compile-time checks, and more efficient implementation. Second, HPSG does away with context-free skeletons for rules and derivation trees; in other words, it has no direct counterpart to the c-structure of LFG. Instead, rules in HPSG are expressed as logical implications over typed feature structures. Practical implementations of the theory, however, do require a context-free backbone, either explicitly or implicitly. Like LFG, HPSG has become very influential, especially in computational linguistics. Several grammar development environments have been constructed for the theory, of which TRALE and LKB are the most popular today. HPSG grammars have been developed for a wide range of languages, including French, German, Japanese, Korean, and Norwegian. Current work in HPSG focuses on further development of grammars, language typology investigations, and grammar engineering. Other linguistic theories use feature structures in more restricted ways. Noteworthy are tree-adjoining grammars (TAG), for which a feature-structure extension was introduced as early as 1991; and categorial grammars, for which several feature-structure extensions have been proposed.
Further reading

Unification grammars have been the underlying formalism for a great number of implemented grammars, especially in LFG and HPSG. LFG was introduced by Kaplan and Bresnan (1982) and is described in detail in several subsequent publications (Sells, 1988; Dalrymple et al., 1995; Dalrymple, 2001). Its main grammar development implementation is XLE (Butt et al., 1999; Kaplan et al., 2002). Grammars are developed under a variety of projects, of which the most prominent is probably the PARGRAM project (Butt et al., 1999; King et al., 2005) in which LFG grammars are developed for a large number of natural languages in parallel.
HPSG is due to Pollard and Sag (1987, 1994); it is extensively discussed in several other textbooks and monographs (Sag and Wasow, 1999; Copestake, 2002). Grammar development environments for the theory include ALE (Carpenter and Penn, 1994), TRALE (Meurers et al., 2002), LKB (Copestake, 2002), and Grammix (Müller, 2007). The DELPH-IN project (Oepen et al., 2002) uses HPSG as the theory underlying grammars for a variety of languages. Tree-adjoining grammars were introduced by Joshi et al. (1975); modern incarnations are discussed extensively in the literature (Joshi, 1987; Vijay-Shanker, 1987; Joshi, 2003). The formalism was augmented with feature structures by Vijay-Shanker and Joshi (1991). Feature structures were independently introduced to categorial grammar (Haddock et al., 1987) by several researchers (Uszkoreit, 1986; Zeevat et al., 1987; Pareschi and Steedman, 1987; Damas and Moreira, 1995). The 'slash' notation that we used for long-distance dependencies originated in the GPSG literature (Gazdar et al., 1985) and has become the standard analysis in categorial grammars, where this type of representation is the natural one. The original category symbol used to designate sentences that lack a noun phrase was S/NP, and for historical reasons, the term "slash" was retained for the feature that records the missing constituent in such phrases. The limitations of unification in the face of coordination were recognized by Sag et al. (1985). The example of NP-coordination in Hebrew is from Wintner and Ornan (1996). Shieber (1992) discusses the problems and coins the term 'generalization' for the greatest lower bound of two feature structures. The inadequacy of feature-structure-based frameworks, even when augmented with generalization, to account for coordination (and Examples 5.34 and 5.35) is due to Bayer and Johnson (1995). A linguistic theory that accounts very naturally for coordination is categorial grammars (Haddock et al., 1987; Wood, 1993; Moortgat, 1997; Steedman, 2000).
6 Computational aspects of unification grammars
In previous chapters we presented a linguistically motivated grammatical formalism and focused on its mathematical formulation. We said very little about the computational properties and processing of grammars expressed in the formalism. This chapter is concerned with such aspects. We focus on the expressiveness of the formalism, and on computational procedures for processing it. The expressiveness of a linguistic formalism F (specifying grammars) determines the class of (formal) languages that can be defined by the grammars in F. Typically, it also bears on the computational complexity that is needed to solve the universal recognition problem with respect to the formalism. This is the problem of determining, given an arbitrary grammar G in F and an arbitrary string w, whether w ∈ L(G). Thus, context-free grammars define (all and only) the context-free languages; the universal recognition problem for CFGs can be solved in cubic time in the length of the input string. Regular expressions define the regular languages; the universal recognition problem for regular expressions can be solved in linear time. Also, Turing machines define all the recursively enumerable languages, but the universal recognition problem is undecidable for this formalism. We begin this chapter with a discussion of the expressive power of unification grammars in Section 6.1. In Section 6.2 we show that unification grammars are equivalent to Turing machines. This implies that the universal recognition problem for the formalism is undecidable. Therefore, we discuss in Section 6.3 a condition on grammars, known as off-line parsability, which ensures the decidability of the recognition problem for unification grammars that satisfy it. A more constraining condition, allowing only branching unification grammars, is presented in Section 6.4. More restrictive constraints, which further limit the expressiveness, thereby guaranteeing that recognition with grammars obeying these constraints is polynomial, are discussed in Section 6.5.
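For orientation only, here is a minimal sketch of a cubic-time CFG recognizer (CKY-style, assuming a grammar in Chomsky normal form). It is not the chart-based algorithm developed later in this chapter; the grammar encoding, the function name recognize, and the toy grammar are all assumptions made for this illustration.

from itertools import product

def recognize(word, lexical, binary, start="S"):
    # lexical: {terminal: {A, ...}}; binary: {(B, C): {A, ...}} for A -> B C.
    n = len(word)
    if n == 0:
        return False
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, a in enumerate(word):
        chart[i][i + 1] = set(lexical.get(a, set()))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for B, C in product(chart[i][j], chart[j][k]):
                    chart[i][k] |= binary.get((B, C), set())
    return start in chart[0][n]

# Toy CNF grammar for {a^n b^n | n >= 1}: S -> A T | A B, T -> S B.
lexical = {"a": {"A"}, "b": {"B"}}
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}
print(recognize("aabb", lexical, binary), recognize("aab", lexical, binary))  # True False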
We then move on to processing unification grammars, focusing on a parsing algorithm. Two processing tasks are associated with any grammatical formalism F:

Recognition: For an arbitrary grammar G ∈ F and an arbitrary word w ∈ Σ∗, determine whether w ∈ L(G).
Parsing: For an arbitrary grammar G ∈ F and an arbitrary word w ∈ L(G), assign a structure to w determined by G (there may be more than one).

It is sometimes convenient to combine the two tasks into one, and call it parsing: In this case, parsing has to determine whether w ∈ L(G), and if it is, produce the structures G induces on w as "evidence" of membership. The formalism F determines the kind of syntactic structures assigned to strings in L(G): For context-free grammars, it is a phrase-structure tree; for unification grammars, we take it to be feature-structure trees (Section 4.8). The major purpose of parsing, at least with grammars of natural languages, is not merely to produce a tree, but to assist in the computation of meaning. Because the representation of meaning, and semantic theory in general, are beyond the scope of this book, we do not cover them here. We also suppress a discussion of generation, a dual problem to parsing, which converts an abstract meaning to a phrase in the language of the grammar that expresses this abstract meaning. Parsing theory is an active field of research in both the formal and the natural language communities. A great number of parsing algorithms have been developed, and we do not intend to cover all of them here. Rather, we concentrate on a simple, bottom-up, chart-based algorithm in this chapter (Section 6.7). We first present such an algorithm for context-free grammars, and then show how it can be naturally extended to unification grammars. We discuss certain complications of this extension, and in particular, focus on questions of the termination and efficiency of the algorithm.

6.1 Expressiveness of unification grammars

We have hinted that unification grammars are more expressive than CFGs. In fact, we introduced unification grammars to facilitate the expression of linguistic generalizations that were hard to capture with CFGs. In this section we discuss the expressivity of unification grammars and prove that they are strictly more powerful than CFGs, even when weak generation capacity is concerned. First, however, we note that unification grammars are at least as expressive as CFGs. In other words, for every context-free language L there exists
a unification grammar G such that L(G) = L. The reason is that CFGs are, in effect, a special case of unification grammars. This is established in the following lemma.

Lemma 6.1 Every CFG is weakly equivalent to a unification grammar; that is, for every CFG G there exists a unification grammar G′ such that L(G′) = L(G).

Proof Given a context-free grammar G = ⟨Σ, V, S, P⟩, construct a signature S = ⟨ATOMS, FEATS⟩, where ATOMS = Σ ∪ V and FEATS = ∅; and construct a unification grammar G′ over S by taking the rules of G as the rules of G′, viewing the terminal and nonterminal symbols of G as atoms in G′. When all feature structures are atomic, unification amounts to an identity check; hence, every derivation in G is also a derivation in G′. If a string is derivable by G′, then exactly the same derivation can be used in G. Hence L(G) = L(G′).

In what follows, however, we employ a slightly more verbose version of this idea. Rather than use atoms to represent the terminal and nonterminal symbols of a CFG in a unification grammar, we use a representation whereby a designated feature, called CAT, takes as values atoms that represent the nonterminal symbols of a CFG. In other words, rather than use the atoms s, np, and vp, we use the feature structures [CAT: s], [CAT: np], [CAT: vp], and so on.
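The construction in the proof of Lemma 6.1, and the slightly more verbose CAT-based encoding used below, are simple enough to be mechanized. The following Python sketch is an illustration of that reading (the function names and the dict encoding of feature structures are assumptions made here, not part of the book's formal definitions): every CFG symbol X is mapped to the feature structure [CAT: x], and every rule is carried over unchanged.

def encode_symbol(symbol):
    # A CFG terminal or nonterminal X becomes the feature structure [CAT: x].
    return {"CAT": symbol.lower()}

def encode_rule(lhs, rhs):
    return (encode_symbol(lhs), [encode_symbol(s) for s in rhs])

cfg_rules = [("S", ["NP", "VP"]), ("VP", ["V", "NP"])]
for lhs, rhs in cfg_rules:
    head, body = encode_rule(lhs, rhs)
    print(head, "->", body)
# {'CAT': 's'} -> [{'CAT': 'np'}, {'CAT': 'vp'}]
# {'CAT': 'vp'} -> [{'CAT': 'v'}, {'CAT': 'np'}]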
To demonstrate the extra expressivity of unification grammars, we list below two unification grammars for formal languages that are known to be trans-context-free. The first example (6.1) presents a grammar, Gabc, for the language whose sentences are sequences of a's, b's, and c's, in this order, in which the lengths of the sequences are equal: Labc = {a^n b^n c^n | n > 0}. The signature of the grammar consists of FEATS = {CAT, T} and ATOMS = {s, ap, bp, cp, at, bt, ct, end}. The terminal symbols are, of course, a, b, and c. The start symbol is the left-hand side of the first rule. Feature structures in this example have two features: CAT, which stands for category, and T, which counts the length of sequences of a-s, b-s, and c-s. The "category" is ap for strings of a-s; bp for b-s; and cp for c-s. The categories at, bt and ct are preterminal categories of the terminal symbols a, b, and c, respectively. Counting is done in unary base: A string of length n is derived by an AFS (i.e., an AMRS of length 1) with a depth of n. For example, the string bbb is derived by the following AFS (for the actual derivation sequence, refer to Example 6.2):

[ CAT: bp
  T: [ T: [ T: end ] ] ]

The atom end is used to terminate a nested sequence of T-s. The idea is that feature structures having a nested sequence of n T-s be unifiable only with
Example 6.1 A unification grammar Gabc for the language Labc = {a^n b^n c^n | n > 0}.
ρ1: [CAT: s] → [CAT: ap, T: 1] [CAT: bp, T: 1] [CAT: cp, T: 1]
ρ2: [CAT: ap, T: [T: 2]] → [CAT: at] [CAT: ap, T: 2]
ρ3: [CAT: ap, T: end] → [CAT: at]
ρ4: [CAT: bp, T: [T: 2]] → [CAT: bt] [CAT: bp, T: 2]
ρ5: [CAT: bp, T: end] → [CAT: bt]
ρ6: [CAT: cp, T: [T: 2]] → [CAT: ct] [CAT: cp, T: 2]
ρ7: [CAT: cp, T: end] → [CAT: ct]
[CAT: at] → a
[CAT: bt] → b
[CAT: ct] → c
feature structures that have nested sequences of the same depth. We return to this point presently. We use two notations in the following discussion. First, we define AFSs such as the one depicted above in a generic way.

Definition 6.2 Let n be a natural number and u ∈ {a, b, c}. Then σ(u, n) = ⟨Ind, Π, Θ, ≈⟩ is such that Ind = 1, Π = {(1, ε), (1, CAT)} ∪ {(1, T^i) | 1 ≤ i ≤ n}, and Θ is defined by:
Θ(1, CAT) = ap if u = a, bp if u = b, cp if u = c;
Θ(1, T^n) = end.
The only reentrancies in σ(u, n) are the trivial ones (i.e., (i, π) ≈ (i, π)). Depicted as an AVM, σ(u, n) is:

[ CAT: ap/bp/cp
  T: [ T: · · · [ T: end ] · · · ] ]    (n occurrences of T)

Definition 6.3 Let n be a natural number. Then τ(n) = ⟨Ind, Π, Θ, ≈⟩ is such that Ind = 1, Π = {(1, ε)} ∪ {(1, T^i) | 1 ≤ i ≤ n} and Θ is defined by: Θ(1, T^n) = end. The only reentrancies in τ(n) are the trivial ones ((i, π) ≈ (i, π)). Depicted as an AVM, τ(n) is:

[ T: [ T: · · · [ T: end ] · · · ] ]    (n occurrences of T)

Exercise 6.1 (*). Show σ(b, 4) and τ(5) as AVMs.

Feature structures of the form τ(n) are indeed representations of natural numbers. Because they are lists made up of the single feature T, they are only consistent if they are identical (and, in particular, are of the same length). To see this, assume that one of these lists is shorter than another. The shorter list would have the value end, whereas the longer list would have a complex feature structure (with the feature T) as the value of the same path, and unification will fail.

Lemma 6.4 Two feature structures τ(n1) and τ(n2) are consistent iff n1 = n2.

Proof If n1 = n2, then τ(n1) = τ(n2), and they are trivially consistent. If n1 ≠ n2, assume without loss of generality that n1 < n2. Consider the path π = T^n1. Observe that Θτ(n1)(π) = end, whereas Θτ(n2)(π) is undefined, and also that a path extending π is defined in τ(n2), namely the path T^(n1+1). By the definition of unification (Definition 3.21, Page 104), the unification fails.

Now that we know how the strings a^n, b^n, and c^n are represented, it should be easy to understand how the first rule of the grammar, ρ1, permits the concatenation of exactly three substrings of exactly the same length, n. The body of ρ1 consists of three AFSs, whose CAT features are valued ap, bp and cp. This means that the elements in the body of ρ1 derive a string of a-s, b-s, and c-s, in this order. In addition, the rule also requires that the values of the T feature in all three AFSs be unifiable. By Lemma 6.4, these values are only consistent if they are identical (and, in particular, are of the same length). We show that a^2 b^2 c^2 ∈ Labc of Example 6.1 by showing in Examples 6.2 and 6.3 a derivation sequence for this string. Then, Example 6.4 depicts a derivation tree for this string.
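Lemma 6.4 can also be checked mechanically. The Python sketch below is illustrative only: feature structures are plain dicts, reentrancy is ignored, and the names tau and unify are assumptions made for this example rather than the book's formal definitions.

def tau(n):
    # The unary encoding of n: n nested T-features terminated by 'end'.
    fs = "end"
    for _ in range(n):
        fs = {"T": fs}
    return fs

def unify(f, g):
    # Unification of reentrancy-free feature structures encoded as dicts.
    if f == g:
        return f
    if isinstance(f, dict) and isinstance(g, dict):
        result = dict(f)
        for feat, val in g.items():
            if feat in result:
                sub = unify(result[feat], val)
                if sub is None:
                    return None
                result[feat] = sub
            else:
                result[feat] = val
        return result
    return None   # an atom against a different atom or a complex structure

print(unify(tau(3), tau(3)) is not None)   # True: equal depths are consistent
print(unify(tau(3), tau(4)))               # None: 'end' cannot unify with [T: ...]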
Example 6.2 Derivation sequence of a2 b2 c2 .
We start with a form that consists of the start symbol, σ0 = CAT : s . Only one rule, ρ1 , can be applied to the single element of the AMRS in σ0 . Observe that unifying (σ0 , 1) (ρ1 , 1) yields σ0 , ρ1 , and hence σ0 ⇒ σ1 , where σ1 =
CAT :
ap
T:
CAT :
bp
T:
1
CAT :
cp
T:
1
1
.
Now, computing (σ1 , 1) (ρ2 , 1) yields σ1 , ρ2 , where σ1
=
CAT : T:
ap
1
T: 2
CAT :
bp
T:
CAT : T:
1
cp
.
1
The AMRS above is just a convenient format for depicting the more complete information in which reentrant values are duplicated; we provide the more detailed representation in the following material to emphasize the information that is shared among elements of the AMRS. σ1
=
CAT : T:
ap
1
T: 2
CAT : T:
bp
1
T: 2
CAT : T:
cp
1
T: 2
.
Note that elements other than the first in σ1 are modified due to the reentrancy of the values of T in σ1 . Also, in this case, since ρ12 σ11 , we get ρ2 = ρ2 . Hence, σ1 ⇒ σ2 , where: σ2 =
CAT :
at
CAT : T:
ap 2
CAT : T:
bp
1
T: 2
CAT : T:
cp
1
T: 2
.
To prove that L(Gabc) = Labc = {a^n b^n c^n | n > 0}, we use two auxiliary lemmas. Observe that for w = u1 · · · un, PTw(1, n) contains a single AMRS of length n whose i-th element is the AFS [CAT: at] when ui = a;
[CAT: bt] when ui = b; and [CAT: ct] when ui = c. As noted above, we identify PTw(1, n) (when it is a singleton set) with its single member. In the following lemmas we make use of σ(u, n) and τ(n), defined in Definitions 6.2 and 6.3. Lemma 6.5 For every n ≥ 1, if w = u^n, where u ∈ {a, b, c}, then there is a derivation σ(u, n) ⇒^n PTw(1, n).
Example 6.3 Derivation sequence of a2 b2 c2 (Continued) CAT : bp 3
In the same way, we can now choose σ2 = , which is unifi1 T: 2 T: CAT : bp . Computing (σ2 , 3) (ρ4 , 1) yields σ2 , ρ4 ,
able with ρ14 = T: T: 2 where σ2 = σ2 and ρ4 = ρ4 . Hence, σ2 ⇒ σ3 , where σ3 =
CAT :
at
CAT :
ap
T:
2
CAT :
bt
CAT :
bp
T:
CAT : T:
2
cp
T: 2
.
In the following examples, the feature CAT is shortened to C. Applying ρ6 to σ35 , we obtain σ3 ⇒ σ4 , where σ4 =
C:
at
C:
ap
T: 2
C:
bt
C:
bp
T: 2
C:
ct
C:
cp
T: 2
.
The second element of σ4 is unifiable with the heads of both ρ2 and ρ3 . We choose to apply ρ3 ; (σ4 , 2) (ρ3 , 1) yields σ4 , ρ3 , where ρ3 = ρ3 and σ4 =
C:
at
C:
ap T : end
C:
bt
C:
bp T : end
C:
ct
cp . T : end C:
The choice of applying ρ3 rather than ρ2 was not arbitrary, of course; the exponent n is represented in the derivation by the depth of the feature structure under the feature T, the variable 1 in σ1 above. By choosing to apply ρ2 we extend this depth, thereby adding to the exponent; applying ρ3 ends this process. See how the unification affects all occurrences of the variable 2 , not just the one directly involved in the operation. Hence, σ4 ⇒ σ5 , where σ5 =
C:
at
C:
at
C:
bt
C:
bp T : end
C:
ct
cp . T : end C:
In the same way, we can now apply ρ5 and ρ7 and obtain, eventually, σ7 =
CAT :
at
CAT :
at
CAT :
bt
CAT :
bt
CAT :
ct
CAT :
ct .
Now, let w = aabbcc. Then σ7 is a member of P Tw (1, 6); in fact, it is the only member of the preterminal set. Therefore, w ∈ L(Gabc ).
Example 6.4 Derivation tree of a2 b2 c2 . The following is a derivation tree for the string aabbcc. As in Example 6.2, we duplicate here the values tagged by 1 and 2 to emphasize value sharing.
CAT : ap T: 1
T: 2
end
CAT :
CAT :
at
CAT :
a
at
T: 2
CAT :
a
bt
end
CAT : T:
CAT : bp T: 1
ap 2 end
T:
cat : s
bt
CAT : cp T: 1
bp 2 end
CAT :
b
T: 2
CAT : T:
CAT :
b
ct
end
cp 2 end
CAT :
c
ct
c
Proof We prove only the case of u = a. Because the grammar rules are parallel for a, b and c, the proof is similar for the other two cases. The proof is by induction on n. For n = 1, by the rule ρ3 , σ(a, 1) =
CAT : T:
ap end
1
⇒
CAT :
at
1
That is, we obtain σ(a, 1) ⇒ P Tw (1, 1). For n > 1, assume that the hypothesis n−1 holds for n − 1, so that if w = an−1 , σ(a, n − 1) ⇒ P Tw (1, n − 1). Let w = n n aw = a . We show that σ(a, n) ⇒ P Tw (1, n) by applying to σ(a, n) the rule: ρ2 :
CAT : T:
ap
T: 4
→
CAT :
at
CAT : T:
ap 4
.
Clearly, this rule is applicable to σ(a, n), as unifying (σ(a, n), 1)(ρ2 , 1) yields σn , ρ2 , where σn = σ(a, n). Then, ρ2 is like ρ2 , the only possible difference being that the value of 4 may be further instantiated: It now consists of a list of length n − 2 of T-s. Hence, CAT : ap 1
σ(a, n) ⇒ CAT : at , 2 T:
6.1 Expressiveness of unification grammars
221
where 2 consists of a list of length n − 1 of T-s, terminated by end. In other words, 1
σ(a, n) ⇒ CAT : at σ(a, n − 1) By the induction hypothesis, and using the definition of derivation (Definition 4.31, Page 151), n−1
σ(a, n − 1) ⇒ P Tw (1, n − 1) n
and we obtain σ(a, n) ⇒ P Tw (1, n) .
Lemma 6.6 Let w be such that |w| = n and σ a feature structure. If σ is such ∗ that Πσ ⊇ {1, CAT , 1, T }, and there exist i, j, such that σ ⇒ P Tw (i, j), then σ = σ(u, n) where n = j − i + 1 and u is such that P Tw (i, j) = un . Proof Again, we prove only the case of u = a, by induction on the length of the derivation, k. For k = 1, if σ ⇒ P Tw (i, j) (in one derivation step) and Πσ ⊇ {1, CAT , 1, T }, then the only possible rule to license the derivation is ρ3 :
CAT : ap → CAT : at T: end Observe that the head of this rule is σ(a, 1) and thus its yield must be the string a. Hence, the requirements of the lemma hold. k−1 Assume that the hypothesis holds for k − 1; namely, if σ ⇒ P Tw (i, j), then σ = σ(a, k − 1), where k − 1 = j − i + 1 and P Tw (i, j) = ak−1 . Now, k
if σ ⇒ P Tw (i , j ) and Πσ ⊇ {1, CAT , 1, T }, then the first rule used for the derivation must be ρ2 :
CAT : T:
ap
T: 4
→
CAT :
at
CAT : T:
ap
4
Because the first element in thebody of this rule is the preterminal of a, we obtain CAT : ap , must derive P Tw (i + 1, j ) in k − 1 that the second element, σ = 4 T: steps. By the induction hypothesis, σ = σ(a, k −1) and k −1 = j −(i +1)+1; hence, k = j − i + 1. Hence, σ = σ(a, k) and P Tw (i , j ) = ak . Corollary 6.7 L(Gabc ) = Labc = {an bn cn | n > 0}. ∗
Proof If w = an bn cn then, by the Lemma 6.5, there exist derivations σ(a, n) ⇒ ∗ ∗ P Tw (1, n), σ(b, n) ⇒ P Tw (n + 1, 2n), σ(c, n) ⇒ P Tw (2n + 1, 3n). Let σ be
the following AMRS, where 4 is τ (n − 1):
σ =
CAT : T:
ap
CAT :
bp
T:
4
CAT :
cp
T:
4
4
Observe that σ 1 = σ(1, n), σ 2 = σ(b, n), σ 3 = σ(c, n). Since σ is unifiable with the body of the first it is possible to use this rule to the grammar,
rule of
license the derivation CAT : s ⇒ σ (refer again to Definition 4.31). Hence, CAT : s ⇒ P Tw (1, 3n) and w ∈ L(G). If w ∈ L(G) and |w| = n, then there exists a derivation from the start symbol to P Tw (1, n), and this derivation must start with
CAT :
s ⇒
Let
σa =
CAT :
ap
T:
CAT : T:
4
ap 1
CAT : T:
σb =
bp
T:
4
CAT : T:
CAT :
bp 2
cp
4
σc =
∗
⇒ P Tw (1, n)
CAT : T:
cp
3
Consider first σa . According to Lemma 6.6, there exist i, j, such that σa = σ(a, na ), where na = j − i + 1 and u is such that P Tw (i, j) = ana . In this case, i = 1. Similarly, there exists k such that σb = σ(b, nb ), where nb = k − j + 1. Also, σc = σ(c, nc ), where nc = n − k + 1. However, because the value of the tag 4 is shared among all three constituents, by the definition of σ(u, n), necessarily na = nb = nc = n/3; hence, w = aj bj cj . The second example is concerned with the (formal) repetition language Lww , defined as {ww | w ∈ {a, b}+ }. A grammar Gww for Lww is listed in Example 6.5. Here the idea is similar: The feature structure associated with a phrase encodes, in a list notation, the characters that constitute the phrase. Example 6.6 depicts a (partial) derivation of the string aab with Gww ; of course, the derivation does not start with the start symbol of the grammar (since the string is not in the language L(Gww )). A (full) derivation tree for the grammatical string aabaab is depicted in Example 6.7. Exercise 6.2. Show a derivation of abab with Gww . Exercise 6.3 (*). In the tree of Example 6.7, what is the full AVM whose yield is the substring aab?
Example 6.5 A unification grammar Gww for {ww | w ∈ {a, b}+ }. The signature of the grammar consists of F EATS = { CAT, FIRST, REST} and ATOMS = {s, ap, bp, elist}. The terminal symbols are a and b. The start symbol is the left-hand side of the first rule.
1. [CAT: s] → [FIRST: 4, REST: 2] [FIRST: 4, REST: 2]
2. [FIRST: ap, REST: [FIRST: 4, REST: 2]] → [FIRST: ap, REST: elist] [FIRST: 4, REST: 2]
3. [FIRST: bp, REST: [FIRST: 4, REST: 2]] → [FIRST: bp, REST: elist] [FIRST: 4, REST: 2]
4. [FIRST: ap, REST: elist] → a
5. [FIRST: bp, REST: elist] → b
Example 6.6 A partial derivation of the string aab with Gww . ⎡
FIRST :
ap ⎡
⎤
⎤
⎢ ⎥ FIRST : 4 ap ⎢ ⎥ ⎣ REST : ⎣ ⎦⎦ FIRST : 14bp REST : 2 REST : 12 elist ⎤ ⎡ FIRST : 4 ap FIRST : ap ⎦ ⎣ ⇒ FIRST : 14 bp REST : 2 REST : elist REST : 12 elist FIRST : ap FIRST : ap FIRST : 14 bp ⇒ REST : elist REST : elist REST : 12 elist ∗
⇒ a a b
Rule 2
Rule 3 Rules 4, 5
Exercise 6.4. Show that the string ab is not derivable with the grammar Gww . To prove that L(Gww ) = Lww , the following propositions can be proven. ∗ First, to establish L(Gww ) ⊆ Lww , assume that σ ⇒ P Tw (1, n) where |w| = n. Then σ must be of one the forms below:
a
FIRST
FIRST :
2
a
ap REST : elist
12
FIRST : 14ap REST : 12
REST : 2
a
FIRST :
ap REST : elist
ap FIRST : 4
b
: bp REST : elist
a
FIRST :
2 FIRST :
ap REST : elist
s CAT :
b
FIRST
12
ap REST : elist
FIRST : 14 ap REST : 12
FIRST
: 4 ap REST : 2
: bp REST : elist
Example 6.7 A derivation tree for the string aabaab. Below is a complete derivation tree for the string aabaab. For the sake of clarity, we duplicate some of the information that multiple occurrences of the same tag are associated with.
ap iff n = 1 and w = a; REST : elist FIRST : bp • σ is the feature structure iff n = 1 and w = b; REST : elist • σ is the feature structure
FIRST :
FIRST :
ap
225
REST : 2
for some complex feature structure
(whose tag is) 2 iff n > 1and w = a ·w for some word w’; FIRST : bp • σ is the feature structure for some complex feature structure REST : 2 (whose tag is) 2 iff n > 1 and w = b · w for some word w’; • σ is the feature structure CAT : s iff w = w1 · w2 and w1 = w2 . To establish the reverse direction, L(Gww ) ⊇ Lww , assume that w = w1 · w2 and w1 = w2 . Show a derivation sequence with Gww for w1 by induction on its structure; show that the root of this derivation sequence is a form consisting of a single feature structure, encoding the structure of w as indicated above; and, finally, establish that a single derivation step suffices to derive two copies of this feature structure from the start symbol of the grammar, namely CAT : s , using the first rule of Gww . Exercise 6.5. Prove that L(Gww ) = Lww . Exercise 6.6 (*). Design a unification grammar for the (formal) language L = {an bm cn dm | 1 ≤ n, m}. Exercise 6.7. Design a unification grammar for the (formal) language L = {an bm cn dm | 1 ≤ n ≤ m}. Exercise 6.8. Design a unification grammar for the (formal) language L = {an bm cn dm en f m | 1 ≤ n ≤ m}. We have proven above the correctness of the grammar Gabc and sketched a proof of correctness for the grammar for Lww . These two examples can serve as hints for a general proof technique for the correctness of unification grammars. When context-free grammars are concerned, correctness proofs associate with every nonterminal symbol a language, and establish the association by means of mutual induction (on the length of derived strings in one direction, and on the derivation sequence in the reverse direction). With unification grammars the number of “categories” (i.e., feature structures that can root a derivation tree) is potentially infinite, and hence such a proof technique is unavailable. However, it is often possible to consider a particular feature structure and to establish an association between the (possibly infinitely many) feature structures it subsumes and a set of derived strings. For example, in the case of the language Lww , we established an association between the feature structure:
: ap REST : 1 FIRST
Cambridge Books Online © Cambridge University Press, 2012
226
6 Computational aspects of unification grammars
(which subsumes infinitely many feature structures, with increasing levels of embedding) and the (infinite) set of strings a · w for some string w . The rest of the proof progresses in exactly the same way as with context-free grammars. Exercise 6.9. Show a unification grammar for the language {ap | p is not a prime}. 2
Exercise 6.10. Show a unification grammar for the language {an | n > 0}. n
Exercise 6.11 (*). Show a unification grammar for the language {a2 | n > 0}. 6.2 Unification grammars and Turing machines We have demonstrated, then, that the weak generative capacity of unification grammars is strictly greater than that of context-free grammars. A natural question is, just how expressive are unification grammars? As it turns out, the expressiveness of unification grammars goes far beyond that of CFGs; in fact, it can be proven that they are equivalent in their weak generative power to unrestricted rewriting systems, the most powerful computational device in the Chomsky hierarchy of (formal) languages. This is the same as saying that unification grammars are equivalent to Turing machines in their generative capacity, or that the languages generated by unification grammars are exactly the set of recursively enumerable languages. A Turing machine is the most general model of computation; if is believed to represent the most powerful computing devices imaginable and is an accepted formalization of the informal notion of effective procedures. By Church’s thesis, Turing machines are formal versions of algorithms: no computational procedure is considered an algorithm unless it can be represented by a Turing machine. Turing machines define languages; in terms of complexity, the universal recognition problem is undecidable for Turing machines. This means that given an arbitrary Turing machine M and an arbitrary string w, no procedure exists that can determine whether w ∈ L(M ). In this section we define Turing machines and prove a theorem that shows that unification grammars are their equivalent: Any arbitrary Turing machine can be “simulated” by a unification grammar. This fact raises some bothersome concerns: If unification grammars are equivalent to the most general, most powerful, and most expressive computational device, what are the implications for natural languages? Does this mean that such computational power is actually necessary in order to process language?
Cambridge Books Online © Cambridge University Press, 2012
6.2 Unification grammars and Turing machines
227
However, the fact that unification grammars are powerful, expressive devices does not necessarily mean that natural languages are, in the general case, recursively enumerable languages. While it is possible to design unification grammars that are extremely expressive (and, therefore, highly inefficient to process), it is also possible to impose constraints on unification grammars that will dramatically limit the classes of languages that they can generate (while still remaining trans-context-free). Simply put, “natural” grammars for natural languages are much more constrained than the general case that the formalism permits. In the following sections (6.3, 6.4, and 6.5) we discuss such constraints. We begin, however, with the most general case. In the following we assume some familiarity with elementary computation theory; readers with insufficient background can safely skip this section. We define (a variant of) Turing machines below: It is a machine with a single head, a two-way infinite tape, and three operations: rewriting a symbol on the tape (without moving the head), a left head move, and a right head move (without changing the contents of the tape). The machine accepts an input by a single final (accepting) state. Definition 6.8 (Turing machines) A (deterministic) Turing machine (Q, Σ, , δ, s, h) is a tuple such that: • • • • • •
Q is a finite set of states; Σ is an alphabet, not including the symbols L and R; ∈ Σ is the blank symbol; s ∈ Q is the initial state; h ∈ Q is the final state; and δ : (Q \ {h}) × Σ → Q × (Σ ∪ {L, R}) is a total function specifying transitions.
A configuration of a Turing machine consists of the state, the contents of the tape, and the position of the head on the tape. A configuration is depicted as a quadruple c = (q, wl , σ, wr ) where q ∈ Q, wl , wr ∈ Σ∗ and σ ∈ Σ; in this case, the contents of the tape is ω ·wl ·σ ·wr ·ω (the concatenation symbol ‘·’ is often omitted), and the head is positioned on the σ symbol. A given configuration yields a next configuration, determined by the transition function δ, the current state, and the character on the tape that the head is positioned on. Definition 6.9 (Next configuration) Let first(σ1 · · · σn ) =
σ1 n > 0 n=0
Cambridge Books Online © Cambridge University Press, 2012
228
6 Computational aspects of unification grammars but-first(σ1 · · · σn ) = last(σ1 · · · σn ) = but-last(σ1 · · · σn ) =
σ2 · · · σn n > 1 n≤1 σn n > 0 n=0
σ1 · · · σn−1 n > 1 n≤1
Then the next configuration c of a configuration c = (q, wl , σ, wr ) is defined iff q = h, in which case it is: ⎧ if δ(q, σ) = (p, σ ) where σ ∈ Σ ⎨ (p, wl , σ , wr ) c = (p, wl σ, first(wr ), but-first(wr )) if δ(q, σ) = (p, R) ⎩ (p, but-last(wl ), last(wl ), σwr ) if δ(q, σ) = (p, L) If c is the next configuration of c, we write c
c and say that c yields c .
In the first case, when the transition function specifies a rewrite operation, the contents of the tape to the left and to the right of the head remain intact, the position of the head is not changed, and the only change in the configuration is the contents of the tape at the position the head points to, which changes from σ to σ . The second case deals with a movement of the head to the right. If the next character to the right of the current position of the head is not blank, the head simply shifts one position to the right: wl of the next configuration is obtained by concatenating σ to the end of the current wl , the next σ is the first character in the current wr and the next wr is obtained by removing the first character in the current wr . A slight complication occurs when the current wr is empty. In this case, it is assumed that the rest of the tape to the right is padded with blanks. This assumption is realized by the definitions of first and but-first. Finally, the third case handles a left movement and is precisely symmetric to the case of right movement. Notice that a next configuration is only defined for configurations in which the state is not the final state, h; note also that since δ is a total function, there always exists a unique next configuration for every given configuration. A computation of the Turing machine is a (possibly infinite) sequence of configurations, starting with an initial configuration corresponding to some string w ∈ Σ∗ , the input. Given some input w, the initial configuration is c0 = (s, w, , ), where s is the initial state of the machine. A computation is thus a ∗
sequence c0 , c1 , . . . such that for all i > 0, ci−1 ci . We use ‘ ’ to denote the reflexive transitive closure of ‘ ’. Some computations may terminate: When a
Cambridge Books Online © Cambridge University Press, 2012
6.2 Unification grammars and Turing machines
229
computation yields a configuration in which the state is h, the final state, there is no next configuration (since δ is undefined on h). Other computations, however, can be nonterminating. Definition 6.10 (Language of a Turing machine) The language of a Turing machine M is the set L(M ) = {w ∈ Σ∗ | c0 = (s, w, , ) (h, w , σ, w ) for some w , w ∈ Σ∗ and σ ∈ Σ}.
∗
The language of a Turing machine is thus the set of strings for which the computation successfully terminates. Here, termination is only defined in terms of reaching the accepting (final) state, h. Specifically, observe that a Turing machine can reject an input word in two different ways: either by reaching a nonaccepting state at the end of the computation; or, crucially, because of nontermination: Computations with a Turing machine are not guaranteed to terminate. Note further that the contents of the tape upon termination is immaterial. To show that unification grammars are equivalent to Turing machines, we show how the operation of an arbitrary Turing machine can be “simulated” by a grammar. Specifically, we show that for every Turing machine M there exists a unification grammar G such that L(M ) = L(G). This result should not come as a surprise: Several restricted versions of Turing machines are known to be equivalent to the model defined above. In particular, a push-down automaton with two stacks (or a two-counter machine) is such a model of computation. Feature structures can be easily used to manipulate stacks, as we showed in Section 4.1, and any finite number of stacks can be encoded in a single feature structure. The idea behind the simulation is that the nested, unbounded nature of feature structures means that it is possible to encode the contents of a tape (represented as lists of characters) within the feature structures involved in a derivation. The state of the Turing machine can be easily encoded as the base category of the feature structure (the number of states being finite). The transitions are then encoded as simple manipulations of the feature structures. In the following encoding below we make use of the feature structure representation of lists, introduced in Section 4.1. Feature structures represent configurations of a Turing machine and consequently have four features: CAT, representing the state of the machine; CURR, representing the character under the head; RIGHT, representing the tape contents to the right of the head (as a list); and LEFT, representing the tape contents to the left of the head, in reversed order. Formally, a configuration c = (q, wl, σ, wr ) of the Turing machine is represented
Cambridge Books Online © Cambridge University Press, 2012
230
6 Computational aspects of unification grammars
by the feature structure
⎡
⎤ q ⎢ CURR : σ ⎥ ⎥ Ac = ⎢ ⎣ LEFT : 1 ⎦ , CAT :
RIGHT
: 2
where 1 stands for the (reversed) list representation of w_l, and 2 stands for the list representation of w_r. When the Turing machine rewrites the symbol under the head of the tape, the grammar will have a rule that changes the value of the CURR feature accordingly. Right movement and left movement of the head of the machine are simulated by rules that change the contents of the RIGHT and LEFT lists in the obvious way, as we illustrate.

Given a Turing machine M = (Q, Σ, ␣, δ, s, h) (and assuming, without loss of generality, that start, elist, lex ∉ Σ), we define a unification grammar GM over the signature FEATS = {CAT, LEFT, RIGHT, CURR, FIRST, REST}, ATOMS = Σ ∪ {start, elist, lex}, WORDS = Σ. The lexicon maps every symbol σ ∈ Σ to the feature structure [CAT: lex, CURR: σ]. The start symbol of the grammar is:

[CAT: start, LEFT: elist]
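To make the encoding concrete, the following Python fragment is a small sketch of ours (not part of the formal construction): it represents feature structures as nested dictionaries, lists via FIRST/REST, and configurations via the four features just described. The names encode_list, encode_config, and the use of "_" for the blank symbol are our own conventions.

    # Feature structures as nested dictionaries; atoms as strings.
    ELIST = "elist"

    def encode_list(symbols):
        """Encode a sequence of tape symbols as a FIRST/REST list."""
        result = ELIST
        for sym in reversed(symbols):
            result = {"FIRST": sym, "REST": result}
        return result

    def encode_config(state, left, curr, right):
        """Encode a configuration (q, w_l, sigma, w_r); LEFT is stored reversed."""
        return {
            "CAT": state,
            "CURR": curr,
            "LEFT": encode_list(list(reversed(left))),
            "RIGHT": encode_list(list(right)),
        }

    # The initial configuration for the input "ab": state s, head on a blank
    # ("_" stands in for the blank symbol) immediately to the right of the input.
    c0 = encode_config("s", "ab", "_", "")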
Grammar rules can be divided into five groups. First, two rules, which are almost independent of the specific Turing machine (they are only dependent on Σ), are responsible for generating arbitrary strings over Σ∗, while at the same time recording the generated string (as a reversed list of alphabet symbols) as the value of the feature LEFT. Intuitively, these rules prepare the initial configuration of the Turing machine given some input word w by recording w on the left part of the tape (in reversed order), and by placing the head of the machine immediately to the right of w, on a blank symbol.

ρ1: [CAT: start, LEFT: 1] → [CAT: lex, CURR: 2]  [CAT: start, LEFT: [FIRST: 2, REST: 1]]

ρ2: [CAT: start, LEFT: 1] → [CAT: s, CURR: ␣, LEFT: 1, RIGHT: elist]
For some input string w, the two rules above can be used to derive a form consisting of several instances of [CAT: lex], precisely the preterminals of w, followed by a single feature structure Ac0, representing the initial configuration c0 of the Turing machine. In the special case of w = ε, ρ1 is inapplicable and ρ2 can be applied exactly once, deriving a form of length 1 (Ac0).

The second group of rules is defined for rewriting transitions. For every q, σ such that δ(q, σ) = (p, σ′) and σ′ ∈ Σ, the following rule is defined:

ρ3^{q,σ}: [CAT: q, CURR: σ, RIGHT: 4, LEFT: 2] → [CAT: p, CURR: σ′, RIGHT: 4, LEFT: 2]
That is, if the current state is q and the head points to σ, the next state is p and the head points to σ′, while the left and the right portions of the tape are not changed.

A third group of rules is defined for the right movement of the head. This case is slightly more complicated because the situation in which the right portion of the tape is empty must be carefully taken care of. For every q, σ such that δ(q, σ) = (p, R) we define two rules, the first for this extreme case and the second for the default case:

ρ4^{q,σ}: [CAT: q, CURR: σ, RIGHT: elist, LEFT: 4] → [CAT: p, CURR: ␣, RIGHT: elist, LEFT: [FIRST: σ, REST: 4]]

ρ5^{q,σ}: [CAT: q, CURR: σ, RIGHT: [FIRST: 4, REST: 2], LEFT: 5] → [CAT: p, CURR: 4, RIGHT: 2, LEFT: [FIRST: σ, REST: 5]]
The first rule is only triggered in case the RIGHT feature of the mother, q, has the value elist (any other value it can have is bound to be a complex feature structure, and hence, incompatible with the atom elist). Since the head moves to the right, the contents of the left portion of the tape are shifted left in the next state, p: Its first character is the current σ, and its rest is the current state’s left portion (recall that the left part of the tape is encoded as a list, in reversed order). Since the right portion of the current tape is empty, in the next configuration the head points to a blank symbol and the right portion of the tape remains empty.
The second rule is only triggered when the RIGHT feature of q is not empty: It must be a list because the value of RIGHT refers to the FIRST and REST features. The rule shifts the right portion of the tape leftward: The next configuration’s CURR feature is the first element in the current configuration’s right tape, and the rest of the current configuration’s right tape becomes the next configuration’s RIGHT.

The fourth group of rules handles left movements in a symmetric fashion. For every q, σ such that δ(q, σ) = (p, L) we define two rules:

ρ6^{q,σ}: [CAT: q, CURR: σ, RIGHT: 4, LEFT: elist] → [CAT: p, CURR: ␣, RIGHT: [FIRST: σ, REST: 4], LEFT: elist]

ρ7^{q,σ}: [CAT: q, CURR: σ, RIGHT: 4, LEFT: [FIRST: 2, REST: 5]] → [CAT: p, CURR: 2, RIGHT: [FIRST: σ, REST: 4], LEFT: 5]
Finally, one more rule is needed to terminate the derivation in case the Turing machine reaches the final state:

ρ8: [CAT: h] → ε
This rule is of course independent of the specific Turing machine. Recall that the first two rules derive, when applied to a word w, a form consisting of the sequence of preterminals of w, followed by one feature structure corresponding to the initial configuration of the Turing machine. The other rules listed above simulate the operation of the Turing machine, and in particular, the next configuration relation. Since all these rules (other than the first two) are unit rules, they all operate on the last element of the sentential form, and the length of the form never increases. If a computation is nonterminating, the derivation will never terminate either. If, however, the computation reaches the final state, the last rule of the grammar will fire and will replace the last element of the form by ε, yielding a form that consists exactly of the preterminals of the input w. In other words, w will be generated by the grammar. We suppress a complete proof that the Turing machine and the grammar simulating it are indeed equivalent. Such a proof would go along the following lines.
Lemma 6.11 Let M be a Turing machine; and c1, c2, two of its configurations. Then c1 ⊢ c2 iff Ac1 ⇒ Ac2 in GM.

Lemma 6.12 Let M be a Turing machine; and w ∈ Σ∗, its input. Then GM derives a sentential form consisting of the preterminals of w, followed by the feature structure Ac0, where c0 is the initial configuration of M.

Lemma 6.13 Let M be a Turing machine; and w ∈ Σ∗, its input. Then, M terminates on the input w iff GM induces a finite derivation on w.

Theorem 6.14 Let M be a Turing machine. Then, L(M) = L(GM).

An immediate consequence of this theorem is the following:

Corollary 6.15 The universal recognition problem for unification grammars is undecidable.

Proof Assume toward a contradiction that a procedure P exists that can determine, for every unification grammar G and string w, whether w ∈ L(G). We outline a procedure for deciding whether an arbitrary Turing machine M accepts an input w, a contradiction. Let M be a Turing machine. Let GM be the unification grammar simulating M. Given an input w, use P to compute whether w ∈ L(GM) and return the result. By Theorem 6.14, this is the correct result.

What this means is that given an arbitrary unification grammar G and some string w, no procedure can determine whether or not w ∈ L(G). Of course, for many specific unification grammars the recognition problem is decidable, and may even be solved efficiently. In the following sections we consider some classes of such grammars.

Exercise 6.12. Define a language L such that no unification grammar G exists for which L(G) = L.
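As an informal illustration of the construction of this section, the following Python fragment sketches the next-configuration relation as it is mirrored by rules ρ3–ρ7, operating on the dictionary encoding of configurations shown earlier. It is our own sketch, not part of the proof: the function step, the transition table delta (mapping (state, symbol) pairs to (state, action) pairs, where the action is either a tape symbol or "L"/"R"), and the use of "_" for the blank are all assumptions for the sake of the example.

    ELIST = "elist"
    BLANK = "_"   # stands in for the blank symbol

    def step(config, delta):
        """Apply one transition of delta to an encoded configuration."""
        q, sigma = config["CAT"], config["CURR"]
        p, action = delta[(q, sigma)]
        left, right = config["LEFT"], config["RIGHT"]
        if action not in ("L", "R"):              # rewriting transition (cf. rho3)
            return {"CAT": p, "CURR": action, "LEFT": left, "RIGHT": right}
        if action == "R":
            new_left = {"FIRST": sigma, "REST": left}
            if right == ELIST:                    # right portion empty (cf. rho4)
                return {"CAT": p, "CURR": BLANK, "LEFT": new_left, "RIGHT": ELIST}
            return {"CAT": p, "CURR": right["FIRST"],    # default case (cf. rho5)
                    "LEFT": new_left, "RIGHT": right["REST"]}
        new_right = {"FIRST": sigma, "REST": right}
        if left == ELIST:                         # left portion empty (cf. rho6)
            return {"CAT": p, "CURR": BLANK, "LEFT": ELIST, "RIGHT": new_right}
        return {"CAT": p, "CURR": left["FIRST"],         # default case (cf. rho7)
                "LEFT": left["REST"], "RIGHT": new_right}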
6.3 Off-line parsability

Unification grammars are Turing equivalent, and the recognition problem for unification grammars is undecidable in the general case. It is therefore desirable to look for constraints on unification grammars that, on one hand, limit their expressiveness but, on the other, facilitate more tractable computational processing. In this section and the following two sections, we explore such constraints on unification grammars, beginning with one that guarantees the decidability of the recognition problem. Several constraints on grammars, commonly known (generically) as the off-line parsability constraints (OLP), were
suggested, such that the recognition problem is decidable for OLP unification grammars. To understand the notion of off-line parsability, it is instrumental to understand why unification grammars are so expressive. Refer back to the construction of the grammar GM, simulating some Turing machine M, in the previous section. Observe that rules ρ3 through ρ7, which actually simulate the operation of the machine, are all unit rules (with a single element in their bodies). Clearly, application of a unit rule during a derivation does not change the length of the sentential form. Observe further that these rules can feed each other. Once one of them is applied to the selected element of some sentential form, and the body is substituted for the head (with the necessary modifications) to yield a new form, other rules of this group can still apply to the same selected element in a subsequent derivation step. In principle, these rules can apply indefinitely, increasing the length of the derivation (and, consequently, the depth of the derivation tree) without expanding the length of the sentential form (or the frontier of the tree). Indeed, when GM simulates a nonterminating computation of M, this is exactly what happens.

The motivation behind all OLP definitions is to rule out grammars which license trees in which an unbounded amount of material is generated without expanding the frontier word. This can happen due to two kinds of rules: ε-rules, whose bodies are empty, and unit rules, whose bodies consist of a single element. If the length of the sentential form can be guaranteed to grow as the derivation progresses, then a procedure for determining whether a given string w ∈ L(G) could be based on exhaustive search: Simply generate all the possible derivations up to the length of w, and check whether one of them is indeed a derivation of w. The number of possible derivations whose length is limited, under the assumption that each derivation step increases the length of the sentential form, may be huge, but is still finite, so such a procedure, while inefficient, is still effective.

With context-free grammars, removing rules which can cause an unbounded growth (while preserving equivalence) is always possible. In particular, one can always remove cyclic sequences of unit rules. However, with unification grammars such a procedure turns out to be more problematic. It is not trivial to determine when a sequence of unit rules is, indeed, cyclic and when a rule is redundant.

One way to guarantee the decidability of the universal recognition problem with a unification grammar G is by guaranteeing that for every word w and every derivation tree τ that G induces on w, the depth of τ (d(τ), refer back to Definition 1.6, Page 18) is bounded by some recursive function of the length of w. When this is the case, an exhaustive search algorithm can take advantage of this function to terminate the search.
Lemma 6.16 (The bounding lemma) For every unification grammar G, if there exists a recursive function fG : N → N, such that for every word w and every tree τ induced by G on w, d(τ) < fG(|w|), then membership for L(G) is decidable.

Proof Assume that for some grammar G such a function, fG, exists. Consider a given word w and the set of trees induced on w by G. Since the size of grammar rules in G is finite, the branching degrees of nodes in such trees are finite; and since the depth of each tree is bounded by fG, the set of all such trees is finite. Therefore, an effective exhaustive search algorithm can enumerate the members of this set of trees, in increasing depth. If some derivation tree is found by the algorithm, then w ∈ L(G); otherwise, w ∉ L(G), since any tree for w can be no deeper than fG(|w|).

Notice that the bounding lemma not only ensures the decidability of the recognition problem, but also the termination of the parsing problem (recall that the task of parsing is to compute the structures G induces on w when it is given that w ∈ L(G); this is discussed in detail in Section 6.7). If all the trees that are induced by a grammar are bounded in depth, then the exhaustive search algorithm is guaranteed to produce all of them in finite time. A more liberal version of the lemma would require only that for every word w ∈ L(G), at least one tree induced by G on w is depth bounded. This would still ensure the decidability of the recognition problem, but not of the parsing problem.

Example 6.8 Bounding functions. Consider the grammar Gabc (Example 6.1). We claim without proof that for every w ∈ L(Gabc) and every tree τ induced on w, d(τ) < |w|. There exists therefore a function fGabc such that for every word w and every tree τ induced by Gabc on w, d(τ) < fGabc(|w|); one such function is fGabc(n) = n. In contrast, we show a grammar for which no bounding function exists. Let M be a Turing machine such that for some input w, M does not terminate on w. Let GM be the grammar simulating M (as in Section 6.2). GM is a unification grammar for which no bounding function exists: if a bounding function fGM did exist, it would have been possible to enumerate all the trees induced on w up to depth fGM(|w|), thereby determining whether w ∈ L(GM), in contradiction to the assumption that M does not terminate on w.
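The decision procedure implicit in the proof of the bounding lemma can be phrased as a short program. The following Python fragment is our own schematic rendering (not part of the formal development); the enumeration of trees is supplied as a callable, since its details depend on the grammar at hand.

    def member(w, f_G, trees_up_to_depth):
        # trees_up_to_depth(d) is assumed to enumerate the (finitely many)
        # derivation trees whose depth is below d, as pairs (tree, frontier).
        for tree, frontier in trees_up_to_depth(f_G(len(w))):
            if frontier == w:
                return True
        return False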
Exercise 6.13. Prove the first claim of Example 6.8 by induction on the depth of τ . The OLP constraint proposed here disallows grammars that can generate derivation trees in which the same rule may be applied more than once from two different nodes dominating the same yield. Note that this is a key aspect of the construction of GM , the grammar that simulates an arbitrary Turing machine M , which we presented in Section 6.2. The ability to manipulate stacks corresponding to the tape of M was mandatory for the proper simulation with GM , and the rules of GM are all unit rules. In particular, derivations with GM can include several applications of the same rule, and since all the rules are unit rules, a potentially unbounded number of different feature structures can dominate the same leaf in the yield of the tree. Such a situation is ruled out with OLP grammars. Definition 6.17 (Nonbranching dominance chains) Given a grammar G and a derivation sequence σ1 ⇒ σ2 ⇒ · · · ⇒ σn induced by G, n ≥ 2, a subsequence σi ⇒ · · · ⇒ σj is nonbranching if for every k, i ≤ k < j, the immediate derivation σk ⇒ σk+1 is licensed by a unit rule (a rule of length 1). A derivation tree includes a nonbranching dominance chain if a derivation it represents includes a nonbranching subsequence.
Example 6.9 Nonbranching dominance chains. Refer back to Example 6.4 (Page 220). The tree includes a nonbranching dominance chain, as a unit rule is used to license the following derivation:

[CAT: ap, T: 2 end]  ⇒  [CAT: at]
In the following, grammars are assumed to include no ε-rules, but a more liberal constraint allowing them can be devised. The constraint we propose is a static property of the grammar that can be effectively tested off-line.

Definition 6.18 (Cyclically unifiable rules) A sequence of (not necessarily pairwise distinct) unit rules R1, . . . , Rk (k ≥ 1) is cyclically unifiable iff there exists a sequence of feature structures¹ σ1, . . . , σk+2 such that for 1 ≤ i ≤ k, σi ⇒ σi+1 by the rule Ri, and σk+1 ⇒ σk+2 by R1.

¹ Formally, these should be AMRSs of length 1, which are identified with feature structures here.
Example 6.10 displays two grammar rules, ρ1, ρ2. The sequence ρ1, ρ2 is cyclically unifiable, for example, by

σ1 = [CAT: p, F: a],  σ2 = [CAT: q, F: a],  σ3 = [F: b],  σ4 = [CAT: q, F: b]

Then, σ1 is unifiable with ρ1’s head, σ1 ⇒ σ2 by ρ1, σ2 ⇒ σ3 by ρ2, and then σ3 ⇒ σ4 by ρ1. The sequence ρ2, ρ1 is not cyclically unifiable; whatever ρ2 applies to, the resulting feature structure is [F: b]; then, applying ρ1 necessarily yields [CAT: q, F: b], which is incompatible with the head of ρ2. Hence, ρ2 cannot be applied again.

Example 6.10 A cyclically unifiable set of rules.

R = { ρ1: [CAT: p, F: 1] → [CAT: q, F: 1],
      ρ2: [F: a] → [F: b] }
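The behaviour of the two rules of Example 6.10 can be traced with a toy program. The following Python fragment is our own sketch for flat (non-nested) feature structures only; the helpers unify and apply_unit_rule, and the encoding of the single reentrancy as a list of shared features, are assumptions made for this illustration.

    def unify(a, b):
        """Unify two flat feature structures (dicts with atomic values)."""
        result = dict(a)
        for feat, val in b.items():
            if feat in result and result[feat] != val:
                return None          # clash between atomic values
            result[feat] = val
        return result

    def apply_unit_rule(fs, head, body, shared=()):
        """Apply a unit rule head -> body; 'shared' lists features whose values
        are reentrant between the head and the body (e.g., F in rho1)."""
        unified = unify(fs, head)
        if unified is None:
            return None
        new = dict(body)
        for feat in shared:
            new[feat] = unified[feat]
        return new

    rho1 = ({"CAT": "p"}, {"CAT": "q"}, ("F",))   # [CAT:p, F:1] -> [CAT:q, F:1]
    rho2 = ({"F": "a"}, {"F": "b"}, ())           # [F:a] -> [F:b]

    sigma = {"CAT": "p", "F": "a"}
    for rule in (rho1, rho2, rho1):               # rho1, rho2 is cyclically unifiable
        sigma = apply_unit_rule(sigma, *rule)
    print(sigma)                                   # {'CAT': 'q', 'F': 'b'}

    s = {"F": "a"}
    for rule in (rho2, rho1, rho2):                # rho2, rho1 is not
        s = apply_unit_rule(s, *rule) if s is not None else None
    print(s)                                       # None: rho2 cannot reapply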
Definition 6.19 (Off-line parsability) A unification grammar G is OLP iff it has no cyclically unifiable sequences. See Example 6.11. Lemma 6.20 An OLP unification grammar does not induce (on any word) any derivation tree with a nonbranching dominance chain in which the same rule is used more than once. Proof Assume toward a contradiction that for some OLP grammar G, a unit rule ρ1 is used more than once in a nonbranching chain. Therefore, there exists a sequence of MRSs σ1 , . . . , σk+2 , the path nodes on the derivation tree, and a sequence of unit rules ρ1 , . . . , ρk such that for 1 ≤ i ≤ k, σi ⇒ σi+1 by ρi , and σk+1 ⇒ σk+2 by ρ1 . Thus, G contains a cyclically unifiable sequence, a contradiction to G being OLP. Lemma 6.21 The depth of every derivation tree whose yield is of length n admitted by an OLP grammar G is bounded by (u + 1) × n, where u is the number of G’s unit rules. Proof Since G contains no cyclically unifiable sequences, by Lemma 6.20, no rule may be applied more than once in a nonbranching dominance chain. Therefore, the depth of any generated nonbranching dominance chain is bounded by u.
Example 6.11 Off-line parsable grammars. Refer back to the grammar Gabc of Example 6.1 (Page 216). While the grammar induces trees that include a nonbranching dominance chain, it is still off-line parsable since it does not include any cyclically unifiable set of rules. In contrast, consider the unification grammar GM simulating a Turing machine M (Section 6.2). Assume that the Turing machine has a transition δ(q, σ) = (q, σ) for some state q and symbol σ. According to the construction, GM will then include the following rule:

ρ3^{q,σ}: [CAT: q, CURR: σ, RIGHT: 4, LEFT: 2] → [CAT: q, CURR: σ, RIGHT: 4, LEFT: 2]
Thus in every derivation tree admitted by G, every u consecutive applications of unit rules (at most) are followed by either a leaf or an application of a nonunit rule expanding the yield (recall that no -rules are allowed). Therefore, the depth of every derivation tree is at most (u + 1) times the size of its yield. Corollary 6.22 The membership problem is decidable for OLP grammars. Proof This property follows directly from Lemma 6.21 and the bounding lemma. If G is OLP, then by Lemma 6.21, for every word w of length n and a tree τ induced by G on w, the depth of τ is bounded by a recursive function of n (here, (u + 1) × n, where u is independent of n). By Lemma 6.16, the membership problem for G is in this case decidable. In fact, it is possible to relax the off-line parsability constraint of Definition 6.19 and allow some (constrained) cyclically unifiable sequences; another possible relaxation allows -rules. We do not discuss such extensions here. It is important to note that while the class of languages generated by offline parsable unification grammars is indeed a proper subset of the class of
Cambridge Books Online © Cambridge University Press, 2012
6.4 Branching unification grammars
239
languages generated by arbitrary unification grammars, the former is still a significant class; in other words, the reduction in expressive power does not render the resulting formalism trivially constrained. Specifically, the class of languages generated by OLP grammars properly includes not only the contextfree languages, but also the mildly context-sensitive ones, as we presently show (Section 6.6). 6.4 Branching unification grammars The idea behind off-line parsability, as we presented it in the previous section, is to avoid derivations whose length grows without a corresponding extension of the length of the yield, thereby enabling an exhaustive search algorithm to enumerate all the parse trees for the given input string. We showed that the membership problem is decidable for OLP grammars in the previous section; however, the universal recognition problem is still computationally hard. A more extreme way to constrain the expressiveness of grammars is by disallowing -rules and unit rules altogether. Unfortunately, even this severe constraint does not guarantee efficient processing. This is the topic of the current section, which assumes familiarity with basic complexity theory and with truth tables for propositional logic. Definition 6.23 (Branching unification grammars) A unification grammar G = (L, R, As ) is branching iff for all ρ ∈ R, |ρ| > 2. In a branching unification grammar, all rules are of length greater than 2 (recall that the mother is counted in the length of a rule), so such a grammar cannot include -rules or unit rules. Exercise 6.14 (*). Which of the grammars listed in Chapter 5 is branching? We now consider the universal recognition problem with branching unification grammars. As we show here, this problem is “hard.” In computational terminology, we say that it is NP-hard. What this means is that in all likelihood (more precisely, unless P=NP), efficient (i.e., deterministic, polynomial) algorithms for this problem do not exist. The NP-hardness of the universal recognition problem for branching unification grammars is established through a reduction; this is a standard proof technique in computer science. We first select a problem in which the NPhardness is independently known; in this case, it is a problem known as 3-SAT (see next paragraph). Then, we show that if we had an efficient solution for our original problem, we could use it to solve the 3-SAT problem efficiently. Since the latter is known to be impossible, our assumption (that an efficient solution exists for the original problem) is refuted.
Cambridge Books Online © Cambridge University Press, 2012
240
6 Computational aspects of unification grammars
One of the first problems that was shown to be NP-hard is 3-SAT. It involves Boolean (logic) formulae in conjunctive normal form; these are expressions over a finite, nonempty set of Boolean variables (variables that can be assigned truth values) such that each expression consists of a conjunction of clauses, and each clause is a disjunction of exactly three literals, a literal being either a variable or its negation. We use x ¯ to denote the negation of a variable x. See Example 6.12. Example 6.12 An instance of 3-SAT. The following is an expression over the variables x, y, and z: F = (x ∨ y¯ ∨ z) ∧ (¯ x ∨ y ∨ z¯) It consists of two clauses; the first is (x ∨ ¬y ∨ z) and the second is (¯ x ∨ y ∨ z¯). Each clause consists of a disjunction of exactly three literals, and the two clauses are combined by conjunction. Of course, formulae can use any number of variables. Following is an example expression over two variables, x and y: x ∨ y ∨ y¯) F = (x ∨ y¯ ∨ y) ∧ (¯ and an example of an expression over five variables, x1 , . . . , x5 : (x1 ∨ x¯2 ∨ x3 ) ∧ (x¯1 ∨ x3 ∨ x¯4 ) ∧ (x2 ∨ x4 ∨ x5 ) ∧ (x¯3 ∨ x¯4 ∨ x¯5 ))
The 3-SAT problem is defined as follows: Given an arbitrary instance F (an expression of the form defined above), does there exist a satisfying assignment of truth values to the variables of F ? An assignment is satisfying if it yields the value true to the entire expression. For example, the assignment of true to the variables x and y would render the expression F in Example 6.12 true; hence, F is satisfiable. Exercise 6.15. Show a 3-SAT instance for which no satisfying assignment exists. Exercise 6.16. Show a 3-SAT instance for which every assignment is satisfying. The reason that 3-SAT is a computationally hard problem is that the number of possible assignments of values to variables is huge. If an expression has
Cambridge Books Online © Cambridge University Press, 2012
6.4 Branching unification grammars
241
n variables, and each variable can have either of two values (true or false), then the number of different assignments is 2n . Of course, it is possible to check all these assignments exhaustively, but this induces an exponential-time algorithm. (Crucially, checking a given assignment is trivial and can be done in linear time.) Unfortunately, no more efficient algorithm for this problem is known (or is likely to be found). To show that the universal recognition problem for branching unification grammars is NP-hard, we reduce 3-SAT to this problem. In other words, given an instance F of the 3-SAT problem, we transform it to an instance of our own problem, namely, to a branching unification grammar G and a string w to test for membership. The transformation itself is efficient (can be done in time polynomial in the size of the 3-SAT instance). Crucially, the transformation preserves correctness: F is satisfiable if and only if w ∈ L(G). If we had an efficient algorithm to solve our own problem (i.e., to check membership with a branching unification grammar), it would immediately induce an efficient algorithm to 3-SAT, in contradiction to the fact that it is NP-hard. Let F be a 3-SAT instance, and let X be the set of variables in F . The first part of the transformation is easy: Given F , we create a string w(F ) (over an alphabet consisting of the literals over X) by removing the parentheses, the disjunction symbols, and the conjunction symbols from F ; the remaining sequence of literals is our string. For example, the expression (x ∨ y¯ ∨ z) ∧ (¯ x ∨ y ∨ z¯) is converted to the string x¯ yz x ¯y z¯. We also create a dedicated grammar, G(F ), for this instance. The grammar, over the signature F EATS = {CAT, ASSIGN} ∪ X, ATOMS = {s, t, f, true, false}, of eight rules, depicted in Example 6.13.
consists The start symbol is simply CAT : s . Note that the grammar rules are independent of F (the lexicon won’t be). The bodies of rules 2 through 8 are all of length 3 because we are dealing with 3-SAT. Clearly, the grammar is branching. The lexicon consists of 2 × m entries, where m = |X| is the number of variables in the Boolean expression. Each entry corresponds to a literal, one for the variable and one for its negation. In other words, the set WORDS consists of all literals in F . The lexical entries of the literals are all ambiguous: Each literal is associated with two feature structures. In one, the value of CAT is t, and in the other, it is f. Furthermore,
these feature structures also have an ASSIGN feature, whose value is x : true if CAT is t and the literal is positive, or if CAT
is f and the literal is negative; the value of ASSIGN is x : false if CAT is t and the literal is negative, or CAT is f and the variable is positive. The two lexical entries induced by the variable x are depicted in Example 6.14. To see how the reduction works, consider the 3-SAT instance F of Example 6.12. We focus on one of the satisfying assignments, namely, the one in which x is true, and y and z are false (other satisfying assignments will
Cambridge Books Online © Cambridge University Press, 2012
242
6 Computational aspects of unification grammars
Example 6.13 A branching unification grammar, Gb (F ), for 3-SAT. CAT : s CAT : s CAT : s 1 → ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 CAT : s CAT : t CAT : t CAT : t 2 → 1 1 1 ASSIGN : ASSIGN : ASSIGN : ASSIGN : 1 CAT : s CAT : t CAT : t CAT : f 3 → ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 CAT : s CAT : t CAT : f CAT : t 4 → ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 CAT : s CAT : f CAT : t CAT : t 5 → 1 1 1 ASSIGN : ASSIGN : ASSIGN : ASSIGN : 1 CAT : s CAT : t CAT : f CAT : f 6 → ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 CAT : s CAT : f CAT : f CAT : t 7 → ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 ASSIGN : 1 CAT : s CAT : f CAT : t CAT : f 8 → 1 1 1 ASSIGN : ASSIGN : ASSIGN : ASSIGN : 1
Example 6.14 A lexicon for 3-SAT. The variable x induces two literals, the positive x and the negative x ¯. Each is associated with two feature structures: CAT : t
f
, CAT : L(x) = ASSIGN : x : true ASSIGN : x : false CAT : t
f
, CAT : L(¯ x) = ASSIGN : x : false ASSIGN : x : true Since any expression includes a finite number of variables, the lexicon is also finite.
correspond to different trees). A lexicalized derivation tree for the string w(F ) = x y¯ z x¯ y z¯ is depicted in Example 6.15. Observe that each pre-terminal in the tree is consistent with exactly one of the elements in the lexical entry of the corresponding terminal.
Cambridge Books Online © Cambridge University Press, 2012
6.4 Branching unification grammars
243
Example 6.15 A lexicalized derivation tree for the 3-SAT instance.
z¯
ASSIGN : 1
y
: 1 x ¯
ASSIGN : 1
CAT :
⎡
CAT :
z
: 1 y¯
ASSIGN
: 1
t CAT :
x
ASSIGN : 1
t CAT :
ASSIGN
: 1
s CAT :
ASSIGN
f
s
CAT :
⎡
ASSIGN
f f
CAT :
: 1 ASSIGN
⎢ x : true ⎥ ⎢ ⎥ ⎣ ASSIGN : 1 ⎣ y : false ⎦ ⎦ z : false
⎤
⎤
CAT :
s
CAT :
t
The following tree is induced by the grammar G(F ) on the string x y¯ z x¯ y z¯:
Cambridge Books Online © Cambridge University Press, 2012
244
6 Computational aspects of unification grammars
Exercise 6.17. Show a derivation tree with G(F ) for the expression: F = (x ∨ y¯ ∨ z) ∧ (¯ x ∨ y ∨ z¯) ∧ (¯ q ∨ y ∨ p¯) ∧ (¯ x ∨ p ∨ q¯) Exercise 6.18. Show a nonsatisfiable 3-SAT instance F , and explain why w(F ) ∈ L(G(F )). To complete the proof, we need to show that a 3-SAT expression F is satisfiable iff w(F ) ∈ L(G(F )). If w(F ) ∈ L(G(F )), there is a derivation (tree) for w(F ) induced by G(F ). By the construction of the grammar, each preterminal
triplet (in other words, the immediate three daughters of ternary-branching CAT : s -nodes) has at least one member whose CAT value is t. Assigning true to the literal dominated by this member yields a satisfying assignment (truth assignments to other literals are irrelevant). For the reverse direction, assume that F is satisfiable. Since F is in conjunctive normal form, at least one of the literals in each clause
mustbe true. A derivation tree for w(F ) will therefore have as preterminals CAT : t , dominating literals
whose assignment is true, and CAT : f for those whose assignment is false. The rest of the tree is built as in Example 6.15. Example 6.16 Reducing 3-SAT to membership with branching unification grammars. Refer back to Example 6.15, concerning the 3-SAT expression F = (x ∨ y¯ ∨ z) ∧ (¯ x ∨ y ∨ z¯) Observe that F is satisfiable, for example, through the assignment of true to x and false to z. F is converted to the input string x y¯ z x ¯ y z¯. Indeed, this string is derivable by G(F ), as shown in the tree of Example 6.15. In this tree, x is dominated by a preterminal whose CAT value is t, and z¯ is dominated by a preterminal whose CAT value is, again, t. Note how the value of the ASSIGN feature reflects the (lexical) assignments of each of these two variables. Observe also that the value assigned to y is irrelevant.
6.5 Polynomially parsable unification grammars As we have seen, even when a grammar is off-line parsable, or even branching, recognition time may well be exponential in the length of the input word. The exhaustive search recognition algorithm is indeed exponential in the worst case, since the number of different derivation trees for a string of length n (even
Cambridge Books Online © Cambridge University Press, 2012
6.5 Polynomially parsable unification grammars
245
in the case of context-free grammars) is exponential in n, and the algorithm enumerates them. In this section we investigate stricter constraints which guarantee that recognition is polynomial in the length of the input (ignoring the size of the grammar). This is achieved by constraining the grammars such that the languages they generate belong to a strict class of languages: context-free languages in one case (Section 6.5.1); mildly context-sensitive in another (see Section 6.5.2). The constrained unification grammars can be mapped to context-free and linear indexed grammars, respectively, and thereby (assuming that the mapping itself can be done efficiently) a polynomial recognition time is achieved. While the constrained unification grammar formalism that can generate any mildly context-sensitive language may be relevant for natural languages (see Section 1.7), the more restrictive constraint that renders the formalism equivalent to context-free grammars is much less so (Section 1.6). Our main interest in the following section is, therefore, theoretical rather than practical.
6.5.1 Context-free unification grammars In this section we define a constraint on unification grammars that ensures that the grammars satisfying it generate exactly the class of context-free languages. The constraint disallows any reentrancies in the rules of the grammar. When rules are nonreentrant, applying a rule means inserting an exact copy of the body of the rule into the generated (sentential) form, without affecting neighboring elements of the form the rule is applied to. The only difference between rule application in the constrained formalism and the analog operation in CFGs is that the former requires unification; whereas, the latter only calls for identity check. This small difference does not affect the generative power of the formalism, since unification can be precompiled in this simple case, as we show in Definition 6.26. Definition 6.24 (Nonreentrant grammars) A unification grammar G = L, R, As over the signature ATOMS, F EATS, W ORDS is nonreentrant iff every rule ρ ∈ R is nonreentrant. Let UGnr be the class of nonreentrant unification grammars. We now set out to show that nonreentrant unification grammars are equivalent to context-free grammars. The trivial direction is to map a CFG to a nonreentrant unification grammar, since every CFG is, trivially, such a grammar (where terminal and nonterminal symbols are viewed as atomic, and hence nonreentrant, feature structures). For the inverse direction, we define a mapping
Cambridge Books Online © Cambridge University Press, 2012
246
6 Computational aspects of unification grammars
from UGnr to C FGS. The nonterminals of the CFG in the image of the mapping are the set of all feature structures defined in the source unification grammar. Definition 6.25 Let G = L, R, As be a nonreentrant unification grammar. The set of feature structures induced by G, denoted F (G), is {Ai | if there exists a rule ρ ∈ R such that ρ = A0 → A1 · · · An and 0 ≤ i ≤ n}. Example 6.17 Nonreentrant unification grammar and the feature structures it induces. Consider the grammar G0 , of Example 5.2 (Page 167). Observe that
G0 is nonreentrant. byG0 are F (G0 ) = { CAT : s ,
The feature structures
induced
CAT : np , CAT : vp , CAT : v , CAT : d , CAT : n , CAT : pron , CAT : propn }
Clearly, for every unification grammar G, F (G) is finite. In the following definition, this set (together with the set of lexical entries of G) constitutes the set of nonterminal symbols of a context-free grammar. Definition 6.26 Let ug2cfg : UGnr → C FGS be a mapping of UGnr to C FGS, such that if G = L, R, As is over the signature ATOMS, F EATS, W ORDS , then ug2cfg(G) = Σ, V, S, P , where • • • •
Σ = W ORDS ; V = F (G) ∪ a∈ATOMS L(a) ∪ {As }; S = As ; and P consists of the following rules: 1. Let b ∈ W ORDS , let B ∈ L(b) and let A0 → A1 . . . An ∈ R. If for some i, 1 ≤ i ≤ n, Ai B↓, then Ai → b ∈ P . 2. If A0 → A1 . . . An ∈ R and As A0 ↓, then S → A1 . . . An ∈ P . 3. Let ρ1 = A0 → A1 . . . An and ρ2 = B0 → B1 . . . Bm , where ρ1 , ρ2 ∈ R. If for some i, 1 ≤ i ≤ n, Ai B0 ↓, then the rule Ai → B1 . . . Bm ∈ P .
In words, the terminals of the CFG are the same terminals of the unification grammar, and the set of nonterminals V is the set of all the (finitely many) feature structures occurring in any of the rules or the lexicon of the unification grammar. The start symbol is naturally As . The context-free grammar has three types of rules: Terminal rules are generated by Clause 1 of the definition; phrasal rules, by Clause 3; and rules deriving the start symbol, by Clause 2. The size of ug2cfg(G) is polynomial in the size of G. The mapping is demonstrated in Example 6.18.
Cambridge Books Online © Cambridge University Press, 2012
6.5 Polynomially parsable unification grammars
247
Example 6.18 Mapping nonreentrant unification grammars to CFGs. Let ATOMS = {v, u, w}, F EATS = {F1 , F2 }, W ORDS = {a, b} and G = L, R, As be a nonreentrant unification grammar for the language {an bn | 0 ≤ n}, such that F :w • As = 1 F2 : w
• The lexicon is: L(a) = { F2 : v } and L(b) = { F2 : u } • Theset of rules R is: F1 : w 1. → F2 : w F1 : v
F :u
2. F2 : w → 1 F2 : w F2 : v F2 : u Then, the context-free grammar ug2cfg(G) = Σ, V, S, P is:
F1 : w F1 : u F1 : v • V = , , F2 : v , F2 : u , F2 : w , F2 : w F2 : v F2 : u • Σ = W ORDS = {a, b} F :w • S = As = 1 F2 : w • Theset of rules P is: F1 : u 1. →a F2 : v F1 : v →b 2. F :u 2 F1 : w → 3. F :w
2 4. F2 : w → F1 : v F1 : w F1 : u
→ 5. F2 : w F2 : w F :v F :u 2 2 F1 : v
F1 : u
6. F2 : w → F2 : w F2 : v F2 : u
Lemma 6.27 Let G = L, R, As be a nonreentrant unification grammar over the signature ATOMS, F EATS, W ORDS , and let G = ug2cfg(G) = Σ, V, S, P . ∗ ∗ Then, As ⇒G A1 . . . An iff S ⇒G A1 . . . An . The proof, which is by inductions on the lengths of the derivation sequences, is suppressed. Example 6.19 depicts a derivation tree of the string aabb with the
Cambridge Books Online © Cambridge University Press, 2012
248
6 Computational aspects of unification grammars
grammar G in Example 6.18. Note that the same tree exactly is also a derivation tree with respect to ug2cfg(G), but in the latter case, the feature structures are actually atomic symbols. Example 6.19 Derivation tree with a nonreentrant unification grammar.
:u F2 : v F1
a
:u F2 : v F1
a
:w F2 : w F1
F2
:w
:w F2 : w F1
:v F2 : u F1
:v F2 : u F1
b
b
Theorem 6.28 If G = L, R, As is a nonreentrant unification grammar, then L(G) = L(ug2cfg(G)). Corollary 6.29 Nonreentrant unification grammars are weakly equivalent to C FGS. 6.5.2 Mildly context-sensitive unification grammars The second constraint on unification grammars, which guarantees that the class of languages they generate is exactly the class of mildly context-sensitive languages (more precisely, the tree-adjoining languages; see Section 1.7), is also expressed in terms of reentrancies. Definition 6.30 (One-reentrant grammars) A unification grammar G = L, R, As over the signature ATOMS, F EATS, W ORDS is one-reentrant iff for every rule ρ ∈ R, ρ includes at most one reentrancy, and if such a reentrancy is present, it is between the head of the rule and some element of the body. Formally, ≈ρ includes at most one nontrivial pair (i1 , π1 ) ≈ρ (i2 , π2 ), where i1 = 1 and i2 > 1. See Example 6.20. One-reentrant unification grammars induce highly constrained (sentential) forms: In such forms, there are no reentrancies whatsoever, neither between distinct elements nor within a single element. The following lemma can be proven by a simple induction on the length of a derivation sequence; it follows
Cambridge Books Online © Cambridge University Press, 2012
6.5 Polynomially parsable unification grammars
249
Example 6.20 A one-reentrant unification grammar, G1r . Following is a one-reentrant grammar, G1r. Observe that ρ1 and ρ5 have no reentrancies; whereas, in ρ2 , ρ3 and ρ4 exactly one reentrancy is present, between the mother and one of the daughters (the first in ρ2 and ρ3 ; the second in ρ4 ). ρ1 :
CAT :
ρ2 :
CAT : T:
ρ3 :
CAT : T:
s
cp
→
→
1
cp
T: 1
CAT :
abp
ρ4 : T: T: 2 CAT : abp ρ5 : T: end
CAT : at
CAT :
bt
CAT :
ct
→ →
T: CAT : T:
→
CAT :
CAT : T:
cp end
cp
T: 1
abp
CAT :
ct
1
CAT :
at
CAT :
at
CAT : T:
CAT :
abp 2
bt
CAT :
bt
→ a → b → c
directly from the fact that rules in a one-reentrant unification grammar have no reentrancies between elements of their body. Lemma 6.31 If τ is a sentential form induced by a one-reentrant grammar, then there are no reentrancies between the elements of τ or within an element of τ . Since all the feature structures in forms induced by a one-reentrant unification grammar are nonreentrant, unification is simplified. Furthermore, the flow of information during a derivation is highly constrained because values can only be shared between the head of a rule and a single element in its body. As an example, consider the one-reentrant grammar G1r in Example 6.20. A derivation sequence with this grammar is listed in Example 6.21. Observe that in the sequence, all forms are nonreentrant; no reentrancy tags are present in any of the forms involved in the derivation sequence. Furthermore, the single reentrancy that is allowed in the rules can only be used to propagate information from the
Cambridge Books Online © Cambridge University Press, 2012
250
6 Computational aspects of unification grammars
mother to a single daughter; in the derivation, this is reflected by expanding at most one element in the derived form. For example, the third step of the derivation sequence in Example 6.21, reflecting the application of ρ2 , results in an expansion of the first element in the sentential form only. Example 6.21 Derivation with the one-reentrant unification grammar G1r . We show a derivation sequence for aabbcc with this grammar. As a result of the one-reentrancy constraint, all the sentential forms in the derivation below have no reentrancies whatsoever.
start symbol CAT : s ⇒
CAT : T:
CAT : T:
cp ⇒ end cp
T:
end
ρ1
CAT :
ct ⇒
cp CAT : ct CAT : ct ⇒
T: T : T : end
CAT : abp CAT : ct CAT : ct ⇒
T: T : end
CAT : abp
CAT : at CAT : bt CAT : ct CAT : ct ⇒ T: end
CAT : at CAT : at CAT : bt CAT : bt CAT : ct CAT : ct
CAT :
ρ2 ρ2 ρ3 ρ4 ρ5
The resulting sentential form is obviously the preterminal sequence of the string aabbcc.
Still, one-reentrant grammars are not as restricted as nonreentrant unification grammars: The construction of the previous section fails in the case of onereentrant unification grammars simply because the set of feature structures induced by a one-reentrant unification grammar can, in the general case, be infinite, and a context-free grammar must have a finite number of nonterminal symbols. The proof that one-reentrant unification grammars generate exactly the class of mildly context-sensitive languages is complex, and we suppress it here. It is a constructive proof, which maps one-reentrant unification grammars
Cambridge Books Online © Cambridge University Press, 2012
6.6 Unification grammars for natural languages
251
to linear indexed grammars, one of the known mildly context-sensitive formalisms. Example 6.21 demonstrates a derivation with the grammar G1r of Example 6.20; we claim without proof that L(G1r ) = {an bn cn | n > 0}. Exercise 6.19. Show a derivation sequence with G1r for the string aaabbbccc. Exercise 6.20. Prove that L(G1r ) = {an bn cn | n > 0}. Exercise 6.21. Show a one-reentrant unification grammar for the language {ww | w ∈ {a, b}∗ }.
6.6 Unification grammars for natural languages Let us summarize the first part of this chapter: in the general case, unification grammars are Turing equivalent (Section 6.2). OLP grammars (Section 6.3) ensure the decidability of the recognition problem, but not tractable processing. Constraining the formalism further, branching unification grammars (Section 6.4) impose additional constraints, but they, too, cannot guarantee efficient processing. Polynomial recognition time is guaranteed, however, with nonreentrant and one-reentrant grammars (Section 6.5). On the other hand, these two formalisms may be too restrictive for practical grammar design. Example 6.22 graphically depicts the relations that hold among the various constrained versions of unification grammars discussed above. Let LOLP be the class of languages defined by OLP grammars, and let Lbu be the class of languages defined by branching unification grammars. Of course, the class of languages defined by one-reentrant unification grammars is T AL, the class of tree-adjoining languages, and the class of languages defined by nonreentrant unification grammars is CF L, the context-free languages. Then some relations clearly hold among these languages classes. Trivially, CF L ⊂ T AL. It should also be clear that Lbu ⊂ LOLP because branching unification grammars do not permit unit rules at all, let alone cyclically unifiabe ones. As for the relationship between Lbu and T AL, we conjecture that T AL ⊂ Lbu , although we are not aware of a proof. What are the implications of these formal results for designing grammars for natural languages? We have presented unification grammars as an adequate formalism with which to specify the structure of natural languages. But if recognition with such grammars is intractable, can they still serve as adequate models? Conversely, are highly constrained unification grammars still adequate for expressing the syntactic properties of a natural language?
Cambridge Books Online © Cambridge University Press, 2012
252
6 Computational aspects of unification grammars
Example 6.22 Expressivity of various constrained formalisms. OLP grammars: LOLP Branching UG: Lbu One-reentrant UG: T AL Nonreentrant UG: CF L
To answer these questions, let us first refer back to some of the grammars of (fragments of) English defined in Chapter 5. The very first unification grammar we listed, namely, G0 of Example 5.2 (Page 167), is obviously a nonreentrant grammar (and, consequently, is polynomially parsable). But as soon as we account for the simplest, most basic agreement phenomena, the grammar Gagr of Example 5.4 is not even one-reentrant. Furthermore, because of rule 2, which is a unit rule, Gagr is not even branching. More generally, the no-reentrancy constraint immediately rules out all the “natural” grammars we defined for fragments of English in Chapter 5. This does not mean that the grammars of Chapter 5 generate trans-context-free languages; on the contrary, as we show in Example 5.5 (page 171), Gagr does generate a context-free language. But the way it is formulated, it uses reentrancy in a way that is not available with the highly constrained unification grammars we consider in Section 6.5. Specifically, this grammar expresses agreement (in this case, subject-verb agreement in English) in a natural way, which requires reentrancy among more than one daughter (and, sometimes, also the mother) in some rules. Similarly, one-reentrant unification grammars are not very practical for grammar developers: even simple agreement constraints cannot be naturally expressed with them. The main importance of the constraint is theoretical, for the understanding of the formalism itself. That said, many of the “natural” grammars we defined in Chapter 5 can easily be transformed to branching unification grammars, as they make no use of -rules, and when they include unit rules, these are only intended to dominate lexical entries (in other words, they do not admit cyclically unifiable applications). Consider Gsubcat (Example 5.9, Page 176). It has three unit rules (Rules 2, 5, and 6), but the single daughter in all these rules is only unifiable with elements of the lexicon, never with the mother of any other rule. Consequently, when
Cambridge Books Online © Cambridge University Press, 2012
6.7 Parsing with unification grammars
253
these rules are used in a derivation, they immediately dominate preterminals and can never occur on a nonbranching chain longer than 1. With these specific grammars, efficient recognition is guaranteed. Exercise 6.22 (*). Consider G3 of Example 5.16 (Page 183). Is it nonreentrant? One-reentrant? Branching? OLP? Exercise 6.23. Consider G4 of Example 5.29 (Page 198). Is it nonreentrant? One-reentrant? Branching? OLP? Ideally, one would seek a characterization of constrained unification grammars that admits efficient processing, on one hand, but is useful for “natural” grammar development, on the other. More research is necessary in order to characterize the correct constraint. Clearly, such a constrained formalism must be able to generate at least the context-free languages; preferably, the mildly context-sensitive languages; and perhaps a superset thereof. Additionally, it should allow the “natural” expression of the kinds of constraints that hold in natural languages, such as agreement, control, and so on. This is of course not a mathematical characterization, it is more a road map for the appropriate solution. Finally, such a constrained formalism must allow efficient processing, and in particular, must guarantee that the recognition problem can be solved in polynomial time. Even in the absence of such a characterization, unification grammars should not be deemed less than useful for natural languages. Consider as an analogy the case of programming languages. With very few exceptions, all programming languages that are in common industrial use are Turing-equivalent: They facilitate the expression of programs that are not guaranteed to terminate. That does not mean, of course, that such programming languages are useless; what it means is that programmers are expected to be careful in they way they specify programs. The responsibility for termination (and for efficiency) is in the hands of the programmer, rather than the language. Analogously, unification grammars should be thought of as a very powerful formalism for specifying the structure of natural languages; such a specification lends itself to implementations that can be used to solve the recognition problem of these grammars. The responsibility for designing the grammar in a way that ensures efficient processing is, similarly, delegated to the grammar developer, and is not inherent to the formalism.
6.7 Parsing with unification grammars Since unification grammars are, in the general case, Turing-equivalent, a general parsing algorithm for a unification grammar does not exist. However,
Cambridge Books Online © Cambridge University Press, 2012
254
6 Computational aspects of unification grammars
one can devise algorithms that work for some grammars, which satisfy certain conditions, along the lines we discussed in the previous sections. In this section we discuss parsing with unification grammars. We begin with a parsing algorithm for context-free grammars, which we then extend to the case of unification grammars. The context-free parsing algorithm is efficient; the unification parsing algorithm is only guaranteed to terminate for a subset of the grammars. 6.7.1 Basic notions We start with a recognition algorithm for context-free grammars; we later show how the algorithm can be extended to parsing. For the following discussion, we fix a context-free grammar G = Σ, V, S, P that is assumed to be in phrasal/terminal normal form (see Definition 1.8, page 21) and an input word w ∈ Σ∗ . The normal form means that grammar rules can have either sequences of nonterminal symbols or a single terminal in their bodies (but not a combination of both). We assume that G contains no -rules (an extension of the algorithm to handle such rules is simple, but we prefer to suppress them for the sake of simplicity; see Section 6.7.5 below). We also assume that the grammar does not contain useless rules, and in particular does not have cycles of unit rules of the form A1 → A2 , . . ., An−1 → An , An → A1 . As we did in Chapter 1.5, we use the meta-variables X, Y , Z to range over nonterminals (which we sometimes called “categories”), σ – over terminal symbols, u, v, w – over sequences of terminals and α, β, γ – over sequences of terminal and nonterminal symbols. Consider first an exhaustive search algorithm for recognition (Example 6.23). Observe that the normal form assumed above guarantees that the lengths of sentential forms monotonically increase with each derivation; the lack of rules guarantees that no sentential form can be shorter than its predecessor, and the lack of cycles of unit rules guarantees that a sequence of derivations that does not increase the length of the sentential form must be limited by some constant that can be determined from the size of the grammar. Given a contextfree grammar G and a string w, then, it is possible to generate all the possible derivation sequences induced by G in which the yield is of length |w|, and to test whether any of them yields w. In other words, it is possible to enumerate the grammar rules (we refer to the i-th rule in P as Pi ) and then execute the procedure listed in Example 6.23. The algorithm proceeds in a top-down fashion. Given a (sentential) form, the algorithm selects each of its elements in turn, and for each selection, it selects each of the grammar rules which are applicable to the selected element
Cambridge Books Online © Cambridge University Press, 2012
6.7 Parsing with unification grammars
255
Example 6.23 Recognition algorithm for context-free grammars.

Input: a CFG G = ⟨Σ, V, S, P⟩ and an input string w = σ1 · · · σn
Output: True if w ∈ L(G), False otherwise

 1  k ← |{R | R is a unit rule in P}|;
 2  form ← S;
 3  return expand(form, 0);

Function expand(form, depth) ::=
 4  if (depth > k × n) then return False;
 5  for i ← 1 to |form| do
 6    let e be the i-th element of form;
 7    if e ∈ V then
 8      for r ← 1 to |P| do
 9        if the head of Pr is e then
10          apply Pr to e, yielding form′;
11          if form′ = w then return True;
12          if expand(form′, depth+1) then return True
13  return False
and recursively applies the same procedure. The search starts with a sentential form consisting of the start symbol, of course, and the only question is when to terminate it. Observe that since the grammar includes no -rules and no cycles of unit rules, the depth of any derivation tree for w is bounded by a linear function of the length of w. This is because every derivation step either increases the length of the form or, in the case of unit rules, does not decrease it. But each unit rule can only be applied once to a given form (since no cycles are allowed). Consequently, the search can be terminated whenever the number of derivation steps exceeds k × w, where k is the number of unit rules, or when w is produced. Exercise 6.24. Execute the exhaustive search algorithm on the grammar of Exercise 1.7 (Page 17) and the strings aabbaa and aab. While such an exhaustive search is always possible, its time complexity is unacceptable: it is easy to show that in the general case, the number of derivation trees induced by a grammar on a string of length n can be exponential in n (see Example 6.29 on Page 266). Of course, the algorithm of Example 6.23 is a recognition algorithm: It only needs to report whether a derivation tree for w exists. Instead, it practically enumerates all derivation trees. This is the main reason for its inherent inefficiency.
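For readers who prefer running code, the following Python fragment is our own rendering of the exhaustive top-down search, not a transcription of Example 6.23: grammars are dictionaries mapping nonterminals to rule bodies (lists of nonterminals, or a single terminal, as in the normal form assumed above), and the depth bound 2n(u + 1) is a safe linear bound in the spirit of the discussion rather than the exact bound used in the example. The sample grammar for {aⁿbⁿ | n ≥ 1} is hypothetical.

    def recognize(grammar, start, w):
        # Number of unit rules (bodies consisting of a single nonterminal).
        unit = sum(1 for bodies in grammar.values()
                     for body in bodies
                     if isinstance(body, list) and len(body) == 1)
        n = len(w)
        bound = 2 * n * (unit + 1)            # safe bound on derivation length

        def expand(form, depth):
            if depth > bound:
                return False
            if form == list(w):                # w has been produced
                return True
            # No epsilon-rules: terminals can never be removed, so prune.
            if len([e for e in form if e not in grammar]) > n:
                return False
            for i, e in enumerate(form):
                if e in grammar:               # e is a nonterminal
                    for body in grammar[e]:
                        new = body if isinstance(body, list) else [body]
                        if expand(form[:i] + new + form[i+1:], depth + 1):
                            return True
            return False

        return expand([start], 0)

    # A small hypothetical grammar for {a^n b^n | n >= 1}:
    G = {"S": [["A", "B"], ["A", "S", "B"]], "A": ["a"], "B": ["b"]}
    print(recognize(G, "S", "aabb"))   # True
    print(recognize(G, "S", "aab"))    # False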
Cambridge Books Online © Cambridge University Press, 2012
256
6 Computational aspects of unification grammars
In light of the unacceptable time complexity of the exhaustive search algorithm, we present an alternative, efficient algorithm based on a chart. Chartbased parsing algorithms are an example of dynamic programming techniques: Intermediate results are stored in a data structure called chart, so that they do not have to be computed more than once. And if a substring of the input has several different syntactic structures, all rooted by the same category, this information is stored in a compact way so that full expansion of all the (possibly exponential number of) trees is never required. The algorithm we present is bottom-up: This means that processing starts with the words in the input string and attempts to build more structure upwards from the leaves of the presumed tree toward its root. The goal of the algorithm, then, is to determine whether w ∈ L(G) or in other ∗ words, whether S ⇒ w where S is the start symbol of G. But the algorithm does more than that. It determines, for every substring u of w, which categories (if any) can derive u. When the computation terminates, all that is left is to check whether or not the entire string w (which is a substring of itself) is derivable from S. The basic idea behind chart-based algorithms is to store all the categories that derive each substring in a table. The chart is a bidimensional table maintaining entries that correspond to all the possible substrings of the input string w. Thus, if w = σ1 · · · σn is of length n ≥ 1, the chart has rows indexed by 0 . . . n − 1 and columns indexed by 0 . . . n, and for 0 ≤ i ≤ j ≤ n, the [i, j] entry of the chart corresponds to the substring σi+1 · · · σj of w. It should be clear that only half of the table is needed: It is an upper triangular table. Also, we will explain the reason for a column 0 presently. By the end of the process, the [i, j] entry will contain a (possibly empty) list of all the categories that derive the substring σi+1 · · · σj . Note that the convention here is that chart indices (e.g., the ‘i’ and ‘j’ in [i, j]) refer to positions between the words in the input and not to the words themselves. Thus, the chart entry that corresponds to the entire input string σ1 · · · σn is [0, n]. We recall here Exercise 1.8, repeated as Lemma 6.32: ∗
Lemma 6.32 In a CFG, if Xi ⇒* αi for 1 ≤ i ≤ n and X → X1 … Xn is a rule, then X ⇒* α1 · · · αn.

The above description is, as a matter of fact, an oversimplification. The actual entries of the chart are not lists of categories but rather lists of dotted rules (also known as edges). A dotted rule is a grammar rule with some additional information, traditionally indicated by a dot ('•') in the body of the rule.
Definition 6.33 (Dotted rules) If X → α is a grammar rule, then X → β • γ is a dotted rule (or edge), where α = β · γ. Both β and γ can be empty. If γ is empty, the dotted rule is said to be complete; otherwise it is active. If X → ε is a rule, then X → ε• = X → •ε is considered complete. Metavariables R range over dotted rules.

Note that the set of all dotted rules induced by a given grammar is finite (its size is bounded by the number of rules times the length of the longest rule). See Examples 6.24 and 6.25.

Example 6.24 Dotted rules. The rule S → NP VP induces the following three dotted rules: S → • NP VP, S → NP • VP, and S → NP VP •. The rule V → loves induces the dotted rules V → •loves and V → loves•.
Example 6.25 Dotted rules. Consider the grammar Ge of Example 1.1 (Page 15). The dotted rules induced by Ge are:

    S → •Va S Vb        Va → •a
    S → Va • S Vb       Va → a•
    S → Va S • Vb       Vb → •b
    S → Va S Vb •       Vb → b•
    S → ε•
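As an aside, the definition translates directly into a small data structure; the following Python sketch (an illustration, not the book's notation) represents a dotted rule as a rule plus a dot position and enumerates all dotted rules induced by a grammar:

```python
from typing import NamedTuple, Tuple

class DottedRule(NamedTuple):
    head: str
    body: Tuple[str, ...]
    dot: int                              # 0 <= dot <= len(body)

    @property
    def complete(self) -> bool:           # the dot is at the end of the body
        return self.dot == len(self.body)

    def __str__(self) -> str:
        before, after = self.body[:self.dot], self.body[self.dot:]
        return f"{self.head} -> {' '.join(before)} . {' '.join(after)}"

def all_dotted_rules(rules):
    """All dotted rules induced by a list of (head, body) grammar rules."""
    return [DottedRule(head, tuple(body), d)
            for head, body in rules
            for d in range(len(body) + 1)]

# The rule S -> NP VP of Example 6.24 induces three dotted rules:
for r in all_dotted_rules([("S", ("NP", "VP"))]):
    print(r)
```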
Exercise 6.25. Show all the dotted rules induced by the grammar G1 of Example 5.5 (page 171).

The part of the dotted rule preceding the dot (usually called the history of the dotted rule) reflects a substring of the input that has already been scanned and assigned the categories preceding the dot (see below). A given string w can be associated with a sequence of categories X1, . . . , Xk if there exist ui ∈ Σ*, 1 ≤ i ≤ k, such that w = u1 · · · uk and for every i, 1 ≤ i ≤ k, Xi ⇒* ui. This is a natural generalization of associating a string with a single category. Dotted rules stored in the chart are naturally associated with substrings of the input: If a dotted rule is stored in the [i, j] entry of the chart, it is associated with the substring σi+1 · · · σj of w. The triple ⟨i, R, j⟩, where [i, j] is an entry of the chart and R is a dotted rule stored in this entry, is called an item. Throughout the computation the following invariant is maintained (recall the convention that all rules are in phrasal/lexical normal form).
Invariant 6.34 (Invariant of the chart) Let X → α • β be a dotted rule associated with a string u. Let α = Y1, . . . , Yk. Then, there exist u1, . . . , uk such that u = u1 · · · uk, and for every i, 1 ≤ i ≤ k, ui ≠ ε, and Yi ⇒* ui. If α is empty (k = 0), the dotted rule is associated with ε.

We refer to this invariant as Ic. Intuitively, when a dotted rule is associated with a string, the history of the dotted rule derives the string (this is the extended notion of derivation, applying to sequences of categories, which we introduced in Section 1.5, Page 17). In the special case of α = ε, u must be the empty string; the special case of β = ε is discussed below. Note that the decomposition of u into u1, . . . , uk need not be unique. See Example 6.26. Lemma 6.36, which follows, proves that this invariant is indeed maintained during the execution of the recognition algorithm.

Example 6.26 Invariant of the chart. Assume some grammar G that includes the rule S → NP VP, which induces the following three dotted rules: S → • NP VP, S → NP • VP and S → NP VP •. By the invariant, the first (S → • NP VP) can only be associated with the empty string. Suppose that S → NP • VP is a dotted rule associated with the string Rachel; assume further that VP → V • is a dotted rule associated with smiled. The invariant implies that NP ⇒* Rachel and V ⇒* smiled. This latter fact also implies that VP ⇒* smiled. In anticipation of the forthcoming discussion (and in light of Lemma 6.32), note also that the combination of NP ⇒* Rachel and VP ⇒* smiled, along with the rule S → NP VP, implies that S ⇒* Rachel smiled.

The parsing algorithm fills the chart by assigning to every substring u = σ1 · · · σk of the input w the set of dotted rules associated with it. This set includes all the dotted rules X → α • β such that α = X1 · · · Xj and u can be split into u = u1 · · · uj, and for each i, 1 ≤ i ≤ j, Xi ⇒* ui. Note that, in particular, this set includes all the (complete) edges of the form X → α• (with an empty β) where α derives (in the extended sense of the word) the string u; by the invariant of the chart and Exercise 1.8, such complete edges indicate that X ⇒* u, i.e., that a possible category of the string u is X. In this way dotted rules indeed represent information that associates categories with the substrings they derive.

The main operation of the algorithm, usually known as the fundamental rule of parsing or, as here, dot movement, combines two dotted rules together to create a new dotted rule. Intuitively, this operation corresponds to a single step of derivation, but in the reverse direction. Whereas in derivation, an element of
a sentential form, X, is selected, matched against the heads of the rules in the grammar, and then replaced by the body of the selected rule, here a complete edge represents the fact that the body of some rule R2 had been completely "consumed," and its head Y can therefore be "skipped over" on the way to scanning the body of some other rule, R1, with Y in its body.

Definition 6.35 (Dot movement) Let R1 = X → α • Yβ be an active dotted rule and R2 = Y → γ• be a complete dotted rule. Then R1 ⊗ R2 = X → αY • β.

Example 6.27 Dot movement. Let R1 = S → • NP VP be an active dotted rule, and let R2 = NP → D N• be a complete dotted rule. Then R1 ⊗ R2 = R3 = S → NP • VP. Let R4 = VP → V NP• be a complete dotted rule. Then R3 ⊗ R4 = R5 = S → NP VP•. Note that R5 is a complete dotted rule.

Note that the operation is well defined: it is defined over two dotted rules, an active one (R1) in which the dot precedes a nonterminal symbol (Y) and a complete one (R2), whose head is the same nonterminal. It results in a dotted rule very similar to R1, the only difference being the location of the dot, which is "shifted" over the nonterminal symbol Y, one position to the right. Clearly, the result is a dotted rule (based on the same grammar rule as R1). Note that the operation is independent of γ, because the only relevant information it takes from R2 is the identity of Y.

Lemma 6.36 Assume that Ic holds for some chart. If R1 = X → α • Yβ is a dotted rule associated with u and R2 = Y → γ• is a dotted rule associated with v, then associating R1 ⊗ R2 with the string u · v maintains Ic.

Proof If R1 is associated with u, then by Ic, α = X1, . . . , Xk and u = u1 · · · uk and for every i, 1 ≤ i ≤ k, Xi ⇒* ui. If R2 is associated with v, then by Ic (combined with Exercise 1.8), Y ⇒* v. R1 ⊗ R2 = X → αY • β, and the invariant is maintained by observing that αY derives (in the extended sense) the substring u · v.

The recognition algorithm must make sure that the invariant of the chart is maintained during the process, but as the only operation it performs (after initialization) is dot movement, which operates on adjacent strings, this is guaranteed by the above lemma. The only remaining question is: How are chart entries built? In particular, how is the chart initialized, and in what order are its entries constructed? We answer both these questions next.
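In the illustrative representation sketched above, dot movement amounts to shifting the dot one position to the right after checking that the symbol following the dot matches the head of the complete edge; a minimal sketch:

```python
def dot_move(active: DottedRule, complete: DottedRule) -> DottedRule:
    """The fundamental rule of parsing, mirroring Definition 6.35."""
    assert not active.complete and complete.complete
    assert active.body[active.dot] == complete.head
    return active._replace(dot=active.dot + 1)

# Example 6.27: S -> . NP VP combined with NP -> D N . yields S -> NP . VP
r1 = DottedRule("S", ("NP", "VP"), 0)
r2 = DottedRule("NP", ("D", "N"), 2)
print(dot_move(r1, r2))
```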
6.7.2 The recognition algorithm
Initialization of the chart consists of two parts. First, complete edges are added for each word of the input. Thus, if the input is w = σ1 · · · σn, then for each word σi, 1 ≤ i ≤ n, and for each rule X → σi (recall again the normal form convention), the complete edge X → σi• is added to the [i − 1, i] entry of the chart. The intuitive meaning of the edge X → σi• in the [i − 1, i] entry of the chart is that the terminal symbol σi was seen in the i-th position of the input string and was assigned the category X. Note that the invariant is established by these dotted rules.

The second part of the initialization must guarantee that there exist active dotted rules in the chart as well; otherwise, dot movement can never be applied. Thus, the main diagonal of the chart is initialized with active dotted rules corresponding to all the rules in the grammar, with the dot in the initial position. In other words, each [i, i] entry (for every 0 ≤ i ≤ n) is initialized to contain the dotted rules X → •α for every grammar rule X → α, where α is a sequence of nonterminal symbols. These active rules will drive the process, as we shall presently show. Their intuitive meaning is that it is expected that the rule X → α will be applied to the string which starts in position i. Again, note that the invariant is (vacuously) established by these rules.

Clearly, chart entries depend on each other in a certain way: One way to determine all the categories that derive a certain string is to determine all the categories that derive each of its substrings, and then combine them together using the grammar rules. This is the idea behind the algorithm we present here. Chart entries are built for prefixes of increasing length of the input: first, for the prefix of length one; and then, length two, and so on. In other words, all the entries in the i-th column of the chart are built before any entry in the i + 1 column is constructed. Then, for a prefix of length i (i.e., the i-th column of the chart), shorter substrings are processed before longer ones. In other words, the i-th column is constructed from the bottom up: first for the substring in the [i − 1, i] entry, and then [i − 2, i] and so on. Finally, when the [i, j] entry is constructed, the substring σi+1 · · · σj is split in the k position for k from i to j − 1. This guarantees that no chart entry is needed before it is available. The full algorithm is listed in Example 6.28. Since all iterations are bounded, the algorithm clearly terminates for every input.
Example 6.28 Recognition algorithm for context-free grammars.

Input: a context-free grammar G and an input string w = σ1 · · · σn
Output: True if w ∈ L(G), False otherwise

    for i ← 0 to n−1 do                    /* empty the chart */
        for j ← 0 to n do
            chart[i,j] ← ∅
    for i ← 1 to n do                      /* initialization, phase I */
        foreach grammar rule X → σi do
            chart[i−1,i] ← chart[i−1,i] ∪ {X → σi •}
    for i ← 0 to n−1 do                    /* initialization, phase II */
        foreach grammar rule X → α do
            chart[i,i] ← chart[i,i] ∪ {X → •α}
    for j ← 1 to n do
        for i ← j−1 downto 0 do
            for k ← j−1 downto i do
                foreach active dotted rule R1 ∈ chart[i,k] do
                    foreach complete dotted rule R2 ∈ chart[k,j] do
                        R ← R1 ⊗ R2        /* dot movement */
                        chart[i,j] ← chart[i,j] ∪ {R}
    if S in chart[0,n] then return True else return False
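For readers who prefer runnable code, the listing above can be transcribed almost line by line; the following Python sketch is our own rendering, with edges kept as plain (head, body, dot) triples to stay self-contained. Phrasal rules are taken to be those whose body symbols all occur as rule heads, and the inner fixed-point iteration plays the role of the unit-rule ordering discussed in Section 6.7.4.

```python
def recognize(rules, start, words):
    """Bottom-up chart recognition for a CFG in phrasal/lexical normal form.

    `rules` is a list of (head, body) pairs and `words` a tuple of terminals.
    An edge (head, body, dot) is complete when dot == len(body).  Assumes no
    epsilon-rules and no cycles of unit rules."""
    n = len(words)
    heads = {head for head, _ in rules}
    chart = {(i, j): set() for i in range(n + 1) for j in range(n + 1)}

    # Initialization, phase I: a complete lexical edge for every input word.
    for i, w in enumerate(words, start=1):
        for head, body in rules:
            if body == (w,):
                chart[i - 1, i].add((head, body, 1))

    # Initialization, phase II: active edges with the dot in the initial
    # position, for every phrasal rule.
    for i in range(n):
        for head, body in rules:
            if body and all(sym in heads for sym in body):
                chart[i, i].add((head, body, 0))

    # Main loop: combine active edges in [i,k] with complete edges in [k,j].
    for j in range(1, n + 1):
        for i in range(j - 1, -1, -1):
            for k in range(j - 1, i - 1, -1):
                changed = True
                while changed:               # iterate to a fixed point
                    changed = False
                    for (ah, ab, ad) in list(chart[i, k]):
                        if ad == len(ab):    # skip complete edges
                            continue
                        for (ch, cb, cd) in list(chart[k, j]):
                            if cd == len(cb) and ch == ab[ad]:
                                new = (ah, ab, ad + 1)   # dot movement
                                if new not in chart[i, j]:
                                    chart[i, j].add(new)
                                    changed = True

    return any(h == start and d == len(b) for (h, b, d) in chart[0, n])

# The grammar of Section 6.7.3:
g = [("S", ("A", "S", "B")), ("S", ("B",)), ("A", ("a",)), ("B", ("b",))]
print(recognize(g, "S", ("a", "b", "b")))   # True
print(recognize(g, "S", ("a", "a", "b")))   # False (cf. Exercise 6.26)
```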
6.7.3 An example

To demonstrate the operation of the algorithm, consider the following context-free grammar G = ⟨Σ, V, S, P⟩ where Σ = {a, b}, V = {S, A, B} and P is given by S → A S B, S → B, A → a and B → b. The input string a b b is in L(G). As the input is of length 3, the chart will have 3 × 4 entries. It is initialized first with the terminals as follows:

              1            2            3
    0         A → a•
    1                      B → b•
    2                                   B → b•
Then, the main diagonal of the chart is initialized with dotted rules whose dot is in the initial position. There are only two such dotted rules, namely, S → •A S B
and S → •B:

              0              1              2              3
    0         S → •A S B     A → a•
              S → •B
    1                        S → •A S B     B → b•
                             S → •B
    2                                       S → •A S B     B → b•
                                            S → •B
Now the main part of the algorithm is executed. It has three embedded loops, in which j increases from 1 to 3, i decreases from j − 1 to 0, and k decreases from j − 1 to i. In each step, all the active dotted rules in [i, k] are combined with all the complete ones in [k, j]. In this example, for j = 1, i = 0 and k = 0, the active dotted rule S → •A S B in [0, 0] can combine with the complete rule A → a• in [0, 1]; this leads to the addition of the rule S → A • S B to [0, 1]. Further steps are summarized in the following table:
    j   i   k   active in [i, k]    complete in [k, j]    add to [i, j]
    1   0   0   S → •A S B          A → a•                S → A • S B
    2   1   1   S → •B              B → b•                S → B•
    2   0   1   S → A • S B         S → B•                S → A S • B
    2   0   0                       ∅
Note that the last step adds nothing to the chart, since the [0, 2] entry contains no complete edges. After this stage, the chart is:

              0              1              2              3
    0         S → •A S B     A → a•         S → A S • B
              S → •B         S → A • S B
    1                        S → •A S B     B → b•
                             S → •B         S → B•
    2                                       S → •A S B     B → b•
                                            S → •B
The following steps are:
    j   i   k   active in [i, k]    complete in [k, j]    add to [i, j]
    3   2   2   S → •B              B → b•                S → B•
    3   1   2   ∅
    3   1   1                       ∅
    3   0   2   S → A S • B         B → b•                S → A S B •
    3   0   1
    3   0   0
By the end of this process, the complete chart is:
              0              1              2              3
    0         S → •A S B     A → a•         S → A S • B    S → A S B •
              S → •B         S → A • S B
    1                        S → •A S B     B → b•
                             S → •B         S → B•
    2                                       S → •A S B     B → b•
                                            S → •B         S → B•
In particular, note how the edge S → A S • B in [0, 2] combines with the complete edge B → b• in [2, 3], yielding S → A S B • in [0, 3]. This is a complete edge, spanning the entire input; hence, the input is in L(G), as expected. Exercise 6.26. Simulate the algorithm on the input a a b and show that this string is not in L(G).
6.7.4 Complexity

What is the complexity of this algorithm? It is easy to see that the three embedded loops in the main part of the algorithm (after initialization) are bounded in the length of the string, and hence the body of the innermost loop can be executed at most n³ times. This body considers all the active dotted rules in one chart entry against all the complete ones in another entry; both lists are bounded by the number of rules in the grammar (which, of course, is independent of the length of the input). If the size of the grammar is |G|, the algorithm takes time O(|G|² × n³). Parsing algorithms are usually evaluated in terms of the length of the input string only; in that case, we say that the complexity of the algorithm presented above is cubic.
One special case of the algorithm requires some precaution: application of dot movement to an active edge in [i, k] and a complete one in [k, j] results in a new edge in [i, j]. This edge might be complete; in the special case i = k, the complete edge thus created is added to [i, j] = [k, j]. Such edges can now be combined with active edges in [i, k] again. Notice that the situation occurs only when i = k. The only way an active edge in the [i, i] entry of the chart (which is an edge with the dot in the initial position) can become complete is if the rule on which the edge is based is of length 2 (a unit rule). Therefore, unit rules require special treatment: Complete edges that result from application of dot movement on (an active edge that stems from) a unit rule are constructed before any other edges and, hence, before they are needed. The only problem left is the possibility of more than one unit edge in the same chart entry. Such rules can be ordered such that if R1 can "feed" R2, R1 is processed before R2; such an ordering must always exist since we assume no cycles of unit rules.

Exercise 6.27. Show an example grammar and an input string for which more than one unit rule ends up in the same chart entry.

Exercise 6.28. Consider the (ambiguous!) grammar Garith of Example 1.7 (Page 19). Simulate the recognition algorithm with respect to this grammar on the input a + b ∗ c. How is ambiguity expressed in the algorithm?
6.7.5 Extension to parsing

Finally, we need to show how this recognition algorithm can be extended to a parsing algorithm; in other words, once the algorithm determines that w ∈ L(G), how can the structures induced by G on w be extracted from the chart? A very simple extension of the algorithm can add to each dotted rule R stored in the chart pointers to the dotted rules from which R was created, if it was indeed created by the fundamental rule of parsing. Then, when recognition terminates, the parser can move from the complete S edges in the [0, n] entry of the chart down the tree following these pointers (the recursion stops when the procedure handles an edge that was entered into the chart during initialization, and is not a combination of two other edges).

As an example, consider the final chart of Section 6.7.3. In the following table, we depict the same chart, with additional dashed arrows. These arrows link each edge in the chart to the edges based on which it was created; in other words, each edge R is linked to the active edge R1 and to the complete edge R2 such that R = R1 ⊗ R2. Edges that are not the result of dot movement are not linked.
              0              1              2              3
    0         S → •A S B     A → a•         S → A S • B    S → A S B •
              S → •B         S → A • S B
    1                        S → •A S B     B → b•
                             S → •B         S → B•
    2                                       S → •A S B     B → b•
                                            S → •B         S → B•
The links indicated by dashed arrows can be alternatively represented as a tree, whose root is the complete edge in the [0, n] entry of the chart. The tree is binary, since every complete edge is formed by combining exactly two edges (an active one and a complete one). For the running example, this tree is:

    S → A S B •
    ├── S → A S • B
    │   ├── S → A • S B
    │   │   ├── S → •A S B
    │   │   └── A → a•
    │   └── S → B•
    │       ├── S → •B
    │       └── B → b•
    └── B → b•
The above tree is not a derivation tree, of course. In order to extract a derivation tree from it, first remove all the nodes with active edges in them; replace every path in the tree that goes through such nodes by a single arc, if such a path terminates in a complete edge. The obtained tree for the running example is:

    S → A S B •
    ├── A → a•
    ├── S → B•
    │   └── B → b•
    └── B → b•
The node labels in the above tree are all complete edges. Replace each label by the category symbol that heads the dotted rule, and add arcs for the terminal symbols to obtain:

    S
    ├── A
    │   └── a
    ├── S
    │   └── B
    │       └── b
    └── B
        └── b
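Under the same illustrative edge representation used earlier, the pointer-following procedure can be sketched as follows; it assumes (our assumption, not spelled out in the text) that the recognizer was modified so that, whenever dot movement adds a new edge, it records the active/complete pair the edge was built from in a `backpointers` map:

```python
def extract_tree(edge, backpointers):
    """Read a derivation tree off a complete edge and its backpointers.

    `backpointers` maps a derived edge (head, body, dot) to the
    (active, complete) pair it was built from; edges entered during
    initialization are not in the map.  For an ambiguous string the map
    would have to hold a list of such pairs for each edge."""
    head, body, dot = edge
    if edge not in backpointers:
        # A lexical edge yields a leaf; an initial active edge has no children yet.
        return (head, body[0]) if dot == len(body) else (head, [])
    active, complete = backpointers[edge]
    cat, children = extract_tree(active, backpointers)
    return (cat, children + [extract_tree(complete, backpointers)])

# For the running example the complete S edge in [0, 3] yields
#   ('S', [('A', 'a'), ('S', [('B', 'b')]), ('B', 'b')]),
# which is exactly the derivation tree drawn above.
```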
It is important to realize that although the running example had exactly one tree encoded in the chart, in the general case, there may be more than one tree for a particular (structurally ambiguous) string. Therefore, the algorithm for recovering the trees from the chart must maintain a list of all the possible ways in which a particular complete edge was formed, linking each such edge to all the pairs of complete and active edges from which it may have been constructed. Note that while recognition takes time in the order of n³, generating all the possible trees induced by G on w might take exponential time, simply because there may be exponentially many such trees, as Example 6.29 demonstrates.

Example 6.29 Exponentially many trees. Consider the grammar Gexp defined by the rules S → a and S → S S. Obviously, L(Gexp) = a⁺. Consider now a particular string aⁿ in this language. The trees induced by the grammar on this string correspond to all the possible binary bracketings of the string. The number of these trees is known as Cn, the n-th Catalan number, and it can be shown that Cn = (2n)! / ((n+1)! × n!) for n > 0 or, asymptotically, that Cn ≈ 4ⁿ / (n^{3/2} × √π).
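As a quick numeric illustration of this growth (not part of the book's exposition), the first few values of the sequence can be computed directly from the closed form:

```python
from math import factorial

def catalan(n: int) -> int:
    """The n-th Catalan number, (2n)! / ((n+1)! * n!)."""
    return factorial(2 * n) // (factorial(n + 1) * factorial(n))

print([catalan(n) for n in range(1, 9)])
# [1, 2, 5, 14, 42, 132, 429, 1430]; already past a thousand trees at n = 8
```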
Exercise 6.29. Show all the trees induced by Gexp on the string aaa.

Another issue that we do not address here is an extension of the algorithm such that it can handle ε-rules. Such rules can be viewed as the lexical entries of the empty substring; for example, if X → ε is a grammar rule, it can be interpreted as indicating that the symbol X can derive any empty substring of the input. In our terminology, this is equivalent to storing the complete edge X → ε• in the [i, i] entry of the chart for every i, 0 ≤ i ≤ n. This is the standard way to handle ε-rules in chart-based parsing algorithms. The algorithm we listed above, however, will require a slight modification due to the order in which chart entries are constructed. A special provision has to be added to the algorithm for this purpose, which we suppress here.
6.7.6 Extension to unification grammars

We now extend the context-free recognition algorithm to unification grammars. Of course, since unification grammars are Turing-equivalent (see Section 6.2), it is clear that no recognition algorithm exists for all grammars. In this section we propose an extension of the context-free algorithm that works for some unification grammars but may not terminate for others. In previous sections we postulated several conditions on grammars that guarantee the decidability (and, in some cases, also the efficiency) of the recognition problem. The algorithm we present below is guaranteed to terminate, for example, for all the branching unification grammars (Section 6.4).

Two factors contribute to the dramatic difference between context-free grammars and unification grammars in terms of expressivity, and both are easily manifested when the recognition algorithm is concerned. Recall that for the algorithm to work, we assumed in Section 6.7.2 that the grammar includes no ε-rules and no cycles of unit rules. We could make this assumption with context-free grammars because there are known effective procedures for simplifying such grammars and, in particular, for removing ε-rules (when the empty word is not in the language of the grammar) and cycles of unit rules (refer back to Exercises 1.13 and 1.14, on Page 21). With unification grammars things are different. First, while ε-rules can be removed in the same way, their removal may change the grammar in a way that is unacceptable to the linguist. This is not a difference in the weak generative capacity of the two formalisms, but clearly, if one is interested in the strong generative capacity of the formalism, then one cannot arbitrarily modify a grammar. This issue is discussed in the context of our account of relative clauses in Section 5.7, in particular, in Example 5.28 (Page 196).

The second difference, however, implies a change in the weak generative capacity of the formalism. Recall from the discussion in Section 6.7.4 that the parsing algorithm needs a special treatment of unit rules (rules with a single element in their bodies). In particular, we assumed throughout the discussion of parsing with CFGs that grammars do not include "cycles" of unit rules: sequences of rules of the form A0 → A1, A1 → A2, …, An → A0. When grammars are no longer context-free, however, such cycles become more difficult to define. In fact, unlike the context-free case, it is no longer correct to assume that such cycles can always be removed without affecting the language generated by the grammar. Referring back to the construction of the grammar GM of Section 6.2, simulating the operation of a Turing machine M, observe that the grammar includes mainly unit rules. Indeed, many of them can be used to feed other rules, and possible cycles of unit rules abound.
Attempting to remove any of the unit rules in GM will necessarily change the language generated by the grammar. The assumption we made in Section 6.7.4 regarding an ordering of unit rules can no longer be made for unification grammars. This is the motivation for our definition of off-line parsability in Section 6.3.

In this section, however, we make a stronger assumption. We are only concerned with branching unification grammars (Section 6.4), that is, grammars that have no ε-rules and no unit rules whatsoever. This obviously constrains the expressiveness of the formalism, but it is still possible to define such constrained grammars for all the context-free languages, and for many languages that are trans-context-free, such as the languages {aⁿbⁿcⁿ | n > 0} or {ww | w ∈ Σ*}, as the grammars in Section 6.1 demonstrate. Such a constraint is also reasonable when grammars for natural languages are concerned. Indeed, in none of the grammars presented in Chapter 5 are ε-rules used, and unit rules are few and far between. In fact, all those unit rules can easily be eliminated with only minor implications for the linguistic analysis.

As it turns out, very little is required in order to extend the algorithm described above from context-free grammars to branching unification grammars. The control structures of the algorithm remain intact; all that differs is the concept of dot movement, or the implementation of the fundamental rule of parsing. Of course, because grammar rules are multirooted structures, dotted rules can no longer be context-free rules with an indication of the dot in them; rather, they are multirooted structures with an indication of a dot. Since the dot is merely an indication of a specific position in the rule, and since, like CFG rules, multirooted structures consist of sequences of elements (albeit with possible reentrancies among them), the extension of dotted rules to unification rules is immediate. See Example 6.30.

Definition 6.37 (Dotted rules) Given a unification grammar G = ⟨L, R, As⟩, a dotted rule (or edge) is a pair ⟨ρ, d⟩, where ρ is such that ρ′ ⊑ ρ for some rule ρ′ ∈ R of length n ≥ 1, and 0 < d ≤ n, indicating the location of the dot. An edge is active if d < n, complete otherwise.

Exercise 6.30. Why is d = 0 excluded in Definition 6.37?

The chart is initialized with complete dotted rules corresponding to the words in the input; these are simply the lexical entries of the input words, which are feature structures viewed as MRSs of length 1, with the dot in the final position. Then, the main diagonal of the chart is loaded with dotted rules corresponding to all grammar rules, with the dot in the first position (i.e., immediately following the head of the rule).
Example 6.30 Dotted rules. Let ρ be the following unification rule:

    [CAT: s, NUM: [4]]  →  [CAT: np, NUM: [4], CASE: nom]  [CAT: v, NUM: [4], SUBCAT: elist]

Then ρ induces the following three dotted rules:

    ⟨ρ, 1⟩:  [CAT: s, NUM: [4]]  →  •  [CAT: np, NUM: [4], CASE: nom]  [CAT: v, NUM: [4], SUBCAT: elist]

    ⟨ρ, 2⟩:  [CAT: s, NUM: [4]]  →  [CAT: np, NUM: [4], CASE: nom]  •  [CAT: v, NUM: [4], SUBCAT: elist]

    ⟨ρ, 3⟩:  [CAT: s, NUM: [4]]  →  [CAT: np, NUM: [4], CASE: nom]  [CAT: v, NUM: [4], SUBCAT: elist]  •
We can now extend the definition of dot movement to unification grammars. The idea remains the same, the only difference being that “matching” the head of a complete edge against the category of an active edge following the dot cannot be done by testing identity; rather, it is done by unification. The unification operation is introduced exactly because dot movement is, in a sense, the inverse of a single derivation step, and derivation with unification grammars requires unification to match the selected element against the heads of all rules and to propagate information from the head to the body. The active edge is unified with the complete one in the context of both MRSs. More precisely, the element of the active edge following the dot is unified with the head of the complete edge in the context of both MRSs. The result is a pair of MRSs, one corresponding to the active edge, and one, to the complete edge. Of course, the presence of reentrancies in either the active edge or the complete one can result in additional value sharing in the resulting MRSs, as per the definition of unification in context (Section 4.5). Dot movement is defined as the resulting MRS corresponding to the active edge, where the dot is shifted one position to the right. See Example 6.31.
Definition 6.38 (Dot movement) Let ⟨ρ1, d⟩, where len(ρ1) = n and 0 < d < n, be an active edge; let ⟨ρ2, len(ρ2)⟩ be a complete edge. Let (ρ′1, ρ′2) be the result of unifying (ρ1, d) with (ρ2, 1) in context. Then ρ1 ⊗ ρ2 = ⟨ρ′1, d + 1⟩.

Example 6.31 Dot movement. Let ⟨ρ1, 1⟩ be the following active edge:

    [CAT: s, NUM: [4]]  →  •  [CAT: np, NUM: [4], CASE: nom]  [CAT: v, NUM: [4], SUBCAT: elist]

Let ⟨ρ2, 3⟩ be the following complete edge:

    [CAT: np, NUM: [4] sg, CASE: [2]]  →  [CAT: d, NUM: [4]]  [CAT: n, NUM: [4], CASE: [2]]  •

Then ρ1 ⊗ ρ2 is the following (active) edge:

    [CAT: s, NUM: [4]]  →  [CAT: np, NUM: [4] sg, CASE: nom]  •  [CAT: v, NUM: [4], SUBCAT: elist]
Note that the unification binds the value of the feature NUM to the value sg, and through the variable [4] this value is shared by both constituents in the body of the edge, as well as by the head of the edge.

Once dot movement is defined over MRSs, the algorithm of Example 6.28 remains intact: It can be used as is with unification grammars (see Example 6.32). In particular, the invariant of the chart is maintained. However, although for context-free grammars its complexity is polynomial (more precisely, cubic), unification grammars introduce intractability, as explained previously, and the algorithm is only guaranteed to terminate for grammars with no ε-rules and no unit rules.

Theorem 6.39 For every branching unification grammar G and string w, the recognition algorithm terminates.

The proof is involved and is suppressed here, but it must be noted that while termination is guaranteed for branching unification grammars, efficiency is not (refer back to Section 6.4 for the details).
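To give a feel for what changes relative to the context-free case, the following Python sketch implements dot movement over rules whose elements are feature structures. It is a deliberately simplified illustration: feature structures are plain nested dictionaries, atoms are strings, and unification ignores reentrancy (tags), so unlike the book's unification in context it cannot propagate a value bound in one element to the other elements of the rule.

```python
def unify(f, g):
    """Naive unification of two feature structures (no reentrancy).

    Feature structures are nested dicts with string-valued atoms at the
    leaves.  Returns the unified structure, or None on failure."""
    if isinstance(f, str) and isinstance(g, str):
        return f if f == g else None
    if isinstance(f, str) or isinstance(g, str):
        return None                       # an atom against a complex structure
    result = dict(f)
    for feat, val in g.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None
            result[feat] = sub
        else:
            result[feat] = val
    return result

def dot_move_ug(active, complete):
    """Dot movement for unification-grammar edges (cf. Definition 6.38).

    An edge is a pair (rule, dot), where `rule` is a list of feature
    structures (the head first, then the body elements) and the element
    following the dot is rule[dot].  Returns None if unification fails."""
    rule_a, dot_a = active
    rule_c, _ = complete
    unified = unify(rule_a[dot_a], rule_c[0])   # next element vs. head
    if unified is None:
        return None
    new_rule = rule_a[:dot_a] + [unified] + rule_a[dot_a + 1:]
    return (new_rule, dot_a + 1)

# A complete np edge is consumed by an active s edge:
s_rule = [{"CAT": "s"},
          {"CAT": "np", "CASE": "nom"},
          {"CAT": "v", "SUBCAT": "elist"}]
np_edge = ([{"CAT": "np", "CASE": "nom", "NUM": "pl"}], 1)
print(dot_move_ug((s_rule, 1), np_edge))
```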
Example 6.32 Parsing with unification grammars. To demonstrate the algorithm, consider the unification grammar Gww in Example 6.5, and the input string aa, whose length n = 2. After the initialization of the chart, phase I, the following edge populates both the [0, 1] and the [1, 2] entries of the chart:

    ⟨σ, 1⟩ = [FIRST: ap, REST: elist]  →  a •

The second phase of the initialization adds (inter alia) the following edge to the [0, 0] cell of the chart:

    ⟨ρ, 0⟩ = [CAT: s]  →  •  [FIRST: [4], REST: [2]]  [FIRST: [4], REST: [2]]

Observe that the element following the dot in ρ is unifiable with the (only) element of σ. Specifically,

    ρ ⊗ σ = ρ′ = [CAT: s]  →  [FIRST: [4] ap, REST: [2] elist]  •  [FIRST: [4], REST: [2]]

Hence, the algorithm can combine ⟨ρ, 0⟩ with ⟨σ, 1⟩, yielding ⟨ρ′, 1⟩, and store it in the [0, 1] cell of the chart. In a similar way, this newly created edge can be combined with the edge ⟨σ, 1⟩ from the [1, 2] cell of the chart, resulting in the complete edge:

    [CAT: s]  →  [FIRST: [4] ap, REST: [2] elist]  [FIRST: [4], REST: [2]]  •
which is stored in [0, 2]. Since the head of the edge is the start symbol, and its span is the entire input, the string aa is indeed in the language of the grammar.
Finally, note that extending the recognition algorithm to parsing is identical to the case of context-free grammars: All that is needed is to record the structures that are stored in the chart, and to add links from each new edge to the edges that contributed to its creation. Of course, in light of the complexity of feature structures and unification, this may require more space and more time, but in principle the extension is as simple as in the case of context-free grammars.

As a concluding example, we outline below the execution of the recognition algorithm of Example 6.28 on the grammar G4, listed in Examples 5.29 and 5.30 (Page 198), and the input string two lambs sleep. First, the chart is cleared.
Then, the subdiagonal [i − 1, i] of the chart is loaded with the preterminals corresponding to the words of the input string. This results in the following three edges:
    [0, 1]    λ1 = [CAT: d, NUM: pl]  →  two •

    [1, 2]    λ2 = [CAT: n, NUM: pl, CASE: [6]]  →  lambs •

    [2, 3]    λ3 = [CAT: v, SUBCAT: elist, SUBJ: [CAT: np, CASE: nom, NUM: pl]]  →  sleep •

Next, the second phase of the initialization loads the main [i, i] diagonal of the chart with active dotted rules corresponding to each and every rule in the grammar. In particular, the following edges are added (along with many others):

    [0, 0]    ρ1 = [CAT: s, NUM: [7]]  →  •  [1] [CAT: np, CASE: nom, NUM: [7]]  [CAT: v, SUBCAT: elist, SUBJ: [1]]

    [0, 0]    ρ2 = [CAT: np, NUM: [7], CASE: [6]]  →  •  [CAT: d, NUM: [7]]  [CAT: n, NUM: [7], CASE: [6]]
Now the main loop is executed, with j increasing from 1 to 3 (the length of the input) and i decreasing from j − 1 to 0. Let j be 1 and i be 0. The only valid value for k is then 0. The algorithm tries to combine active dotted rules in [0, 0] with complete ones in [0, 1]. This results in the following dotted rule:

    ρ3 = ρ2 ⊗ λ1 = [CAT: np, NUM: [7], CASE: [6]]  →  [CAT: d, NUM: [7] pl]  •  [CAT: n, NUM: [7], CASE: [6]]
which is stored in [0, 1]. Now j = 2 and i = 1. The only valid value for k is 1. Here, the algorithm tries to combine active edges from [1, 1] with complete edges from [1, 2]. While both these cells are non-empty, they include no eligible candidates for combination. However, with i = 0, k can have two different values, namely 1 and 0. Consider k = 1; the algorithm searches for active edges
in [0, 1] that can be combined with complete edges in [1, 2]. Indeed, such a pair exists and yields the following new (complete) edge:

    ρ4 = ρ3 ⊗ λ2 = [CAT: np, NUM: [7], CASE: [6]]  →  [CAT: d, NUM: [7] pl]  [CAT: n, NUM: [7], CASE: [6]]  •

which is stored in [0, 2]. Now, trying k = 0 results in the combination of active edges in [0, 0] with complete edges in [0, 2]. Consequently, a new active edge is added to [0, 2]:

    ρ5 = ρ1 ⊗ ρ4 = [CAT: s, NUM: [7] pl]  →  [1] [CAT: np, CASE: nom, NUM: [7]]  •  [CAT: v, SUBCAT: elist, SUBJ: [1]]
Finally, j = 3. Consider only the case of i = 0 and subsequently k = 2. The algorithm attempts to combine active edges in [0, 2] with complete edges in [2, 3], and as a result, the following complete edge is created:

    ρ6 = ρ5 ⊗ λ3 = [CAT: s, NUM: [7] pl]  →  [1] [CAT: np, CASE: nom, NUM: [7]]  [CAT: v, SUBCAT: elist, SUBJ: [1]]  •
Cambridge Books Online © Cambridge University Press, 2012
274
6 Computational aspects of unification grammars
of the constraint on grammars (Johnson, 1988; Haas, 1989; Shieber, 1992; Torenvliet and Trautwein, 1995; Kuhn, 1999; Wintner and Francez, 1999). The discussion in Section 6.3 is based on Jaeger (2002), parts of which were published as Jaeger et al. (2002) and Jaeger et al. (2004). The latter includes a survey of the various definitions of off-line parsability and their interrelations. The reduction of 3-SAT to the universal recognition problem with binary unification grammars presented in Section 6.4 is based directly on a similar proof for Lexical-Functional Grammar, originally suggested by Berwick (1982) and reproduced in Barton et al. (1987c). Barton et al. (1987a) present a different reduction, from 3-SAT to a constrained version of unification grammars called agreement grammars. The discussion of context-free and mildly context-sensitive unification grammars is based on Feinstein and Wintner (2008), which also provides the proofs that nonreentant unification grammars generate exactly the class of CFGs, and one-reentrant unification grammars generate the class of tree-adjoining languages. Parsing theory is a well-established but still active field of research. The first algorithm for parsing context-free grammars is attributed to Cocke (unpublished), Younger (1967), and Kasami (1965); it is called CYK after the three of them. CYK can only handle grammars in Chomsky Normal Form; the most practical and popular parsing algorithm for general context-free grammars is due to Earley (1970): It uses a chart to store intermediate results, and takes time proportional to the cube of the input’s length. The concept of chart parsing was then independently developed by Kaplan (1973) and Kay (1973). Parsing with unification grammars was developed along with the increased popularity of these grammars in the beginning of the 1980s. In a seminal work, Pereira and Warren (1983) defined parsing as a special case of logical deduction, and showed how definite-clause grammars, in particular, and other unification formalisms, in general, can be parsed using the Earley deduction proof procedure. The paradigm of parsing as deduction was later extended to many parsing strategies and a variety of grammar formalisms by Shieber et al. (1995). The same paradigm was the basis for the concept of parsing schemata, which abstract over the particular implementation details of parsing algorithms and retain only their logical properties (Sikkel, 1997).
Cambridge Books Online © Cambridge University Press, 2012
We have reached the final destination of our journey into unification grammars, and can pause and look back at what we have done. Our main purpose has been the presentation of a powerful formalism for specifying grammars, for both formal and natural languages. In doing so, we intended to combine insights from both linguistics and computer science. From linguistics, we have adopted various analyses of complex syntactic constructs, such as long-distance dependencies or subject/object control, that prevail in natural languages and are easily recognized as inadequately represented, say, by context-free grammars. Several linguistic theories deal with such constructs; a common denominator of many of them is the use of features (and their values) to capture the properties of strings, based on which a grammar can be specified. However, the use of features is mostly informal. The formalism we presented adopts (from linguistics) feature structures as its main data-structure, but with a rigorous formalization. As a result, claims about grammars can be made and proved. We believe that resorting to proofs (in the mathematical sense) should become a major endeavor in any study of theoretical linguistics, and the ability to prove claims should be a major ingredient in the education of theoretical linguists. By understanding its underlying mathematics, one can better understand the properties of the formalism, recognizing both its strengths and weaknesses as a tool for studying the syntax of natural languages. From computer science, we took several insights and adapted them to also suit the analysis of natural language. From formal language theory, we adopted the idea of a formalism for specifying grammars (with an emphasis on the plural). The ability to ask (and answer!) questions about an arbitrary grammar formulated in any formalism is the right approach to understanding and judging the adequacy of any specific grammar. We think that the parlance of “the grammar,” frequently used in theoretical linguistics, is misguided. There never is a unique grammar for any language, 275
Cambridge Books Online © Cambridge University Press, 2012
276
7 Conclusion
whether formal or natural. A grammar should be regarded as a purpose-driven artifact. One grammar may be used for expository purposes, while another may lead to a more efficient parsing. Thus, the equivalence of grammars (in any given grammatical formalism) is a central tool to be utilized by grammarians. In addition, the question about the relationship among different grammatical formalisms is important in terms of their strong generative capacity. Linguists are not satisfied by a grammar merely recognizing some language; they expect to know that a grammar assigns an adequate grammatical structure to any string that has been recognized as generated by the grammar. From complexity theory, we have borrowed the central notions about tractability and efficiency. Focusing on the tractability and complexity of the universal recognition problem for a class of grammars is a major tool for studying adequacy, that should accompany any study of empirical adequacy in the case of natural language grammars. In general, computer science puts an emphasis on processing, manifesting itself here in the problems of membership and recognition. The study of algorithms that realize the tasks of membership recognition and parsing is an important component of a linguistic theory. It endows grammars with am operational meaning, the closest formal analog to a cognitive view of a grammar, the latter not being a mathematical concept. In general, the mathematization of a science is the best guarantee of its maturity, and the best machinery for its advancement. The same process that physics has gone through over several centuries, leading to its current amazing state of the art, is now starting to take place in theoretical linguistics. We see every prospect of success in this development. This book is, after all, a textbook. As such, it represents a view about what makes up part of an adequate curriculum for students intending to study and, later, to investigate the syntax of natural languages. It may also be useful to students of computer science, by extending the scope of applicability of the various techniques involved beyond, say, programming languages. Finally, one should not stop at unification grammars. There are other formalisms that have been used for the study of natural languages, based on different views and methodologies. A major one is type-logical (categorial) grammar, the underlying mathematics of which is formal logic, according to which language processing is a deductive activity.
Cambridge Books Online © Cambridge University Press, 2012
Appendix A List of Symbols
The following table lists the main symbols used in the book, along with the page number(s) on which they are introduced or defined. Symbol N S ATOMS F EATS PATHS G ∼ FS ˆ AF S ˆ TAGS AVMS Tags TagSet SubAVM ˆ
Meaning the natural numbers signature set of atoms set of features sequences of features set of feature graphs over some signature reentrancy feature-graph isomorphism feature-graph subsumption set of feature structures over some signature feature-structure subsumption set of AFSs over some signature AVM subsumption AFS subsumption tags of an AVM tag in an AVM, element of TAGS set of AVMs over some signature set of tags of an AVM tagset of an AVM sub-AVMs of an AVM AVM renaming feature-structure unification
≈
equivalence relation over feature-graph nodes
i
u
277
Cambridge Books Online © Cambridge University Press, 2012
Page 37 37 37 37 37 41 43 44 52 53 56 62 62 65 65 65 66 66 66 74 86 86
278
ˆ λ G ˆ W ORDS L P Tw R As ⇒ ∗ ⇒ k
⇒ ∗
⊗
Appendix A
feature-graph unification AFS unification feature-structure generalization feature-graph generalization empty MRG, empty MRS MRG subsumption set of MRGs over some signature
89 104 108 108 118 123 125
AMRS subsumption words lexicon preterminals of w rules in a unification grammar start symbol of a unification grammar derivation relation reflexive-transitive closure of ‘⇒’
129 146 146 148 150 150 151 151
k applications of ‘⇒’ blank symbol in a Turing machine yield relation over Turing machine configurations
151 227 227
reflexive-transitive closure of ‘ ’ dot movement
227 259
The following table summarizes the meta-variables used in the book, along with the page number(s) on which they are introduced or defined. Variable σ u, v, w L G A, B, C X α, β, γ R f, g a, b π, α A, B, C
Ranges over... letters (elements of Σ) strings (elements of Σ∗ ) languages grammars (of all kinds) nonterminal symbols in CFGs symbols in CFGs (elements of (V ∪ Σ)) forms in CFGs (elements of (V ∪ Σ)∗ ) rules in CFGs; dotted rules features (elements of F EATS) atoms (elements of ATOMS) paths (elements of F EATS∗ ) feature graphs
Cambridge Books Online © Cambridge University Press, 2012
Page 12 12 13 13 13 13 13 13, 257 37 37 37 38
Appendix A
Q q¯ δ θ Π fs F Θ ≈ M X A ¯ R mrs σ, ρ M wi , wj w λ ρ M
nodes in feature graphs and MRGs roots of feature graphs transition function in feature graphs and MRGs node marking function in feature graphs and MRGs sets of paths feature structures abstract feature structures atom-marking function in AFSs and AMRSs reentrancy relation in AFSs and AMRSs attribute-value matrices AVM variables multirooted graphs lists of roots in MRGs multirooted feature structures abstract multirooted structures multi-AVMs words (elements of W ORDS ) strings over W ORDS preterminals in unification grammars rules in unification grammars, dotted rules Turing machines
Cambridge Books Online © Cambridge University Press, 2012
279
38, 118 38 38, 118 38, 118 39 52 56 56 56 65 65 118 118 125 126 130 146 146 148 148, 268 226
Cambridge Books Online © Cambridge University Press, 2012
Appendix B Preliminary mathematical notions
We list below some of the mathematical notions and concepts we use in the book, mostly in order to establish a common notation. We assume that readers are basically familiar with most of these concepts. Sets A Set is an unordered, repetition-free collection of elements. When a set is finite, its members can be stipulated (listed); curly brackets are used for the collection, as in {1, 4, 9, 16}. Sets can also be specified by imposing a condition on their members, as in {n2 | n is an integer }. To indicate that an element a is a member of a set S, we write a ∈ S; the subset relation on sets is indicated by ‘⊆’. Common operations on sets include union ‘∪’, intersection ‘∩’, difference ‘\’, and complementation {· · ·}c . The cross-product of a set S, denoted S × S, is the set of pairs {a, b | a ∈ S and b ∈ S}. Relations A (binary) relation R over some set S is a subset of the cross-product S × S, namely, R ⊆ {a, b | a ∈ S and b ∈ S}. If a, b ∈ R, one usually writes aRb. A binary relation is total of for all a, b ∈ S, either aRb or bRa. R is reflexive if for all a, aRa; it is symmetric if aRb implies bRa and antisymmetric if aRb and bRa imply a = b; and it is transitive if aRb and bRc imply aRc. A relation that is reflexive and transitive is called a pre-order. A pre-order that is antisymmetric is called a partial order. If it is, in addition, also total, then it is a total, or linear, order, or a chain. If R is reflexive, symmetric, and transitive it is an equivalence relation. If R is an equivalence relation over S, it classifies the members of S into equivalence classes. For every a ∈ S, the equivalence class of a is the set {b | aRb}. We write [a]≈ for the equivalence class of a with respect to the equivalence relation ‘≈’. 280
Cambridge Books Online © Cambridge University Press, 2012
Appendix B
281 ∗
When → is a binary relation, its reflexive transitive closure, denoted →, is ∗ defined as follows: a → b if a = b, or if there exists some c such that a → c and ∗ c → b. Bounds Let S be a set and ‘≤’ a partial order over S. For a, b ∈ S, if a ≤ b we say that a is less than, or lower than, b, and b is said to be greater than a. Given some subset S of S, an upper bound of S is an element b ∈ S such that for all a ∈ S , a ≤ b. Conversely, b is a lower bound of S if for all a ∈ S , b ≤ a. The least upper bound, or join, of some subset S is an upper bound b such that for any upper bound c, b ≤ c. Similarly, the greatest lower bound, or meet, of S is a lower bound b such that for all lower bounds c, c ≤ b. Functions A binary relation over a set S is a function if for all a ∈ S, if aRb and aRc then b = c. If f is a function we usually write f (a) = b to indicate that a, b ∈ f . The domain of the function is the set {a | aRb for some b ∈ S}; its range is the set {b | aRb for some a ∈ S}. A function over S is total if its domain is the entire set S; it is partial otherwise. A partial function may be defined on a (strict) subset of S. Therefore, one must be careful when applying a partial function, and in particular when using the result of the application (which may or may not be defined). In this book, when we use the result of a partial function we implicitly imply that it is defined; for example, when we say “if f (a) = f (b)” we mean “if both f (a) and f (b) are defined, and if they are equal.” We use ‘f (a)↓’ for “f (a) is defined” and ‘f (a)↑’ for “f (a) is not defined.” A function f is an injection, or a one-to-one function, if for all a, b in its somain, a = b implies f (a) = f (b). A function f is invertible if there exists a function g such that for all x, y, f (x) = y iff g(y) = x. If f is invertible its inverse, g, is usually denoted f −1 . If f is a function, then f|S is f restricted to S , namely {a, f (a) | a ∈ S }. If f and g are functions over the same set S, then the union of f and g is f ∪ g = {a, b | f (a) = b or g(a) = b}. The composition of f and g is f ◦ g = {a, b | b = f (g(a))}. Graphs A (simple) graph is a pair consisting of a (typically, finite) set of nodes, or vertices, and a set of edges, or arcs, connecting between pairs of nodes. Formally, a graph G is a pair Q, E , where Q is a set of nodes and E ⊆ Q × Q designates
Cambridge Books Online © Cambridge University Press, 2012
282
Appendix B
the edges. A graph is directed if the order of nodes making up an edge matters, that is, if q1 , q2 ∈ E does not necessarily imply q2 , q1 ∈ E. It is undirected otherwise. Graphs can also be labeled; in this case, either the nodes or the edges (or both) are decorated with some labels. In a labeled graph, two nodes can be connected by several edges bearing distinct labels. A path is a sequence of consecutive edges q0 , q1 , q1 , q2 , . . ., qn−1 , qn . Note that the first node of each edge (except the first one) in a path is the second node of its preceding edge. The length of a path is the number of its edges. A path is a cycle if qn = q0 , that is, if it begins and ends in the same node. A loop is a cycle of length 1, that is, an edge connecting a node to itself. A graph is acyclic if it contains no cycles. A graph is connected if a path leads from each node to any other node. A tree is a connected graph whose underlying graph (i.e., ignoring the directionality of edges) contains no cycles; in such a graph, the number of edges is exactly the number of nodes minus 1. In a tree, a node connected to exactly one other node is called a leaf ; all other nodes are called internal nodes. Sometimes, a node in a tree is designated as a root. If the tree is directed, then its root has no incoming edges. In a directed tree, the daughters of a node q are all nodes p such that an edge leads from q to p; then q is the mother of all those p-s. The branching degree of a tree is the maximum number of daughters of some node in the tree. Computation A computational problem is an infinite sequence of instances, along with a solution for each instance. It is useful to think of instances as strings over some given alphabet, typically {0, 1}. An important class of computational problems are decision problems, for which the solution is either true (1) or false (0). An algorithm is an effective, systematic method, consisting of a finite sequence of instructions from a predefined inventory that, if followed, can provide a solution for each instance. The computational (time) complexity of an algorithm is a measure of the time (in terms of steps or of execution of basic instructions) the algorithm would take to provide a solution to some arbitrary instance of the problem, in the worst case, as a function of the length of that instance. More formally, assume that instances are strings, so their length is well defined. We say that the complexity of an algorithm is f (n) if there exist n0 > 0 and a constant c such that for all n > n0 , the time T (n) it takes for the algorithm to provide a solution for an instance of length n is such that T (n) < c × f (n). Specifically, an algorithm is linear if its running time, in the worst case, is bound by a linear function of n
Cambridge Books Online © Cambridge University Press, 2012
Appendix B
283
(times some constant); it is quadratic if the bounding function is n2 ; cubic, if it is n3 ; polynomial, if it is some polynomial in n; and exponential, if it is cn for some constant c > 1 (typically, c = 2). A similar measure, space complexity, quantifies the space (in terms of basic space units) an algorithm consumes during its execution (again, in the worst case, and as a function of the size of the input). The complexity of a problem is the computational complexity of the best (fastest) algorithm known for it. A problem is considered tractable if a polynomial algorithm for its solution is known. Problems for which the best known algorithm is exponential are considered intractable. For some computational problems, it can be proven that no algorithm exists. We say that such problems are undecidable. A function whose computation is decidable (i.e., for which an algorithm exists) is called a recursive or a computable function.
Cambridge Books Online © Cambridge University Press, 2012
Cambridge Books Online © Cambridge University Press, 2012
Appendix C Solutions to selected exercises
1.2. Let Σ = {a, b}. Then a · b = ab = ba = b · a. 1.3. An equation of the form w · x = u has a solution iff w is a prefix of u, that is, for every i, 1 ≤ i ≤ |w|, the i-th letter of w and u is identical. In this case, x consists of the rest of the letters in u: x = σ1 · · · σ|u|−|w| , where σj is the |w| + j-th letter of u. Hence, the solution is unique. 1.4. w0 = , and for every n > 0, wn = w · wn−1 . L0 = {} and for every n > 0, Ln = L · Ln−1 . 1.5. If Σ = ∅ then Σ∗ = {}. 1.6. Let Σ1 = {a}, Σ2 = {b}. Then Σ∗1 is the set of strings of a-s, Σ∗2 is the set of strings of b-s, and Σ∗1 ∪ Σ∗2 is the set of strings of either a-s or b-s (but no “mixed” strings are included). Σ1 ∪ Σ2 = {a, b}, and hence (Σ1 ∪ Σ2 )∗ is the set of strings over a and b. In particular, the string ab is included. Note that the empty string is a member of both Σ∗1 ∪ Σ∗2 and (Σ1 ∪ Σ2 )∗ . Actually, Σ∗1 ∪ Σ∗2 ⊆ (Σ1 ∪ Σ2 )∗ . 1.7. The language is the set of all palindromes (strings that read the same forwards and backwards) over the alphabet {a, b}: L = {w ∈ {a, b}∗ | w = w reversed} ∗
1.8. Assume that for each i, 1 ≤ i ≤ n, Xi ⇒ αi via a derivation sequence di . ∗ ∗ Then X → X1 …Xn (by the assumed rule) ⇒α1 X2 . . . Xn (by d1) · · · ⇒ α1 · · · αn (by d2 , . . . , dn ). 1.9. There is a derivation for in Ge because the rule S → is a production of the grammar, and S is the initial symbol. Assume towards a contradiction 284
Cambridge Books Online © Cambridge University Press, 2012
Appendix C
285
that there was another derivation for the empty string. Then, it would have to involve the nonterminal S since it is the only symbol that derives . The only rule that has an S in its body is S → Va S Vb . However, Va expands to an a, and Vb expands to a b, in contradiction to the assumption that an empty string is derived. 1.11. S A N C
→ → → →
A | A or S N | N and A C | not C a | b | c | true | false | S
1.12. Let G = ⟨Σ, V, S, P⟩ be a grammar that is not in normal form. Construct an equivalent normal-form grammar G′ = ⟨Σ, V′, S, P′⟩ as follows: Let V′ = V and P′ = P. For every production R ∈ P that is not in normal form, and for every occurrence of a terminal σ in the body of R, let Aσ be a symbol such that Aσ ∉ V′. Construct R′ by replacing every occurrence of σ with Aσ. Add Aσ to V′, replace R by R′ in P′ and add to P′ the (terminal) production Aσ → σ. It is easy to see that the obtained grammar, G′, is equivalent to G.
2.1. No feature structure A exists for which Π(A) = ∅, since the empty path ε is always a member of Π(A), for all A.
2.3. For every feature structure A, valA(ε) = A.
2.4. The proposition is false: In the feature structure A below, θA(δA(q̄A, F)) = θA(δA(q̄A, G)) = a, but valA(⟨F⟩) ≠ valA(⟨G⟩), as these values are different graphs, consisting of different nodes: the root q1 has an F-arc to q2 and a G-arc to q3, and both q2 and q3 are marked a.
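Returning to Exercise 1.12, the construction described there is straightforward to implement. The following is a rough sketch (in Python; it is not from the book, and the representation of rules as pairs is ours):

```python
def to_phrasal_terminal_normal_form(rules, terminals):
    """Rules are (head, body) pairs, with bodies given as tuples of symbols.
    Terminal rules (a single terminal in the body) are kept; in every other
    rule, each terminal sigma is replaced by a fresh nonterminal A_sigma,
    and a terminal rule A_sigma -> sigma is added."""
    new_rules = []
    for head, body in rules:
        if len(body) == 1 and body[0] in terminals:
            new_rules.append((head, body))
            continue
        new_body = []
        for sym in body:
            if sym in terminals:
                fresh = "A_" + sym
                new_rules.append((fresh, (sym,)))   # duplicates are harmless
                new_body.append(fresh)
            else:
                new_body.append(sym)
        new_rules.append((head, tuple(new_body)))
    return new_rules

# S -> a S b   becomes   S -> A_a S A_b,  A_a -> a,  A_b -> b
print(to_phrasal_terminal_normal_form([("S", ("a", "S", "b"))], {"a", "b"}))
```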
2.6. If A′ = valA(π), then q̄A′ = δA(q̄A, π) (by the definition of path value). If π′ is a path in A′, then there exists a node q ∈ QA′ such that δA′(q̄A′, π′) = q. Since QA′ ⊆ QA (by the definition of path value), q ∈ QA. Moreover, δA(q̄A′, π′) is defined, and its value is q. From δA(q̄A, π) = q̄A′ and δA(q̄A′, π′) = q, we obtain δA(q̄A, π · π′) = q.
2.7. Since valA(π) = A for every path, this is so for paths of length 1, too. Hence, every outgoing edge of the root node must lead back to the root. The only feature graphs for which this condition holds have a single node, from which n edges, where n = |FEATS|, each labeled by some element f ∈ FEATS, leave and lead back to the same node. Clearly, all paths (even of length greater than 1) lead back to the root.
2.9. Since q ∈ Q, there exists a path π ∈ Π(A) such that δ(q̄, π) = q. Since δ(q, π1) = δ(q, π2), we obtain δ(q̄, π · π1) = δ(q̄, π · π2). Hence, A is reentrant.
2.10. If A is cyclic, there exist a node q ∈ Q and a nonempty path α such that δ(q, α) = q. Since q is accessible, let π be the path from the root to q: δ(q̄, π) = q. The infinite set of paths {π · αⁱ | i ≥ 0} is contained in Π(A). If A is acyclic, then for every nonempty path α ∈ PATHS and every q ∈ Q, δ(q, α) ≠ q. Every path outgoing from q̄ can go through each node in Q at most once. Q is finite, and so is FEATS, so the out-degree of every node is finite. Therefore the number of different paths leaving q̄ is bounded, and hence Π(A) is finite.
2.12. False: The nodes of A and B can be completely disjoint; nothing in the definition of subsumption requires that they be related.
2.13. False: See the fourth case of Example 2.11.
2.14. False: See the first and third cases of Example 2.11, as well as Example 2.9.
2.15. False: See the first and third cases of Example 2.11.
2.16. True: Let q1, q2 ∈ QA be such that δA(q1, f) = q2 for some f ∈ FEATS. Then there exist unique nodes h(q1), h(q2) ∈ QB such that δB(h(q1), f) = h(q2) (by the second clause of the definition of subsumption). Hence every edge in A is uniquely mapped to an edge in B (but B can have additional edges).
2.20. A ⊑ B. Intuitively, A and B have exactly the same (infinite) sets of paths and the same marking at the end of each path, but B has more reentrancies, because the empty path is reentrant, in B, with all other paths, whereas in A this is not the case. More formally, to establish A ⊑ B, define a subsumption morphism h : QA → QB such that h(q0) = h(q1) = q2. It is easy to verify that h is indeed a subsumption morphism. To see that no subsumption morphism
from QB to QA exists, observe that q2 would have to be mapped to either q0 or q1, and each case leads to a contradiction.
2.24. F = ⟨Π, Θ, ≈⟩, where:
• Π = {ε};
• Θ(ε) = a;
• π1 ≈ π2 if and only if π1 = π2.
2.25. The following is the induced AFS:
• Π = {ε, ⟨agr⟩, ⟨agr, num⟩, ⟨agr, pers⟩, ⟨subj⟩, ⟨subj, agr⟩, ⟨subj, agr, num⟩, ⟨subj, agr, pers⟩};
• Θ(⟨agr, num⟩) = Θ(⟨subj, agr, num⟩) = pl, Θ(⟨agr, pers⟩) = Θ(⟨subj, agr, pers⟩) = third, and Θ(π) is undefined for any other path;
• For every path π, π ≈ π. In addition, ⟨agr⟩ ≈ ⟨subj, agr⟩, ⟨agr, num⟩ ≈ ⟨subj, agr, num⟩ and ⟨agr, pers⟩ ≈ ⟨subj, agr, pers⟩.
2.27. [Feature-graph diagram: nodes q0, q1, and q2, connected by arcs labeled F and G.]
2.28. Conc(F) = [ ].
2.30. There can be no AVM M and a variable X such that TagSet(M, X) = ∅, because there must be at least one X ∈ Tags(M) in order for TagSet to be defined.
2.31. M is well-formed, and assoc(M, 2) = 2 [ ].
2.32. Π(M) = {ε} ∪ {⟨Fⁱ⟩ | i > 0} ∪ {⟨Fⁱ G⟩ | i ≥ 0} ∪ {⟨Fⁱ G F⟩ | i ≥ 0}.
2.35. To establish the mutual subsumption, consider the subsumption morphism i : Tags(M1) → Tags(M2), defined as i(2) = 22, i(3) = 23,
i(4) = 24. Observe that i is actually an isomorphism, so i⁻¹ is a subsumption morphism from M2 to M1.
2.37. φ(M) = [Feature-graph diagram: nodes 0, 1, and 2, with arcs labeled F, G, and H, and an atom a.]
3.3. [Diagram: the two feature graphs A (with root q0) and B (with nodes q1 and q2), each with F-arcs.]
3.4. No. Since each node belongs to at least one equivalence class, and since the roots of A and B are in the same class, the number of equivalence classes is necessarily smaller than the number of nodes in the two feature structures.
3.5. The equivalence class of the root always contains both roots, but it can contain more nodes. For example, if δA(q̄A, F) = q and q ≠ q̄A, whereas in B, δB(q̄B, F) = q̄B, then [q̄A]≈ ⊇ {q̄A, q̄B, q}.
3.7. [Diagram: the feature graphs A and B, with nodes q0 and q2 and F-arcs.]
3.8. [Diagram: the unification of two feature graphs over the nodes q0, q1, and q2, with F-arcs; the resulting equivalence class is {q0, q1, q2}.]
3.9. Note that (feature graph) unification is defined in terms of set union, which is commutative. Since [q̄1]≈ = [q̄2]≈, A ⊔ B = B ⊔ A.
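As a side illustration (not from the book), the equivalence classes appealed to in the solution to Exercise 3.9 are exactly what a union-find structure maintains in practical unification algorithms; the sketch below shows that merging two roots in either order yields the same classes:

```python
class UnionFind:
    """A minimal union-find over hashable items."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:               # follow parent links to the root
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

uf1, uf2 = UnionFind(), UnionFind()
uf1.union("q1", "q2")        # merge the roots in one order ...
uf2.union("q2", "q1")        # ... and in the other
assert uf1.find("q1") == uf1.find("q2")
assert uf2.find("q1") == uf2.find("q2")          # same classes either way
```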
3.14. The proposition is false. Following is a counterexample, depicted using AVMs: [AVM equation built from the features F, G, and H and the shared tag 1.]
3.15. [Feature-graph diagram: root q0 with arcs labeled F, G, and H, and a node q1 marked sg.]
3.16. Consider the AFSs F1 and F2, depicted as concrete graphs for simplicity: [Diagrams: Conc(F1), with arcs labeled F, H, and I, and Conc(F2), with arcs labeled F, G, and J.] Since Π1 includes the path ⟨H, I⟩ and Π2 includes ⟨F, J⟩, and since F ≈1 H, fusion closure requires the path ⟨H, J⟩ to be in the result; as this path is not a member of either Π1 or Π2, it is not in the union of the two.
4.1. If QA = ∅, trivially R̄A = ∅ since R̄A ⊆ QA. If QA ≠ ∅ but R̄A = ∅, there exists at least one node q ∈ QA, and since there are no roots, this node is not accessible from any of the roots. This contradicts the requirement of root accessibility.
4.2. [Diagrams: the three indexed feature structures of A, with CAT values s, np, and vp; the latter two also carry AGR.]
4.3. [Diagrams: the three sub-MRSs of A (spanning 1…2, 2…3, and 1…3), over the nodes q1, q2, and q3, with CAT values s (q4), np (q5), and vp (q6), and a shared AGR node q7.]
4.4. If q ∈ Q′ then there exist some q̄ ∈ R̄ and some path π such that q = δ(q̄, π). Since δ(q, f)↓, there exists a path π · f and δ(q̄, π · f)↓. Hence δ(q, f) ∈ Q′.
4.7. The path ⟨F, H, G, F, H, G⟩ leads from the first root of A back to itself. A|1..1 ∼ A|1..2 and A|2..1 ∼ A|2..2, but not A|1 ∼ A|2.
4.10. False. In the following example: [Diagram: two structures, labeled A|1 and A|2, over the nodes q0 through q6, connected by F-arcs.]
4.13. False; See the example in the solution to Exercise 4.10.
4.16.
• Ind = 1;
• Π = {⟨1, Fⁿ⟩ | n ≥ 0};
• Θ is undefined everywhere;
• ≈ = {(⟨1, π1⟩, ⟨1, π2⟩)} for all π1, π2.
4.17.
• Ind = 2;
• Π = {⟨1, ε⟩, ⟨1, CAT⟩, ⟨2, ε⟩, ⟨2, CAT⟩, ⟨2, AGR⟩};
• Θ(⟨1, CAT⟩) = s, Θ(⟨2, CAT⟩) = np, Θ is undefined elsewhere;
• ≈ = {(⟨i1, π1⟩, ⟨i2, π2⟩) | i1 = i2 and π1 = π2}.
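As an illustration (not from the book), the abstract structure given in the solution to Exercise 4.17 can be written down directly as data; a small Python sketch, with paths represented as tuples of features (the representation is ours):

```python
# The abstract multirooted structure of Exercise 4.17, written as plain data.
EPSILON = ()  # the empty path, represented as an empty tuple of features

paths = {(1, EPSILON), (1, ("CAT",)),
         (2, EPSILON), (2, ("CAT",)), (2, ("AGR",))}

amrs_4_17 = {
    "Ind": 2,
    "Pi": paths,
    "Theta": {(1, ("CAT",)): "s", (2, ("CAT",)): "np"},  # undefined elsewhere
    # Only the trivial reentrancies: each indexed path is related to itself.
    "approx": {(p, p) for p in paths},
}

print(amrs_4_17["Theta"][(1, ("CAT",))])   # prints: s
```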
4.23. [Flattened AVM diagrams: the three multi-AVMs of the solution, each shown as a pair of AVMs built from the features F, G, and H, the tags 1, 2, and 5 through 9, and the atom a.]
4.24. Arcs(M) = {⟨2, F, 9⟩, ⟨9, H, 1⟩, ⟨1, F, 8⟩, ⟨8, H, 2⟩, ⟨8, G, 7⟩, ⟨6, F, 5⟩, ⟨5, H, 2⟩}. The set Arcs∗(M) is infinite.
4.26. [Flattened AVMs: the two path values pval(M, 3, ε) computed in the solution, built from the features F, G, and H, the tags 1, 2, and 5 through 9, and the atom a.]
4.27. [Feature-graph diagram: nodes 1 through 10, arcs labeled F, G, and H, and atoms a.]
4.32. If i = g(i′), then i = i′ + j − 2, so i′ = i − j + 2. The domain of g is such that 2 ≤ i′ ≤ n; hence, 2 ≤ i − j + 2 ≤ n, namely, j ≤ i ≤ n + j − 2. In the same way, if i = f(i′), then either i = i′, if 1 ≤ i′ < j, or i = i′ + n − 2, if j < i′ ≤ k. We obtain either 1 ≤ i < j or j < i − n + 2 ≤ k, that is, either 1 ≤ i < j or j + n − 2 < i ≤ k + n − 2. In any case, the range for i does not overlap the range obtained for g(i′).
4.37. In fact, this is possible even with context-free grammars. See Example 6.29, page 266.
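The arithmetic in the solution to Exercise 4.32 can be checked mechanically; the following sketch (not in the original; the test values for j, k, and n are arbitrary) verifies that the ranges produced by f and g never overlap:

```python
def ranges_disjoint(n, k, j):
    """f maps 1..j-1 to itself and j+1..k to i'+n-2; g maps 2..n to i'+j-2.
    Check that no value of i is produced by both f and g."""
    f_range = {i for i in range(1, j)} | {i + n - 2 for i in range(j + 1, k + 1)}
    g_range = {i + j - 2 for i in range(2, n + 1)}
    return f_range.isdisjoint(g_range)

assert all(ranges_disjoint(n, k, j)
           for n in range(2, 12) for k in range(2, 12) for j in range(1, k + 1))
```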
5.1.
[CAT: s]
⇒ [CAT: np] [CAT: vp]
⇒ [CAT: pron] [CAT: vp]
⇒ [CAT: pron] [CAT: v] [CAT: np]
⇒ [CAT: pron] [CAT: v] [CAT: d] [CAT: n]
⇒∗ Rachel sleep two lamb
5.3. [Derivation tree for the shepherd feeds two lambs: the subject np (the shepherd) and the verb feeds share the NUM value 4 (sg), while the object np (two lambs) carries NUM 2 (pl).]
5.6. [Derivation trees for she feeds the sheep and Jacob loves her: in each tree the subject np carries CASE: nom and shares its NUM value with the verb, while the object np carries CASE: acc.]
5.9. [Derivation tree for she feeds the sheep, as in the solution to Exercise 5.6, but with the verb additionally carrying SUBCAT: trans.]
The value of SUBCAT in the node dominating feeds is trans, which is compatible with the lexical entry of feeds. In sleeps, in contrast, the same feature has the value intrans, and unification of these two different atomic values would fail.
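As an aside (not from the book), the failure described here is the simplest case of unification, that of two distinct atomic values; a toy sketch in Python, in which feature structures are nested dictionaries, atoms are strings, and reentrancies are ignored:

```python
def unify(a, b):
    """Toy unification of feature structures represented as nested dicts;
    atoms are strings.  Returns None on failure."""
    if isinstance(a, str) or isinstance(b, str):
        return a if a == b else None          # distinct atoms do not unify
    result = dict(a)
    for feat, val in b.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None
            result[feat] = sub
        else:
            result[feat] = val
    return result

# The SUBCAT values of the rule and of the lexical entry must unify:
print(unify({"SUBCAT": "trans"}, {"SUBCAT": "trans"}))    # {'SUBCAT': 'trans'}
print(unify({"SUBCAT": "trans"}, {"SUBCAT": "intrans"}))  # None  (failure)
```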
5.12. [Derivation tree for Rachel gave the sheep water: the lexical entry of gave carries SUBCAT: ⟨1, 2⟩; the list is reduced to ⟨2⟩ after the first object np (the sheep) is consumed, and to the empty list after the second object np (water).]
5.18. All are object control. 5.21. The sentence formation rule can be amended such that it only permits a subject to combine with a verb phrase when the latter is finite:
[Rule: an s is derived from an np with CASE: nom and NUM: 7, followed by a v with VFORM: fin, NUM: 7, SUBCAT: elist, and SUBJ: 1 (the np).]
6.1. σ(b, 4) = [CAT: bp, T: [T: [T: [T: end]]]]; τ(5) = [T: [T: [T: [T: [T: end]]]]].
6.3. [FIRST: 4 ap, REST: 2 [FIRST: 14 ap, REST: 12 [FIRST: bp, REST: elist]]]
6.6. [Grammar: rules over the categories s, a, b, c, d and at, bt, ct, dt, in which the value of the feature T is shared (tag 1) between the mother and its daughters; the terminal rules are at → a, bt → b, ct → c, and dt → d.]
6.11. [Grammar: rules over the categories s and at, using a feature EXP whose value is a list built from FIRST, REST, the atom a, and end; the last rule derives the terminal a.]
6.14. None, because of the rules that promote proper names and pronouns to noun phrases. Observe that such rules are in effect spurious and can be eliminated, with only minor changes to the resulting trees. 6.22. G3 is obviously neither nonreentrant nor one-reentrant. It is also not branching, but it is OLP (use the same argument as for Gsubcat ).
Index
, 12, 37 -rule, 14, 149, 234, 236, 238, 239, 252, 254, 255, 266–268, 270 an bn cn , 29, 215–222, 225, 235, 238 abstraction, 58, 57–60, 99, 113, 128, 129 abstract feature structure, 34, 56, 54–64, 83–85, 99–108, 113, 116, 126, 127, 129, 142, 146, 287 abstract multirooted structure, 115, 116, 125, 129, 136–163 empty, 126 Ackerman function, 99 adjectival phrase, 206 adjective, 206, 207 AFS, see abstract feature structure agreement, 7, 27, 29, 31, 35, 36, 116, 165–171, 177, 180, 200, 204, 206, 208–210, 252, 253 ALE, 212 algorithm, 226, 235, 239, 254–256, 258–267, 270–273, 276 alphabet, 12, 14, 146 ambiguity, 146, 148, 162 lexical, 36 structural, 19 anaphora, 2 anti-unification, see generalization arborescence, 78, 78, 78–82 artificial intelligence, 2 assignment, 240, 241, 244 atom, 35, 35, 36–38, 44, 46, 49, 75, 89, 109, 117, 156, 165, 166, 179, 181, 215, 248 ATOMS, 37, 37, 55, 118, 146, 150, 176, 230
attribute-value matrix, see AVM attribute grammar, 164 AVM, 34–37, 65, 64–83, 85, 94, 108, 116, 130, 135, 157, 210, 217, 222 atomic, 65 complex, 65 empty, 66, 71, 75 expression, 79, 79, 81, 82 well-formed, 67, 67–68, 76, 77, 80, 81, 130, 287 Bambara, 23 body, 14, 14, 17, 146, 149, 149, 150, 151, 156, 221, 234, 245, 248, 249, 254 branching degrees, 235 C, 83 c-structure, 209–211 case, 165, 181, 197, 208, 210 case control, 166, 172–174, 208 Catalan number, 266 categorial grammar, ix, 31, 208, 211, 212, 276 category, 14, 17, 24, 116, 146, 148, 149, 163, 165, 166, 170, 172, 176, 180, 181, 185–188, 192, 193, 202–204, 206, 207, 215, 229, 256, 257, 260, 266, 269 base, 206 preterminal, 21 CFG, x, 14, 14–28, 34, 37, 115, 146, 150, 151, 157, 161, 163–166, 170–172, 201, 202, 208, 209, 213–215, 225, 226, 234, 245–247, 250, 254–268, 271, 275, 292 chart, 256, 258–266, 268, 271, 272, 274 Chinese, 210
Chomsky, Noam, 1, 22, 23 Chomsky hierarchy of languages, 22, 226 Church’s thesis, 226 closure operations, 100–105, 107, 113, 139, 140, 143, 144, 152 compiler, 164 complement, 177–181, 184–186, 198, 206 complementation, 23 complexity, 33, 99, 172, 213, 226, 239, 255, 256, 263–264, 270, 276 computational linguistics, ix, x, 34, 35, 209–211 computation theory, 227, 273 computer science, x, 1, 3, 83, 239, 275, 276 concatenation, 12, 13, 30, 125, 127, 127, 128, 133, 147, 217 concretization, 60, 60–62, 128 configuration, 227, 228–231 conjunction, 202, 203, 240 constituency, 30 constituent, 20, 21, 26, 176, 180, 185–187, 189, 202, 204, 208, 210 constituent order, 179 constraint, 7, 176, 180, 181 context-free grammar, see CFG context-free skeleton, 149, 211 context-sensitive grammar, 23 control, 28, 166, 197–201, 275, 295 coordination, 25, 114, 166, 201–209, 212 nonconstituent, 207–208 of unlikes, 206–207 cycle, 42, 42, 53, 70, 71, 77, 81, 94, 113, 122, 133, 286 cyclically unifiable sequence, 236, 236–237 decidability, 213, 233–235, 251, 267 deduction, 274 Definite Clause Grammar, 273, 274 DELPH-IN, 212 depth, 234–238 derivation, 15, 15, 17, 36, 115, 146, 151, 151–158, 161, 164, 166, 170, 171, 174, 176, 187, 188, 190, 203, 215, 217–219, 221–223, 225, 229, 232, 234, 236, 239, 247, 248, 249–251, 254, 255, 258, 269 equivalence relation over, 17 leftmost, 15 derivation tree, 17, 17, 18, 27, 157, 157–163, 167, 168, 170, 172, 173, 175, 176, 180, 181, 183, 187–196, 200, 201, 203, 206,
209, 211, 220, 222, 224, 225, 234–238, 242–244, 247, 248, 255, 265 determiner, 168, 170, 193 determiner phrase, 210 disjunction, 240 distance, 47 dominance, 165 dotted rule, 256, 256–261, 264, 266, 268, 268, 272 dot movement, 258, 259, 259, 260, 264, 268, 269, 269, 270 Dutch, 23, 33 E0 , 6–8, 11, 25, 31, 35, 166–168, 176, 178, 185, 202, 203 Econtrol , 8, 197 Ecoord , 11, 202, 203, 205 Eldd , 9, 185 Erelcl , 10, 192 Esubcat , 7–10, 178, 180, 183–185 edge, 256, 258, 260, 264–266, 269–271, 273 efficiency, 214 empty category, 193, 195, 196 English, xi, 2, 5, 6, 8, 10, 11, 23–25, 27, 28, 35, 36, 116, 172, 173, 176, 179, 202, 204–207, 210, 252 exhaustive search, 234, 235, 239, 244, 254–256 expressiveness, see expressivity expressivity, 115, 209, 213–253, 268 f-structure, 209, 210 failure, 89, 95, 140, 181, 217 F EATS, 37, 37, 55, 117, 118, 146, 150, 176, 230 feature, 35, 35, 75, 117, 186, 215 feature graph, 34, 37, 37–54, 56, 57, 74–83, 85–93, 95, 109, 111, 115, 119–121, 123, 285 atomic, 39 empty, 39 feature structure, x, 52, 34–86, 93–94, 105, 114–118, 164, 165, 204, 206–209, 212, 246, 249, 250, 268 abstract, see abstract feature structure cyclic, 84 empty, 35, 36, 108, 109, 172 typed, 84, 211
filler, 189, 209 Finnish, 174 first-order term, see FOT form, 14, 115, 151, 153, 156, 234, 249, 255 sentential, 15, 17, 117, 232–234, 245, 248–250, 254, 255, 259 formalism feature-structure-based, see formalism, unification grammatical, 213, see formalism, linguistic linguistic, 4–6, 83, 166, 172, 209–211, 275, 276 unification, xi, 213–253 FOT, 84, 113, 114 French, 6, 27, 28, 205, 210, 211 frontier, 161, 162, 234 fusion closure, 56, 56, 60, 84, 100–103, 105, 125, 126, 137, 139, 289 gap, 189, 190, 192, 193, 195, 209 generalization, 85, 108, 108, 108–114, 206, 207, 212 generation, 214 generative capacity, 22–23, 25, 33, 226 strong, 25, 172, 267, 276 weak, 25, 214, 226, 267 German, 6, 23, 174, 181, 210, 211 Swiss, 33 Government and Binding, 163, 210 GPSG, 163, 212 grammar branching, 239, 239–244, 251–253, 267, 268, 270, 297 context-free, see CFG engineering, 211 equivalence, 18, 18–21, 25, 276 weak, 18, 21, 25 functional, 163 functional-unification, 163 generalized phrase structure, see GPSG head-driven phrase structure, see HPSG lexical-functional, see LFG linear-indexed, 245, 251 nonreentrant, 245–253, 274, 297 one-reentrant, 248, 251–253, 274, 297 phrasal/terminal normal form, 21, 21, 146, 254 polynomially parsable, 244–251 tree-adjoining, see TAG
unification, ix, x, 37, 42, 84, 150, 115–165, 168, 170–172, 188, 201, 205, 208–211, 213–276 grammar engineering, 211 grammar variable, see nonterminal grammatical function, 210 Grammix, 212 graph, x, 34, 37, 39, 75, 78, 99, 114, 118, 120 greatest lower bound, 108, 111, 112, 114, 212 hardness, 239, 240 head (of a phrase), 25, 170, 192, 193 head (of a rule), 14, 14, 17, 149, 149, 150, 151, 180, 219, 221, 234, 237, 238, 248, 249, 259, 266, 268–271, 273 head drives phrase structure grammar, see HPSG Hebrew, 6, 27, 28, 205, 212 homomorphism, 23 HPSG, 4, 31, 150, 163, 211, 212 infinitive, 178, 197, 198, 201 initial symbol, see start symbol invariant, 258–260, 270 isomorphism, 34, 41, 43, 47, 51, 52, 54, 55, 58, 72, 73, 91, 122, 122, 124, 125, 134, 288 Japanese, 210, 211 Kleene closure, 13 Korean, 211 language context-free, 16, 22, 22–28, 170, 239, 245–248, 251, 253, 268 formal, ix, x, 1, 5, 13, 12–14, 15, 16, 21, 22, 213–215, 225, 226, 275 fragment, x, 6–11, 165, 203 mildly context-sensitive, 29–30, 239, 245, 248–251, 253 natural, xi, 1–5, 6, 22–32, 146, 172, 176, 204, 206, 208, 211, 212, 214, 226, 227, 245, 251–253, 268, 275, 276 structure, 5 of a grammar, x, 115, 116, 151, 156, 171, 175 of a Turing machine, 229 programming, x, 1, 22, 83, 84, 117, 164, 211, 253, 276 recursively enumerable, 23, 213, 226, 227
language (cont.) regular, 23, 32, 213 trans-context-free, 16, 215, 227, 252, 268 trans-regular, 23, 24 Latin, 174 least upper bound, 85, 86, 91, 106, 139, 143, 145 length, 12, 47, 118, 123, 126, 133, 138, 140, 149, 215, 217, 234, 237, 239, 247, 254, 255 letter, see terminal lexicalism, 31, 179, 209 lexical entry, 21, 35, 146, 147, 161–163, 172, 173, 176, 178, 179, 181, 183, 185, 188, 190, 198, 199, 201, 203, 241, 242, 246, 252, 266, 268, 294 lexical functional grammar, see LFG lexical item, see lexical entry lexicon, 25, 31, 35, 146, 146, 147, 173, 174, 181, 202, 209, 230, 241, 242, 252 LFG, 4, 30, 31, 150, 163, 209–211, 273, 274 LIFE, 84 linguistics, 1, 275 linguistic generalization, 165, 166, 170–172, 177, 179, 208–209, 211, 214 linguistic theory, ix, 4, 163, 165, 179, 185, 204, 205, 208–212, 275, 276 LISP, 117 list, 117, 178, 179, 217, 222, 229, 230 literal, 240, 241, 244 LKB, 211, 212 logic, 239, 240, 276 logic programming, x, 84 LOGIN, 84 long-distance dependencies, 166, 172, 185–197, 208, 212, 275 membership, 214, 235, 238, 239, 241, 244, 276 Mohawk, 23 morphology, 2, 3 movement, 165, 209 MRG, 115, 116, 118, 118–125, 128, 129, 135, 136, 138, 145 empty, 118 MRS, 115, 125, 115–129, 135, 138, 164 multi-AVM, 115, 116, 130, 130–137, 148, 175 well formed, 130, 131 multirooted feature graph, see MRG multirooted structure, 268 see MRS
natural language processing, ix nonbranching dominance chain, 236, 237, 238 nonterminal, 14, 14, 15, 31, 36, 116, 166, 172, 215, 225, 245, 246, 250, 254, 259 Norwegian, 10, 211 noun, 168, 170, 172, 181, 192, 193, 197, 202, 207 noun phrase, 154, 165, 168, 169, 172, 178, 179, 186, 187, 193, 196–198, 204–206, 210, 212, 297 object, 173, 176, 189, 190, 192, 193, 197, 198, 201, 210 off-line parsability, 213, 237, 233–239, 244, 251, 268, 273 overgeneration, 26, 187 PARGRAM, 211 parsing, xi, 5, 22, 36, 116, 164, 214, 235, 253–274, 276 part of speech, 25 Pascal, 83 path, 37, 39, 39, 39–41, 53, 55, 57, 63, 68, 78, 100, 104, 116, 118, 120, 122, 126, 132, 140, 143, 144, 149, 152, 285, 286, 290 empty, 37, 285 length, 37 value, 40, 40, 41, 43, 69, 117, 121, 121, 132, 132, 133, 285 PATHS, 37 person, 205 phonology, 2, 3 phrase, 36, 116, 186, 187, 189, 192, 197, 203 phrase structure, 30, 164, 201 pragmatics, 2 preterminal, 148, 161, 162, 170, 215, 219, 221, 231, 232, 244, 250, 253, 272, 242, precedence, 166 predicate, 204 prefix closure, 40, 56, 56, 60, 101, 102, 105, 125, 126, 137, 139 principle, 1 production, see rule Prolog, x, 84 pronoun, 2, 172, 173, 185, 186, 190, 297 proper name, 172, 297 pumping lemma, 23, 32 push-down automaton, 229 recognition, 22, 214, 235, 244, 245, 251, 253–255, 258–267, 270, 271, 276
record, 83, 84, 95 recursion, 16 reduction, 239, 241, 244 reduplication, 29 reentrancy, 41, 41–42, 45, 53, 53, 56, 63, 70, 75, 77, 80, 84, 90, 91, 94, 109, 116, 122, 123, 127, 131, 133, 134, 140, 143, 144, 147–149, 151–153, 157, 159, 165, 167, 187, 191, 193, 200, 208, 209, 217, 218, 245–252, 268, 269, 286 relative clause, 166, 185, 191–197, 267 relativizer, 193 renaming, 74, 72–74, 133, 134 repetition, 12, 13 root, 118, 118, 119, 123, 136, 161 rule, 14, 31, 36, 115–117, 148, 146–151, 153, 156, 160, 161, 164, 168–173, 176, 177, 179, 184–190, 192, 193, 202–206, 208, 209, 211, 215, 217, 220, 223, 225, 230, 232, 236, 237, 239, 241, 245, 246, 248, 254, 257, 259, 266, 272 Russian, 6, 27, 28, 174 scrambling, 30 semantics, xi, 2, 3, 25, 31, 32, 210, 214 sentence, 178–180, 185–190, 192, 193, 195, 200, 201, 203, 204, 207–210, 295 declarative, 166, 185 interrogative, 166, 186–188 set, x, 99, 100, 104, 179, 291 signature, 37, 37, 38, 65, 66, 72, 85, 117, 118, 130, 134, 146, 147, 150, 157, 167, 168 172, 215, 223, 230, 245–248 slash, 186–190, 192, 193, 196, 212 stack, 229, 236 start symbol, 14, 14, 150, 151, 161, 175, 222, 223, 230, 241, 246, 255, 273 string, 12, 30 sub-AVM, 130, 130 subcategorization, 7, 28, 31, 165, 166, 171, 174–183, 188, 197, 201, 206–208 subgraph, 120, 120 subject, 36, 116, 166, 173, 177, 180, 190, 192–195, 197, 198, 200, 201, 204, 210, 252 substitution, 114 substructure, 126, 127, 129, 133, 142, 158 subsumption, 44, 44, 42–52, 53, 54, 55, 62, 62–64, 72, 71–72, 76, 77, 85, 91, 93, 99, 105, 108, 123, 123, 124, 129, 129, 133, 134, 134, 139, 143, 144, 286, 287
subsumption morphism, 44, 45–48, 50, 51, 54, 62, 63, 92, 93, 111, 124, 286 syntax, x, 2–4, 163, 208, 275, 276 TAG, 211, 212 tag, 65, 66, 75, 116, 130, 131, 156, 224, 249 tagset, 66 tense, 198 terminal, 12, 14, 14, 31, 215, 223, 245, 246, 254, 260, 266 termination, 96, 98, 214, 229, 232, 234, 235, 238, 253–255, 260, 267, 270 TFS, see feature structure, typed trace, 193, 196 TRALE, 211, 212 transformation, 165 tree, 78, 84, 265, 266 Turing machine, 213, 227, 226–236, 238, 267, 273 type respecting relation, 87, 87–90, 92 typology, 211 unbounded dependence, see long-distance dependencies underspecification, 171 unification, x, 36, 41, 42, 85, 86, 89, 104, 83–109, 113, 114, 139, 137–146, 151, 153, 154, 164, 181, 205–207, 209, 212, 217, 219, 245, 249, 269, 270, 294 algorithm, 94–99, 114, 145 destructive, 96, 99 nondestructive, 99, 114 in context, 115, 116, 139, 137–146, 151, 157, 158, 269 union-find, 94, 114 unit-rule, 14, 232, 234, 236–239, 251, 252, 254, 255, 264, 267, 268, 270 universal recognition problem, 213, 213, 226, 233, 234, 239, 241, 274, 276 unrestricted rewriting systems, 23, 226 Urdu, 211 value, 35, 35, 36, 66 complex, 35 variable, 65, 67, 72, 76, 78, 81, 130–133, 136, 150, 173, 181, 203, 219, 270 scope, 116, 130, 133, 150, 156, 157, 175 variable association, 68, 70, 76, 130
verb, 116, 168–170, 176–181, 184–188, 192, 197, 199–201, 206–210, 252 finite, 198, 295 intransitive, 176 transitive, 176 verb phrase, 154, 165, 166, 168–170, 176–181, 186, 189, 190, 192, 197, 198, 201, 203, 204, 207, 295
Word, 12, 31, 35, 36, 146, 148, 162 W ORDS, 146 wrapping, 30 ww (repetition language), 222–225 XLE, 210, 211