Systemic Functional Grammar in Natural Language Generation
COMMUNICATION IN ARTIFICAL INTELLIGENCE SERIES
Series Edi...
49 downloads
613 Views
11MB Size
Report
This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!
Report copyright / DMCA form
Systemic Functional Grammar in Natural Language Generation
COMMUNICATION IN ARTIFICAL INTELLIGENCE SERIES
Series Editor: Robin P. Fawcett, University.
Computational Linguistics Unit,
Cardiff
Artificial Intelligence (AI) is a central aspect of Fifth Generation computing, and it is now increasingly recognized that a particularly important element of AI is communication. This series addresses current issues, emphasizing generation as well as comprehension in the AI communication. It covers communication of three types: at the human-computer interface; in computer-computer communication that simulates human interaction; and in the use of computers for machine translation to assist human-human communication. The series also gives a place to research that extends beyond language to consider other systems of communication that humans employ such as pointing, and even in due course, facial expression, body posture, etc. Recently published in the series: Evolutionary Language Understanding, Geoffrey Samspon
Systemic Functional Grammar in Natural Language Generation Linguistic Description and Computational Representation
Elke Teich
CASSELL London and New York
Cassell Wellington House, 125 Strand, London WC2R OBB 370 Lexington Avenue, New York, NY 10017-6550 First published 1999 ©ElkeTeich 1999 All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage or retrieval system, without permission in writing from the publishers. British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library. ISBN 0304 70168 8 Library of Congress Cataloging-in-Publication Data Teich, Hike, 1963Systemic functional grammar in natural language generation: linguistic description and computational representation/Hike Teich. p. cm. — (Communication in artificial intelligence) Includes bibliographical references (p. ) and index. ISBN 0-304-70168-8 (hard cover) 1. Computational linguistics. 2. Systemic grammar. 3. Dependency grammar. I. Title. II. Series. P98. T38 1997 410'.285—dc21 97-2549GIF CIP Typeset by Textype Typesetters, Cambridge Printed and bound in Great Britain by Bookcraft (Bath) Ltd, Midsomer Norton
Contents
List of Figures List of Tables Preface Acknowledgements Notational Conventions 1 Introduction 1.1 Motivation and goals 1.2 Background 1.3 Overview: the structure of this book 2 Theory and Linguistic Representation: Systemic Functional Linguistics 2.1 Introduction 2.2 Systemic functional linguistics: the theoretical perspective 2.3 The realization of theory: linguistic representation 2.3.1 The system 2.3.2 Stratification 2.3.3 Functional diversification: the metafunctions 2.3.4 Realization 2.3.5 Intermediate summary 2.4 The view of grammar in SFL 2.4.1 Hie paradigmatic axis: functional diversification in the grammar 2.4.2 The motivation of grammatical categories in the system network 2.4.3 The syntagmatic axis and its representation; realization 2.4.3.1 The Hallidayan view: modes of expression
ix xiii xiv xvi xvii 1 1 3 5 7 7 8 9 10 13 15 16 17 18 21 27 29 30
vi Systemic Functional Grammar in Natural Language Generation
2.5
2.4.3.2 Limits of linguistic representation in Hallidayan SFG 2.4.3.3 Fawcett's approach to grammar 2 A3 A Hudson's daughter-dependency grammar 2.4.3.5 Interaxial and interrank realization 2.4.4 Summary of Section 2.4 Summary and outlook
3 Computational Application: Grammar Models in Natural Language Generation 3.1 Introduction 3.2 Natural language generation 3.3 Grammars for natural language generation 3.4 A comparison of selected grammar approaches in tactical generation 3.4.1 SFL in generation: PENMAN 3.4.2 MTT in generation: Gossip 3.4.3 FUG in generation: COMET 3.4.4 TAG in generation: MUMBLE-86/SPOKESMAN 3.5 Summary: A critical appraisal of SFG for natural language generation 3.6 Outlook 4 Description: A Systemic Functional Grammar of German for Natural Language Generation 4.1 Introduction 4.2 A fragment of a computational systemic functional grammar of German 4.2.1 Rank and ranking 4.2.2 Problems with rank 4.2.3 The clause: description 4.2.3.1 Clause-complexity 4.2.3.2 Transitivity 4.2.3.3 Object-insertion 4.2.3.4 Diathesis and diathesis-transitivity gates 4.2.3.5 Circumstantial 4.2.3.6 Theme 4.2.3.7 Mood 4.2.3.8 Tense 4.2.4 Clause rank: classification of problems
33 36 39 43 47 49 51 51 53 56 60 60 69 74 77 85 88 90 90 92 92 95 97 97 100 110 111 117 120 123 125 128
Contents vii
4.3
4.2.5 The nominal group: description 4.2.5.1 Nominal-group-complexity 4.2.5.2 Nountype 4.2.5.3 Epithet 4.2.5.4 Classification 4.2.5.5 Numeration 4.2.5.6 Qualification 4.2.5.7 Determination 4.2.5.8 Nominal-person and pronoun 4.2.6 NG rank: classification of problems 4.2.7 The adjectival group 4.2.8 The adverbial group 4.2.9 The prepositional phrase: description 4.2.9.1 PPother 4.2.9.2 PPspatiotemporal 4.2.10 PP rank: classification of problems Summary and outlook
130 131 133 135 135 136 136 138 138 140 147 148 148 149 150 152 152
5 Computational Representation: A Proposal for Dependency in Systemic Functional Grammar 158 5.1 Introduction 158 5.1.1 The problems: towards a solution 158 5.1.2 Overview of Chapter 5 160 5.2 Features for grammatical representation 161 5.2.1 From features to feature structures and typed feature structures 162 5.2.2 The 'meaning' of features: feature motivation in SFGandHPSG 172 5.2.3 The 'syntax' of features: system networks and (typed) feature structures 174 5.2.4 Some desiderata for SFG 179 5.3 The notion of dependency in grammar theory 180 5.3.1 Dependency versus constituency or dependency and constituency? 182 5.3.2 Hudson's ANTG revisited: heads and features 188 5.3.3 Towards a notion of dependency for SFG 189 5.4 A proposal for a dependency systemic functional grammar 190 5.4.1 A fragment of SFG as typed feature structures 191 5.4.2 From preselection to designated type constraints 196
viii Systemic Functional Grammar in Natural Language Generation
5.5
5.4.3 Results/examples 5.4.4 Summary and conclusions Summary
6 Summary and Conclusions 6.1 The theme of this book 6.2 The train of thought in this book 6.3 HPSG and SFG revisited 6.4 Conclusions 6.5 Envoi Bibliography Index
204 204 216 220 220 221 223
225 226 228 243
Figures
1.1 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 2.10 2.11 2.12 2.13 2.14 2.15 2.16 2.17 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9
Stratification in systemic functional computational linguistic theorizing The syntax and semantics of the system A partial grammatical system of MOOD Context, semantics and grammar The English MOOD system with realization statements The multidimensionality of the SFL representation model A system of RANK Rank reflected in syntagmatic organization The experiential system of TRANSITIVITY in English The logical systems of TAXIS and TYPE OF INTERDEPENDENCE in English The textual system of THEME in English Non-formal feature motivation in DEIXIS The experiential mode of expression - constituency The logical mode of expression - interdependency The interpersonal mode of expression - prosody The textual mode of expression - pulse and prominence Fawcett's congruence network An instantiated starting structure Generation: processes and resources Processes in tactical generation Primary choices in TRANSITIVITY Primary choices in MOOD A selection of choices in THEME Input to PENMAN as SPL expression The chooser of the PROCESS-TYPE system Function structure of'Kim devours the cookies.' (clause level) Function structure of'What does Kim devour?' (clause level)
5 11 12 14 17 18 20 20 22 23 26 28 31 32 33 34 37 39 54 58 61 62 63 64
64 65 65
x Systemic Functional Grammar in Natural Language Generation 3.10 3.11 3.12 3.13 3.14 3.15 3.16 3.17
Correspondence generation model - SFL SemR of 'Kim devours the cookies.' DSyntR of Kim devours the cookies.' SSyntR of 'Kim devours the cookies.' SemR of'What does Kim devour?' DSyntR of'What does Kim devour?' SSyntR of'What does Kim devour?' FUG: a fragment of paradigmatic grammar (operative voice, transitivity) in COMET 3.18 PSFD (after unification with syntactic constraints) of 'Kim devours the cookies.' 3.19 PSFD (after unification with syntactic constraints) of 'What does Kim devour?' 3.20 Input to MUMBLE-86 3.21 A TAG representation of 'Kim devours the cookies.' 3.22 A TAG representation of 'What does Kim devour?' 3.23 Resources required in tactical generation 4.1 The top system of the grammar: RANK 4.2 Lexical and inflectional features of verbs in the word class hierarchy 4.3 The relation between clause and lexical verb 4.4 The system of CLAUSE-COMPLEXITY 4.5 The systems of TAXIS and TYPE OF INTERDEPENDENCE 4.6 Logical structure of sample text 4.1 4.7 The system of TRANSITIVITY - primary delicacy 4.8 The system of ACTION TYPE 4.9 The RELATIONAL TYPE system 4.10 The MENTAL TYPE system 4.11 The COMMUNICATIVE TYPE system 4.12 The DIRECT-OBJECT-INSERTION gate 4.13 The systems of DIATHESIS 4.14 Subsystems of passivization 4.15 Reflexive construction 4.16 The systems for Circumstantials (primary delicacy) 4.17 The system of TEMPORAL-LOCATION-PHORICITY 4.18 Subtypes of [space-location] 4.19 The THEME systems 4.20 The system of MOOD 4.21 The German TENSE systems 4.22 The system of NOMINAL-GROUP-COMPLEXITY 4.23 An example of the logical structure of the NG 4.24 Experiential structure of the NG
68 71 71 72 72 72 72 75 76 77 80 81 81 85 93 94 96 97 98 98 100 102 104 107 108 111 112 114 116 118 119 119 122 124 126 131 132 132
Figures xi
4.25 4.26 4.27 4.28 4.29 4.30 4.31 4.32 4.33 4.34 4.35 4.36 4.37 4.38 4.39 4.40 4.41 4.42 4.43 4.44 4.45 4.46 4.47 4.48 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 5.10 5.11 5.12 5.13 5.14
Some systems for noun type Gates for noun-inherent boundedness An example of Epithet An example of Classifier The systems of DEFINITENESS and QUANTITY in numeration An example of Qualification The systems of THING-TYPE and PROCESSUAL-THINGTYPE The systems of DETERMINATION for German Nominal person systems An example of a gate for pronouns Deictic - Head-Noun case concord and number agreement Lexical gender of the noun Agreement in gender Deictic - adjectival-modifier agreement A Hallidayan dependency analysis of the simplex NG Scaling of adjectives The system of ADJECTIVAL-GROUP-DECLENSION The system of ADVERBIAL-TYPE The system of ADVERBIAL-INTENSIFICATION The system of MINOR PROCESS TYPE A gate for Minorprocess Subtypes of [spatial-process] Subtypes of [motion-process] Pronominalization of Minorprocesses An example of unification GPSG A GPSG representation of syntactic structure HPSG: Nonlexical types Unification of an ID-schema (1) with the HFP and the Subcategorization Principle (2) Conflation by unification A fragment of a clause system network as type hierarchy An SFG syntagmatic structure as typed feature structure A fragment of an SFG as a type hierarchy A surface-syntactically motivated transitivity hierarchy Kim devours a cookie. Kim sings. Kim gives Jodie a cookie. Kim gives a cookie to John.
133 134 135 135 136 137 138 139 140 140 142 143 143 144 146 147 147 148 149 149 150 150 150 152 166 167 168 169 171 176 177 178 192 199 205 206 207 208
xii Systemic Functional Grammar in Natural Language Generation 5.15 5.16 5.17 5.18 5.19
Kim verschlingt einen Keks. Kimsingt. Kim gibt Jodie einen Keks. Kim verschenkt einen Keks an John. An SFG typed feature structure without dependency
209 210 211 212 213
Tables
2.1 Realization statements used in SFG 45 2.2 A comparison of approaches to realization and syntagmatic structure 48 4.1 Functional regions of the grammar (English) crossclassified by metafunction and rank 92 4.2 Functional regions of the grammar (German) crossclassified by metafunction and rank 93 4.3 Major differences between NIGEL and KOMET transitivity 101 5.1 Feature usage in SFG and HPSG 180 5.2 Zwicky's analysis 184 5.3 Hudson's analysis 185
Preface
The research presented in this book is the result of several phases of work in one of the areas I was involved in while working at the Institute for Integrated Publication and Information Systems (IPSI) of the German National Center for Information Technology (GMD) in the department KOMET (Knowledge Oriented Production of Multimedia Presentations) from 1990 to 1996. It is based on my thesis submitted at the University of the Saarland in Saarbriicken, Germany, in 1995 and has been updated and revised to take into account several readers' comments. The work presented in this book is essentially about grammar. It takes the perspective of Systemic Functional Linguistics (SFL) and is an illustration of one of the applications SFL lends itself to: computational application in natural language (NL) generation. However, this book is not only of interest to researchers and students of systemic functional linguistics. What is presented here also includes the following topics and is therefore of potential interest to more general branches of linguistic enterprise, such as computational linguistics, contrastive linguistics and grammar theory. Computational grammar models. Apart from the presentation of the application of SFL in NL generation, the more general question that is discussed in this book is the place of the grammatical component in tactical generation, on the one hand, and the computational representation of grammar(s), on the other hand. In particular, there is a comparison of a selection of grammar approaches used in tactical generation - in which the grammatical component is the major linguistic resource drawn upon - and a discussion of (typed) feature structures as a method of computational representation in the domain of grammar. Contrastive linguistics. The fragment of a Systemic Functional Grammar of German described in this book, the KOMET grammar, is
Preface xv part of a larger system for multilingual generation., the KPML system, which offers lexico-grammatical resources for various languages (English, German, Dutch, Japanese, Greek). The method with which this grammar fragment has been built up is 'transfer comparison'. The already existing descriptions of a grammar of English, the NIGEL grammar, have been used as a basis, trying to reuse them for German. Therefore, the description of the German grammar fragment presented here is contrastive in many places, pointing out differences from and commonalities with the English one as implemented in the KPML system. Grammar theory. The more general issues that are discussed here concerning grammar theory and models of grammar are the notion of dependency as a basic concept used in grammatical description and the use and usage of features for grammatical representation.
Acknowledgements
I would like to thank the following people for support at various stages of my work: Erich Steiner and Peter Erdmann for continuous encouragement over many years in numerous ways; John Bateman and Renate Henschel for being critical readers of various versions of the manuscript that served as the basis of this book; Adelheit Stein, Brigitte Grote and Melina Alexa for proof-reading; and last, but not least. Peter for trusting me to do whatever it happens to be that I am doing. All misconceptions and errors are, of course, solely mine.
Notational Conventions
displayed examples
displayed examples appear in roman type with the items in focus in typewriter-like face
non-displayed examples (clause/phrase length)
non-displayed examples, i.e. examples in running text, of clause or phrase length are given in italics
'non-displayed examples' (single items)
non-displayed single-item examples are given in 'single quotes'
technical terms
central technical terms are
given in bold face
issues in focus
bold face is also used - primarily in introductory and summarizing sections - to draw the reader's attention to the key issues under discussion
NAMES AND ABBREVIATIONS
names of systems and software components, such as PENMAN, and standard abbreviations of theories, such as SFG, are given in small capitals.
The notational conventions for representations internal to particular computational systems or theories are given as notes in those places in the text in which they occur for the first time.
This page intentionally left blank
1 Introduction
Systemic theory is more like language itself- a system whose stability lies in its variation. (Halliday 1985a: 7)
1.1 Motivation and Goals The research presented here is primarily a contribution to the theory and application of Systemic Functional Linguistics (SFL) as proposed originally by Halliday (1961) and rooted in Malinowski (1935) and Firth (1957). More particularly, it is an illustration of computational linguistic theorizing in SFL. The contribution to SFL consists of two aspects: 1. The description, specification and computational implementation of a Systemic Functional Grammar (SFG) fragment of German for the purpose of natural language (NL) generation; 2. A re-examination of SFL theory in the domain of the logical metafunction based on the insights gained from computational application. The first is essentially motivated by application to multilingual generation (see, for instance, Kittredge et al. 1988; Delin et al. 1994; Teich et al. 1996) within the KOMET-PENMAN framework (Teich et al. 1996; Bateman 1997), whose kernel is the PENMAN system (Penman Project 1989), a sentence generator for English. Here, I present the description of a fragment of the grammar of German that has been built up by transfer comparison to the NIGEL grammar of English (Matthiessen 1992). This enlarges the set of languages for which there exist computational descriptions in systemic functional terms (Mann and Matthiessen 1983; Bateman et al. 1987; Fawcett and Tucker 1989; Teich 1991, 1992; Zeng 1993; Degand 1993; Grote 1994). The motivation of the second aspect is the following. The
2 Systemic Functional Grammar in Natural Language Generation major attractiveness of SFL lies in its holistic view of language, a view that is ultimately social, considering language as a social semiotic, as the means of establishing and maintaining social relations: We should like to be able to account for grammatical phenomena by reference to social contexts whenever we can, in order to throw some light on why the grammar of language is as it is. The more we are able to relate the options in grammatical systems to meaning potential in the social contexts and behavioral settings, the more insights we shall gain into the nature of the language system, since it is in the service of such contexts and settings that language has evolved. (Halliday 1973: 77)
Given this contextual orientation, SFL considers language a resource or a meaning potential that a speaker has available to fulfil communicative goals. In this view, language is a system of paradigmatic relations that accounts for the potential of linguistic expression. Describing languages as systems is one of the primary tasks of linguistic investigation. Moreover, based on the abovementioned inclusion of context in the realm of linguistics, it is maintained that linguistic expression is primarily in texts, and not in isolated sentences. The task of the linguist is then to relate the options of the linguistics system to the contexts in which they can occur and describe how they form texts. With the research priorities set this way, the paradigmatics of languages have been described in very fine detail in SFL. However, the complementary axis of linguistic description, syntagmatic relations (or structure), has not been worked out to the same level of detail or sophistication. This may not be a problem in applications of SFL that do not require a high degree of explicitness. However, in computational applications, this *syntagmatic gap* manifests itself quite clearly. The difficulty we have encountered using the PENMAN system as a grammarian's workbench can be briefly described as follows. There is a general lack of representational means for making generalizations about grammatical units and their internal morphosyntactic properties. This is partly due to the 'syntagmatic gap' we pointed to above, and partly due to a lag of implementation behind theory (cf. Matthiessen 1988a). Consequently, my goal here has been to try both to reduce this lag and to bridge this gap, moving towards 'more-theory-in-implementation'. I will argue that the syntagmatic gap lies more precisely in the common systemic interpretation of the logical metafunction,1 and I will suggest a reinterpretation of logical information as encoding dependency relations, drawing on other
Introduction 3 grammar theories that use dependency as a central organizing concept. It is with the help of this reinterpretation of logical organization as dependency that the desired generalizations over syntagmatic structure(s) can then be achieved. Proceeding this way it has been necessary not only to re-examine the theory, but also to reconsider the means of representation it makes available. Here, it has been found that these means are not sufficient to express the general dependency relations we envisage. Therefore, other computational-linguistic representation languages have been consulted. One such language is typed feature structures - currently one of the most widely used formalisms in computational linguistics, e.g. TFS (Emele and Zajac 1990), CUF (Dorre et al. 1996), ALE (Carpenter and Penn 1994) and TDL (Krieger and Schafer 1993). The computational application thus meant going back to the theory and feeding back to it the insights gained from the application. Proceeding this way is a general practice in SFL: the notion underlying this kind of methodology is that of linguistic theory as metasemiosis. Since this notion acts as the general background against which the work presented here must be seen, a brief explanation of it is required (Section 1.2) before giving an overview of the organization of this book (Section 1.3). 1.2 Background Systemic functional theory brings forth two convictions about language that build the thematic thread along which the research presented here is organized: • language varies as its function varies; • language is a semiotic system and a semiotic process that construes meaning. We can conceive of metalanguage, i.e. of the representational categories set up to express our linguistic findings, in the same way. Metalanguage also varies according to use, giving rise to 'metaregisters', as it were, and metalanguage is a semiotic system and a semiotic process that goes through cycles of semiosis. This analogy has been proposed for computational systemic theorizing by (Bateman et al. (1994)) and elaborated on as follows. Just as SFL considers the linguistic system as a resource organized by stratification, with the relation of realization between strata (see Chapter 2) - in which a lower stratum is a generalization over a
4 Systemic Functional Grammar in Natural Language Generation higher stratum and a higher stratum constitutes an abstraction over a lower stratum - so metalanguage can be considered a stratified resource in this sense. For computational SFL, we can assume the following strata (as shown in Fig. 1.1, taken from Bateman et al. 1994: 14). The highest, most abstract, metastratum is that of theoretical perspective. Here, we find information about general linguistic-theoretical concepts, the general theoretical stance taken by the theory and its assumptions about the nature of language. In SFL, this metagrammar includes notions such as 'language is a social semiotic', 'language is a resource', 'language fulfils multiple functions'. At the lower adjacent metastratum, linguistic representation, the theoretical concepts are realized or receded in a metagrammar, as it were, thus giving the user (the linguist) an instrument for making linguistic descriptions. In SFL, these are stratification, axial organization, the system network as receding the notion of resource, interstratal and interaxial realization, etc. At the next stratum, computational representation, linguistic representation is recast in explicitly computational terms. Since SFL is not explicitly designed for computational application, there is no tradition to speak of in investigating the level of explicit computational representation. In PENMAN, for instance, this metastratum is almost collapsed with the lowest one, implementation. It is only recently that explicitly computational representation (such as (typed) feature structures) has been experimented with for SFL (Winograd 1983; Patten 1988; Kasper 1988, 1989b; O'Donnell and Kasper 1991; Bateman et al. 1992; Henschel 1994).2 Finally, at the lowest stratum, implementation, we deal with receding in programming languages, such as Lisp or Prolog. Here, as Bateman et al. (1994) point out, we cross a line of arbitrariness - the relation between the other strata is not arbitrary, but one of realization (generalization/abstraction) similar to the one between the grammatical stratum and phonology. Now, the way the 'metaregister' of computational systemic linguistic theorizing is construed is by moving through these strata, from theory to the realization of theory in linguistic representation and to computational representation and implementation. Here, again, it is possible to draw an analogy with the linguistic-descriptive work of accounting for a register, i.e. a variation according to use, of natural language. In the description of some register of a natural language, one draws on the general resource, and extends it according to the registerial variation - and continuously feeds this back to the general resource. Similarly, with a metaregister, one uses the general theoretical resource for a particular purpose, and
Introduction 5
Figure 1.1 Stratification in systemic functional computational linguistic theorizing
possibly finds that it must be extended and adjusted according to the requirements of the context in which it is used. In the context we are concerned with here, computational application, one of the requirements placed on grammar specification in the PENMAN system is a reconsideration of the status of syntagmatic relations in general, and the representation of logical relations in particular (see Section 1.1). Thus, feeding back to the theory what its applications have revealed to be lacking or underrepresented is the common practice of SFL: proceeding in cycles of metasemiosis lies in the spirit of SFL. The metalinguistic system is therefore not stable and it must not be stable because it has continuously to accommodate users' needs. It is against this background that the research presented here should be seen, as an illustration of metasemiosis in computational systemic linguistic theorizing. 1.3 Overview: The Structure of This Book Given the background just described, against which the research presented here is placed, the themes this book is organized around are theory and linguistic representation (Chapter 2), computational application (Chapter 3), description for the purpose of NL generation (Chapter 4) - illustrating one cycle through metasemiosis and feedback to the theory, including explicit computational representation (Chapter 5), going through another cycle of metasemiosis.3 Chapter 2 provides a short introduction to SFL with special focus
6 Systemic Functional Grammar in Natural Language Generation on grammar, its place in the overall linguistic system and its internal organization. Particular attention is paid to the mapping from paradigmatic to syntagmatic structuring and to syntagmatic structure itself. Chapter 3 deals with the application of SFL in natural language processing (NLP), more particularly in NL generation, and focuses on the place of grammar in generation systems. We discuss a number of different approaches to grammatical generation, showing SFG'S strengths and weaknesses in this context of use. This includes a brief description of the PENMAN system for NL generation, with its grammar of English, NIGEL. In Chapter 4, a NiGEL-style grammar of German for NL generation, the KOMET grammar, is presented and the specific representational problems encountered in the specificational and implementational work are discussed. In Chapter 5 we show one possible method of bridging the 'syntagmatic gap' we have encountered and propose an enhancement of computational SFG that is formulated in explicitly computational terms, namely in typed feature structures. Chapter 6 provides a summary, discusses some of the repercussions of the proposal made in Chapter 5 for SFL theory, and concludes with an outlook on current activities in computational SFL. With respect to the perspective taken in the description of linguistic phenomena, it is only natural in the functional view of SFL that it will be oriented towards semantics, i.e. in giving descriptions of grammatical distinctions, there will be mention of the functional grounds that underlie them. In keeping with the systemic tradition of looking at language in context, most of the examples provided throughout this book are taken from 'real' text. Notes 1. See Chapter 2 for an account of the metafunctions assumed by SFL. 2. This is in contrast to the steady involvement in computational representation of, for instance, Functional Unification Grammar (FUG) (Kay 1979, 1985), Lexical Functional Grammar (LFG) (Bresnan 1982), Generalized Phrase Structure Grammar (GPSG) (Gazdar et al. 1985) or Head-driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1987, 1994), in all of which computational methods of representation with a sound mathematical basis (unification, feature logics, etc.) are built in. This does not apply to SFL, which has not originally been developed with a viewpoint to computational processing. 3. The lowest metastratum of implementation has not been considered here because - as noted above - it has a rather arbitrary relation to the higher metastrata and is linguistically not relevant.
2 Theory and Linguistic Representation: Systemic Functional Linguistics
Halliday's theory is referred to as 'systemic grammar' or 'systemic linguistics', since the grammar of a language . . . is envisaged as a highly complex and delicate set of systems of options, some sequentially ordered, some simultaneous, through which one must (figuratively) move in framing an utterance and in terms of which as a hearer one must interpret an utterance. These interrelated networks of choices . . . are presumed to have taken the form they have, in all languages, in order that speakers and hearers can make use of their language to meet their requirements as determined by the general human situation and by their own particular culture. In his theory, Halliday . . . professes, as the central objective of his theory of language, to help answer the question: 'Why is language as it is?'(Robins 1967:245)
2.1 Introduction The purpose of this chapter is to provide the theoretical background for the topic of this book, which is the reconsideration of logical information in Systemic Functional Grammar (SFG) and its linguistic and computational representation for the purpose of specifying some general constraints on syntagmatic structure. The metastrata of theoretical perspective and linguistic representation of SFL are introduced. This embraces a presentation of the theoretical underpinnings of SFL theory (Section 2.2), an introduction of the representational devices set up according to the theoretical concepts and an illustration of the kinds of linguistic descriptions that are made possible in this way (Section 2.3). In terms of the theoretical perspective, the following questions have to be asked: • What kind of phenomenon is language considered in the theory? • What are the questions the theory asks accordingly?
8 Systemic Functional Grammar in Natural Language Generation • What are the theoretical concepts that the theory provides to answer these questions? The metastratum of linguistic representation then figures as the realization of the theoretical concepts for the purpose of making descriptions of language(s). Here, the following questions arise: • What kinds of representational devices are set up according to the theoretical concepts • What kinds of linguistic descriptions do these devices make possible? The main concern is with the metastratum of linguistic representation at the stratum of grammar, with grammatical paradigmatic relations encoded in system networks for all of three metafunctions, interaxial realization statements mapping paradigmatic choices to syntagmatic structures and the representation of syntagmatic relations itself (Section 2.4). 2.2 Systemic Functional Linguistics: The Theoretical Perspective SFL is a theory of language rooted in anthropology (Malinowski 1935). The earliest formulation of SFL as a linguistic theory dates back to Firth (1957) and it has been developed further, notably by Halliday (e.g. Halliday (1961, 1963, 1967, 1968, 1973, 1978)). The crucial characteristic of SFL is its orientation outside linguistics towards sociology. This orientation brings with it a view of language as a social semiotic (Halliday 1978): we can only learn about how language works if we consider the way it is used in particular contexts, both cultural and situational. Essentially, SFL advocates a view of language as a means of doing. In this sense, language provides a (linguistic) behaviour potential which is ultimately defined by the context of culture. Using language is choice among the linguistic possibilities determined by the context of culture in a particular context of situation. Language is thus considered primarily a social resource with which speakers and hearers can act meaningfully. Hence, the central question linguistic investigation is concerned with in SFL is: how is language organized to convey meaning? 'Meaning' in the systemic functional sense is considered to be construed by the linguistic behaviour potential, i.e. by language itself:
Theory and Linguistic Representation 9 Semantics is what he [the speaker] CAN mean and we are looking at this as the realization of what he DOES.' (Halliday 1973: 74)
And, crucially, meaning derives from function in use or function in context: It is a general feature of semiotic systems that they develop and function in a context, and that meaning is a product of the relationship between the system and its environment. (Halliday 1985b: 10)
Function, in turn, has various aspects that are simultaneously fulfilled whenever language is used: Whatever we are using language for, we need to make some reference to the categories of our experience; we need to take on some role of the interpersonal situation; and we need to embody these in the form of text.' (Halliday 1974: 49-50
All of these three functions, called metafunctions, simultaneously relevant. They are of equal status:
are
The speaker does not first decide to express some content and then go on to decide what sort of message to build out of i t . . . Speech acts involve planning that is continuous and simultaneous in respect to all the functions of language. (Halliday 1970: 145)
The major theoretical concepts that follow from SFL'S view on language and from the central question governing all linguistic investigation are: • • • •
language is a behaviour potential; language construes meaning; language is multifunctional; using language is choice in the potential and ultimately actualization of the potential.
2.3 The Realization of Theory: Linguistic Representation How are the theoretical notions realized in terms of linguistic representation? The various attributes associated with language as potential (meaning, multifunctionality, choice) call for a corresponding diversity of notions and categories at the level of linguistic representation:
10 Systemic Functional Grammar in Natural Language Generation • the concept of potential is realized by the system, which also supports the theoretical notion of choice; • the construal of meaning by language is notably realized by stratification and the realization relation between strata; • multifunctionality is realized by establishing functional diversification through metafunctions; • actualization of the potential is realized by the notion of realization. A brief description of these representational constructs follows. 2.3.1 The system The system is the general category that realizes 'language as potential'. It is the basic means of representation of paradigmatic relations, encoding the linguistic options that are available in certain specifiable contexts. The early conception of systems for grammatical description was as single sets of choices available at particular places in structure (see Halliday 1961); later, they were conceived of as sets of systems, system networks, operating with a particular rank of grammatical unit as domain (see Butler 1985). A system network minimally consists of an entry or input condition (a term from another system) and output terms. The output terms present mutually exclusive and exhaustive options, i.e. they are a finite set of which only one can be selected. See (1) in Fig. 2.1, which gives the basic syntax and semantics of systems (derived from Halliday and Matthiessen 1997). In terms of how system networks organize information, they constitute a special kind of classification device. The special property compared to simple classification is the possibility of cross-classification in addition to sub-classification. Crossclassification can be expressed by simultaneous systems (see (4) in Fig. 2.1). Further, system networks allow conjunctive and disjunctive entry conditions, i.e. a term of a system can have a conjunction of types as supertype and it can be a subtype of more than one supertype, where a type inherits from all its supertypes (see (2) and (3) in Fig. 2.1). In this respect, system networks are like multiple inheritance hierarchies.1 Simultaneity and compound (conjunctive and disjunctive) entry conditions are not sufficient for describing all possible linguistic phenomena. There may be exceptions to the general symmetry of linguistic paradigms. For example, with two simultaneous systems, a feature of one system may be constrained to co-occur with only one
Theory and Linguistic Representation 11
system if 'a', then 'x' or 'y' (abbreviated as 'a:x / y') disjunction in entry condition if 'a / b', then 'x / y' conjunction in entry condition if 'a' and 'b', (abbreviated as 'a & b') then 'x / y' simultaneity if 'a', then simultaneously 'x / y' and 'm / n'
delicacy ordering if 'a', then 'x / y'; if 'x', then 'm / n' conditional marking if 'x', then also 'm'
gate (one choice only) if 'x' and T, then 'm' recursive system (logical) if 'a', then 'x / y' and simultaneous option of entering and selecting from the same system again; '||' = stop
Figure 2.1 The syntax and semantics of the system
12 Systemic Functional Grammar in Natural Language Generation of the features of the other system. In such a case, it must be possible to mark this constraint. One way of doing this is conditional marking (see (6) in Fig. 2.1, where x and m are marked to cooccur). Another way of expressing such a co-occurrence constraint is by a gate (see (7) in Fig. 2.1, where an additional feature, f, is introduced as a placeholder, as it were, for m). An additional requirement on representation by the system is the possibility of expressing recursion: some linguistic paradigms, such as the English tense system, are recursive (Halliday 1976), the same option being available more than once (see (8) in Fig. 2.1 for the notation for recursive systems). In linguistic terms, the system allows for the encoding of the paradigmatic relations holding in a language. For exemplification, let us consider one of the systems of the grammar of the clause, the MOOD system (see Fig. 2.2). The mood of the clause can be either [indicative] or [imperative]. If it is [indicative], it can be either [interrogative] or [declarative]. We speak of the choice between [interrogative] and [declarative], and of [indicative] and [declarative] being ordered in delicacy, the ever finer discrimination of choices (see (5) in Fig. 2.1).2 A system network is in general a declarative representation of all the conceivable restrictions on co-occurrence of a particular set of terms or features which characterize the domain for which they apply - be this domain semantic, grammatical or lexical.3 All paradigmatic relations holding in a language can be represented as systems in this way. Therefore, the system as representation device both supports the view of language in use as choice and reflects the concept of language as a potential.
Figure 2.2 A partial grammatical system of MOOD
Theory and Linguistic Representation 13
2.3.2 Stratification
It is the stratum of grammar that is commonly considered the core of the linguistic system. In SFL, however, grammar is not the only resource that is linguistically relevant. Given SFL'S orientation to context and the notion of linguistic choice as being determined by context, the context in which language is used also has linguistic relevance. Accordingly, there must be a way of relating context to the actual linguistic resources, such as grammar. This relation is defined by means of stratification. It is the task of the semantic stratum to act as the interpreter of context so that context can be expressed linguistically: If we are to relate the notion of 'can do' to the sentences and words and phrases that the speaker is able to construct in his language - then we need an intermediary step, where the behaviour potential is as it were converted into linguistic potential. This is the concept of what the speaker 'can mean'. (Halliday 1973: 51)
Overall, the following strata are set up in SFL: SUBSTANCE Sounds
(interlevel)
FORM
(interlevel)
CONTEXT
Phonology
Grammar
Semantics
Situation
The areas of linguistic investigation proper are semantics, grammar and phonology (where grammar includes morphology and lexis). Semantics is seen as an 'interlevel' in the sense pointed to above: it mediates between context and form (Halliday 1973: 75). The same is true of phonology, which mediates between form and substance. The relation between the strata is a many-to-many relation rather than an isomorphism. For illustration of the relation between context, semantics and grammar let us take a simplified example. Fig. 2.3 shows the stratification of the linguistic system into a stratum of context, a stratum of semantics and a stratum of grammar.4 The context is represented as context of situation, with the three parameters of field, tenor and mode (see Halliday 1978). A brief explanation of these contextual parameters is given in Section 2.3.3 below. Given a setting of the parameter of tenor as 'interactive' (as, for example, in a dialogue), in terms of speech function semantics (see
14 Systemic Functional Grammar in Natural Language Generation
Figure 2.3 Context, semantics and grammar Martin 1992)) the features [exchanging: demanding & information] could be chosen for a move in an exchange (i.e. in a part of a dialogue), which can be reflected in the grammar in the selection of [interrogative] in the MOOD system. We say that the categories of a lower stratum realize the categories of a higher stratum; for example, the grammatical category [interrogative] realizes the semantic features [exchanging: demanding&information]. Stratification is crucial in distributing descriptive responsibilities and in contributing to the expression of the theoretical concept of the 'construal of meaning' by the whole linguistic system. For instance, a semantic speech function characterized by the features [exchanging: demanding&information] does not have to be expressed grammatically as [interrogative] - it may also be expressed as [declarative] (e.g. in certain sublanguages or registers, such as the language of instructions). The relation between strata is potentially many-to-many. On the one hand, this leaves a lot of flexibility in the mapping between any two strata; on the other hand, there is a general principle of feature motivation in that the categories at any
Theory and Linguistic Representation 15 one stratum are motivated by the adjacent higher stratum. Generally, the categories of grammar are motivated by semantics, and the categories of semantics are motivated by the stratum of context. In this sense, the relation between strata is 'natural' (rather than arbitrary) (Halliday and Matthiessen 1997). Uniformity of representation across strata is guaranteed by the fact that at any one stratum the potential is represented as system networks, as has been described in Section 2.3.1. 2.3.3 Functional diversification: the metafunctions The functional diversity of language is acknowledged in SFL by the metafunctional hypothesis: When we examine the meaning potential of the language itself, we find that the vast number of options embodied in it combine into a very few relatively independent 'networks'; and these networks of options correspond to certain basic functions of language. (Halliday 1970: 142) These 'certain basic functions' are the three metafunctions: the ideational, the interpersonal and the textual. The ideational metafunction is concerned with 'the speaker's experience of the real world' (Halliday 1970: 143). Within the ideational, there is a subdivision into experiential and logical. The experiential refers to prepositional content encoded as processes, events, the participants therein and the accompanying circumstances, the types of objects referred to and their qualities. The logical refers to some general organizing relations expressed, for instance, by dependencies between elements in structure (e.g. hypotactic versus paratactic organization). The interpersonal metafunction 'serves to establish and maintain social relations' (Halliday 1970: 143), including a speaker's assessment of the probability and relevance of a message. The textual metafunction 'enables the speaker or writer to construct texts' (Halliday 1970: 143). It is concerned with establishing coherence and cohesion in texts. All three metafunctions are of equal status; none is more important than any other. They are simultaneously relevant at any stratum of the linguistic system. For example, at the stratum of grammar, at clause level, functional diversity is reflected in the systems of TRANSITIVITY (ideational), MOOD (interpersonal) and THEME (textual).5 MOOD we have seen an exemplary system of in Section 2.3.1: it is concerned with the grammaticization of speech
16 Systemic Functional Grammar in Natural Language Generation function. TRANSITIVITY is concerned with the process type encoded in a clause and the participants involved, and THEME is concerned with the potential of placing certain elements in theme position, the 'point of departure' (Halliday 1985a: 39) of the clause. Grammar is not the only stratum that is seen as metafunctionally organized. The strata of phonology, semantics and context also have a metafunctional organization. For example, the stratum of context shows reflexes of the metafunctional split in the representation of situation types that are defined at this level. The three relevant parameters defining situation types are field, tenor and mode. Field is concerned with what is going on in a situation (including subject matter), tenor reflects interpersonal relations and the roles participants adopt and mode refers to the status assigned to the text, including the medium and channel of communication. The particular settings of these parameters and the specific linguistic constraints that emanate from this define a register, i.e. a variety of language according to use. With the metafunctions in place like this, the 'semiotic space' of a language and of the situational contexts in which it can be used (i.e. all the resources that are linguistically relevant) adhere to the same functional principle of organization. 2.3.4 Realization So far only one kind of realization assumed in SFL has been referred to, namely interstratal realization, as the relation that holds between strata. There is another kind of realization relation: the one that holds between axes, between paradigmatic and syntagmatic patterning, called interaxial realization. Just as the categories at a higher stratum are realized by categories at a lower stratum, so are paradigmatic categories (represented by features in the system network) realized by syntagmatic categories. Paradigmatic features of, for instance, the grammar are realized syntagmatically in function structures. These are constituency structures specifying functional elements, such as Subject, Actor, Process and the like. The relation between features and structures is specified by realization statements of insertion of functional elements in the structure, their conflation into constituents and their linear ordering, and the realization of features in terms of syntactic category is effected by preselection. The realization operations are given with the features in the system network, thus acting as
Theory and Linguistic Representation 17
constraints on syntagmatic structure. For an example, let us consider the grammatical system of MOOD again. Fig. 2.4 shows the MOOD system for English including realization statements. For instance, the feature [indicative] has an associated realization statement ' +Subject' meaning 'insert Subject', the feature [declarative] has an associated realization statement 'Subject A Finite', meaning 'order Subject before Finite'. These give rise to a partial syntagmatic structure, a function structure: Subject
Finite
Similar partial function structures result from other systems, from TRANSITIVITY, from THEME - in principle from each metafunction. These taken together define a complete function structure of a unit (a clause, a nominal group (NG), a prepositional phrase (PP) etc.). Separate realization statements then specify linear ordering within a unit. Realization, as we have exemplified it here for the grammatical stratum, is the basic relation that holds within the resources of any one stratum, between the paradigmatic and the syntagmatic axis. 2.3.5 Intermediate summary The system, the strata, the metafunctions and the notion of realization taken together define the linguistic representational potential for SFL according to its theoretical assumptions about what
Figure 2.4 The English MOOD system with realization statements
18 systemic functional grammar in natural language generation
Figure 2.5 The multidimensionality of the SFL representation model
language is and what a theory of language should be concerned with. The representational system may seem rather extravagant, having not only one, but several, dimensions (see Fig. 2.5). However, it is bound to be this way because the theory is extravagant, and this, in turn, is because language is complex (see Halliday and Matthiessen 1997). The remaining sections of this chapter look at the stratum of grammar more closely to see what kinds of linguistic descriptions the representational devices of SFL allow us to make (or not to make). For illustration, examples are taken from the grammar of English as described in Matthiessen (1995). 2.4 The View of Grammar in SFL Just as it holds for the overall linguistic theory that its shape is determined by the goals set, the shape a grammatical model takes is dependent on what are considered the goals of grammatical description. In general, the goal of SFG is to account for the appropriate grammatical structure according to a given context. In SFG, a grammatical description is an account of the grammar of a language as the grammatical potential available to a language user.6 The stratum of grammar enjoys the same properties as the other strata do: • there are two axes, the paradigmatic and the syntagmatic; • the system or system network is the device for representing paradigmatic relations;
Theory and Linguistic Representation 19 • the grammar is functionally diversified in ideational, interpersonal and textual metafunctions; • grammatical categories are primarily functionally motivated; • between axes, there is the relation of (interaxial) realization. There are, however, a number of grammar-specific characteristics. One is related to the interpretation of lexis as most delicate grammar, another concerns the rank hypothesis and the third is related to syntagmatic relations and their representation. The SFG-specific interpretation of 'lexis as most delicate grammar' (see Hasan 1987) is the reason why often the grammar in SFL is called lexico-grammar. This interpretation rests on the assumption that the finer discriminated systems become, the more lexical rather than grammatical distinctions are made. The relation between grammar and lexis is thus considered a cline, rather than grammar and lexis being two distinct kinds of linguistic patterning. The second grammar-specific representational construct, the rank scale, defines the types of linguistic units used in the grammar. In general, it serves two purposes. First, the ranks are singled out for an effective description of the paradigmatic relations pertaining to the grammatical units of a language (Halliday 1966), going on the assumption that every grammatical unit has a set of grammatical features that is disjoint from the set of features of the other units (see Fig. 2.6). This is referred to as polysystemicity. Second, the rank scale constitutes a hypothesis about syntagmatic organization in terms of constituency relations (see Fig. 2.7). The rank scale is comparable to the bar levels in X-bar syntax (Chomsky 1970; Jackendoff 1977) in that it is a postulation of what syntactic structure looks like; it incorporates, however, very different claims about the actual shape of the constituent structure (see Chapter 5 for a discussion). The highest rank in the grammar is the clause, the next lower one is the one of groups and phrases, the next lower one is word rank, the lowest one is morpheme rank. The general relation between units of ranks is the consist-of relation. The units of a rank themselves are characterized by the categories of their heads, so that, for instance, on groups/phrases rank there are nominal group and prepositional phrase as units. The rank scale is often represented in the grammar as the initial system, i.e. the system at primary delicacy (see again Fig. 2.6), thus defining the kind of population or domain for which a particular set of features holds.
20 Systemic Functional Grammar in Natural Language Generation
Figure 2.6 A system of RANK
Figure 2.7 Rank reflected in syntagmatic organization
The third characteristic specific to the level of grammar concerns the representation of syntagmatic relations. Syntagmatic organization in SFG is usually represented as a constituency tree, where the underlying branching is maximal, resulting in a rather flat tree structure in which the nodes are functionally labelled (as opposed to, for example, an X-bar syntax (Jackendoff 1977), where the branching is binary, resulting in rather deep constituency trees, where grammatical functions are implicitly given by a constituent's place in the tree). However, the function structure is not the only mode of representation on the syntagmatic axis. SFL maintains that each metafunction gives rise to its own mode of expression. The experiential metafunction is syntagmatically expressed as multivariate structure, a kind of structure that has unique elements. This is the function structure. The logical gives rise to potentially recursive structures that are essentially dependency structures. The interpersonal gives rise to wave-like structures, and the textual to pulse-like structures (Halliday 1979; Matthiessen 1990).7 The following sections further describing the representational means at the stratum of grammar focus on the principles of organization of the grammatical system network, the realization of paradigmatic categories in syntagmatic ones and the representation
Theory and Linguistic Representation 21 of syntagmatic structure - because these define the kinds of grammatical descriptions that can be made. Section 2.4.1 provides an illustration of paradigmatic grammar and shows how it can be used to describe some of the major systems of English. More details are then given about the motivation of features in the grammar's system network (Section 2.4.2). Section 2.4.3 discusses the means SFG makes available to describe syntagmatic structure and its relation to paradigmatic choice, pointing out some views on grammar alternative to the Hallidayan one and presenting the representational construct of realization. Section 2.4.4 concludes this part of the chapter with a summary.
2.4.1 The paradigmatic axis: functional diversification in the grammar
Analagous to the organization of the strata of semantics, phonology and context, the stratum of grammar in the systemic functional model is organized in terms of the three metafunctions of ideational (comprising experiential and logical), interpersonal and textual. As pointed out earlier, all three are of equal status; none is subordinated or prior to any of the others. However, the textual metafunction is often referred to as a 'second order resource' of the grammar, because it acts on the other two - which are often referred to as 'first order resources' (Bateman and Matthiessen 1993; Halliday and Matthiessen 1997) - to make text. Let us consider each of the three metafunctions, providing examples of the kinds of grammatical analyses that can be made with them. The ideational metafunction is defined in reference to grammar as follows: that part of the grammar concerned with the expression of experience, including both the processes within and beyond the self - the phenomena of the external world and those of consciousness - and the logical relations deducible from them. (Halliday 1973: 91) The experiential part of the grammar ultimately reflects the field parameter of the situational context. Semantically, the experiential encodes a language's prepositional content. At clause level, the experiential metafunction is most notably reflected in configurations of processes and the participants therein, called the system of TRANSITIVITY. Fig. 2.8 displays a TRANSITIVITY network for English at the first levels of delicacy.8 Other systems in the experiential part include tense, and temporal and causal circumstantials. Structurally,
22 Systemic Functional Grammar in Natural Language Generation
Figure 2.8 The experiential system of TRANSITIVITY in English
experiential systems give rise to constituent structures unordered in linear sequence. The parallel classification in AGENCY and PROCESS TYPE (Fig. 2.8) reflects two models of grammatical patterning of the English transitivity system (Halliday 1985a).9 AGENCY cuts across all process types and is concerned with external causation (effective = externally caused; middle = not externally caused). This provides for the ergative view of English transitivity. The process type view essentially reflects the option of a process of extending to a Goal, accounting for the transitive model of English transitivity.10 An example will serve to illustrate some of the possible choices in transitivity: (2.1)
(2.1.1 a) The north and south winds met (2.Lib) where the house stood, (2.Lie) and made it the exact centre of the cyclone. (2.1.2a) In the middle of a cyclone the air is generally still, (2.1.2.b) but the great pressure of the wind on every side of the house raised it up higher and higher, [. . .]; (2.1.3a) and there it remained (2.1.3b) and was carried miles and miles away (2.1.3c) as easily as you could carry a feather.11
In (2.1.1 a), the process type is [material], agency is [middle]. In (2.Lib), the process type is [relational] and agency is [middle]. In (2.Lie) transitivity is [relational&effective] and [ranged], i.e. a third participant is involved, a Range ('the exact centre . . .'), which takes
Theory and Linguistic Representation 23
the form of an object complement. The process type in (2.1.2a) is [relational] involving an Attribuend and an Attribute realized as a predicative adjective, and agency is [middle]. In (2.1.2b) transitivity is [material&effective]. (2.1.3a) has the transitivity features [relational&middle], like (2. Lib); and in (2.1.3b) and (2.1.3c) the process type is [material] and agency is [effective], i.e. the process extends to a Goal. The TRANSITIVITY system in the grammar thus describes what is known in other theories as semantic relations, deep cases or thetaroles. Together with systems for circumstantials (adjuncts), for tense, for noun-type (e.g. common noun versus proper noun), for types of prepositional phrase (expressing, for example, location in time or space), etc., it describes the experiential part of the ideational metafunction. The logical part of the ideational metafunction represents what is encoded in the grammar as complex units that are hypotactically and paratactically related. Further, units related in this way are said to be in a relation of interdependence, where one element in the relation is dependent on another element in the relation and vice versa. What distinguishes the logical metafunction from all the other metafunctions is that its systems can be linearly recursive: hypotactically or paratactically related linguistic units can potentially be so connected ad infinitum. Fig. 2.9 shows the least delicate distinctions in the logical metafunction for English clauses. Let us look at an example to illustrate this system.
Figure 2.9 The logical systems of TAXIS and TYPE OF INTERDEPENDENCE in English
24 Systemic Functional Grammar in Natural Language Generation (2.2)
(2.2.la) It is bad enough (2.2.1b) when parents treat ordinary children (2.2.1c) as though they were scabs and bunions, (2.2.Id) but it becomes somehow a lot worse (2.2.1e) when the child in question is extraordinary, (2.2.If) and by that I mean sensitive and brilliant. (2.2.2a) Matilda was both of these things, (2.2.2b) but above all she was brilliant. (2.2.3a) Her mind was so nimble (2.2.3b) and she was so quick to learn that her ability should have been obvious even to the most half-witted of parents.12
The conjunctions, the binders and linkers 'when', 'but' and 'and', signal the types of interdependence relations involved in this text. (2.2.1) to (2.2.3) are all clause complexes. The relation between (2.2. la) and (2.2.1b) is a [hypotactic] one and in terms of TYPE OF INTERDEPENDENCE is [enhancing]; so is the relation between (2.2.Ib) and (2.2.Ic). The relation between (2.2.1a-2.2.1c) and (2.2. Id) is [paratactic] and [extending] in terms of interdependence; so are the relations between (2.2.2a) and (2.2.2b), between (2.2.1e) and (2.2.1f) and between (2.2.3a) and (2.2.3b).13 The systems of TYPE OF INTERDEPENDENCE and TAXIS describe the major reflex of logical relations holding in complex grammatical units. Together with the experiential systems, they define the ideational grammatical potential of a language. The interpersonal metafunction encodes speakers' attitudes and evaluations and relates to the contextual parameter of tenor. One of Halliday's definitions of the interpersonal metafunction in relation to grammar is the following: the grammar of personal participation; it expresses the speaker's role in the speech situation, his personal commitment and his interaction with others. (Halliday 1973: 91) One of the typical characteristics of the interpersonal metafunction is that it relates to choices that repeatedly and at different places affect the structure of a grammatical unit (Halliday and Matthiessen 1997). An example of this kind of grammatical patterning is Subject-Finite agreement. The major grammatical systems reflecting interpersonal information are MOOD and MODALITY. In Fig. 2.4 we have shown the system of MOOD. Sample text 2.3 illustrates all the basic MOOD options: [declarative], [interrogative] and [imperative]. (2.3.1), (2.3.4), (2.3.5) (2.3.7) are all [declarative], (2.3.6.), (2.3.8) and (2.3.9) are
Theory and Linguistic Representation 25 examples of [interrogative] and (2.3.3) is an example of [declarative] being [tagged]. (2.3)
(2.3.1) Very gingerly the boy began to cut a thin slice of the vast cake. (2.3.2) Then he levered the slice out. [...] (2.3.3) 'It' s good, isn't it?' the Trunchbull asked. (2.3.4) 'Very good,' the boy said, chewing and swallowing. (2.3.5) He finished the slice. (2.3.6) 'Have another', the Trunchbull said. (2.3.7)'That's enough, thank you,' the boy murmured. (2.3.8) 'I said have another,' the Trunchbull said, and now there was an altogether sharper edge to her voice. (2.3.9)'Eat another slice! Do as I told you! 1 4
Finally, correlating with the situational-contextual parameter of mode, the textual metafunction encodes the textual aspects of grammar, such as theme selection and the markedness of themes, given-new information, etc., which have their realization in English most notably in word order, focus and intonation. The realization of choices in the textual metafunction typically cuts across constituent boundaries as they are associated with the experiential metafunction. For examples see Section 2.4.3 on the syntagmatic axis. Halliday gives the following definition of the textual metafunction: concerned with the creation of text; it expresses the structure of information, and the relation of each part of the discourse to the whole and to the setting. (Halliday 1973: 91)
Figure 2.10 displays the basic theme options for English. Let us look at another sample text for illustration. (2.4)
(2.4.1) So the Scarecrow followed him and was admitted into the great Throne Room, where he saw, sitting in the emerald throne, a most lovely Lady. (2.4.2) She was dressed in green silk gauze and wore upon her flowing green locks a crown of jewels. (2.4.3) Growing from her shoulders were wings, gorgeous in colour and so light that they fluttered if the slightest breath of air reached them.15
In (2.4.1) the options selected in the THEME network are
26 Systemic Functional Grammar in Natural Language Generation
Figure 2.10 The textual system of THEME in English
[unmarked] in THEME SELECTION, being realized in linear ordering as the position before the finite verb and consisting of the conjunction 'so' and the subject (where 'so' is marking an interdependence relation of [enhancing]), [nonpredicated-theme] in THEME PREDICATION,16 and [nonsubstitute] in THEME SUBSTITUTION. Subject-theme constitutes the unmarked choice in theme options. The same option - subject-theme - applies to (2.4.2); in (2.4.3) the theme is [marked], and in terms of THEME MATTER it is a [process] theme. The systems of the three metafunctions, of which only a few have been shown here, constitute the whole of the paradigmatic component of the grammar, thus structuring the grammatical functional potential of a language. Given that one criterion for setting up the metafunctions is their relative independence from one another, the interconnectivity of systems within one metafunction is said to be stronger than across metafunctions. However, interconnectivity across metafunctions exists: especially in the case of textual systems being dependent on ideational and interpersonal choices. For example, the system of THEME (textual) shows a high degree of mutual constraints with transitivity roles and syntactic functions, in particular in the areas of THEME MATTER and LOCAL THEME SELECTION, as we have seen in example (2.4.2) in sample text 2.4. In summary, the system network is a declarative statement of the
Theory and Linguistic Representation 27 grammatical potential of a language - all the metafunctions are simultaneously active, i.e. choices in all three mutually constrain realization in syntagmatic structure. The ground on which the grammatical classification is established is the view of language as functionally diversified, in which not only syntactic function (such as Subject, Object), but also ideational, interpersonal and textual functions are recognized.
2.4.2 The motivation of grammatical categories in the system network
Working with SFG as a framework for describing the grammar of a language, one of the fundamental questions is how the systems and their features are motivated for a particular language. Generally, features are motivated empirically. Internally to the systemic model, features have the following motivation. Given the place of grammar in the representation model of SFL as the lower adjacent stratum to semantics and given that grammar is the realization of what a speaker means, the motivation of grammatical categories is from above in stratum: grammatical categories are generalizations over semantic categories. Internally to the grammar, the motivation of categories is twofold. First, features in the system network are motivated from above in rank, i.e. from the function they fulfil in the unit they are a feature of; second, they are motivated from below in axis, i.e. from their own internal structure. In other words, features may have a nonformal, functional motivation or a formal motivation. To the latter, Martin (1987) also adds the following: • a feature is motivated formally when it has a reflex in form (i.e. it must be associated with a realization statement); • a feature is motivated formally when it acts as an entry condition for simultaneous systems; • a feature is motivated formally when it is a term in a disjunctive or conjunctive entry condition for a more delicate system; • a feature is motivated formally when it is terminal and all other features are motivated by one of the other criteria. The criterion of formal reflex justifies, for example, the gate as a kind of degenerate system with only one output term. One usage of the gate is for cases, where one wants to specify one particular realization that holds under complex conditions. For example, the phenomenon of agreement in the German nominal group (NG) can
28 Systemic Functional Grammar in Natural Language Generation be handled by gates. In the German NG, the head noun and its premodifiers agree in number, case and gender, and an adjectival modifier depends in its declension on the kind of determiner (definite or indefinite) present in the nominal group. The realization of the single forms of the paradigm can be effected by gates by formulating the conditions under which they hold as compound entry condition (for examples of this see Chapter 4's description of a fragment of German in SFG style). In this usage, the gate is sufficiently motivated by the criterion of formal reflex. Besides the various kinds of formal motivation, system networks usually reflect the nonformal kind of motivation. Features are often introduced to make a semantic distinction or generalization - this is what we have referred to as motivation from above in stratum - and are then subclassified in terms of systems whose features are again internally formally or nonformally motivated. The semantic (nonformal) motivation of features can be illustrated by an example of the Hallidayan system network of DEIXIS; see Fig. 2.11 (adapted from Martin 1987: 31). Here, according to (Martin 1987), the feature [total] is not motivated by any of the formal criteria, it does not encode formal meaning. Its meaning consists in a semantic generalization about the features that realize it, which all fulfil a quantifying function in the nominal group they are part of. It is this property of system networks, i.e. the grammar-external
Figure 2.11
Non-formal feature motivation in DEIXIS
Theory and Linguistic Representation 29 motivation from above in stratum and the grammar-internal dual formal and nonformal motivation of features, that often makes them hard to interpret. Most of the existing system networks thus appear an amalgam of semantic and syntactic information. Martin (1987) calls these mediated networks. As will be seen in Sections 2.4.3.3 and 2.4.3.4 on alternative views on syntagmatic organization, there have been attempts to reduce the criteria for establishing features in system networks either to purely formal ones (as in Hudson 1976) or to purely nonformal, semantic ones (as in Fawcett (1980)). The kinds of networks resulting from the former Martin (1987) refers to as first-level networks; the latter he calls second-level networks. Interestingly, deciding for one or another also means a decision in architecture: there is a correlation between the kind(s) of feature motivation and stratification. Dispensing with the nonformal motivation of features for the grammatical system network, and leaving it to a higher stratum, e.g. to discourse semantics, one would have first-level networks for grammar and second-level networks for semantics (Martin 1987: 37).17 In a generative model, the only necessary criterion for establishing features in the system network is that it must include at least those features that are necessary for generating the well-formed structures of a given language. A feature must have a reflex in form, i.e. there must be some generative consequences when systems are related to syntagmatic patterning through realization rules so that all the categories employed must be clearly 'there' in the grammar of the language. They are not set up simply to label differences in meaning. In other words, we do not argue: 'these sets of examples differ in meaning; therefore they must be systematically distinct in the grammar.' They may be; but if there is no lexico-grammatical reflex to the distinctions, they are not. (Halliday 1985a: xx) 2.4.3 The syntagmatic axis and its representation; realization Given that 'the theory of syntagmatic organization will determine what the realization statements have to specify' (Halliday and Matthiessen 1997), we begin this section with an overview of a number of different viewpoints on syntagmatic organization within SFG and then discuss the realizational apparatus necessary to arrive at the kind of representation of syntagmatic structure aimed at. Generally, the mapping between system and structure is specified in SFG in terms of realization rules or realization statements. The
30 Systemic Functional Grammar in Natural Language Generation actual form these rules take depends not only on the theory of syntagmatic relations, but also on what kinds of information are included in the paradigmatic part of the grammar. These interrelations become obvious when one compares some alternative approaches within SFL, e.g. those proposed by Fawcett (1980) and Hudson (1976). Also considered are some alternative approaches to realization proper that assume a Hallidayan paradigmatic organization and some kind of constituency structure for the representation of syntagmatic organization (Huddleston 1965; Henrici 1965/81; Berry 1977). Common to all proposals of syntagmatic organization is the dual representation of function and class (syntactic category) in structure and the realizational relation from function to class: Structure is a set of relations on the syntagmatic axis, and the elements of structure [i.e. the functions] are the values defined by these relations a class is a set of items which realizes these values. (Halliday 1965/81: 29)
This commonness is, however, at a rather superficial level; the following exposition on types of structure will show that there are various ways in which the dual representation of function and class can be implemented. For example, while Halliday only gives functions as labelling and accounts for class by interrank realization (preselection), with Hudson (1976), functions are left implicit in the dependency relations he uses for syntagmatic representation, and with Fawcett (1980) both function and class are given explicitly as a dual labelling of constituents. 2.4.3.1 The Hallidayan view: modes of expression While constituency is a kind of representation of syntagmatic relations very commonly employed in grammar theory as the sole expression of structure, the view of syntagmatic structure taken in SFG is more diversified. The representation of syntagmatic relations also reflects functional diversification. It does so in two ways. First, in the same way as the metafunctions organize the grammatical system network into simultaneous sets of options, they also organize grammatical structure into simultaneous functional layers of structure (Halliday 1979): functional diversification shows in the multifunctional layering of the function structure. The function structure has a rather flat constituency organization with maximal bracketing. Its nodes are labelled with microfunctions
Theory and Linguistic Representation 31 (elements of structure) that are derived from the different metafunctions, e.g. Actor, Process, Goal (experiential), Theme, Rheme (textual) and Subject (interpersonal), and conflated into single constituents, such as Actor/Theme/Subject. A function structure is thus the syntagmatic unification of the diverse functions established for a particular instance of a grammatical unit. Second, functional diversification is reflected in the modes of expression hypothesis we have referred to earlier, which essentially states that every metafunction gives rise to its own mode of representation on the syntagmatic axis (Matthiessen 1990). Let us consider this hypothesis in more detail. The prototypical mode of expression of the experiential part of the ideational metafunction is in constituency configurations. For an example see Fig. 2.12, presenting a function structure for sentence (2.3.5) from sample text 2.3. The mode of expression of the logical part of the ideational metafunction is the interdependency structure, a structure of a more generalized kind than the experiential structure. See Fig. 2.13 for a syntagmatic analysis of sentence (2.2.1) from sample text 2.2 in interdependency terms, where a, (3 mark a hypotactic relation, and 1, 2 mark a paratactic relation. Paradigmatically, logical relations are represented as recursive systems. Syntagmatically, logical relations give rise to complex units (as opposed to simplex units, such as NG, PP) - clause-complex, nominal-group-complex, etc. the characteristic of which is their open-endedness, i.e. they are not a predefined whole as the structures deriving from the other metafunctions are. The logical mode of syntagmatic structure can be of one of two types: multivariate or univariate. Simplex units always have a multivariate structure; complex units always have a univariate structure, either paratactic or hypotactic. A multivariate structure involves more than one variable, where each one is unique in occurrence (e.g. Actor-Process in a clause). It is said to be Theme
Rheme
Subject
Finite
Complement
Agent/ Actor
Process
Goal
He
finished
the slice
Figure 2.12 The experiential mode of expression - constituency
32 Systemic Functional Grammar in Natural Language Generation
Figure 2.13 The logical mode of expression - interdependency
exocentric. A multivariate structure is not inherently recursive; recursion can only be introduced by rankshift. In this case, we speak of cyclical recursion. This is opposed to univariate structures, which are linearly recursive; it lies in their nature to be recursive. The elements of a univariate structure are repetitions of the same variable. Univariate structure is said to be endocentric. The difference of rankshift to recursion in univariate structures is that the recursion is at the same rank in a univariate structure, whereas cyclically recursive structures are regarded as units embedded in another unit that is typically higher on the rank scale (e.g. a clause embedded in a nominal group), thus creating an additional layer in the syntagmatic representation.18 While univariate, paratactic structures are considered as chains of dependencies, where none of the units involved in the dependency relation is considered as head or dependent, with univariate, hypotactic structures there is a designated head (the a element). Typical cases of univariate, paratactic structures are coordinated structures and appositions. The prototypical mode of expression of the interpersonal metafunction is the prosodic structure. Examples are agreement between Subject and Finite in number and person, and agreement in tense, person and number of the Subject and the Finite in main clause and tagging clauses. See Fig. 2.14, displaying agreement as prosody for sentence (2.3.3) from sample text 2.3. The difference from logical organization is that for interpersonal systems (such as agreement), there is only one selection that is scattered in its realization over the syntagm, whereas with logical systems there is
Theory and Linguistic Representation 33 singular
singular Moodtag
Mood
Residue
Subject Finite
Complement
Finite(neg) Subject
good,
isn't it?
It's
Figure 2.14 The interpersonal mode of expression - prosody
repeated choice in the same system: whether to continue the complex or not, and if so, with what kind of interdependency relation to fill the logical relation (see again (8) in Fig. 2.1). For the textual metafunction, the prototypical modes of expression are pulse and prominence structures. Fig. 2.15 shows the textual syntagmatic organization for (2.4.1) of sample text 2.4. The Theme marks the 'point of departure' (Halliday 1985a: 39) of the clause as message, and the Given is the information that is known. Observe here that Theme/Rheme and Given/New do not necessarily coincide. 2.4.3.2 Limits of linguistic representation in Halliday an SFG The general view taken on syntagmatic relations by Halliday is that one kind of representation does not suffice for describing the different modes of syntagmatic organization (Halliday 1979). It is maintained in SFG that constituency is not a suitable means for representing all the different kinds of structure. However, the means of representation offered for the modes other than constituency are not explicit enough to cover all the phenomena that can be encountered in natural languages (see also Chapter 4). Of the four modes of expression, the experiential is the best developed in terms of representation. There are a number of models available for constituency, ranging from binary-branching, deeply layered tree structures to multiple-branching, flat tree structures, functionally annotated or annotated with class labels or both. In Hallidayan SFG, flat tree structures with function labels are used. The main argument for flat trees and against binary-branching trees for the representation of constituency brought forth in SFG is the following. Binary branching typically goes together with leaving syntactic functions implicit, because they can be derived from their place in the tree (e.g. the subject is the NP node that is immediately
34 Systemic Functional Grammar in Natural Language Generation
Figure 2.15 The textual mode of expression - pulse and prominence
dominated by S). In SFG, microfunctions are given explicitly in any case - so there is no need to define them over a constituency tree.19 The nodes (i.e. the layers in the tree representation) that are still required build the rank scale in SFG (Anderson 1969), which constitutes the basic skeleton of syntagmatic and paradigmatic organization. It is ultimately the classification along the rank scale that 'makes it easier to avoid the imposition of unnecessary structure' (Halliday 1966: 112). For the modes of expression other than constituency, the means of representation are not as explicitly worked out (Matthiessen 1988a). For the logical metafunction, recursion presents a problem in that the system network would have to be interpreted dynamically rather than synoptic ally, as is normally the case (see Martin 1985 for this distinction). For representation of the syntagmatic reflexes of logical relations, it would be necessary to consider other representational mechanisms for expressing feature co-occurrence constraints among a higher-ranking unit, its head and its dependents. Similarly, representation in the interpersonal and textual domains is critical. For instance, agreement, an interpersonal syntagmatic reflex, requires first to be able to characterize the 'domain of agreement' and second to have available the means to describe the congruence of particular attributes (such as number, person) across distinct microfunctions. The representational problem specific to the textual metafunction is that textual syntagmatic organization often does not respect constituency boundaries. An example of this kind of syntagmatic organization is fronting of a nominal group embedded in a prepositional phrase by movement (preposition stranding), which creates a long-distance dependency. In such cases, the realization statements, such as conflation (see Section 2.4.3.5), would have to be able to operate on microfunctions of units of lower ranks, not only at the same rank.
Theory and Linguistic Representation 35 Furthermore, there is one aspect that this richly diversified representation of syntagmatic structure misses, in our view. This is the representation of logical, i.e. dependency relations, in simplex units. As mentioned earlier, according to the SFG model, simplex units always have a multivariate structure. While the part that logical organization plays in the model is generally not disputed, there is only one place that we are aware of where Halliday explicitly speaks of the logical structure of a simplex unit in the sense of dependency structure. This is in the treatment of the nominal group in Halliday (1985a), exemplifying the logical view on simplex units. Nominal groups can be analysed in terms of logical structure, too, similar to complexes that exhibit hypotaxis, with a head element and dependents. A dependency analysis acknowledges the general head-dependent pattern that most linguistic units can be interpreted in - the basic observation that linguistic description is based upon in dependency grammar (e.g. Mel'~cuk 1988; Vater 1975; Kunze 1975). The notions that typically underlie the head as descriptive category are those of semantic functor, subcategorizand, governor, morphosyntactic locus and distributional equivalent (see e.g. Zwicky 1985; Hudson 1987b). Neither of these is explicitly taken into account in SFG. The discussion of the computational modelling of SFG for generation in subsequent chapters will demonstrate that the lack of a head notion, especially in its readings as distributional equivalent, subcategorizand and governor, compromises the achievement of certain grammatical generalizations. Summarizing the representational potential for syntagmatic relations proposed by Halliday, the function structure offers the possibility of unifying the distinct contributions made by the different metafunctions of the system network into one representation. It essentially reflects a constituency organization (e.g. a clause consists of Actor, Process and Theme, Rheme, etc. and can therefore be considered the experiential mode of expression. The other modes of expression are interdependency, prosody and pulse/prominence for the syntagmatic reflexes of the logical, interpersonal and textual metafunctions, respectively. There are a number of alternative approaches to the representation of grammar within SFL that do not make this close connection between metafunctions and individual modes of expression for each metafunction. One of them, Hudson's (1976), very strongly rests upon a dependency account in a sense similar to the one mentioned above. Hudson (1976) dispenses with metafunctions completely, presenting a grammatical system network
36 Systemic Functional Grammar in Natural Language Generation in the form of classification rules, where the classes are exclusively syntactically motivated, and suggesting a hybrid representation of syntagmatic relations in terms of constituency and dependency. Another approach, Fawcett's (1980), suggests that the system network be regarded as the semantic level or stratum and that (syntagmatic) structure be the domain of what he calls syntax. In contrast to Hudson, Fawcett keeps the metafunctional organization. Let us look at these alternative approaches more closely to see in comparison to the Hallidayan proposal what the different strategies imply for the overall grammar model. 2.4.3.3 Fawcett's approach to grammar Fawcett's model of grammar has three components: semantics, realization rules and form/intonation. Part of the form level is 'syntax' together with 'items' (the latter constituting the lexicon). As already mentioned, Fawcett relegates the system network as representational device for paradigmatic relations to the semantic component; the categories of unit and element of structure (i.e. rmcrofunction) are kept, but solely as categories of syntax. There are three types of syntactic relationships: componence (a unit is composed of a structure), filling (a unit fills an element of structure) and exponence (an item expounds an element of structure). Fawcett uses the class category, but rejects Halliday's criterion of class as being defined primarily with reference to the structure of the unit next above in the rank scale (Halliday 1961). For Fawcett, classes are to be exclusively defined by their componence, i.e. by their own internal structure. He establishes four classes of the unit 'group' the quality (adjectival and adverbial) and quantity groups nominal, prepositional, and 'clusters' (basically for genitives) and the class of clause. This reduces the rank scale to two ranks. The remainder of a rank scale is diversified with respect to the consist-of relation. The consist-of relation is broken down into 'composed-of and 'filled-by'. Further, it is intended as an organization of the syntagmatic axis only. For example, a clause is said not to consist of groups directly, but of elements of structure which in turn may be filled by groups. The filling by groups is the unmarked case. In the marked case, a clause may be rank-shifted, i.e. its elements of structure might not consist of groups, but they may be immediately expounded by items. For paradigmatic relations, rank as an initial entry condition to the system network is abandoned. Instead, the CONGRUENCE network is established, accounting for the following. Clauses
Theory and Linguistic Representation 37
Figure 2.16 Fawcett's congruence network
typically realize 'situations' (or events), or, in other words, the clause is the congruent realization of a situation; in the same way, the congruent realization of 'things' (or entities) is as nominal groups. However, units can also be realized incongruently, i.e. by grammatical metaphor. See Fig. 2.16 for Fawcett's entry system to the grammar (taken from Fawcett 1980: 93); the arrows are to indicate preselection (see Section 2.4.3.5). The congruence network replaces the rank scale implemented as the initial grammatical system in the paradigmatic part of the grammar in the Hallidayan model, and in the syntagmatic part the rank scale is substituted by the relation 'consist-of, diversified into filling and componence, applying to clauses and elements of the structure of clauses. Word and morpheme ranks are eliminated, and there is a rejection of the total accountability postulate, which says essentially that every unit of some rank must be accounted for in terms of the units of the next lower rank it is composed of. This was postulated to guarantee that every item of a text in a textual analysis be accounted for at all ranks, so that each unit would be fully identifiable (Halliday 1961). The argument for rejecting this postulate is essentially that elements of structure of a group may often not be filled by a word, but must be filled by a larger unit, e.g. with collocations or idiomatic expressions. Further, elements of
38 Systemic Functional Grammar in Natural Language Generation structure of the clause are very often directly expounded by items (for example, linkers and binders, expounded directly by conjunctions). Consequently, word and morpheme ranks are eliminated, and words and morphemes are thus relegated to items (the lexicon). The motivation behind these changes is to dispense with the expectation that elements of structure of a unit will be filled by any particular unit (Fawcett 1980). With regard to complexes, which build a sort of 'interrank' in the Hallidayan model, Fawcett gives up the notion of complexing as dependence-without-embedding. Clause complexes are simply sentences, where the sentence builds the highest unit. The only logical relation in the Hallidayan sense that Fawcett retains is coordination. Coordination, however, is not treated as a specific kind of complex, but handled by the relationship of filling in syntax (for example, a complement is said to be filled by two coordinated nominal groups). Thus, in contrast to Halliday's treatment, no additional layer in structure is introduced. In terms of recursion in the system network, this means that there are two types of recursion that differ from the types Halliday observes: there is no linear recursion at all, coordination is handled by filling and hypotactic structure, which also derives from linear recursion with Halliday, is simply covered by embedding. The Fawcett types of recursion derive from the two types of consist-of relation present in the syntax: the relation holding between an element of structure and the unit that fills it is used to cover coordination, and the componence relation is used to account for embedding. Since the systemic, paradigmatic description for Fawcett constitutes the semantic stratum, realization in his model is formally interstratal. In general, realization rules refer to a 'starting structure' (Fawcett 1980: 48), which is a representation of the potential syntagmatic organization of a clause in the form of sequentially ordered unmarked places and elements of structure filling them. For a starting structure plus an example see Figure. 2.17. In Fig. 2.17, '#' denotes initial and final boundary markers. There are elements and places (shown by lines). The elements are the constituents of the clause. '&' denotes Linker (coordinating conjunction), B denotes Binder (subordinating conjunctions). S stands for Subject, O for Operator, X for Auxiliary, M for Main verb (Halliday's Process), C for Complement, A for Adjunct and V for Vocative. Each element occupies a place, which represents the unmarked ordering for that element in the structure. Where places are unfilled, they represent marked options in sequence. The
Theory and Linguistic Representation 39
Figure 2.17 An instantiated starting structure introduction of a starting structure is essentially a response to the rather inelegant order realization statements used in other versions of SFG, by which only two elements can be ordered at a time and with respect to each other (see Section 2.4.3.5 below), presupposing that these elements have already been inserted in structure. Most of Fawcett's changes must be seen in the light of his decision to take the system network as semantics and only syntagmatic relations as grammar/syntax. The congruence network in the paradigmatic part of his model opens up the possibility of functional generalizations across units. This is not possible in the Hallidayan model, where polysystemicity is strictly adhered to: the set of features characterizing one unit (such as clause, NG, PP) must be distinct from the set of features characterizing another unit. In Fawcett's model, the category of unit is only relevant for the organization of syntagmatic relations and is given explicitly by the filling relationship. With Halliday, 'unit' is not explicitly given in the representation of syntagmatic relations, but covered by preselection in the system network. At first sight, Fawcett's suggestions seem to imply a stricter separation of paradigmatic relations and syntagmatic relations. Indeed, since paradigmatic is equated with semantics and syntagmatic with syntax (grammar), grammar is reduced to specifications of syntagmatic relations and loses both functional diversification and paradigmatic organization. 2.4.3.4 Hudson }s daughter-dependency grammar The distinctive property of Hudson's Arguments for a nontransformational grammar (ANTG) model (Hudson 1976) is the explicit representation of both constituency and dependency.20 He explicitly addresses the lack of a general head notion in SFG we have mentioned before. The similarity of Hudson's ANTG model and those of Halliday and Fawcett lies primarily in the use of features on higher nodes (Hudson 1987a), i.e. linguistic units other than words are sub- and cross-classified. This is done either by means of system
40 Systemic Functional Grammar in Natural Language Generation networks (as with Halliday and Fawcett) or by means of classification rules (as with Hudson). Another similarity of Hudson's and other systemic approaches to grammar is that a rather flat tree structure is favoured as structural representation. The differences between ANTG and other systemic grammars begin with the motivation of features for system networks/ classification rules. For Hudson, the criterion for including a feature in a classification rule is that features must reflect similarities between units with respect to syntactic distribution. There are other criteria, e.g. similarity in internal structure (the only one that Fawcett recognizes), but the only necessary criterion is the one of syntactic distribution. This reduction has the following reasons. First, Hudson's view of the relation between syntax and semantics is congruent with the Chomskyan one: Hudson's rules relating syntax and semantics are able to read the syntactic structure of the clause; this is not possible in a Hallidayan style systemic grammar, where syntax (lexicogrammar) and semantics are rather strictly separated. Hudson's stance goes together with a conception of the lexicon as collection point of phonological, syntactic and semantic information, where lexical insertion involves finding a lexical item which is compatible with the syntactic structure (Hudson 1976). This is very similar to the place of the lexicon in some versions of generative grammar or in Government and Binding theory (GB) (Chomsky 1981), and computational grammar models, such as Head-driven Phrase Structure Grammar, or HPSG (Pollard and Sag 1987, 1994), where semantic features from the lexicon can be read into the syntactic structure representation and the lexical features can constrain syntactic structure in this way.21 Second, the rules relating clause features to the internal structure of the clause (i.e. the equivalent to realization rules in SFG) are optional or determined by the features of the verb. While, for example, extraposition is represented as an option in the clause system network in an SFG, in Hudson's ANTG model this is covered by the rules relating features to structures, and is not included as an option in the classification rules (because clauses with extraposition do not differ in distribution from clauses without extraposition - if one does not consider context, that is). The representation of syntactic structure Hudson employs is a combination of constituency and dependency. Hudson in this way tries to unify the advantages of constituency descriptions (the recognition of larger units together with the possibility of expressing whole-part relations) and dependency descriptions (the explicit
Theory and Linguistic Representation 41
representation of part-part relations). Hudson claims that in doing so, the features in system networks that are there solely in order to represent indirectly such dependencies can be eliminated, thus rendering the classificational part of the grammar much simpler. Given that features are restricted to showing distributional differences, it is not relevant, for example, for clause classification to specify the types of complements a verb can take (information that would be specified in the classification of the unit of clause in Hallidayan SFG) . What is relevant to a clause's distributional class is the kind of verb - given by daughter dependency rules - while complements are introduced by sister dependency rules (Hudson 1976:81). Hudson's proposal for having both constituency and dependency in the structural representation is based on the following heuristic: 'only connect the whole to those of its parts which depend for their presence or their features on the features of the whole, rather than on features of one of its parts' (Hudson 1987a: 103). This is handled by the first kind of realization rule, daughter dependency rules. In the other case, when a feature or element of structure is not dependent on some feature(s) of the mother, but rather on a sister or some feature (s) of the sister, sister dependency rules apply. The third type of realization rule is sequencing rules. Systemically interpreted, daughter dependency rules and sister dependency rules roughly correspond to insertion and preselection, where dependency is interpreted in terms of features of higher nodes that constrain the relevant sister or daughter constituents of that node to have particular features. For example, a daughter dependency rule22 (1) + phrase, — sentence, + nominal -»• + noun which says that noun phrases contain a noun or pronoun (as head), is roughly expressed in SFG as (+Thing (insert Thing)) in the unit of nominal group plus a preselection of Thing to be a noun on word rank. For examples of sister dependency rules and their equivalents in SFG, consider the following rules: (2) + predicate -> + nominal / not + passive which says that every verb takes one nominal as complement, provided it is not passive; and
42 Systemic Functional Grammar in Natural Language Generation (3) + transitive -> + nominal
saying that transitive verbs take another complement. Information such as this is commonly reflected in SFG in the classification of the higher unit (i.e. the clause) and thus constrains syntagmatic structure to have, for instance, an Agent (which is roughly equivalent to rule (2)) and/or a Medium (roughly equivalent to rule (3)). The difference between a sister dependency treatment and SFG lies in the fact that microfunctions are generally not considered necessary in the ANTG model. Hudson has to introduce a few of them though, such as the Topic (roughly corresponding to Theme), and represents them explicitly in the syntagmatic representation (Hudson 1976: 97-107). This is because there seems to be no way in his model of defining a Topic/Theme in terms of features 'because the items covered by such terms can have a very wide range of features' (Hudson 1987a: 111): the function Topic generalizes over topicalized nominals and fronted wh-phrases (which can never occur together in the same clause). It is, however, not surprising that such a generalization cannot be made in Hudson's model. The generalization attempted is a functional one, but Hudson's categories are generally motivated nonfunctionally. The exclusive motivation of categories in the classification by distributional criteria makes it necessary to introduce a fourth type of rule, the function assignment rule, which specifies functions such as the mentioned Topic. It is difficult to estimate the cost at which Hudson makes these rather drastic changes to the Hallidayan grammar model, because it is not made explicit what the mapping between semantics and syntax looks like (except that the lexicon plays a unifying role).23 Given that Hudson's grammatical features are rather narrowly defined, i.e. oriented towards linguistic surface form, one can expect that the trade-off compared to Hallidayan-style SFG, where grammatical features are semantically significant, is between having a less complex mapping between syntax/grammar and semantics and a rather complex system network (as in Hallidayan SFG) and having a simpler systemic network/set of classification rules and a rather complex mapping between syntax and semantics (as, we strongly assume, would be the case in the ANTG model). This is in fact a trade-off that also shows with other types of grammar whose categories are solely distributionally motivated and/or motivated by the internal structure of their units when compared to SFG - in
Theory and Linguistic Representation 43 principle, all phrase structure grammars. What is lost in Hudson's model compared to the Hallidayan one is the general relationship between strata and the uniformity of design of all linguistically relevant resources. Hudson seems to trade in the means of functional generalization for the means of surface-syntactic generalization. Hudson's model is more like modern classificationbased approaches to grammar (and was therefore very much ahead of its time, when features on higher nodes were hardly used in phrase structure grammars), which are surface-syntax based, such as HPSG (Pollard and Sag, 1987, 1994), than it is like SFG. Hudson's suggestion of a dual representation of constituency and dependency as reflecting two major modes of syntagmatic organization, the reflexes of the experiential and the logical, is, however, attractive. And so is the claim that with the representation of dependency many features that 'simulate' dependency relations in the system network can be eliminated. The relocation of features expressing dependency relations to the description of syntagmatic relations will interest us again in Chapter 5, when we deal with the definition of the logical aspect of simplex units in SFG within a grammar of the Hallidayan style. Let us now turn to the realization relation between paradigmatic features and syntagmatic structure. 2.4.3.5 Interaxial and interrank realization Realization pertaining solely to grammar, i.e. realization that is not interstratal, is the receding in structure, i.e. in syntagmatic patterning, of a set or bundle of features selected from the grammatical system network of paradigmatic relations. Realization statements are not subject to a particular sequential order of application, they are immediately applied at each unit of rank. Further, realization statements do not imply direction: they are interpreted as nondirectional statements specifying syntagmatic constraints. The syntagmatic structure thus specified is not changed, once it is established. This is possible on the grounds that a selection expression - the set of features selected from the system network - offers a sufficient condition for a particular syntagmatic constraint to hold. The realization statements as expressing minimal constraints on syntagmatic structure are typically argued for as an advantage of SFG. Commonly involved in realization are specifications of insertion, conflation, expansion and ordering of constituents in a structure. Generally, in SFG, the structure built up for a unit, e.g.
44 Systemic Functional Grammar in Natural Language Generation the clause, is a configuration of elements of structure, such as Actor, Process, Goal, which are unordered in sequence. The presence of functions in a structure is specified by the realization statement of insertion. Grammatical functions result from all metafunctions and are grouped into single constituents by the realization statement of conflation. The structure specified in this way is the function structure. Further, an inserted grammatical function may be expanded to consist of more delicate microfunctions (e.g. Mood is typically expanded into Subject and Finite), which creates an intermediate layering in the structure. Specifications of linear sequence are provided by order realization statements, which typically order two constituents relative to each other. These four types of realization statements are subsumed under interaxial realization, because they define a mapping between paradigmatic features and syntagmatic structure. The function structure of a particular unit plus ordering relations does not contain a specification of how the functions are to be realized in terms of class (syntactic category). This is provided by the interrank realization statement of preselection. Preselection takes as arguments a function inserted into the structure of a unit and a feature or a set of features realizing that function in terms of a unit or features of that unit on a lower rank. For example, for a clause that is [effective] in transitivity, there is an element of structure Goal that is realized as a nominal group in terms of syntactic category. Table 2.1 summarizes the realization statements of SFG (based on Matthiessen 1988a).24 There are alternative approaches within SFG that differ from this general view in that a sequential ordering of rule application is explicitly stipulated (e.g. by Berry 1977), or modifications (of partial structures) including deletions are used (e.g. by Henrici 1965/81). The realization rules proposed by (Berry 1977) include a sequential order of rule application. She proposes six types of realization rules: (1) insertion, (2) concatenation (order), (3) particularization (preselection), (4) inclusion (a kind of insertion), (5) conflation and (6) a type of rule for handling discontinuity. Logically, (1) to (3) as well as (5) and (6) presuppose (4). Thus, Berry suggests an order of rule application starting with (4), then going through (6), (5), (1), (2), (3). However, again, if we consider realization statements simply as simultaneous constraints on the well-formedness of syntagmatic structure, imposing an order is superfluous and against SFL'S original conception.
Theory and Linguistic Representation 45 Name
Description
Notation/example
insert
+ Subject
Function inserted as constituent of the structure of the unit being specified
order
SubjectA Finite
One function ordered to precede another
expand
Mood (Finite)
One function expanded to have another function as constituent
conflate
Subject/Agent
One function conflated with another one to form the same constituent together
preselect
Subject: singular
A function preselected for a feature; the realization of the function is constrained to display that feature
or Subject \ singular
Table 2.1 Realization statements used in SFG
Another approach to realization, operating with modifications, has been proposed by (Henrici 1965-81). The starting point for realization is a null structure included within two boundary markers. In the process of realization, partial structures are built up by modification operations - the final result being a full structure that is sequentially ordered (and contains no dummy elements). The modifications are of three types: identity (leaving a partial structure unchanged), insertion/deletion of an element or a superscript and concatenation of two elements. Modification operations work on partial structures, turning them into other partial structures or full structures. An element of structure is represented as having a base (Subject, Predicate, Complement, Adjunct and dummy element) and a superscript. Dummy elements are used to hold superscripts, when no function has been inserted yet. For example, if a feature [intensive] is chosen, a C is inserted in the partial structure with a superscript 'intensive' for raising constructions, which tells us that the C will not be realized by a nominal group, but by a minor, nonfinite clause. This type of rule is called plexal realization:
46 Systemic Functional Grammar in Natural Language Generation plexal rules operate on the system network only. They are roughly equivalent to preselection in that they constrain choice in that part of the network that further realizes a function. Henrici's account of realization is very explicit, but suffers from over-complication in that a structure may not be fully specified and may have to be changed again by modification. The proposal for realization most similar to Halliday's in how the tasks are distributed over different kinds of rules is Huddleston's (1965). He explicitly distinguishes between two types of rules: feature assignment rules and realization rules. Feature assignment rules assign features to a grammatical unit, which then trigger the realization rules that build up a constituency structure. Feature assignment rules are arranged in cycles. For example, features assigned to a unit clause are subsequently realized in a constituency structure. In the next cycle, the constituents are again assigned features, which are again realized in structure, and so on, until the terminal nodes are reached. Thus, feature assignment rules are to select features from the system network for a particular unit. Huddleston's realization rules specify the following: (1) the presence of a constituent with a given function; (2) the addition to a function label of a superscript that defines the syntagmatic function more delicately; (3) the part of the system network operating on a constituent at the next cycle of feature assignment rules; (4) the path to be taken in the network specified under (3); (5) linear sequence of a constituent in the constituent structure. Here, (1) is the equivalent to insertion, (2) corresponds to conflation, (3) and (4) to preselection and (5), obviously, to ordering. Interestingly, the function labels used by both Huddleston and Henrici as 'bases' are syntactic functions only (Subject, Predicator, Adjunct) - Actor, Theme etc., which are also treated as functions in most versions of SFG, are represented as superscripts to these bases. Such a proposal for representation may have been due to the state of development of SFG at the time when these proposals were put forward - most likely, the proposals by Hudson, Huddleston, and Henrici were based on Halliday (1961), which did not yet include the multifunctional labelling. Notwithstanding the differences in the detail, it is common for all proposals for realization • to distinguish between realization that operates within the system network - preselection or interrank realization, and realization that constrains structure directly - interaxial realization; • to distinguish strictly the presence of a constituent and its linear ordering (insertion versus ordering).
Theory and Linguistic Representation 47 The latter is also a characteristic of most modern grammar models, e.g. of Generalized Phrase Structure Grammar (GPSG) (Gazdar et at 1985), employed in computational linguistics. 2.4.4 Summary of Section 2.4 This section about the view of grammar in SFG has presented the metastratum of linguistic representation for the grammar. It has discussed the representational means SFG makes available to describe the grammars of languages and illustrated this with examples from the grammar of English. We have seen that in the overall systemic model of language, grammar constitutes the level of form; it is the only level that is absolutely internal to language (Halliday and Matthiessen 1997): only the mrerlevels of semantics and phonolgy connect outside of language, to the context and to phonetic material, respectively. Grammar is, however, not considered an autonomous level. Rather, Grammar is the level of formal organization in language; it is a purely internal level of organization, and is in fact the main defining characteristic of language. But it is not arbitrary. Grammar evolved as 'content form': as a representation of the meaning potential through which language serves its various social functions. The grammar has a functional basis. (Halliday 1973: 90)
While this generally is the stance taken by proponents of SFG, there are alternative views, of which we have discussed primarily those of Fawcett (1980) and Hudson (1976). For a summarizing overview of the approaches discussed see Table 2.2). Hudson's and Fawcett's viewpoints mark a change in theoretical direction from what Halliday suggests, namely from the uniformly applying organizational principles of axiality and functional diversification, and from grammar as semantically significant, grammatical categories being grammaticizations of semantic categories and generalizations across a variety of different semantic contexts (Matthiessen 1992). For Hudson, most of Chomsky's evaluation criteria for grammar models do apply: his view is essentially mentalistic; he advocates a rather strict separation of syntax and semantics in that semantic significance cannot be part of the motivation of syntactic functions, and maintains that generativeness is a goal for a grammatical model. Fawcett's view is again distinct from this: his view is also cognitive, but he does not dispense with the social-semiotic aspect altogether.
48 Systemic Functional Grammar in Natural Language Generation Hudson type(s) of structure
constituency dependency
Huddleston constituency
Henrici constituency
Halliday
Fawcett
constituency
constituency
Berry constituency
interdependency prosody pulse
microfunctions
reduced: Topic, Scene-Setter Subject
kinds of realization statements
1 . structure building rules
1 . feature assignment rules
null structure;
- daughterdependency
2. realization rules
1 . identity modification
- sisterdependency
- presence
- insertion/ deletion of an element
2. sequence rules 3. peripherality rules 4. feature addition rules 5. function assignment rules
yes
- addition of superscript - system operating in next cycle of feature assignment rules - path to be taken in system network
yes
yes
1 . presence
partial structure undergoing: 2. conflation
- addition of a superscript
yes
starting structure
yes
1 . insertion 2. concatenation
3
„ ' expansion
1 . componence
3. particularization
4. preselection
2. filling
4. inclusion
5. ordering
3. exponence
5. conflation 6. discontinuity
- concatenation of 2 elements 2. plexal realization
- linear sequence - features of phonological represent ation
Table 2.2 A comparison of approaches to realization and syntagmatic structure Further, his system networks are themselves considered the semantics and - in contrast to Halliday - he leaves the syntagmatic part to be 'syntax'. For Fawcett, syntax can be considered a level of linguistic description in its own right. Only if explanation is sought must this level be related to the semantic system networks. The differences among the different versions of SFG discussed in Section 2.4 then follow from divergences in theoretical assumptions, since the theory and representation are in a realizational relation. The stance adopted throughout this book is essentially the
Theory and Linguistic Representation 49 Hallidayan one: the system network functioning as a classification hierarchy of paradigmatic grammatical relations and its interpretation as a set of conditions on well-formedness of grammatical structure by means of feature co-occurrence constraints (Bateman 1992a). However, some of Hudson's ideas about combining constituency and dependency for the representation of syntagmatic structure will be taken up, and the rank scale together with the polysystemicity postulate (which is questioned by both Hudson and Fawcett) will be re-examined in Chapter 5. 2.5 Summary and Outlook The concern of the present chapter has been with the metasemiosis of SFL at the metastrata of theoretical perspective and linguistic representation. It has described the relation between theory and the representational means set up according to it, focusing on the linguistic stratum of grammar. Here, the foremost interest has been the representation of syntagmatic structure and its relation to paradigmatic relations. The next step will be to describe the realization of theory in computational application. This comprises taking up the characteristics of the theory again to see what use they have for a particular application (Chapter 3), and representing the linguistic descriptions we need in a form that suits the application (Chapter 4). Further, an application of theory may force us to reconsider the means of linguistic representation that it makes available. There may be under-specifications in these means or gaps in the theory that require us to go through the cycle of theory - linguistic representation - application again and add to it what the application has revealed to be underrepresented (see Chapter 1, Section 1.2). It has already been indicated that in the domain of modes of expression the representational means for the modes other than the experiential (constituency) are not worked out explicitly enough. Further, the logical aspect of organization is not generally acknowledged for simplex units. We will see that, due to this, certain generalizations over the structure of simplex grammatical units are prevented. Furthermore, the rank scale together with the polysystemicity postulate is a restricting factor that adds to the difficulty of achieving syntactic generalizations. Again, it needs to be stressed that all this may not be much of a problem in certain
50 Systemic Functional Grammar in Natural Language Generation application contexts; but in computational application - where explicitness is absolutely essential and where for reasons of representational efficiency one would like to be able to make such generalizations - this problem reveals itself clearly. One of the goals of the following chapter on natural language generation is to uncover this problem of SFG further by comparing the SFL-based generation system PENMAN with other generation systems that use other linguistic theories as their bases.
Notes 1. In terms of set theory, cross-classification by means of simultaneous systems is the building of the intersection; disjunctive connection of systems corresponds to union building. 2. The writing conventions for systems throughout this book are as follows: system names, such as MOOD, are given in small capitals, and feature names, such as [interrogative], are given in square brackets. A path in a system network is given as [a:x] for [x] as subclass of [a] and as [a&x] for [a] as a cross-class of [x], i.e. for [a] and [x] stemming from two simultaneous systems. 3. The declarative interpretation of the system network as co-ocurrence constraints of terms or features can be dated back to the middle of the 1960s (see Henrici 1965/81). 4. The arrows are to indicate interstratal constraints. 5. See Section 2.4.1 for examples of all three systems. 6. This is one of the major differences from, for example, the goal in generative grammar, which is to predict all possible structures of language in general (universal grammar) and ruling out the ungrammatical ones (generativeness). In SFG, the issue is rather to describe all appropriate grammatical structures for a specific language. 7. For more details on the modes of expression see Section 2.4.3.1. 8. All systems displayed in this section take Matthiessen (1992) as a basis. 9. The most detailed accounts of English transitivity can be found in Halliday (1967, 1968) and Matthiessen (1995). 10. As a convention, throughout this book grammatical functions (or microfunctions or elements of structure) are given with the first letter capitalized. When sample texts are given that are analysed for illustration, the items in focus of the analysis are given in typewriterlike face. 11. From L. Frank Baum, The Wizard of Oz. 12. From Roald Dahl, Matilda. 13. In a hypotactic, elaborating relation between two clauses, the dependent clause is typically realized as a nondefining relative clause (Halliday 1985a: 204); see also Quirk et al. (1985: 1118). 14. From Roald Dahl, Matilda.
Theory and Linguistic Representation 51 15. From L. Frank Baum, The Wizard of Oz. 16. The commonly employed term for predicated Theme (Halliday 1985a: 59-60), e.g. in It was the scarecrow who was admitted into the great Throne Room, is cleft sentence (Quirk et al. 1985: 1384). 17. There may be exceptions to this, e.g. Fawcett's approach to grammar (Fawcett 1980). He calls the networks on the grammatical stratum semantic. However, the criteria for including a feature in a system network he uses are similar to the ones of Martin (1987): a feature must have a relation to features at the same level, it must have a relation 'downwards' (level of form) and it must relate 'upwards' (knowledge of the universe). This means that Fawcett's networks, although labelled 'semantic', are mediated networks (see Section 2.4.3.3). 18. In other grammar theories and in Fawcett's version of systemic grammar (Fawcett 1980), there is no distinction between linear and cyclical recursion - all recursion is embedding, with the possible exception of coordination. 19. This is actually similar to the use of syntactic functions as primitives in LFG (Bresnan 1982): f-structures are rather flat and their nodes carry syntactic function labels. 20. The abbreviation ANTG is borrowed from Butler (1985). 21. The difference between generative grammar and descendent approaches and Hudson's approach with respect to the lexicon is that with Hudson the lexicon does not contain any syntactic information that is not present in the structure into which it is inserted. Lexical and syntactic well-formedness are kept separate. In generative grammar approaches, lexical and syntactic well-formedness are not separate: the lexicon does constrain syntactic structures, because it is here that subcategorization, semantic relations, etc. are encoded. 22. The examples of daughter and sister dependency rules are taken from Hudson's partial grammar of English in Hudson (1976). 23. For a brief sketch of the relation between syntax and semantics in Hudson's model see (Hudson 1980b). 24. Note the inclusion of an alternative notation for preselection ('V) which normally signifies interstratal realization. This alternative notation for preselection is used throughout this book to avoid the more common notation (':') becoming confused with denoting entry conditions to systems.
3 Computational Application: Grammar Models in Natural Language Generation
The theory has evolved in use; it has no existence apart from the practice of those who use i t . . . Systemic theory is a way of doing things . . . The value of a theory lies in the use that can be made of it, and I have always considered a theory of language to be essentially consumer-oriented. (Halliday 1985b: 7)
3.1 Introduction The concern of the present chapter is with the description of one of the applications SFL lends itself to, which is the computational application, more concretely natural language (NL) generation, and with the illustration of computational representation as the re-expression of linguistic representation in computational terms (see Chapter 1). Focusing on the grammatical stratum, this chapter compares the systemically based approach pursued in PENMAN (Mann and Matthiessen 1985) to a selection of other linguistic approaches used in generation (Meteer et a/., 1987; Meteer 1992; lordanskaja et al. 1988, 1991;McKeown«o/. 1990). The presentation of the PENMAN approach and the comparison with other approaches takes us one step further in the illustration of the metasemiosis of systemic functional theory and exemplifies the systemic view of theory as 'a means of doing'. Two points will be made about the computational application of SFL in generation. First, many of the characteristics of the systemic functional model can be taken to be suggestive of the kinds of resources needed in natural language generation. Thus, adopting the systemic model as a basis, modelling the tasks involved in generation is in general supported. Second, describing PENMAN and its grammar of English, NIGEL, it will be seen that the lack of syntactic generalizations and of the representational means to express them -
Computational Application: Grammar Models 53 SFG'S syntagmatic gap - is perpetuated in computational representation. This is revealed more clearly by comparison to other generation systems that have taken other theories and grammar models as linguistic bases, which focus on different aspects of linguistic description from SFL. The chapter is organized as follows. It starts with a brief introduction to natural language generation, its goals and its basic organizational principles (Section 3.2). The main interest is in tactical generation (see Section 3.2) and the linguistic resources required for it, which is notably the grammatical resource (Section 3.3). Special recognition is given to a problem assumed notorious in generation, the problem of the generation gap (Meteer 1991). Section 3.4 describes the PENMAN system as a computational realization of the systemic functional model of language and presents three other approaches to generation, the system developed by Meteer et al. (1987) and Meteer (1992), MUMBLE-86, using Tree Adjoining Grammar (TAG) (Joshi 1987) as the basis for tactical generation, the Gossip system of lordanskaja et al. (1988), based on Meaning-Text-Theory (MTT) (Mel'cuk 1988), and the COMET system of McKeown et al. (1990), using Functional Unification Grammar (FUG) (Kay 1979). Section 3.5 summarizes the similarities and divergences among the approaches discussed and concludes with a critical appraisal of SFL-based generation. Finally, Section 3.6 sketches the steps we are going to take in the remaining chapters. 3.2 Natural Language Generation
With the development of ever more intelligent information systems, the need for automatic natural language generation is continuously increasing: NL generation is useful in a variety of contexts, such as expert systems, tutoring systems and NL query interfaces (e.g. for information retrieval), and it offers relevant technology for document creation, such as report generation, automatic abstracting and summarizing. NL generation is the automatic production of natural language utterances - ranging from the production of single sentences to entire discourse or text - on the basis of a given input that is most commonly a communicative intention. The process of NL generation is a process of choice at many levels, comprising both linguistic and nonlinguistic choices. Usually, two areas are distinguished in generation: strategic and tactical generation
54 Systemic Functional Grammar in Natural Language Generation (Thompson 1977). Typically, strategic generation involves: • the determination of prepositional content and its organization into a text plan which contains information about discourse organization, e.g. as schemas, McKeown 1985, and possibly about the register of the language to be produced, e.g. Bateman and Paris 1991); • breaking this up in linguistically realizable units, e.g. by means of rhetorical relations (Mann and Thompson 1987). The tactical generation part typically comprises • grammatical selection according to a text plan; • lexical choice. See Fig. 3.1 for a schematic overview of the processes and resources involved in generation.1 Generally, there are two strands of design of generation systems, one being basically inspired by artificial intelligence (AI), the other being essentially inspired by computational linguistics (CL). In the PROCESSES
Figure 3.1 Generation: processes and resources
RESOURCES
Computational Application: Grammar Models 55 former, the generation task is seen as an analogue to AI planning (e.g. Appelt 1985; Patten 1988). This goes together with a focus on the strategic part of generation. In the latter, linguistic theory serves as a basis for system design, and the focus is often on tactical generation (e.g. in Mann and Matthiessen 1983; lordanskaja et al. 1988; Kittredge et al. 1988). Therefore, many generation systems often deal with either the strategic part of generation or the tactical part only. When full-scale generation is needed, the component part lacking is often imported (e.g. DIAMOD, using the LFG generator of Dorre and Momma 1987). However, the separate development of strategic and tactical generation components can create the problem that the resources built up for planning have a rather arbitrary relation to the linguistic resources built up for (lexico-)grammatical generation (see Hovy et al. 1988 for a discussion of the interface between planning and linguistic realization). This problem has been termed 'generation gap' (Meteer 1991): the mapping between the higher-level (often not linguistically motivated) planning resources and the low-level (linguistically motivated) linguistic resources often becomes an impediment to successful full-scale generation because the control of linguistic expression according to a text plan is underspecified. In the worst case, the text planner constructs a plan that is not expressible at all and must be rejected for revision. To make sure that a text plan is linguistically expressible, the text planning component should know what is linguistically possible. One of the most obvious candidates for providing input for a solution to this problem is, however, often not taken into account: this candidate is linguistic theory. As Matthiessen and Bateman (1991) note, work in NL generation seems to have drawn more on CL and AI and less on established bodies of linguistic theory. While CL and AI provide models for the processes involved in producing a running system, it is linguistics that is primarily concerned with the resources of a language, its potential for textual organization, its grammar, semantics and phonology and the relation between them. At a general level, the generation gap is then a problem of too little consultation of linguistics concerning the resources needed for generation systems. The input from linguistics to generation has been restricted to selected aspects, notably syntax, applied to some particular aspect of the generation task. Methodologically, an obvious step in addressing the problem of the generation gap is to consult linguistic theories more in order to motivate the resources in generation systems, in particular comprehensive theories that have
56 Systemic Functional Grammar in Natural Language Generation something to say about the 'generation question', i.e. about the relation of some communicative situation to a linguistic utterance. Further, given that generation asks the question about the function of a linguistic utterance in a particular communicative situation, linguistic theories that are functionally oriented can provide further useful input. One such theory is Systemic Functional Linguistics, which has inspired a number of generation architectures (e.g. Davey, 1978; Bateman et al. 1991b; Patten 1988; Fawcett and Tucker 1989) and provided input in the domain of grammar to a few generation systems (e.g. McDonald 1980; McKeown et al. 1990). In a similar vein, with the choice of a grammar model that is embedded in a more general model of language (rather than one that is 'autonomous' in the generative grammar sense), fewer problems relating to the generation gap can be expected to arise because the theory itself makes an attempt at specifying the relation of grammar to the other linguistic and linguistically relevant resources it accounts for. Let us now turn to the discussion of the role of grammar as the major resource needed in tactical generation to illustrate the points just made. As a basis for this discussion a general scheme of classification of grammar approaches used in generation and a finergrained division of tactical generation into two subtasks are presented. It is then discussed how the selected approaches fulfil these tasks, taking two sample sentences to be generated as examples. 3.3 Grammars for Natural Language Generation The grammar models used in generation range from phrasestructure-based grammars (e.g. LFG, GPSG) - plus transformations (e.g. Busemann 1988) and/or enriched by case frames (e.g. Buchberger and Horacek 1988) - through dependency approaches (e.g. lordanskaja et al. 1988; Kittredge et al. 1988) to functionally oriented approaches, themselves comprising grammars as diverse as FUG (McKeown et al 1990), LFG (Rosner 1988), Functional Grammar (Kwee 1987) and Systemic Functional Grammar (Davey 1978; Mann and Matthiessen 1985; Fawcett and Tucker 1989; Bateman et al. 199la). Most of the existing generators are monolingual; of these, many are for English. Recently, there has been active research in multilingual generation with systems such as the FoG (Kittredge and Polguere, 1991), Techdoc (Rosner and Stede 1992) and KOMET-PENMAN MultiLingual (KPML) (Bateman 1997). On a more general level, of special interest to us in the discussion
Computational Application: Grammar Models 57 of the role of grammar in NL generation are the kinds of information that are included in the grammar and how this information is organized. This is important since the choice of grammatical approach has an influence on the design of the whole generation system to the extent that the mappings to the other resources of the generation system are affected. More generally, the following criteria can be taken as relevant when we are considering grammar approaches for the purpose of tactical generation. • High upper bound versus low upper bound in terms of abstraction. While the lower bound of grammars in terms of abstraction level is commonly agreed upon (either morphology or some surface-syntactic structure), there are diverging positions as to what the upper bound of grammar should be. This is crucial since the kinds of information that the grammar makes available will affect the mapping between grammatical categories and the categories of the higher-level resources (knowledge about text structuring, knowledge about the type of communication situation, etc.) available to the generation system. • Linguistic versus computational roots. One can distinguish two perspectives taken by grammar approaches used in generation: one focuses on linguistic resources, the other on computational processing. Included in the latter is 'reversible grammar' (see Strzalkowski 1994 for a representative selection), which has been a topic of active research since the emergence of unification-based formalisms for linguistic processing (Kay 1979, 1985). Here, computational representation is built in, as it were: reversible grammars are designed explicitly for computational purposes. One of the claimed advantages of reversible grammars (apart from the obvious advantages of effort reduction for system development) is a closer match between a system's analysis and generation capabilities. • Generation perspective versus analysis perspective. Another characteristic that can be attributed to grammar models is the built-in perspective in terms of processing direction: whether there is an orientation towards analysis or towards generation. Some kinds of grammar approaches simply seem to be more suitable for analysis because the topics they deal with are seen from the analysis perspective (see Block 1988 for a discussion of whether LFG is suitable for generation); and others may be more suitable for generation because the questions they deal with are
58 Systemic Functional Grammar in Natural Language Generation
asked with the generation perspective in mind. • Functional versus surface-syntactic focus. Another relevant criterion is whether a grammar approach is functionally or surface-syntactically oriented. In NL generation, it is crucial to recognize both functional and surface-syntactic categories (see below on the subtasks of tactical generation). As already mentioned at the beginning of this chapter, the task of a tactical generator is to handle the mapping from some text plan constructed for a particular communicative situation into some utterance specification that is typically realized as a sentence. As Yang et al. (1991) note, this process can be viewed from two different perspectives. One is functional, being concerned with 'deciphering the goals given by the speaker program and determining how they can be realized in language' (Yang et al. 1991: 207); the other one is syntactic and is concerned with ensuring 'that the end product adheres to the syntactic rules of the language' (Yang et al. 1991: 207). Tactical generation must take into account both these perspectives. The task of a tactical generator can thus be split into two subtasks (illustrated graphically in Fig. 3.2): • Task 1. To interpret an (semantic) input in terms of the grammar it has available. • Task 2. To spell out the syntactic constraints of the language in which an utterance is to be generated.
Figure 3.2 Processes in tactical generation
Computational Application: Grammar Models 59 Apart from the obvious advantages of having a built-in generation perspective rather than an analysis perspective and apart from being geared to computational implementation, our hypothesis is that it is potentially advantageous for a grammar approach to be used in tactical generation to have a rather high upper bound in abstraction and to account for both functional and surface-syntactic information. The former appears to be a positive characteristic because the complexity of the other resources needed in full-scale generation can be reduced, putting more burden on grammar; the latter can reduce the problem of the 'generation gap' mentioned earlier, the functional information mediating between the higherlevel semantic and pragmatic resources of the generation system and surface-syntactic categories. Generation with Tree Adjoining Grammar (TAG) is an exemplar of reversible grammar and strongly computationally motivated. Further, (TAG), as a system for generating syntactic tree structures, takes the surface-syntactic perspective. At the other extreme, there are SFGbased generation and Meaning-Text-Theory-based generation as exemplars of approaches using theories with strong linguistic roots. In the case of both PENMAN and Gossip, the linguistic theory has had a major influence on generator design (by SFG and MTT, respectively) as a whole. Generation based on FUG, as in COMET, is somewhere between these two extremes: it is strongly computationally motivated (FUG being a unification-based formalism) and has been linguistically enhanced by a functional perspective. For illustration of how the two subtasks of a tactical generator can be handled and for testing the assumptions just brought forward, the grammar components of the four selected systems are discussed. To make the presentation as concrete as possible, two sample constructions are taken for illustration:2 (3.1)
Kim devours the cookies.
(3.2)
What does Kim devour?
We assume an input representation to tactical generation as some kind of logical form (LF), such as SPL (Kasper 1989a) or QLF (Alshawi 1992). Typically, a logical form contains propositional kinds of knowledge, such as information about events, processes and states of affairs, the participants therein and the accompanying circumstances. Usually, the input logical form expression follows predicate-argument structure. For example, a logical form for 3.1 and 3.2 is
60 Systemic Functional Grammar in Natural Language Generation (LF1)
devour(Kim, c o o k i e ) .
We consider the categories at the level of input to tactical generation to be essentially linguistic (semantic). However, in addition to the propositional-semantic information, 'pragmatic' information (e.g. about the speechact to be performed (question versus statement versus order)), and information about the textual statuses of the participant or circumstantial roles (thematicity, identifiability of referents in the discourse, etc.) are needed. The following section describes the kinds of structures that the selected generators produce and discusses the relation of these to an LF-like input. In COMET, the structure targeted is a PSFD (partial surface functional description); in MUMBLE-86, it is a TAG tree structure; in Gossip, it is a DSyntR (deep syntactic representation); and in PENMAN, it is a set of systemic features (and ultimately a function structure). 3.4 A Comparison of Selected Grammar Approaches in Tactical Generation 3.4.1 SFL in generation: PENMAN In order to show how the potential of SFL is realized in computational application (here, in NL generation), we take as example the PENMAN system, a general-purpose generation system for English, originally developed by Mann and Matthiessen (1985), and then relate PENMAN'S features back to SFL theory. The generation question put for PENMAN-style generators is, when to choose which features from the system network. Or, more precisely, which functional alternative to choose in a particular context. The components of PENMAN are: • the NIGEL grammar of English (Matthiessen 1992), consisting of a grammatical system network that accounts for the paradigmatic options of English in its ideational, interpersonal and textual functions; for instance, encoded by transitivity, mood and thematicity at clause rank (see Figs. 3.3, 3.4 and 3.5, which show these in PENMAN-internal notation;3 (see also Figs. 2.5, 2.9 and 2.11). Realization rules of insertion, conflation, preselection and ordering interpret paradigmatic features in terms of syntagmatic structures (see the function structures syntagmatically describing sample sentences 3.1 and 3.2 at clause rank in Figs. 3.8, 3.9).
Computational Application: Grammar Models 61 • the UPPER MODEL (Bateman et al. 1990), which encodes ideational (i.e. prepositional) meaning. • the chooser-inquiry interface (Matthiessen, 1988b). A chooser is a decision procedure that is associated with a system. Its task is to mediate between semantic and grammatical information. A chooser is organized as a tree, the nodes of which are inquiries, which are the actual interpreters of semantic knowledge for the grammar (for an example, see Fig. 3.7 showing the chooser of the PROCESS TYPE system displayed in Fig. 3.3). (system :name PROCESS-TYPE :inputs TRANSITIVITY-UNIT :outputs ((0.25 MATERIAL (INSERT AGENT) (PRESELECT AGENT NOMINAL-GROUP)) (0.25 MENTAL (INSERT SENSER) (PRESELECT SENSER NOMIMAL-GROUP)) (0.25 VERBAL (INSERT SAYER) (PRESELECT SAYER NOMIMAL-GROUP)) (0.25 RELATIONAL)) :chooser PROCESS-TYPE-CHOOSER : region NONRELATIONAL TRANSITIVITY :metafunction EXPERIENTIAL) (system :name AGENCY :inputs (OR MATERIAL MENTAL VERBAL) :outputs ((0.5 MIDDLE (INSERT MEDIUM) (PRESELECT MEDIUM NOMIMAL-GROUP)) (0.5 EFFECTIVE (INSERT AGENT) (PRESELECT AGENT NOMINAL-GROUP) (INSERT MEDIUM) (PRESELECT MEDIUM NOMINAL-GROUP))) :chooser AGENCY-CHOOSER •.region NONRELATIONAL TRANSITIVITY :metafunction EXPERIENTIAL)
Figure 3.3 Primary choices in TRANSITIVITY
62 Systemic Functional Grammar in Natural Language Generation (system :name MOOD-TYPE :inputs INDEPENDENT-CLAUSE-SIMPLEX :outputs ((0.5 INDICATIVE) (0.5 IMPERATIVE (INSERT NONFINITIVE) (INFLECTIFY NONFINITIVE STEM))) :chooser MOOD-TYPE-CHOOSER :region MOOD :metafunction INTERPERSONAL) (system :name INDICATIVE-TYPE :inputs INDICATIVE :outputs ((0.5 DECLARATIVE) (0.5 INTERROGATIVE)) :chooser INDICATIVE-TYPE-CHOOSER :old-feature DECLARATIVE :region MOOD :metafunction INTERPERSONAL)
(system :name INTERROGATIVE-TYPE :inputs INTERROGATIVE :outputs ((0.5 YES-NO (CONFLATE TOPICAL FINITE)) (0.5 WH (INSERT WH) (PRESELECT WH INTERROGATIVE-NOMINAL) (ORDER WH FINITE) (CONFLATE TOPICAL WH))) :chooser INTERROGATIVE-TYPE-CHOOSER :region MOOD :metafunction INTERPERSONAL)
Figure 3.4 Primary choices in MOOD
The tactical generation in PENMAN starts with an input in the form of SPL (Sentence Planning Language) (Kasper 1989). See Fig. 3.6, showing two such SPL expressions for sample sentences 3.1 and 3.2.4 Besides the ideational information typically expressed in a logical form, an SPL expression also contains interpersonal information (e.g. : speechact question) and textual information (e.g. identifiability-q identifiable, :theme p). Given an input such as (sp!2) or (sp!3) (Fig. 3.6), the traversal of the grammatical system network is started. At each choice point
Computational Application: Grammar Models 63 (gate :name TOPICAL-SUBJECT :inputs (AND SUBJECT-INSERTED (OR EXPLICIT-DECLARATIVE-SUBJECT INDIRECT-INDICATIVE EXTENDING-CLAUSE ENHANCING-CLAUSE FINITE-ELABORATING IMPLICIT-SUBJECT-ELABORATING (AND IMPERATIVE (OR OBLATIVE SUGGESTIVE IMPERATIVE-SUBJECT-EXPLICIT)))) :outputs ((1.0 TOPICAL-SUBJECT (CONFLATE TOPICAL SUBJECT))) :region THEME :metafunction TEXTUAL)
(system :name TEXTUAL-THEME :inputs CONJUNCTED :outputs ((0.5 NONTEXTUAL-THEME) (0.5 TEXTUAL-THEME (INSERT TEXTUAL) (CONFLATE TEXTUAL CONJUNCTIVE) (EXPAND THEME TEXTUAL))) :chooser TEXTUAL-THEME-CHOOSER :region THEME :metafunction TEXTUAL) (system :name COMPLEMENT-THEMATIZATION :inputs COMPLEMENTED :outputs ((0.5 NONTHEMATIC-COMPLEMENT) (0.5 THEMATIC-COMPLEMENT (EXPAND THEME DIRECT COMPLEMENT))) :chooser COMPLEMENT-THEMATIZATION-CHOOSER .•region THEME :metafunction TEXTUAL)
Figure 3.5 A selection of choices in THEME (each system), a chooser is invoked that poses its inquiries to the semantics (that is, it checks the input representation for the grounds to make a choice). For example, one of the choices that have to be made at clause rank is the choice in transitivity (see again Fig. 3.7 for the chooser of the PROCESS-TYPE system). Here, the inquiry verbalprocess-q in the nonstatic branch of the chooser tree asks whether the clause to be generated is verbal or not. If the top concept in the SPL is of type verbal, the answer is verbal and the grammatical feature [verbal] can be chosen. Here, however, since the domain
64 Systemic Functional Grammar in Natural Language Generation (spll): Kim devours the cookies. (d / devour :actor (p / person :name Kim) :actee (c / cookie :identifiability-q identifiable :number plural) :time present :theme p rspeechact statement)
(sp!2): What does Kim devour? (d / devour :actor (p / person :name Kim) :actee (c / object :nountype-q wh-question) :time present :theme c :speechact question)
Figure 3.6 Input to PENMAN as SPL expressions
Figure 3.7 The chooser of the PROCESS-TYPE system
Computational Application: Grammar Models 65 Theme
Rheme
Subject
Finite
Complement
Agent
Process
Goal
Kim
devours
the cookies
Figure 3.8 Function structure of 'Kim devours the cookies.' (clause level)
Theme
Rheme
Complement
Finite
Goal What
does
Subject
Predicator
Agent
Process
Kim
devour?
Figure 3.9 Function structure of What does Kim devour?' (clause level)
concept 'devour' is not of semantic type 'verbal' the answer is nonverbal; neither is 'devour' a mental process, mentalprocess-q is thus answered nonmental, which results in the choice of the grammatical feature [material]. This choice ultimately constrains syntagmatic structure to have a constituent 'Agent' (see Fig. 3.3 for the PROCESS-TYPE system). The process of choosing is applied throughout the traversal of the system network, invoking the chooser of each system until the syntagmatic function structure is built up exhaustively at all ranks (clause, phrases and groups) by the realization statements (see Figs. 3.8 and 3.9 for the resulting function structures for examples 3.1 and 3.2 at clause level). For a systemically based approach, such as PENMAN, task 1 of tactical generation - the grammatical interpretation of the information contained in an input LF - means choice of a set of paradigmatic features from the grammatical system network. And task 2 of tactical generation - building up a syntactic structure means spelling out the realization statements that are associated with the paradigmatic features. Let us consider the properties characteristic of systemic functional theory again and relate them back to the design of the PENMAN system. The properties of linguistic representation characteristic of SFL noted in Chapter 2 are the following:
66 Systemic Functional Grammar in Natural Language Generation • Axiality. Both paradigmatic and syntagmatic relations are explicitly accounted for. • Motivation of categories. Grammatical (paradigmatic) features have a twofold motivation: one is functional and oriented upwards in stratum to semantics and the situational context; the other is oriented downwards in axis to the syntagmatic categories that realize paradigmatic features. • The centrality of choice. The notion underlying all linguistic description is choice. (Using) language is considered choice in the linguistic system available to a speaker. Furthermore, choice in the grammar is controlled choice, since (a) the grammatical (paradigmatic) options are related to semantics and context, and (b) the syntagmatic patterns are related to paradigmatic contexts. • Stratification. Grammatical categories are set up as relational categories in the sense that grammar is not considered an isolated system, but as part of a larger whole. Generally, the relation of the categories of a lower stratum to the ones of a higher stratum is one of generalization; and the relation of the categories of a higher stratum to the ones of a lower stratum is one of abstraction. • Functional diversification. SFL acknowledges the functional diversity of language by the metafunctions. For the application of SFG in generation, these characteristics have the following relevance. Axiality provides a division in paradigmatic and syntagmatic relations that is suggestive of the conceptually different resources that the tasks involved in tactical generation require: a functional account of grammar and a specification of syntactic, structure-oriented grammatical features. This property is reflected in the PENMAN system by representation of the grammatical resource, the NIGEL grammar, as system network (the axis of paradigmatic relations) with associated realization operators (specifying syntagmatic structure). The paradigmatic part is primarily functionally motivated, thus offering a resource with which a (semantic) input specification can be grammatically interpreted. For instance, (sp!2) specifies that the speechact of the utterance is a question. The congruent grammatical interpretation of : speechact question is interrogative. Only in the second instance is a speechact of question interpreted as a syntagmatic structure (e.g. for a wh-question: insert a Wh-element, order Wh-element before Finite; see Fig. 3.4). Furthermore, the metaphor of choice as used in SFL readily carries over to choice in generation, i.e. the choice of linguistic expression
Computational Application: Grammar Models 67 according to a context. Choice in SFL-based generation can be considered primarily choice in the functional part of the grammar rather than direct choice of a particular syntactic structure. In an SFG this choice is controlled choice in two senses: first, choice is under control by the higher stratum in that the categories of the grammar are motivated 'from above', i.e. from semantics (which, in turn, is motivated from the stratum of context); second, choice is under control by axis, i.e. a syntagmatic structure is always related to paradigmatic features. Giving priority to the paradigmatic axis in linguistic description and motivating categories primarily functionally is thus an important step in controlling choice in the lower-level linguistic resources - and therefore reducing the generation gap. The information pool available to a text planner only has to be responsive to the paradigmatic, largely functionally motivated categories, not to the rather surface-syntactic, syntagmatic ones. Similarly to the way in which the axiality dimension is suggestive of the kinds of resources needed in tactical generation, SFL'S functional diversification is suggestive of the resources needed in generation in general - both at the higher strata of semantics and context and at the stratum of grammar. Ideational linguistic information is prepositional content in semantic terms: the processes, events and states-of-affairs of participants in a discourse and the circumstances accompanying. This is information we typically find in the knowledge base of a generation system. The grammatical interpretation of ideational information is usually as predicate-argument structures and adjuncts. Interpersonal information belongs to the domain of pragmatics and includes information about speechacts. This is usually contained in generation systems in user or reader models. Grammatically, interpersonal semantic information is commonly interpreted as mood. Textual information at a higher level is concerned with discourse structure, coherence relations and cohesive ties. This is the kind of resource that a text planner typically needs. Grammatically, textual semantics is interpreted in terms of themerheme structure, conjunctions, pronominalization, etc. Finally, as Matthiessen and Bateman (1991) note, there is a clear rough correspondence between the overall organization of a generation system and the stratal organization of the systemic functional model (see Fig. 3.10 for illustration). Apart from dividing descriptive responsibilities more evenly, stratification always opens up the possibility of further, more delicate distinctions in meaning (a point made by Matthiessen and Bateman 1991), since there is always the possibility of many-to-many mappings between any two
68 Systemic Functional Grammar in Natural Language Generation strata. For instance, a speechact of command or order is typically realized grammatically by imperative. But it may also be realized as declarative - depending, for example, on the interpersonal relations holding between speaker and hearer in the discourse that is taking place, such as speaker-hearer distance, authority, etc. In this way, stratification supports the flexibility needed in the mapping between a specification of communicative goals and their linguistic expression in particular contexts. The PENMAN system is only one possible and partial computational instantiation of systemic functional theory, implementing only some of its characteristics. In PENMAN, both paradigmatic and syntagmatic patternings are accounted for at the stratum of grammar, and the realizational relation between them is specified as given in systemic theory. Further, grammatical (paradigmatic) information is metafunctionally organized throughout. A NiGEL-style grammar thus has a rather high upper bound in abstraction, and its categories are functionally motivated with a view to the higher levels of linguistic organization and to situational context. In terms of the higher levels of linguistic organization, PENMAN does not realize all of SFL'S notions, and in PROCESSES
RESOURCES
Figure 3.10 Correspondence generation model - SFL
Computational Application: Grammar Models 69 terms of the representation of syntagmatic structure, it does not implement the full range of functional diversity. At the semantic stratum, only the ideational metafunction is accounted for (the UPPER MODEL), and the stratum of context has not been addressed, except for some ongoing work on register, e.g. Bateman and Paris 1991. Textual semantics has been worked on in more recent developments based on PENMAN, in the KOMET-PENMAN system (Bateman and Teich 1995; Teich et al 1996) taking Martin (1992) as a basis; interpersonal systemic semantics has so far only been dealt with in dialogue processing, e.g. O'Donnell 1990; Fawcett and Davies 1992; Grote et al. 1996; Teich et al. 1997a. Syntagmatic structure is given in NIGEL as function structure only, i.e. syntagmatic structure is specified only in its experiential mode of expression (part of ideational information; see Chapter 2). This creates some problems for the specification and implementation of grammatical phenomena that do not adhere to experiential organization, but call, for instance, for a logical treatment in terms of dependency relations. Just to name one example, subcategorization is represented by preselection as the assignment of syntactic category to elements in the function structure (such as 'preselect Agent nominalgroup'). Preselection statements such as this are spread all over the system network without an attempt at a generalization of the subcategorization properties of, say, a unit or the head of a unit. As will be argued in Chapters 4 and 5, this kind of generalization could be made possible if heads and dependents were explicitly represented. The NIGEL grammar is thus only an approximation to grammar as it is seen by systemic functional theory. However, as Bateman (1988) notes, many design decisions in PENMAN have been made with a view to the systemic model for the other necessary components of a generation system. So, however partial the PENMAN implementation is, it is open to extensions and revision - as demanded by application. 3.4.2 MTT in generation: GOSSIP The Gossip (Generating Operating System Summaries in Prolog) system is one of a number of special-purpose generators based on Meaning-Text-Theory (MTT) (Kittredge et al. 1986; Bourbeau et al. 1990; lordanskaja et al. 1988). Gossip's purpose is to generate summaries of the usage of operating systems (login, users, duration of activities, etc.). The system generates paragraph-length texts in English. The output of the planning component, which uses a schemabased approach (McKeown 1985), is a so-called conceptual
70 Systemic Functional Grammar in Natural Language Generation communicative representation (CCR) which contains both conceptual (i.e. roughly: ideational) information and textual information in the form of theme-rheme structure. The actual input to the tactical generator - to which the CCR is mapped - is a semantic representation (SemR). The characteristic feature of an MTT semantic representation is that it allows various lexicogrammatical paraphrases of its contents, e.g. The user who ran editors is called Martin. - The name of the user who ran editors is Martin. (lordanskaja et al 1991: 305). A brief description of Meaning-Text-Theory will be useful to see exactly how it has inspired the design of Gossip. MTT is a grammar theory that deals with the relation of meanings to texts. 'Meaning' in MTT means linguistic meaning without reference to any extralinguistic abilities. 'Text' in MTT refers to all kinds of linguistic patterns, such as clauses, phrases and words. 'Text' thus has a rather broad denotation. It means utterance in general rather than discourse unit: the largest unit considered in MTT is actually the sentence. The goal of MTT is to describe the correspondences between all meanings and all texts as MTMS (Meaning-Text-Models, which are instances of MTT) - more particularly, it is concerned with the symbolic representation of these correspondences (Mel'cuk 1988: 43-5). MTT uses a stratified model of representation consisting of semantic, syntactic, morphological and phonological levels: REALITY SemR SyntRs MorphRs PhonRs LINGUISTIC SOUNDS
The relation between strata or levels is as follows. A given SemR (semantic representation) is matched with a DSyntR (deep syntactic representation); this is matched with a SSyntR (surface-syntactic representation); the SSyntR is matched with MorphRs (morphological representations; again deep and surface), and this is matched with PhonRs (phonological representations). Connectivity between strata is given by the notion of dependency which uniformly applies for all strata. This goes together with a lexicon that acts as major constraint on structure at all levels of representation. The lexicon notably includes information about the collocational behaviour of words as generalized lexemes in so-called lexical functions. Moreover, it contains syntactic information, e.g. about the valency of a lexeme (encoded as government patterns). Each representation level is again divided up into a deep and a surface
Computational Application: Grammar Models 71 level. The deep (sub)level is oriented upwards in terms of strata, the surface one is oriented downwards. The question to be posed in a generation system based on MTT, such as Gossip, is when to choose which DSyntR interpretation of a SemR. For illustration, let us consider the SemRs and SyntRs of examples 3.1 and 3.2 (see Figs. 3.11 to 3.16). In our sample sentences, one of the major differences between 3.1 and 3.2 (type of speechact) is reflected in the SemRs already: there is a different assignment of theme and rheme. Information about the kind of speechact is not explicitly represented, but has to be inferred. The semantic feature Mef of the SemR shown in Fig. 3.11 is interpreted at DSyntR as grammatical feature def and appears at SSyntR as the arc determinative with the node labelled the. The semantic information ^now' in both of the SemRs is to be interpreted grammatically as the feature pres (present) associated with the governing verb Mevour'. Information about the type of noun is lexically encoded and comes in at DsyntR (e.g. 'Kim' is a proper noun (PN), 'cookie' is a common noun (CN)).
Figure 3.11 SemR of 'Kim devours the cookies.'
Figure 3.12 DSyntR of 'Kim devours the cookies.'
72 Systemic Functional Grammar in Natural Language Generation
Figure 3.13 SSyntR of 'Kim devours the cookies.'
Figure 3.14 SemR of 'What does Kim devour?'
Figure 3.15 DSyntR of 'What does Kim devour?'
Figure 3.16 SSyntR of 'What does Kim devour?'
Computational Application: Grammar Models 73 A DSyntR (see Fig. 3.12) is a tree, where the node labels are generalized lexemes.5 A generalized lexeme carries all its meaningbearing morphological features (number, tense etc.), which have been derived from the semantic structure. For instance, the features present and plural in the DSyntR of Fig. 3.12 have been derived from the node A n o w ' and >1, respectively, of the SemR displayed in Fig. 3.11. Syntactically conditioned morphological features are induced by syntactic rules and treated separately. In addition to the node labelling, the arcs of a DSyntR are numbered, each number referring to a class of syntactic constructions that can serve the same semantic role (e.g. arcs I and II in the DSyntRs shown in Figs. 3.12 and 3.15, which are triggered by the lexicon). In the SSyntR, to which the DSyntR is matched and which is again a dependency tree, the nodes are actual lexemes carrying morphological information of word forms (e.g. DEVOUR stands for a lexeme and present is morphological information at this level). What makes MTT an attractive model for generation is primarily the high degree of stratification and its rather high upper bound in abstraction. A SemR typically allows various DSyntR interpretations - which could be lexically conditioned - as paraphrases of the same ideational content. However, it does not necessarily have to be the case that the semantic net to which the text planning output is mapped is in fact linguistically expressible. Gossip assumes that there is some realization and tries various paraphrases until one is found that works (Meteer 1991: 302). Again, there seems to be too little contact between text planning and lexico-grammatical expression. Even though MTT'S theoretical perspective is the generation perspective, dealing with the question of which 'texts' encode which 'meanings', the denotation of 'text' is rather narrow and the denotation of'meaning' is basically ideational. Thematic structure is given special recognition, thus accounting for one aspect of the textual function of grammatical units. But other kinds of linguistic information, such as speechact type realized in mood (an aspect of the interpersonal function), are not explicitly accounted for. Therefore the representation remains strongly ideationatty biased in function, thus compromising MTT'S potential for application in generation. Further, since the resources on which text planning operates (a library of schemas) cannot be motivated by MTT itself because textual matters are outside the scope of MTT, a uniform motivation of categories for all generation resources - potentially advantageous - is not given.
74 Systemic Functional Grammar in Natural Language Generation Once a DSyntR is established in the generation process, however, spelling out the syntactic constraints (task 2 of tactical generation), both the ones that derive from the SemR and the rather idiosyncratic, nonsemantically motivated ones (e.g. syntactic agreement) is unproblematic. The semantically derivable constraints are separate from the purely surface-syntactic syntagmatic constraints, so that the latter can be applied as specifications in their own right (for instance, government patterns encoded in the syntactic zone of the lexicon; see Mel'cuk 1988: 69). 3.4.3 FUG in generation: COMET COMET is a generator of English that has been employed for the purpose of producing explanations for equipment maintenance (McKeown et al. 1990). A special feature of COMET is that it integrates text and graphics generation (Feiner and McKeown 1990). The representation of all generation resources in COMET is in FUF (Elhadad 1990), a multi-purpose formalism based on FUG. FUG in its original form (Kay 1979) is a formalism for describing grammars of natural languages. Similar to TAG (see Section 3.4.4), it allows the expression of the rather idiosyncratic syntactic constraints of a language. However, it is not a theory of grammar itself in that it does not subscribe to a particular grammar model. The grammar used for tactical generation in COMET is based on FUG and enhanced by some notions from SFG (Halliday 1985a). The generation question for COMET appears to be when to choose which grammatical FD (functional description), or rather PSFD (partial surface functional description)(see below). McKeown et al. (1990) explicitly note the two distinctive tasks in tactical generation: Conceptually, the grammaticalisation task of a natural language generator can be decomposed into two subtasks: (i) the expression of semantic content by grammatical devices (e.g., expression of a request by use of the interrogative mood), which we call semantic grammaticalisation; and (ii) the enforcement of syntax (e.g., subject-verb agreement) and building of syntactic structure. (McKeown et al. 1990: 124)
The input to tactical generation in COMET is an LF structure. Task 1 of tactical generation is performed by a 'lexical chooser' that maps pragmatic and semantic features on to features specifying the lexical items and the overall grammatical form of the utterance. The resulting structure is an FD, as commonly used in FUG; however, this FD is fully lexicalized and structured in terms of semantic roles, following
Computational Application: Grammar Models 75
predicate-argument structure. This is called a PSFD. The resource necessary for this enrichment is made available in the form of a system network motivated by SFG that contains information about transitivity, voice, mood, etc. It is in the system network that the mapping between semantic roles and syntactic relations is specified; more particularly, this depends on choices in transitivity and voice. For illustration, see Fig. 3.17, showing a fragment of paradigmatic grammar6 defining a 'system' that distinguishes between action, mental, equative and attributive processes. With these options realizations are associated, e.g. subject A agent. The condition under which the system and the realizations hold is given in the top of the definition: voice must be operative (voice operative). The purely syntactic constraints are handled by FUG proper, e.g. agreement is handled by attribute-value sharing over specified constituents (Agr = ), word order is handled by the FUGattribute PATTERN. Since COMET uniformly uses unification as its basic operation, tactical generation is simply unification of an input expression and ((voice operative) (verb ((voice active))) (alt (index on process-type) (((process-type action) (subject (~ agent)) (object (" medium)) (iobject (" benef))) ((process-type mental) (subject (~ processor)) (object (" phenomenon)) (iobject (~ none))) ((process-type equative) (subject (" identified)) (object (" identifier)) (iobject (" none))) ((process-type attributive) (subject (~ carrier)) (object (~ attribute)) (iobject (" n o n e ) ) ) ) ) ) Figure 3.17 FUG: a fragment of paradigmatic grammar (operative voice, transitivity) in COMET
76 Systemic Functional Grammar in Natural Language Generation the grammar, i.e. the structure resulting from generation is an FD plus grammatical information, such as constituency, category, syntactic function and syntactic constraints such as agreement. See the PSFDS in Figs. 3.18 and 3.19 for the two sample sentences. Having imported part of an SFG, the grammar's upper bound in terms of abstraction is rather high. The grammar incorporates some functional aspects and conceptually separates these from the purely idiosyncratic syntactic constraints. In terms of internal organization, it gives recognition to the paradigmatic axis, i.e. to the grammar as system, in explicitly formulating grammatical options as networks. Further, in terms of stratification, there is a distinction between semantics (semantic roles, such as Agent, Carrier) and syntax (syntactic functions are given explicitly; syntactic constraints, such as agreement and word order, are handled by FUG attributes). Generation in COMET is therefore first of all choice of one of several functional alternatives available in a particular context (as shown by the example of transitivity and operative voice in Fig. 317) and only in the second instance choice of a surface-syntactic structure. An (cat clause) (mood finite) (finite declarative) (tense present) (process-type action) (process ((cat verb) (lex 'devour') (Agr ((Number singular)(Person 3))) (voice-class non-middle) (transitive-class transitive))) (agent ((cat np) (head ((np-type proper) (lex ' K i m ' ) ) ) ) ) (medium ((cat np) (head ((np-type common) (number plural) (lex 'cookie'))) (determiner ((lex ' t h e ' ) ) ) ) ) (voice operative) (subject (" agent) (Agr = )) (object (" medium)) ( = <subject Agr>) (PATTERN ((* FOCUS) . . . ) ) ) (PATTERN ((SUBJECT VERB . . . ) ) ) (PATTERN ( ( . . . VERB OBJECT))))
Figure 3.18 PSFD (after unification with syntactic constraints) of 'Kim devours the cookies.'
Computational Application: Grammar Models 77 ((cat clause) (mood finite) (finite interrogative) (aux ((cat aux) (lex 'do') (Agr ((Number singular)(Person 3 ) ) ) ) ) (tense present) (process-type action) (process ((cat verb) (lex 'devour') (voice-class non-middle) (transitive-class transitive))) (agent ((cat np) (head ((np-type proper) (lex ' K i m ' ) ) ) (medium ((cat np) (head ((np-type wh-question) (lex ' w h a t ' ) ) ) ) (voice operative) (subject (" agent)(Agr = )) (object (" medium)) ( = <subject Agr>) (PATTERN ((* FOCUS) . . . ) ) ) (PATTERN ((SUBJECT AUX . . . ) ) ) (PATTERN ( ( . . . AUX OBJECT . . . ) ) ) (PATTERN ( ( . . . OBJECT VERB))))
Figure 3,19 PSFD (after unification with syntactic constraints) of 'What does Kim devour?' appropriate syntactic structure is produced by spelling out the functionally motivated constraints of the network (such as, with operative voice and a process-type action, the Agent occupies the grammatical function of Subject and the Medium that of Object) and the nonfunctionally motivated purely surface-syntactic constraints (e.g. agreement). The COMET version of FUG thus fulfils both of the tasks of a tactical generator in a perspicuous manner, accounting for both the functional perspective and the syntactic surface perspective. 3.4.4 TAG in generation: MUMBLE-86ISPOKESMAN MUMBLE-86 (Meteer et al. 1987) is a multiple purpose tactical generator inspired by Tree Adjoining Grammar (TAG) Qoshi 1987). A TAG is a tree generating system that is specified by a finite set of elementary structures in the form of initial trees and auxiliary trees. An initial tree, such as
78 Systemic Functional Grammar in Natural Language Generation (t1)
is a phrase structure tree corresponding to a single nonrecursive construct. An auxiliary tree, such as (t2)
is a phrase structure tree corresponding to a minimal recursive structure. Complex structures are obtained by derivation, i.e. they are obtained from the elementary trees by using the operation of adjunction (illustrated by Figs. 3.21 and 3.22).7 One of the outstanding features of TAG maintained to be potentially advantageous for generation is its notion of locality Qoshi 1987). Consider a PS-rule
(r3)
S -> NP + VP
with a local tree
(t3)
(r3) constitutes a domain of locality D (Joshi 1987: 242-3) in the sense that • constituency is specified over D; • syntactic constraints (such as agreement) can be specified on D; • functor-argument relations can be specified on D;
Computational Application: Grammar Models 79 • word order can be specified using D. A typical elementary tree (such as (t2)) of TAG has a domain of locality that is larger than that of (t3) because there is also information about some of the nonimmediate constituents of some root node. Thus, the syntactic constraints (such as agreement) can also be formulated over more nodes of a structure. It is more convenient, for instance, to state directly that V (an immediate constituent of VP) must agree with the NP that is immediately dominated by S, rather than having to see to the percolation of this information to the VP if the domain of locality is that of (t3). Furthermore, for functor-argument relations, in the clause, for example, V can be regarded as a function over the NP immediately dominated by S and the NP immediately dominated by VP. All these dependencies can therefore be expressed within one domain of locality in TAG - in a common PS-tree they have to be formulated across more than one domain of locality, requiring more complex means of representation (for example, percolation mechanisms). On the basis of this notion of enlarged locality, (TAG) defines a set of minimally complete syntactic structures that describe the syntax of a language. These definitions give recognition to those syntactic constructs that have all and only those parts that must coexist and that exhibit certain tight dependencies between their elements (agreement, functor-argument relations, etc.). Now, using a TAG for tactical generation, we have to ask the following question: when to choose which tree(s) from the set of elementary trees with which associated tree manipulation operate (s). Elementary trees are defined in MUMBLE-86 as phrases, such as define-phrase SVO (S V O) (clause :set-state (:aux-state initial) subject S :additional-labels (nominative) predicate VP verb V direct-object 0 :additional-labels (objective)))
The difference from a straight TAG tree (for instance, al in Fig. 3.21) is that here syntactic functions are given explicitly. Other additional information is given in the : additional-labels slot, e.g. case information.
80 Systemic Functional Grammar in Natural Language Generation The input to MUMBLE-86 is a so-called realization specification that consists of kernel specifications; these are roughly equivalent to elementary trees. The kernel specifications compose into larger units, the bundle specifications. Different types of such bundles may have different 'accessories' associated with them. For example, the input specifications for sample sentences 3.1 and 3.2 in MUMBLE-86 could look as displayed under (mumble 1) and (mumble2) in Fig. 3.20. From these, we have to arrive at tree structures (given in MUMBLE86 as 'position path notations') similar to the ones shown in Figs. 3.21 and 3.22. That is, first, an initial tree that fits the input (mumblel): Kim devours the cookies. DISCOURSE-UNIT head: GENERAL-CLAUSE head: realization-function: DEVOUR+TRANSITIVE-TWO-ARGS arguments: GENERAL-NP head: realization-function: PROPER-NAME arguments: "Kim" accessories: <determiner-policy NO-DETERMINER> GENERAL-NP head: realization-function: NP-COMMON-NOUN arguments: "cookie" accessories: <determiner-policy DEF-DETERMINER> accessories:
(mumble2): What does Kim devour? DISCOURSE-UNIT head: GENERAL-CLAUSE head: realization-function: DEVOUR+TRANSITIVE-TWO-ARGS arguments: GENERAL-NP head: realization-function: PROPER-NAME arguments: "Kim" accessories: <determiner-policy NO-DETERMINER> GENERAL-NP head: realization-function: NP-WH-PRONOUN accessories: <WH PATIENT>
Figure 3.20 Input to MUMBLE-86
Computational Application: Grammar Models 81 specification must be chosen. For an input like (mumble 1), this is a tree structure such as al; for an input like (mumble2), this is a tree structure such as a2. Second, an appropriate adjunction operation with an auxiliary tree (e.g., |31, (31' and (32) must be chosen to build up the structures 7! and y2. In processing an input such as (mumble 1), at the head position, a realization class is triggered. Here, the realization class is transit!ve-two-args. A number of subtypes are distinguished for this realization class (see (Meteer et al. 1987: 94) for a list of possible realization subclasses) according to different realizations in surface-syntactic structure, e.g. whether the clause to be realized is a relative clause or not, whether it is a wh-question or a declarative. The input itself specifies the realization subclass it requires in the
Figure 3.21 A TAG representation of kim devour the cookies
Figure 3.22 A TAG representation of 'What does Kim devour?'
82 Systemic Functional Grammar in Natural Language Generation accessories attribute. For the input (mumble 1), the value of accessories is unmarked, so a realization class resulting in an initial tree as displayed in al in Fig. 3.21 is chosen. For the input (mumble2), with the value of accessories <wh pat lent >, a different realization class is chosen that matches this requirement, resulting in an initial tree representing an inverted structure as shown in Fig. 3.22. During further processing, attachment points for auxiliary trees (adjunction) are determined (e.g. for auxiliaries or negation particles), and the structure is completed successively for each node in the tree, syntactic constraints being spelled out as they are given in the definition of elementary trees (such as case assignment under the additional-labels attribute).8 An input specification as provided for MUMBLE-86 can thus quite straightforwardly be mapped on to an elementary TAG tree. However, this kind of input specification is already rather surfacesyntactically oriented. The MUMBLE-86 input is, for instance, less abstract than the SPL input shown for PENMAN. An SPL expression such as (sp!3) (d / devour
:actor (p / person :name Kim) ractee (c / cookie :identifiability-q identifiable :number plural) potentially allows a congruent interpretation as Kim devours the cookies and an incongruent one as Kim's devouring (of) the cookies. The same is true of a SemR as used in Gossip. Further, <wh pat lent > is already an interpretation of semantic information such as that the speechact is a question and that the inquired element (wh-element) is the patient. Of course, this is also included in (sp!2) in Section 3.4.1, but the information : speechact question does not necessarily fully determine the mood of a clause to be generated to be interrogative, whereas the only syntactic interpretation <wh patient> appears to allow in MUMBLE-86 is an inverted constituency structure. The problem with TAG-like grammars for generation in general is that a TAG only deals with spelling out surface-syntactic constraints. It is thus suitable for dealing with task 2 of tactical generation only. This is generally acknowledged by proponents of TAG:
Computational Application: Grammar Models 83 Any system that attempts to use the TAG formalism as a substrate upon which to build a generation component must devise some mechanism by which a TAG can articulate appropriately with semantic information. (Shieber and Schabes 1991: 220) The problems we have noted above are therefore rather specific to MUMBLE-86. If there is a mechanism that relates semantic information to a TAG, these problems can be alleviated, but TAG itself does not offer such a mechanism. In fact, problems like the ones just noted have made Meteer formulate the generation gap problem in later work (Meteer 1991). Her suggestion is to introduce an intermediary level of representation, the Text Structure, between the text planning resources and grammar. This approach is realized in the SPOKESMAN) system, a general purpose generator that has been used for a number of different application programs (see Meteer 1992). The main advantage of the Text Structure is that it does not have to be realized by a clause, but potentially admits any kind of grammatical unit as its grammatical realization. The Text Structure takes the form of a tree in which each node represents a constituent in the utterance being planned. Each node contains information about semantic category, such as Thing, Event, Action (Jackendoff 1983), possibly a lexical item, information about the relation of a node to its parent and children, and a pointer to the object in the application program that the constituent expresses (Meteer 1992: 79). Similar to the organization at the syntactic level, there is a resource of minimal Text Structure trees (resource trees) that can be expanded and extended (operations similar to substitution and adjunction in TAG). A Text Structure representation for sample 3.1 could look as follows:
84 Systemic Functional Grammar in Natural Language Generation From a Text Structure tree such as this, a bundle is created (see Fig. 3.20): the matrix node contributes the head of the bundle, a headargument contributes a kernel where the head builds the realization function and the leaves build the arguments. Potentially, a Text Structure allows different syntactic interpretations - similar to an SPL expression (stripped of textual and interpersonal information as (sp!3)) or a SemR (stripped of themerheme information). Using a Text Structure as intermediary representation level is one possibility to gain more flexibility in the mapping from a text plan to a TAG-like grammar. What remains unclarified in Meteer's approach, however, is how information other than the ideational can be accommodated. A different solution with a similar effect, but functionally broader, is shown by Yang et al. (1991). To enable TAG to relate to semantic information, Yang et al. (1991) introduce an intermediary level of linguistic representation at which the nonlinguistic categories encoding a communicative situation can be matched to syntactic categories. For providing this intermediary level, Yang et al. (1991) combine TAG with the functionally motivated SFG (more particularly, the system network part). In this combined SFG-TAG generator, both functional linguistic information and surface-syntactic information are organized in a uniform way as networks. There is a TAG network containing the possible tree structures and there is a systemic network containing the functional grammatical options. The choice of a tree is mediated by the functional network: the selection of a tree is determined by the choice in the TAG network, which is itself determined by the traversal of the functional network. The realization operations normally used in SFG have been translated into corresponding tree composition operations so that the system network is free from any surface-syntactic specification.9 This is handled by the TAG-style syntactic description exclusively, resulting in a generation model that separates tasks 1 and 2 of tactical generation. To summarize briefly: only if tree structures as they are specified by TAG are related to a more abstract, semantically motivated level of information is there a sufficient basis for choosing one over another, as linguistic realization in the process of generation and sufficient flexibility is given in this mapping. Similar problems to the ones we noted here with TAG can be expected to arise in generation with other surface-syntax oriented grammar approaches. TAG'S internal organization, however, has the above-mentioned advantage of enlarged domains of locality that are suitable for connecting to a
Computational Application: Grammar Models 85 more abstract, functionally oriented component that can handle task 1 of tactical generation, as exemplified by Meteer (1992) and Yang et al (1991). 3.5 Summary: A Critical Appraisal of SFG for Natural Language Generation We have presented a small selection of approaches that have been used as models for the design of tactical generation components, ranging from strongly surface-syntactically oriented approaches (such as TAG as employed in MUMBLE-86) and general-purpose formalisms that are not committed to a particular theory (such as FUG as employed in COMET) to strongly linguistically inspired approaches (such as MTT and SFL as employed in Gossip and PENMAN, respectively). What all four approaches we have discussed have in common is the recognition of the need for abstract, functionally motivated grammatical information, on the one hand, and surfacesyntactic grammatical information, on the other. This is typically realized in tactical generators in making available two kinds of resources, possibly separate, and possibly on two different strata or axes: a functionally motivated grammatical resource and a surface-syntactically oriented resource (see Fig. 3.23 for
Figure 3.23 Resources required in tactical generation
86 Systemic Functional Grammar in Natural Language Generation illustration). The former is the kind of resource needed for task 1 of tactical generation, and the latter is the kind needed for task 2. The four systems we have discussed exhibit rather different strengths and weaknesses with regard to the tasks they have to fulfil. In general, there are two problem areas: either the generation gap appears as a problem or the syntagmatic gap is a problem. The former is likely to arise when a tactical generator is not developed with a view to the other necessary resources of the generation system (e.g. text planning resources). This is the case, for instance, with the approach of Meteer et al. (1987). This kind of deficiency can be alleviated in providing an intermediate level of representation, as suggested by Meteer (1992) or Yang et al. (1991). Meteer's own proposal for a solution to this problem (Meteer 1992), however, covers mainly one aspect of the generation gap, namely understratification, and is concerned with ideational (i.e. prepositional) knowledge only. But only part of the answer lies in stratification. Let us consider MTT-based generation again. MTT offers a highly stratified model of grammar. But if we consider an MTM for application in generation, we find that it does not encode the full functional diversity needed for specifying the relation of a semantic input and its grammatical expression (see again the example of speechact and its relation to mood discussed in Section 3.4.2). MTT, even though it uses a highly stratified model with a rather high upper abstraction bound, is functionally not broad enough. This shows an additional aspect of the generation gap: functional under-diversification. Functionally broader and stratified are the approaches of McKeown et al. (1990) and Yang et al. (1991). Moreover, here we see the response to another aspect of the generation gap problem: the recognition of the two complementary axes of linguistic patterning, the paradigmatic and the syntagmatic. Usually, only the latter kind of relations is recognized. But crucially, it is the former kind of organization that supports choice in generation. Note that even though paradigmatic organization and the notion of resource are generally not explicitly acknowledged, in response to task 1 of tactical generation the organization of linguistic information is often implicitly paradigmatic in one or the other way: resource trees in Meteer et al. (1987), a system network in Yang et al. (1991), the systemically inspired functional resource in McKeown et al. (1990). This is because the paradigmatic axis accounts for the possibilities of linguistic expression according to different contexts, thus relating grammar to its function in context. If at the level of grammar only
Computational Application: Grammar Models 87
syntagmatic relations (e.g. in the form of constituency) are specified, the gap between it and the higher levels of linguistic organization will be unnecessarily wide. In the PENMAN system, the underlying model is both highly stratified and functionally broad, and axiality is recognized. The input to tactical generation, SPL expressions, allows the accommodation of semantic prepositional knowledge and interpersonal and textual information. This kind of input only has to be responsive to the functionally motivated, paradigmatic categories of the grammar, but not to the syntagmatic ones that realize the functional ones. Further, the grammar itself has systematic knowledge about ideational, interpersonal and textual linguistic patterns. However, here the deficit is a syntagmatic gap, which we have noted for SFG in the discussion of SFL theory in Chapter 2 already - a problem which none of the other approaches to tactical generation we have discussed seems to suffer from, because there the surface-syntactic generalization is built into the grammar theories that underlie them. For example, MTT explicitly distinguishes between those syntactic constraints that are semantically derivable and those that are not. Similarly, in FUG, general syntactic constraints are covered by FUG attributes, such as AGR. And TAG exhibits the special feature of enlarged domains of locality that allow the expression of a maximally specified syntactic structure at any one point in generation. SFG has none of these characteristics - and consequently PENMAN shows a gap here. What is considered another deficit of the PENMAN implementation - which is not in the focus of interest here, but should be mentioned again at this point - is that computational representation has not been dealt with explicitly and as an issue in its own right, in contrast to TAG or FUG which have been explicitly designed for computational purposes. In PENMAN, as Bateman et al. (1994) note, the metastratum of computational representation is collapsed with the metastratum of implementation so that the boundary between the two is obscured (see Chapter 1). First attempts at providing SFG with explicit realization in terms of computational representation are due to Kasper (1988, 1989b), O'Donnell and Kasper (1991) and Mellish (1988). Recently, there have been attempts to use typed feature structure systems (Emele and Zajac 1990; Carpenter and Penn 1994; Dorre et al. 1996) for the computational representation of SFG, or, more particularly, of the NIGEL grammar, by Bateman et al. (1992) and Henschel (1994, 1995).
88 Systemic Functional Grammar in Natural Language Generation 3.6 Outlook
The goal of this chapter has been to illustrate the application of systemic functional theory for computational purposes, here in natural language generation. From the perspective of generation, Systemic Functional Linguistics offers a number of theoretical constructs that are suggestive of the kinds of resources needed in generation and their organization (see Section 3.4.1). From the perspective of systemic functional theory, application offers the opportunity of theoretical refinement based on the insights gained from the application. This is in line with the practice of SFL (see the quotation from Halliday (1985b) at the beginning of this chapter): creating an instance of the theory (often partial, focusing on one aspect or other) usually requires concretization; and this, if fed back to the theory, refines the theory in a particular area of concern. In the remainder of this book we will be concerned with one such theoretical refinement of SFG that is desirable from the point of view of computational application. The observations made here about SFG in generation, comparing it to a selection of other approaches, are supported by the more concrete experience of implementing a fragment of the grammar of German in NiGEL-style SFG. Here, it has been found that it is difficult to provide generalizations in the kinds of information that are typically needed for task 2 of tactical generation (the spelling out of syntactic constraints). While it is essentially advantageous in the context of generation that all syntactic constraints can only be applied given appropriate paradigmatic feature selections (Bateman 1992a), this becomes an impediment when it comes to phenomena that are not completely functionally motivated (e.g. agreement, government, subcategorization). The following chapter will provide ample evidence of this problem using examples from an implementation of the grammar of German in the NiGEL-style. What is called for to deal with this problem are linguistic concepts that are capable of accommodating the rather surface-syntactically motivated grammatical constraints and a method of formulating these constraints independently of the system network of functionally oriented paradigmatic relations. One conceivable answer lies in the logical metafunction and that the status of logical relations in SFG has to be reconsidered and elaborated. However, in the context of NL generation it is crucial not to compromise the potential strengths of SFGs, such as the essentially functional motivation of categories, stratification, the recognition of
Computational Application: Grammar Models 89 two axes of linguistic patterning. Integrating these two views in one model will be our concern in the remainder of this book. Notes 1. Reproduced from Matthiessen and Bateman (1991: 9). 2. Example 3.1 is a slightly modified version of an example given by Pollard and Sag (1987). 3. The systems in Fig. 3.3, 3.4, and 3.5 are all taken from the NIGEL grammar of English (see Matthiessen 1992). The syntax of the NIGELinternal representation of systems is as follows: : name specifies the system's name; : inputs gives the entry conditions; : outputs specifies the system's features; features may have realization statements attached (e.g. INSERT, CONFLATE, PRESELECT); : region specifies the grammar region, and r m e t a f u n c t i o n the metafunction the system belongs to. 4. Three kinds of keywords are defined for SPL: Upper Model or domain concepts (e.g. devour, actor), inquiries (e.g. : identifiability-q identifiable) and macros, which consist of a set of inquiries (e.g. : speechact question, : number plural, :theme p). 5. A generalized lexeme can be a full lexeme, a fictitious lexeme, a multilexemic unit or a lexical function (LF). For details see Mel'cuk (1988). 6. Reproduced from McKeown et al. (1990: 119); alt marks a disjunction of features, A corresponds to conflation of elements of structure. 7. MUMBLE-86 uses a similar operation called attachment. 8. This is a somewhat simplified description of the process of building up a tree in MUMBLE-86; for more details see (Meteer et al. 1987: 94-5). 9. This is not possible in Hallidayan SFG or in NIGEL. We will see, however, that a stricter separation of paradigmatic options and syntactic features is desirable from a linguistic-theoretical point of view and can be effected with alternative means of linguistic and computational representation (see Chapters 4 and 5).
4 Description: A Systemic Functional Grammar of German for Natural Language Generation
An average sentence, in a German newspaper, is a sublime and impressive curiosity; it occupies a quarter of a column; . . . it treats of fourteen or fifteen different subjects, each enclosed in a parenthesis of its own . . . finally, all the parentheses and reparentheses are massed together, one of which is placed in the first line of the majestic sentence and the other in the middle of the last line of it - after which comes the VERB and you find out for the first time what the man has been talking about; and after the verb - merely by way of ornament, as far as I can make out - the writer shovels in haben sind gewesen gehabt haben geworden sein, or words to that effect, and the monument is finished. (Mark Twain, The Awful German Language)l
4.1 Introduction The present chapter provides a description of a fragment of the grammar of German in NiGEL-style, the KOMET grammar (Teich 1992; Grote 1994), which is an integral part of the KOMET-PENMAN system for multilingual text generation (Teich et al. 1996; Bateman and Teich 1995).2 This grammar fragment of German presented here helps to pursue two goals: • To illustrate the way in which a computational grammar can be developed taking the transfer comparison strategy (Halliday et al 1964) (see below). • To characterize more concretely the problem of syntagmatic under-specification due to a lack of representation of logical or dependency relations in SFG and its computational realization in a NiGEL-style grammar. In developing the grammar of German for NL generation the already existing grammar of English, NIGEL, has been drawn on. Proceeding in this way is motivated by a strategy for the comparison
A Systemic Functional Grammar of German 91 of linguistic systems suggested by (Halliday et al. 1964: 120): There is a special method for comparing the grammar of languages which differs somewhat from ordinary comparative descriptive; this is known as 'transfer comparison'. Comparison in the normal way brings together two languages which have been separately and independently described, with the categories appropriate to each; such comparison is therefore neutral, as it were, and gives equal weight to the languages concerned. In transfer comparison, on the other hand, one starts from the description of one language and then describes the second language in terms of the categories set up for the first.
The present description of German is therefore contrastive to the NIGEL grammar in many places. Only for some areas of the grammar has the transfer comparison not been applied. This is notably the area of transitivity, where we have drawn on a Fawcett-type classification for German proposed by Steiner and Reuther (1989). The main source of description of grammatical data has been Helbig and Buscha's grammar of German (Helbig and Buscha 1988) and the 'Akademiegrammatik' (Heidolph et al. 1980). The ultimate goal of presenting this description is to illustrate the hypothesis that SFG and its computational realization in the PENMAN system have neglected one major aspect of linguistic organization, and that this prevents a number of generalizations that are 'built in' in other grammar models employed in computational linguistics. This concerns logical (or dependency) relations and their representation, both on the paradigmatic and on the syntagmatic axes. So far, the problem has only been sketched in the discussion of SFL theory in Chapter 2 and mentioned again in the comparison of tactical generation approaches in Chapter 3. In the course of the present grammar description, the aim will be to uncover the linguistic phenomena for which the NiGEL-style representation has turned out not to be sufficient or unsatisfactory due to this neglect. This twofold goal is reflected in the textual organization of the present chapter, in that there are separate purely descriptive sections and sections characterizing the representational problems. The descriptive parts are structured in the following way: the two organizing principles are rank and region. We provide a short description of every region of a rank and give at least one example of each, also referring and comparing them to corresponding NIGEL regions. For the examples we will provide either English glosses or English interlinear translations. The examples used in this chapter are partly taken from real text,
92 Systemic Functional Grammar in Natural Language Generation both from the domains we have worked on in the KOMET project (Bateman et al. 199la) (economic reports, short informational texts about common diseases, short biographical texts) and from literary text; other examples are constructed for purposes of illustration. For an initial overview, see Tables 4.1 and 4.2 showing the regions of the NIGEL and KOMET grammars, respectively. 4.2 A Fragment of a Computational Systemic Functional Grammar of German
4.2.1 Rank and ranking
The ranking region contains the RANK system and all its subsystems (see Fig. 4.1). The RANK system is the first to be entered when the ideational class
logical
clause
interpersonal
experiential TENSE
TRANSITIVITY CIRCUMSTANCE
g ro u p s / p E r a s e s
prep
nom
>iX
POLARITY
CULMINATION
ATTITUDE
VOICE
MODALITY
CONJUNCTION
DETERMINATION
MINOR TRANSITIVITY CLASSIF.
NOMINAL-TYPE
PERSON
EPITHET
ATTITUDE
Q_
QUALIFICATION
2
adj
MOOD, TAG THEME
_l
LU
o o
textual
SELECTION MODIF.
QUALITY-TYPE
quant
MODIF.
QUANTITY-TYPE
adv
MODIF.
CIRCUMSTANTIAL
COMMENT
CIRCUMSTANTIAL-
COMMENT- CONJUNCTIVETYPE TYPE
TYPE complexes
CONJUNCTIVE
simplexes
Table 4.1 Functional regions of the grammar (English) cross-classified by metafunction and rank
A Systemic Functional Grammar of German 93 interpersonal
ideational class
textual
experiential
logical
clause
MOOD
TRANSITIVITY CIRCUMSTANCE
THEME DIATHESIS
TENSE OBJECT-INSERTION
ps/phrases
prep
nom
X LLJ
MINOR TRANSITIVITY CLASSIF. NOUN-TYPE
PERSON
_J
EPITHET
O.
QUALIFICATION
^.
SELECTION
O
NUMERATION
0
0 D)
> l-
adj
MODIF.
DETERMINATION
ATTITUDE
QUALITY-TYPE ADJECTIVALCOMPARISON
MODIF.
adv
ADVERBIALTYPE
complexes
simplexes
Table 4.2 Functional regions of the grammar (German) cross-classified by metafunction and rank)
Figure 4.1 The top system of the grammar: RANK
94 Systemic Functional Grammar in Natural Language Generation grammar is accessed during the generation process. What is determined here is the type of unit that is to be generated in that particular cycle through the grammar.3 Choosing, for example, the feature [clause] in the RANK system makes available all the relevant features of the clause that are specified by the grammar, e.g. the regions clause-complexity, transitivity, object-insertion, circumstantial, diathesis, theme, mood and tense (see Sections 4.2.3.1-4.2.3.8 below). The feature [group/phrase] is the input condition for a system that distinguishes between prepositional phrases, on the one hand, and types of groups, such as nominal groups or adverbial groups, on the other. Word and morpheme ranks as given in the theory of SFG are not included in the KOMET grammar. As a substitute, the NIGEL word class hierarchy is used, which provides a classification of lexemes in terms of their lexical and inflectional features. For an example see the lexical and inflectional features of finite, lexical verbs in Fig. 4.2; crossclassifying categories are given in capitals.5 The lexicon is then a collection of instances of this word class hierarchy, where the entries are word forms (i.e. the lexicon is a full-form lexicon).
Figure 4.2 Lexical and inflectional features of verbs in the word class hierarchy
Description: A Systemic Functional Grammar of German 95
The ranking systems contain features that stand for the basic constituency patterns assumed in SFG, and they encode the basic paradigmatic types for which disjoint sets of grammatical features hold. Units may be 'rankshifted' according to Halliday's definition of rankshift as a unit of some rank being embedded in another unit of the same rank or a lower rank. In German, for example, Qualifiers of the nominal group (NG) are rankshifted, being embedded in the NG and realized as prepositional phrases (PP), relative clauses or participles in pre-head position. Section 4.2.5 gives some examples of rankshift in the nominal group. Prepositional phrases are always rankshifted because they are on the same rank as the NG: a PP's Minirange is an NG that always appears embedded. 4.2.2 Problems with rank When the basic constituency organization encoded in the rank hypothesis is compared with phrase structure approaches, such as X-bar syntax (Jackendoff 1977), the general problem of SFG becomes already obvious. It is partly the rank scale together with the polysystemicity principle that prevents the kinds of generalizations that are possible in, say, X-bar syntax. The desirable generalization is one across different types of phrases (in systemic terms, units of different ranks) on the one hand, and about their internal make-up, on the other (in systemic terms, units of higher ranks: clause, verbal group, nominal group, prepositional phrase, etc.). For instance, in SFG one cannot say that, whatever category any two units have (nominal, verbal, prepositional), they may share certain attributes - a generalization that would be desirable for expressing that all phrasal units exhibit subcategorization of some kind. With the rank scale, these properties cannot be represented as common because it diversifies units according to syntactic category. Section 4.2.4 demonstrates that because of this, properties such as subcategorization have to be expressed separately for each unit of some rank by preselection statements which are spread over the system network, instead of being specified locally for a category 'phrasal unit' that is a superclass of all units of higher ranks. Syntactic generalizations are not only impossible over different units of the rank scale; neither can they be expressed over the internal make-up of phrasal units (e.g. the general constraints between a higher-ranking unit and its (lexical) head). As we have seen in the preceding section, the lexicon in NiGEL-style grammars is organized
96 Systemic Functional Grammar in Natural Language Generation as a word class hierarchy, which is in principle a substitute for word and morpheme ranks. Here, again, the polysystemicity principle holds: attributes of the word class hierarchy are not allowed to hold also for units higher on the rank scale (all phrasal categories: clause.
Figure 4.3 The relation between clause and lexical verb
A Systemic Functional Grammar of German 97 nominal group, prepositional phrase etc.). However, there is a dependency relation between phrasal units and the units expounding their elements of structure. This relation is expressed by classify and inflectify statements which correspond to preselection between phrasal units on the rank scale (see Fig. 4.3 for illustration). Here, too, the classify and inflectify statements are spread over the system network - when a generalization about this relation is potentially possible. What the preselect, classify and inflectify realization statements implicitly encode is the notion of higherranking units being 'projected' by their heads. There is a particular subset of these features that are always preselected (classified, inflectified), which suggests that these are the features that are the necessary conditions for a well-formed syntagmatic structure. As will be seen in the following sections, the problem just briefly sketched is not only due to the rank scale, but points to the general nonacknowledgement of dependency relations of this kind in SFG. The principal regions of the grammar of German will now be discussed, and following the descriptive account of each rank, the problem cases that arise from this lack of explicit dependency information will be pointed out.
4.2.3 The clause: description
The top distinction on clause rank is the CLAUSE-COMPLEXITY distinction between simplex clauses and complex clauses (Halliday 1985a: 192-248) (see Fig. 4.4). Simplex clauses cover both independent and dependent clauses; complex clauses always consist of two or more coordinated clauses, or an independent and a dependent clause (which are paratactically or hypotactically related). 4.2.3.1 Clause—complexity The major types of complexes in German are as specified in Halliday (1985a) and as implemented in NIGEL (see Fig. 4.5). For illustration, let us look at sample text 4.1.6
Figure 4.4 The system of CLAUSE-COMPLEXITY
98 Systemic Functional Grammar in Natural Language Generation
Figure 4.5 The systems of TAXIS and TYPE OF INTERDEPENDENCE (4.1)
(4.1.1) Zuriickgekehrtj sah sie zum Fenster hinaus, (4.1.2) im Siidwesten stand noch Helligkeit iiber dem Horizont, ein letzter Schein, der die Ebene dammern liess, (4.1.3a) der Regen hatte aufgehort, (4.1.3b) aber es musste kalt sein, (4.1.3c) denn Franziska war gezwungen, mit ihren Handschuhen von Zeit zu Zeit die Scheibe abzuwischen, die sich immer neu beschlug.7
(4.1.1) to (4.1.3b) are paratactically related and characterized by the TYPE-OF-INTERDEPENDENCE feature [elaborating]. Within (4.1.3) there is the relation of [paratactic&extending] between (4.1.3a) and (4.1.3b), being marked by the conjunction 'aber' ('but'), and there is a [paratactic&enhancing] relation between (4.1.3b) and (4.1.3c), being marked by the conjunction 'denn' ('since').8 The syntagmatic structure of sample text 4.1 is displayed in Fig. 4.6.
Figure 4.6 Logical structure of sample text 4.1
A Systemic Functional Grammar of German 99
Elaborating clauses can also be hypotactically related to the clause they depend on. This is a phenomenon known in the grammar of German as 'weiterfuhrende Nebensatze' (Helbig and Buscha 1988: 672-5) (see example 4.2). These have to be distinguished from appositions which are equally optional, can be omitted, exhibit reference identity with the antecedent, are reduced copula clauses and show case identity with the antecedent. However, appositions are defined in descriptive grammars of German as attributes of nouns only. (4.2)
(4.2.1) Im Geschaftsjahr 1986/87 setzten die 40 Kasehandelsfirmen der Schweizerischen Kaseunion insgesamt 79035 t Ka'se ab, (4.2.2) was einem Riickgang um 2,6 Prozent entspricht.9
In terms of discourse semantics, the decisions underlying the complexity and interdependence options in the grammar are decisions in rhetorical text organization. In example 4.2, the kind of discourse structure relation in terms of RST (rhetorical structure theory; Mann and Thompson 1987) between (4.2.1) and (4.2.2) is the interpretation subtype of the elaboration relation. In the grammar, the features [hypotactic&elaborating] are chosen. Here, a constituent Event is inserted and preselected to be a dependent elaborating clause. In the second cycle through the grammar the structure of the Event constituent is built up. Dependent clauses share most of the information of independent clauses (except for word order, hypotactically dependent clauses having an SOV order, whereas independent clauses have an SVO order). The relative pronoun in this case must be 'was', which is classified in the grammar as [proposition-antecedent-relative]. 'Was' does not carry any agreement features; it has case, however, which it is assigned according to the process type and according to which participant it realizes in the dependent clause. The alternative feature to [expansion], the [projection] option, in combination with features from the TAXIS system, is to cover direct and reported speech. In other grammar theories projection is treated as sentential complementation, where the sentential complements can be either finite (e.g. that-clauses) or non-finite (e.g. raising and control constructions). Treatments of these phenomena deal with the syntactic and semantic relations holding between a matrix clause (in systemic terms, the projecting clause) and a constituent clause (the projected clause) (see e.g. Chomsky 1981; Bresnan 1982).
100 Systemic Functional Grammar in Natural Language Generation 4.2.3.2 Transitivity Transitivity refers to the predicate types of a language and the participant roles with which they combine. The basis of the TRANSITIVITY systems in the German grammar described here is Fawcett's interpretation of transitivity (Fawcett 1980) applied to German by (Steiner and Reuther 1989). The least delicate system distinguishes between [action], [relational], [mental] and [communicative] processes (see Fig. 4.7), a distinction that the Hallidayan transitivity system also shows. For an illustration of the 'classify' relation between the Process function and the lexical verb realizing it see Fig. 4.7, which gives the classify realization statement with each of the transitivity options; in subsequent figures, these will not be included, but note that each subtype of the primary transitivity type restricts the type of lexical verb further (e.g. [agentcentred]: (classify Process effective-verb)). The difference from the Hallidayan transitivity analysis lies in the nonexistence of an AGENCY system. In the Fawcett analysis of transitivity (Fawcett 1980), agency (i.e. the potential causation of processes) is included in the PROCESS TYPE system. The Hallidayan AGENCY system serves to account for the two aspects of the transitivity behaviour of English as ergative and transitive language. This is not a characteristic of the German transitivity system, which is much less flexible in this respect. In general, the range of participant roles that can be Agents in German is much smaller than in English. Another reason why the treatment of transitivity in the
Figure 4.7 The system of TRANSITIVITY - primary delicacy
A Systemic Functional Grammar of German 101 NIGEL material
KOMET action
middle ranged
agent-centered
They played tennis
Sie spielten Tennis.
nonranged
affected-centered (- Affected) Die Tur offnete sich.
The door opened.
effective
relational
associative (+ Third-party-agent)
ranged He gave the door a kick.
nonranged
Ergab tier Tureinen Tritt.
action
She opened the door.
mental
middle
Sie offnete die Tur.
mental ranged
processor-oriented (+ Phenomenon) Sie mag Musik. processor-oriented (- Phenomenon)
She likes music. nonranged She suffers.
verbal
middle
Sie leidet.
communicative ranged
message-oriented (+ Message) Sie sagte nichts. message-oriented (- Message)
She said nothing.
nonranged She talked. relational
affected-centred (+ Affected)
existential There is a rumour...
Sie redete.
relational
existential Es gibt ein Gerucht ...
middle ranged Attribuend + Attribute She is very happy.
classificatory Sie ist sehr glucklich.
effective ranged Attributor + Attribuend + Attribute
classificatory (+ Third-party-agent) Er machte sie glucklich.
He made her happy.
Table 4.3 Major differences between NIGEL and KOMET transitivity
102 Systemic Functional Grammar in Natural Language Generation German grammar is different from the one implemented in NIGEL is case marking in German and its overt morphological realization: structural case and its assignment for German is motivated to some extent by transitivity type (see example 4.9 on p. 106 for [mentaliprocessor-oriented] versus [mental:phenomenon-oriented]). Where, for example, Halliday distinguishes between They played tennis and She opened the door as [material&middleiranged] and [material&effective] respectively, for German we do not distinguish between the two in terms of transitivity. Both simply take as second role an Affected which is realized in both cases as Direct Object (in active clauses), which is marked for accusative case. Of course, case is not assigned uniquely on the grounds of transitivity type. In many cases it is rather idiosyncratic, i.e. lexically determined, especially with verbs that govern genitives, and in other cases it is in addition, and more regularly so, determined by diathesis (see Section 4.2.3.4). Further, the [ranged] option in Hallidayan transitivity is not filtered out as a general option for all process types, but is represented by more delicate options of those subtypes of the PROCESS-TYPE system for which it applies as an option. See Table 4.3 for a presentation of the major differences between the NIGEL and KOMET transitivity. The ACTION-TYPE system of the German grammar has five subtypes: [agent-only], [affected-centred], [agent-centred], [affected-only] and [natural-phenomenon] (see Fig. 4.8). The difference between [agent-centred] and [affected-centred] mainly lies in the difference in mutual constraints with diathesis: affectedcentred processes can potentially undergo the decausativization
Figure 4.8 The system of ACTION TYPE
A Systemic Functional Grammar of German 103
alternation; agent-centred processes can undergo the detransitivization alternation (see Section 4.2.3.4).10 See example 4.3 for an illustration of the agent-centred-affected-centred distinction. (4.3)
Luis fuhrt die Expedition. (Luis leads the expedition.) Hans bewegt das Pferd. (Hans moves the horse.)
Sample analysis • Agent-centred Luis
fuhrt
die Expedition,
(Luis
leads
the expedition.)
Diathesis: activization, nonrelation-changing Theme (unmarked)
Agent Subject
Process
Affected
Finite
Direct Object
Hans
bewegt
das Pferd.
(Hans
moves
the horse.)
• Affected-centred
Diathesis: activization, nonrelation-changing Theme (unmarked)
Causer Subject
Process
Affected
Finite
Direct Object
The RELATIONAL-TYPE system has five subtypes: [existensial]; [locational]; [identifying]; [classification] and [associative] (see Fig. 4.9). We illustrate relational transitivity by the [locational] subtype. In general, there can be a certain tension between an action and a relational interpretation of a process, so that a clause: Wir steigen auf den Berg (We climb on to the mountain) can be interpreted as [action]
104 Systemic Functional Grammar in Natural Language Generation
Figure 4.9 The RELATIONAL TYPE system (Agent, Location-Circumstantial) or as [relational] (Attribuant, Location-Participant). We generally prefer the relational interpretation on the following grounds: • Relative obligatoriness of the Location role. • The direct opposition to clauses such as Wir ersteigen den Berg or Wir besteigen den Berg (We climb the mountain), where the Location role is strictly obligatory. In German, this is a way of expressing a locational relation where the relation is partly encoded in the prefix of the verb. The following sample analysis illustrates this distinction: (4.4)
Wir steigen auf den Berg. (We climb on to the mountain.) Wir ersteigen den Berg. (We climb the mountain.)
A Systemic Functional Grammar of German 105 Sample analysis • Noncentralized-location Wir
steigen
auf den Berg.
(We
climb
on to the mountain.)
Diathesis: activization, nonrelation-changing Theme (unmarked) Attribuant
Process
Location
Subject
Finite
Prepositional Object
Wir
ersteigen
den Berg,
(We
climb
the mountain.)
• Centralized-location
Diathesis: activization, nonrelation-changing Theme (unmarked) Attribuant
Process
Subject
Finite
Location Direct Object
The function structure of both of these sentences is (Attribuant Process Location). In the first sentence the locational relation is said to be noncentralized, which is formally reflected in the realization of the role Location as a Prepositional Object. In the second sentence the locational relation is said to be centralized, which is formally reflected in two ways: the relation can be seen as being encoded in the prefix er- of the verb ersteigen and in the realization the Location takes, i.e. being realized as Direct Object in accusative case. This is in fact a very productive pattern in German, going across process types where a relation is involved, e.g. 'ersteigen' versus 'steigen
106 Systemic Functional Grammar in Natural Language Generation auf, 'erklettern' versus 'klettern auf, 'erarbeiten' versus 'arbeiten an'. Another feature that distinguishes action and relational processes is that relational processes can potentially have a Third-party-agent, i.e. an additional participant who brings the action or relation about. For example: (4.5)
Wir steigen auf das Matterhorn. (We climb on the Matterhorn.)
(4.6)
Luis Trenker geleitet uns auf das Matterhorn. (Luis Trenker leads us on to the Matterhorn.)
(4.7)
Das Buch liegt auf dem Tisch. (The book is lying on the table.)
(4.8)
Peter legt das Buch auf den Tisch. (Peter puts the book on the table.)
In 4.6, 'Luis Trenker' is Third-party-agent; in 4.8, 'Peter' is Thirdparty-agent. The lexical verb in these examples is different according to whether there is a Third-party-agent or not, but the pattern is general for all subtypes of relational process. Underlying this interpretation is a view that focuses more on the result of an action that a causer (Third-party-agent) brings about, rather than on the action as 'going-on'.11 The MENTAL-TYPE system has five subtypes: [processor-oriented], [processor-only], [phenomenon-oriented], [two-phenomena], [phenomenon-only] with the roles of Processor and Phenomenon (see Fig. 4.10). The difference between [processor-oriented] and [phenomenon-oriented] lies in the conflation of participant roles with syntactic functions. With [phenomenon-oriented] the Processor is conflated with the Indirect Object, with [processororiented] the Processor is conflated with the Subject. The Phenomenon can be realized as a nominal-group, or as a clause, or as an infinitival verbal-group. (4.9)
Hans gefallt das Buch. (The book pleases Hans.) Hans mag das Buch. (Hans likes the book.)
A Systemic Functional Grammar of German 107
Figure 4.10 The MENTAL TYPE system Sample analysis • Phenomenon-oriented Hans
gefallt
das Buch.
(The book
pleases
Hans.)
Diathesis: activization, nonrelation-changing Theme (unmarked) Processor
Process
Phenomenon
Indirect Object
Finite
Subject
108 Systemic Functional Grammar in Natural Language Generation • Processor-oriented Hans
mag
das Buch.
(Hans
likes
the book.)
Diathesis: activization, nonrelation-changing Theme (unmarked) Processor
Process
Phenomenon
Subject
Finite
Direct Object
Communicative processes can be quite similar to mental processes in structure. For example, the Phenomenon in mental processes and the Message in communicative processes can be realized by a thatclause or a nonfinite (dependent/projected) clause, e.g. Ersagte, dass ihm die Ausstellung gefalien hat. (He said that he liked the exhibition.) (communicative process), Er glaubt, dass seine Geldborse gestohlen wurde. (He believes that his wallet has been stolen.) (mental process). This is because a communicative process could be considered an externalized mental process, and a mental process an internalized
Figure 4.11 The COMMUNICATIVE TYPE system
A Systemic Functional Grammar of German 109
communicative process. The roles corresponding to Processor and Phenomenon of mental processes are Sender and Message for communicative processes. There is no corresponding role to Receiver with mental process types because generally mental processes do not come out into the open, so there is no addressee. Communicative processes are subdivided into [message-oriented], where there is potentially a Message, [receiver-oriented], with at least a Sender and a Receiver, and [message-only], where there is only a Message (see Fig. 4.11). There is an additional role, Prompt, which stands with utterances in reported speech; this role can optionally occur with all subtypes of communicative processes (see examples 4.10 to 4.12; from Steiner et al 1988). (4.10)
Der Schiiler entgegnete (—Prompt), dass er krank gewesen sei. (The pupil replied that he had been sick.)
(4.11)
Der Schiiler entgegnete auf die Frage (+Prompt), dass er krank gewesen sei. (The pupil replied to the question that he had been sick.)
(4.12)
Der Schiiler entgegnete dem Lehrer auf die Frage (+Prompt), dass er krank gewesen sei. (The pupil replied to the teacher's question that he had been sick.)
For illustration of some of the transitivity phenomena in text, let us look at the following example. (4.13)
(4.13.1) Zu seinen Fiissen hockend, (4.13.2) erzahlte sie ihre Geschichte der Nacht und einem Fremden, (4.13.3) der auf so eine iibertriebene Weise ein Mann war, (4.13.4) dass sie ihn nicht zu furchten brauchte. (4.13.5) Sie machte es kurz; (4.13.6) obwohl sie ihm nichts verschwieg, nicht einmal ihre Angst vor einer Schwangerschaft, (4.13.7) war sie schon zu Ende, (4.13.8) als sie ihre Zigarette fertig geraucht hatte. (4.13.9) Sie erhob sich.12
In (4.13.2) and (4.13.6) the transitivity type is [communicative:message-oriented], involving also a Receiver. In
110 Systemic Functional Grammar in Natural Language Generation (4.13.3) and (4.13.7) the transitivity type is [relational], in (4.13.3) the relational subtype is [classificatory], in (4.13.7) it is [associative]. (4.13.1) is also [relational], of the [attribuent-only] subtype. (4.13.4) exhibits the transitivity type of [mental], subtype [processor-oriented], which is morphosyntactically reflected in the Phenomenon being realized in accusative case (whereas with phenomenon-oriented processes, the Processor is in dative case, and the Phenomenon in nominative case). (4.13.5) is another [relational:associative] process, and it is caused, i.e. the role of Third-party-agent is overtly realized ('sie'), the Attribuent being the second role ('es'), and the Attribute being realized as an Object Complement. (4.13.1) is of transitivity type [action:agent-only]; (4.13.8) is also of type [action], but [agent-centred] as subtype. Sentence (4.13.9) is a case of [action:agent-only]. The Process constituent here is realized as a reflexive verb, where reflexivity is lexical. This is in contrast to reflexive constructions, where there is semantic coreference of the reflexive pronoun and its antecedent (see Section 4.2.3.4). Looking at the action and relational process types again, it can be seen from the realization statements in the network that there are two features that are realized by a so-called empty subject ('es'subject ('it'-subject)). The [action: natural-phenomenon] type accounts for sentences such as Es regnet. (It is raining.} These always have an 'es'-subject without a semantic role, so there is no participant at all in this type of clause. A different case is the type [relational:existential], e.g. Es gibt guten Kdse in Frankreich. (There is good cheese in France.}, where there always has to be one participant (the Attribuant, 'guten Kdse''}. These two types of empty subject are licensed by transitivity type and must be distinguished from 'es' as a means of focus, in, for example, Esfahren tdglich Millionen von Autos auf Deutschlands Autobahnen.} (There are millions of cars daily on Germany's highways.} 4.2.3.3 Object-insertion Instead of having just two syntactic functions, Subject and Complement, as in NIGEL where they are part of Mood and Residue, for German we have to subspecify the type of complement further in Direct Object, Indirect Object, Object (for all other kinds of oblique Object, e.g. Prepositional Objects). Further, we need the functions Subject Complement and Object Complement. The main purpose syntactic functions serve is as carriers of case information, i.e. we assign morphological case via syntactic functions by preselection of
A Systemic Functional Grammar of German 111
case on the nominal group that realizes a syntactic function in terms of lower rank. The gates by which syntactic functions are inserted have complex entry conditions with features from transitivity and diathesis (see Fig. 4.12, which is given here in the system-(NIGEL) internal notation).13 This specifies case assignment in one place instead of having it distributed over diathesis-transitivity combinations and assigning it to participant roles.14 4.2.3.4 Diathesis and Diathesis-Transitivity Gates Diathesis refers to the way a language encodes the same situation experientially in different perspectives. It relates the notion of voice to such notions as ergative, middle, causative, impersonal, i.e. to what is known also as relation changes of verbs (Jackendoff 1987; Levin 1987). Thus, diathesis specifies the correspondence between participant roles and syntactic functions. The system of diathesis consists of two subsystems: VOICE and RELATION-CHANGING (see Fig. 4.13). The VOICE system distinguishes between [activization] and [passivization] (see sample analysis of example 4.14). The conditions under which the voice options are available must be specified as complex entry condition to the VOICE system (this is marked in Fig. 4.13 by '*')• For example, certain process types are (gate :name DIRECT-OBJECT-INSERTION :inputs (OR (AND ACTIVIZATION (OR (AND NONCENTRALIZED-LOCATION THIRD-PARTY-AGENCY) (AND CLASSIFICATORY THIRD-PARTY-AGENCY) (AND IDENTIFYING THIRD-PARTY-AGENCY) EXISTENTIAL CENTRALIZED-LOCATION AFFECTED-CENTRED PROCESSOR-ORIENTED TWO-PHENOMENA RECEIVER-ORIENTED))) :outputs ((1 DIRECT-OBJECT-INSERTION (INSERT DIRECTOBJECT) (CONFLATE DIRECTOBJECT OBJECT) (PRESELECT DIRECTOBJECT NOMINAL-GROUP) (PRESELECT DIRECTOBJECT ACCUSATIVE-CASE))) :region OBJECTINSERTION :metafunction TEXTUAL)
Figure 4.12 The DIRECT-OBJECT-INSERTION gate
112 Systemic Functional Grammar in Natural Language Generation
Figure 4.13 The systems of DIATHESIS not passivizable at all, such as classificatory processes (see example 4.20). (4.14)
Wir ersteigen den Berg. (We climb the mountain.) Der Berg wird erstiegen. (The mountain is climbed.)
Sample analysis • Activization Wir
ersteigen
den Berg,
(We
climb
the mountain.)
Diathesis: activization, nonrelation-changing Theme Attribuant
Process
Location (nonexplicit)
Subject
Finite
Direct Object
A Systemic Functional Grammar of German 113 .
Passivization Der Berg wird erstiegen. (The mountain is climbed.) Diathesis: passivization, nonrelation-changing Theme Location Subject
Process Finite
The grammatical difference between these two sentences lies in the value the clause feature diathesis takes: [activization] versus [passivization]. This has implications for • voice in the verbal group (finite and lexical verbs); • the conflation of the functions Theme and Subject with the transitivity participant roles (in the first case the Attribuant is the Theme (and Subject), in the second case the Location is the Theme (and Subject). The passive clause in 4.14 is [non-agentive] and has a [regularpassive] of subtype [action] (i.e. the passive is built with the auxiliary 'werden'; see Fig. 4.14). There are several subsystems of [passivization] that account for whether agency is expressed or not (see example 4.15), whether the passive is lexical or structural (see example 4.16), where the verb 'erfolgen' lexically calls for a structure that is passive) and whether it is a regular passive with 'werden' or 'sein' as passive auxiliary (see example 4.17) or a 'bekommen'-passive (see examples 4.18). Fig. 4.14 displays these options. (4.15)
Der Angriff wurde durch den Torwart abgeschlagen. - Der Angriff wurde abgeschlagen. (The attack was stopped by the goalkeeper. - The attack was stopped.)
(4.16)
Die Ansteckung erfolgt durch Tropfcheninfektion. (One gets influenza by airborne infection.)
114 Systemic Functional Grammar in Natural Lanquaqe Generation
Figure 4.14 Subsystems of passivization (4.17)
Er wurde verwundet. - Er 1st verwundet. (He was wounded. - He is wounded.)
(4.18)
Ihm wurde ein neuer Anzug geschenkt. - Er bekarn einen neuen Anzug geschenkt. (He was given a new suit.)
(4.19)
Er tanzt. - Es wird getanzt. (He dances. - There is dancing.)
(4.20)
Sie ist Lehrerin. (She is a teacher.)
The RELATION-CHANGING [causativization], e.g. (4.21)
system
distinguishes
between
Leo lasst sich die Schuhe putzen. (Leo has his shoes cleaned.)
[decausativization], e.g. (4.22)
Sein Gestandnis verbesserte die Situation. - Die Situation verbesserte sich. (His confession improved the situation. - The situation improved.).
A Systemic Functional Grammar of German 115 [detransitivization], e.g. (4.23)
Sie spielten Tennis. - Sie spielten. (They played tennis. - They played.)
and [nonrelation-changing]. In German, in the case of [decausativization] the 'Causer' of the action is not expressed; the verb realizing the Process is usually reflexivized (see 4.22 above). With [detransitivization], it is the Affected role that is not expressed (see 4.23 above). For English, VOICE options are an important resource for thematization, i.e. for making a constituent other than the subject the theme of the clause. For German, it is also true that in passivization the object constituent of the activization variant moves into the subject position of the clause. However, there are many more possibilities of thematization, leaving voice unchanged, so that the voice system cannot be regarded as the primary source for changing thematization. For German, and this holds also for English, another possibility given by the VOICE system is to leave the agent of an action unmentioned or put it in a place in the clause that marks it as New element rather than Given (see again example 4.15). In addition, what is covered in the diathesis region is the regular, productive reflexivization patterns of German. There are basically two major types of reflexives with subtypes: 1. Reflexive verbs (a) true reflexives, e.g. 'sich beeilen' ('hurry'); (b) reflexive constructions, e.g., 'sich berichtigen' ('correct oneself). 2. Reciprocal verbs (a) true reciprocals, e.g. Sie freundeten sich an. (They made friends (with one another).}; (b) reciprocal constructions, e.g. Sie unterhalten sich. (They are talking to each other.} True reflexives are formally, not semantically, reflexive. The reflexive pronoun is part of the lexical predicate and thus obligatory (e.g. *Er beeilt seine Mutter. - He hurries his mother.}. With reflexive constructions there is semantic coreference between the Subject and the reflexive pronoun. The reflexive pronoun is not obligatory, it is just another realization possibility of the Object of the transitive verb and not part of the predicate (e.g. Er berichtigt sich. Er berichtigt den
116 Systemic Functional Grammar in Natural Language Generation Schuler. - He corrects himself. He corrects the pupil.). True reciprocals appear mostly with a plural subject; replacement with 'einander' ('one another') is usually not possible (e.g. Siefreundetensich an. *Sie freundeten einander an.}. Reciprocal constructions, on the other hand, have alternative transitive and reflexive readings (e.g. Sie unterhielten sich. Er unterhielt das Publikum. Er unterhielt sich gut. — They talked to each other. He entertained the audience. He had a good time.}. Substitution with 'einander' is usually possible in the type of Sie unterhielten sich. In the clause, the reflexive pronoun of a reciprocal construction behaves like any object, i.e. it can change position in the clause. The same is true for the reflexive pronoun in reflexive constructions. The difference from true reflexives and reciprocals is that the reflexive and reciprocal constructions can be put under contrastive stress (e.g. Sich hat er berichtigt, nicht seine Mutter. (Himself he has corrected, not his mother.}}., whereas the reflexive pronoun of a true reflexive verb cannot (e.g., *Sich beeilte er.}. In all other cases of word order (dependent clause verb-second, verb-initial with questions and imperatives), the reflexive pronoun behaves like an object. Reflexive and reciprocal constructions can be derived at clause rank by the interaction of transitivity and the diathesis option [decausativization] (see an example of such a gate in Fig. 4.15). For example, with an agent-centred process, the Affected will be a reflexive pronoun, which can be preselected at this level. True reflexives are simply encoded in the lexicon.
(gate : name ACTIVIZATION-DECAUSATIVIZATION-REFLEXIVE-AFFECTED :inputs (AND ACTIVIZATION DECAUSATIVIZATION AFFECTING) :outputs ((1 ACTIVIZATION-DECAUSATIVIZATION-REFLEXIVE-AFFECTED (CONFLATE AGENT SUBJECT) (CONFLATE DIRECTOBJECT AFFECTED) (CONFLATE PROCESS LEXVERB) (PRESELECT AFFECTED REFLEXIVE-SPECIFIC-PRONOUN ACCUSATIVE-CASE))) :region DIATHESIS-TRANSITIVITY-GATES :metafunction TEXTUAL)
Figure 4.15 Reflexive construction
A Systemic Functional Grammar of German 117 Generally, the region of diathesis-transitivity-gates provides the actual conflation of syntactic functions with participant roles. There are strong conditional constraints between transitivity and diathesis options. As mentioned above, certain process types can only undergo the [decausativization] alternation (e.g. affected-centred processes), which is typically realized as a reflexive construction; others can only undergo the [detransitivization] alternation (including agent-centred processes; e.g. Er suchte mich. - Er suchte. (He searched for me. - He searched.)'). In addition there are types of processes that cannot be passivized, such as most relational processes. 4.2.3.5 Circumstantial Circumstantial is the functional label for adjunct or adverbial. Circumstantials in German can be realized as adverbial groups, as prepositional phrases or as nominal groups. (4.24)
Neulich fuhr er nach Berlin. (The other day he went to Berlin.)
(4.25)
Let zte Woche fuhr er nach Berlin. (Last week he went to Berlin.)
(4.26)
In der letzten Woche hielt er sich in Berlin auf. (Last week he was in Berlin.)
The types of Circumstantial are established according to the function they fulfil in a clause, e.g. as adjuncts of time, space, manner, etc. The circumstantial region takes the form of a number of parallel systems with two features, one standing for the occurrence of a Circumstantial, the other for the nonoccurrence in a given clause (see Fig. 4.16). As already mentioned, some types of Circumstantials are realized as adverbial groups, others as a prepositional phrase, others as nominal groups. This depends partly on the ideational semantic type of Circumstantial (e.g. the Upper Model type 'temporalnonordering' is typically realized as an adverbial group - the 'temporal-ordering' types are realized as prepositional phrases), partly on the preceding text (see the TEMPORAL-LOCATIONPHORICITY system in Fig. 4.17) and often on the verb. In examples 4.25 and 4.26, the Timelocatives are realized as a nominal-group and as a prepositional phrase respectively. This reflects the aspectual
118 Systemic Functional Grammar in Natural Language Generation tme-iocation TIME-LOCATIONCIRCUMSTANCE
nontime-location ontime-location
SPACE-LOCATION-
space-Iocation + Spacelocative
CIRCUMSTANCE
MANNERCIRCUMSTANCE CIRCUMSTANTIALS (input: clause-simplex)
+ Timelocath>e
CAUSALCIRCUMSTANCE
nonspace-location manner + Manner nonmanner cause + Cause noncause instrument
INSTRUMENTALCIRCUMSTANCE
+ Instrument noninstrument comparison
COMPARISONCIRCUMSTANCE
+ Comparison noncomoarison
Figure 4.16 The systems for Circumstantials (primary delicacy)
and 'Aktionsart' semantics the lexical verb expresses: in 4.25 'fuhr' is rather perfective and resultative, in 4.26 'hielt sich auf denotes imperfectivity and duration. For some examples from text see sample text 4.27.15 (4.27)
(4.27.1) Von der Terasse der Place Royale in Pau uber die Ebene zu sehen - auf die Gebirgskette der Pyrenaen: (4.27.2) das ist wie eine Symphonie in A-Dur. (4.27.3) Mit Graten und Spitzen, hohen Nasen und geraden Linien, mit den geschwungenen Vorbergen steht weit die grosse Wand der Berge und davor die kerzengraden Pappeln. (4.27.4) Vom Gebirge her went der Wind. (4.27.5) Das ist schon.16
A Systemic Functional Grammar of German 119
Figure 4.17 The system of TEMPORAL-LOCATION-PHORICITY In (4.27.1) three Spacelocatives are expressed: the first is of subtype source ('von'), the second of subtype path ('iiber') and the third expresses a goal ('auf') (see Fig. 4.18 for these subtypes of [spacelocation]). The Circumstantial in (4.27.2) expresses a comparison ('wie'). In (4.27.3) the Circumstantial ('davor') is pronominalized ('vor der grossen Wand. . .'), denoting another Spacelocative.17 The other Circumstantial in (4.27.3) ('mit den Graten . . .') is an Accompaniment Circumstantial, realized as a complex (coordinated) prepositional phrase. The Spacelocative in (4.27.4) is again of type source. The TRANSITIVITY and CIRCUMSTANTIAL systems build the basic (ideational) constituency of the clause; they correspond to NIGEL'S nuclear and circumstantial transitivity. The KOMET treatment of circumstantials and participants on clause rank differs from that
Figure 4.18 Subtypes of [space-location]
120 Systemic Functional Grammar in Natural Language Generation in NIGEL in that adverbial groups and prepositional phrases are functionally always circumstantials in NIGEL - in the KOMET grammar, prepositional phrases may also act as participants in transitivity structure. This interpretation applies mostly to processes of motion, e.g. (4.28)
Luis Trenker stieg auf das Matterhorn. (Luis Trenker climbed on the Matterhorn.)
where 'auf das Matterhorn' is the Location participant role in a process type [relational :locational]. This difference is due to the fact that we are dealing with particularities of German (e.g. the tighter bond of some relational processes to a location in space), which has resulted in a different view on transitivity patterns, as discussed in Section 4.2.3.2. 4.2.3.6 Theme Selections in the theme region provide a 'local' context of interpretation in that a specific element of the clause is given the textual status of Theme. In terms of the ongoing discourse, the choice of Theme reflects the method of development (Fries 1981) or thematic progression (Danes 1974) of a text. Thematic structure refers to the analysis of sentences into Theme and Rheme elements of structure (see the analysis of example 4.29 below). (4.29)
Man ersteigt den Berg. (One climbs the mountain.) Den Berg ersteigt man. (The mountain climbs one.)
Sample analysis • Subject-theme Man One
ersteigt climbs
den Berg, the mountain.
Diathesis: activization, nonrelation-changing, impersonal Theme (unmarked) Attribuant Subject
Process Finite
Location Direct Object
A Systemic Functional Grammar of German 121 • Object-theme Den Berg The mountain
ersteigt climbs
man. one.
Diathesis: activization, nonrelation-changing, impersonal Theme (marked) Location Direct Object
Process Finite
Attribuant Subject
In the first sentence the Attribuant is the Theme (in this case Subject-Theme, unmarked). In the second sentence the Location, conflated with the Direct Object, is the Theme (marked). The implementation of the theme region for German follows Steiner and Ramm (1995). Themes can be ideational, interpersonal or textual (see Fig. 4.19); and they can also be a combination of ideational, interpersonal and textual - in this case they are called multiple themes. Example 4.30 below encodes a multiple Theme reflecting the conjunctive, logico-semantic relations of causal ('weil'), adversative ('aber') and concessive ('trotzdem'), and includes the Subject ('der Wandervogel'), an interpersonal Circumstantial ('gern'), an experiential Circumstantial of place ('im Rucksack'), the Direct Object ('den gesamten Kosmos') and an accompanying Circumstantial ('mit sich'). (4.30) Weil aber trotzdem der Wandervogel gern im Rucksack den gesamten Kosmos mit sich tragt. . .18 The notion of unmarked theme as Subject-theme for English (as suggested by Halliday (1985a: 38-67) and as used in the NIGEL grammar) is not adequate for German. Since the Subject in German cannot take on the same variety of participant roles as in English, it is not typically the Subject that is conflated with the Theme in the unmarked case. Rather, a more appropriate generalization for German is that the least oblique (or most inherent) participant role occupies the Theme in the unmarked case (notwithstanding the syntactic function realized by this most inherent role). See again the
122 Systemic Functional Grammar in Natural Language Generation
Figure 4.19 The THEME systems transitivity examples in Section 4.2.3.2: in example 4.9 the most inherent participant is 'Hans' (the Processor), which is the Subject in Hans mag das Buck, but the Indirect Object in Hans gefdllt das Buck. In both cases 'Hans' is the unmarked theme. Further, according to Heidolph et al. (1980), certain types of adjuncts can also be unmarked Themes, most typically temporal ones, e.g. (4.31)
Er fuhr gestern nach Berlin. - Gestern fuhr er nach Berlin. (He went to Berlin yesterday. - Yesterday he went to Berlin.)
A Systemic Functional Grammar of German 123 Taking the Theme to be realized by ordering at the constituent position(s) before the finite element (Vorfeld; see Hoberg 1981), not only immediate constituents of the clause can be in this position, but also parts of clause constituents, e.g. (4.32)
Entschuldigt hat er sich ja bei mir! (Apologized he has to me!)
where the Process constituent constitutes the Theme. Sample text 4.33 illustrates some more of the experiential thematic options. (4.33)
(4.33.1) Durch eine Wallfahrt nach Lourdes kann man organische Krankheiten heilen. (4.33.2) Das ist der Fundamentalsatz der Glaubigen. (4.33.3) Erklart wird er nicht. (4.33.4) Bewiesen werden soil er durch das Bureau des Constatations Medicales.
(4.3.1) has an Instrument-Circumstantial as Theme, and is thus marked according to our definition of theme markedness. (4.33.2) has an unmarked Theme (Most-inherent-role is Theme, here it is the Subject), it is pronominalized and refers back to the whole preceding proposition. (4.33.3) and (4.33.4) are Process-Themes and are thus marked. In the theme region, the systemic potential is thus rather different from that of English, which reflects the fact that German is a 'freeword-order' language, as opposed to English being rather strict due to the lack of case inflection: where German can always unambiguously signal syntactic function by case, English uses word order to do this (Hawkins, 1986). For a more detailed account of the thematic potential of German see Steiner and Ramm (1995). 4.2.3.7 Mood MOOD is the central system of the interpersonal metafunction at clause rank (see Fig. 4.20). It distinguishes between interaction types, such as [indicative], [declarative], [interrogative] and [imperative]. These are known in the grammar of German as 'Satzmodus'. Note here that what is known as 'Verbmodus' (Verb mood') in the grammar of German ('Indikativ', 'Konjunktiv' and 'Imperativ') is not a question of mood. 'Verb mood' is a realization of clause mood only in the case of imperative. The other clause mood options may be realized as 'Indikativ' or 'Konjunktiv', the choice of which is dependent on a number of different features of the
124 Systemic Functional Grammar in Natural Language Generation clause, such as 'Konjunktiv' may be required with clauses in indirect speech, it may be required with certain process types that have a projected clause as argument (processes of believing, wishing and the like), it may be required in clause complexes expressing some kind of hypothetical comparison, etc. (Helbig and Buscha 1988: 195-207). Rather, mood in SFG is understood as the grammatical reflection of semantic speech function: whether to make a statement or a proposal, give an order or pose a question. The systems of the German mood region are similar to those implemented in NIGEL. However, the Mood element, as it is defined for English as a distinguished function on clause rank consisting of Subject and Finite (Halliday 1985a: 71-8), is debatable for German. The Mood element is that part of a clause that is arguable, i.e. that part that can be negated, modalized, emphasized (by 'do') and that is replayed in tag-questions (e.g. He has arrived today, hasn't he?}. In German, the Mood element can be replayed in the way English replays it, but this is a rather rare case. Rather, what is replayed is the whole clause (see 4.34.5 in sample text 4.34 below). There is no real equivalent in German to the English tagging in picking up the Mood element. Furthermore, the CLAUSE-ELLIPSIS system that operates on
Figure 4.20 The system of MOOD
A Systemic Functional Grammar of German 125 Mood and Residue in English works on Participant-Process structures rather than on a Mood constituent in German, e.g. ellipsis of Finite and Process: Niemand trdgt heutzutage rote Pullover. Christian trdgt welchelrote Pullover. (Nobody wears red sweaters nowadays. - Christian does.}. The system of MOOD, in terms of the distinction in modes of interaction (indicative, imperative, etc.), however, is basically the same for German as it is for English. Differences arise in the subtypes of [imperative] and in realization in syntagmatic structure: • As opposed to English, where Subject is linearly ordered before Finite (a realization that is associated with the feature [declarative]), in German this depends on whether the clause is independent (ordering of Subject before Finite) or hypotactically dependent (ordering of Finite at the end of the unit) (e.g. Siefuhr gestern nach Berlin. - ... well sie gestern nach Berlin fuhr. (She went to Berlin yesterday. - ... because she went to Berlin yesterday.}}. • Generally, as in English, there is no Subject in imperative clauses, e.g. Bring mir ein Glas! (Bring me a glass!}. However, the [polite] option does require a Subject (realized as pronoun in the polite form ('Sie'), e.g. Bringen Sie mir ein Glas!}. See sample text 4.34 for illustration of the basic German mood options in text. (4.34)
(4.34.1) 'Was macht die Brigade Meternagel?' fragte er. (4.34.2) Rita musste lachen, wie er so genau wusste, wer in ihrer Brigade den Ton angab . . . (4.34.3) 'Sie zanken sich', sagte sie. (4.34.4) Wendland verstand gleich, was sie meinte. (4.34.5) 'Meternagel macht zuviel Dampf, was?' (4.34.6) 'Er hat doch r e c h t , ' sagte Rita. (4.34.7) 'Warum glauben sie ihm nur nicht?' 2 0
(4.34.1) and (4.34.7) are [interrogative], and [w-question]. (4.34.2), (4.34.3), (4.34.4) and (4.34.6) are [declarative], where (4.34.5) shows one way in which German can realize tagging: the tag picks up the whole proposition by 'was'. 4.2.3.8 Tense The tense and time treatment for German is based on the time/tense treatment as it is implemented in NIGEL (Matthiessen 1984), in which
126 Systemic Functional Grammar in Natural Language Generation tenses are constituted by the relation of event time (ET), reference time (RT) and speaking time (ST) (see Fig. 4.21). German has six grammatical tenses: present ('Prasens'), present perfect ('Perfekt'), past ('Prateritum'), past perfect ('Plusquamperfekt'), future I ('Futur F) and future II ('Futur IF). Composite tenses (present and past perfect, future I and II) are built with the auxiliaries 'haben' ('have') and 'sein' ('be'). Haben is used with • transitive verbs (also in intransitive readings); • 'middle' verbs, i.e. those which do not passivize (e.g. 'bekommen', 'erhalten' (get, receive)); • reflexive verbs; • impersonal verbs;
Figure 4.21 The German TENSE systems
A Systemic Functional Grammar of German 127 • intransitive verbs with Aktionsart of durative (e.g. 'sitzen', 'schlafen' (sit, sleep)). Sein is used with • intransitive verbs with Aktionsart of perfective (except for phase verbs, such as 'beginnen' (begin)); • verbs of motion; • 'sein' ('be') and 'bleiben' ('stay'). There are basically two exceptions to this: • When a certain view of an event (i.e. aspect) is to be expressed, in a durative view, tense building is with 'haben'; in a perfective view, it is with 'sein', e.g. Sie hat viel getanzt. - Sie ist durch den Saal getanzt. (She has danced a lot. - She has danced through the room.}. • When there is a change in valency, e.g. with a nondirected action, i.e. an action without an Affected role, tense building is with 'sein' (e.g. Er ist nach Dresden gefahren. - He has gone to Dresden), with a directed action, it is with haben (e.g. Er hat einen Trabi gefahren. He has driven a Trabi.}. The grammar has two systems that describe the six German tenses: PRIMARY-TENSE and SECONDARY-TENSE, where the SECONDARYTENSE system is dependent on PRIMARY-TENSE. The NIGEL grammar has tertiary and quatenary tense systems to simulate the recursiveness of English tense (Halliday 1985a: 175-84); in German, in contrast, the only possible secondary tense type is past. RT is established only when [secondary] has been chosen in SECONDARY-TENSE. In all three combinations of primary tense ([present], [past], [future]) with secondary tense, ET precedes RT; the differences between the three result from the relation to ST: • primary tense: [present], secondary tense: [secondary] (past-inpresent; ET < RT = < ST); • primary tense: [past], secondary tense: [secondary] (past-in-past; ET < RT < ST); • primary tense: [future], secondary tense: [secondary] (past-infuture; ET < RT > ST).
128 Systemic Functional Grammar in Natural Language Generation
4.2.4 Clause rank: classification of problems
Having provided a description of some of the major systems of the German clause, let us now turn to the problem domain in question and see how it is manifested on clause rank. It was pointed out earlier that SFG shows a gap or lack of detail concerning the representation of syntagmatic relations, which is due to SFG'S focus on functional diversification rather than on surfacesyntactic generalizations. What can be seen in the implementation of a German grammar fragment at clause rank is that specifications of the rather surfacesyntactic properties of a unit, such as its subcategorization properties, are mixed with the rather functionally motivated properties, such as its process type. Category selection (i.e. subcatgorization) can only be expressed given the choice of some functional feature. For example, with the choice of a process type, certain functional elements of structure are inserted. No matter what the exact internal structure of the units that expound functional elements is, they must be assigned a syntactic category and, if the functional elements inserted are governed at the same time, they must be assigned case. In SFG, this requirement cannot be formulated in general, surface-syntactic terms, independently of the concrete functional type of the clause, but is typically tied to a functionally motivated feature in the system network. Looking at the features that are preselected for elements of structure of the clause, one can distinguish two sets of features: there is one set of features that is always preselected, and another set that depends on the more delicate functional type of the clause and is thus not always preselected. The former are features that are not really functionally motivated, whereas the latter are. Each functional element of structure (at any rank, including the clause) must always be assigned a category that realizes it on a lower rank (or the same rank, in the case of embedding). Further, a participant role of a process must always be assigned a case. That is, among all the features that may be preselected, category must always be preselected for all the elements of structure of the clause, and case must always be preselected for those elements of structure that are governed. The fact that the constituent category and the case (if the constituent is governed) are always preselected on the dependent elements in clause structure suggests that they belong to the set of necessary features for a clause's syntagmatic well-formedness - rather independently of the functional type. Looking at the preselection statements, of, for instance, the feature nominal-group, we find that
A Systemic Functional Grammar of German 129 they are rather individually expressed for each single case in which this subcategorization holds. A generalization is not possible because there is no distinction drawn between features that have always to be preselected (i.e. the features necessary for the syntagmatic wellformedness of a unit) and features that do not. In other grammar models, in contrast, this kind of distinction is made. For example, MTT distinguishes between syntactic constraints that derive from semantics (e.g. number), and syntactic constraints that do not derive from semantics (such as government, agreement) (see again Chapter 3). While MTT conceptually and representationally distinguishes between them, in SFG this distinction is not made and a possible generalization is thus prevented. A related problem can be noted with the classify and inflectify realization statements. What we find here is that information pertaining to higher ranks is receded in the lexicon (see again Fig. 4.3 for an illustration of this relation for clause and lexical verb). What is expressed here is a congruence between a higher unit (such as the clause) and its head (the verb). But again, there is a generalization that is missed when classify and inflectify are used to formulate this kind of dependency. The generalization is, more precisely, that there is a constraint between a unit and its head, or, in more general terms, that higher-ranking units are characterized by their heads: whatever process type the clause has, the Process element of structure will be expounded by a verb (unless the Process is nominalized). The same is, of course, true of other units of higher ranks, such as the nominal group (see Section 4.2.5 below) or the prepositional phrase (see Section 4.2.9 below). Again, this kind of generalization is one about the dependency relations pertaining to a unit. The rank organization of the grammar further inhibits making this kind of generalization (see again Section 4.2.2), i.e. it is not possible to say that whatever the unit is, its head is constrained to be of a particular type - a generalization that, for instance, X-bar syntax is able to make, since X-bar's 'rank scale' is not category-specific. WTiy should one care about these kinds of generalization? Generalizations of the above kind are desirable from a linguistictheoretical point of view. As we noted in Chapter 2, SFL theory assumes functional diversity on both the paradigmatic axis and the syntagmatic axis. In the PENMAN/NIGEL implementation, however, functional diversity is not fully realized. In particular, syntagmatic structure is given as function structure only (and is thus biased
130 Systemic Functional Grammar in Natural Language Generation experientially in that the underlying organization is constituency); and in the paradigmatic part, logical organization is not acknowledged for simplex units. Finding a method of representing the above sketched generalizations could then mean a move towards the realization of functional diversity in the domain of logical relations.
4.2.5 The nominal group: description
The three metafunctions are reflected in the nominal group (NG) in the following way (Halliday 1985a: 158-75): • the Thing referred to is classified (experiential), and modifications of the Thing are expressed (logical); • the use of the Thing in cotext is specified (textual); • interpersonal statuses are manifested on the Thing. The experiential can be looked at as exhibiting an organization into the Thing and elements characterizing the Thing. The textual shows a left-to-right organization: from the Deictic element, relating to the immediate context, 'the identification of the here and now' (Halliday 1985a: 165), through 'post-deictics' (Halliday 1985a: 162), quantification, qualitative features, and class membership to the Thing. Following the Thing are the Qualifiers, which are embedded (rankshifted) clauses or phrases. While the choice and classification of the Thing, i.e. the experiential aspect of the NG, is represented in NiGEL-style computational SFG in the regions of nountype, epithet, classification, numeration and qualification, headship - be the Thing or the Deictic element the Head; both must be possible (Halliday 1985a: 173) - and pre/postmodifiers are left implicit. In other words, the experiential function of the NG is given, while the logical function is not (again, except for complexes). The textual perspective is reflected in the description of the German NG in the regions of determination and pronoun; the interpersonal function is reflected in the regions of nominalperson and agreement (the latter being a German-specific region that does not exist in NIGEL). We begin the description of the German nominal group in the KOMET grammar with the entry systems to the NG of the nominalgroup-complexity region, and then move on to the experiential, the interpersonal and textual regions. Section 4.2.5.9 discusses two problem cases: the problem of specifying grammatical agreement in the German NG and the problem of a lacking specification of
A Systemic Functional Grammar of German 131
dependency for the NG-simplex. 4.2.5.1 Nominal-group-complexity Nominal-group-complexity is described in two systems, TAXIS and TYPE-OF-INTERDEPENDENCE - analogous to the clause complex (see Fig. 4.22). For hypotactically related NG complexes, only [extending] and [enhancing] are open as choices in TYPE-OFINTERDEPENDENCE (Matthiessen 1992). For paratactically related NG-complexes the [elaborating] relation is also possible. Paratactic complexes comprise the following: the coordinated nominal group (i.e. the [paratactic&extending] complex), which can be either [additive], in which the two parts of the complex are related by 'und' ('and'), or [alternating], in which the two parts are related by 'oder' ('or') (see example 4.35); and the [paratactic&elaborating] complex (i.e. the apposition), in which the two parts of the complex are related as [exemplifying] by 'z.B.' ('e.g.') or as [restating] by 'd.h.' ('i.e.') (see example 4.36), or with no overt grammatical marking as in the case of appositions (see example 4.37). (4.35)
Behrens schuf die Verwaltungsgebaude der Mannesmann AG in Diisseldorf und die Deutsche Botschaft in St. Petersburg. (Behrens created the administration building of Mannesmann AG in Diisseldorf and the German embassy in St Petersburg.)
(4.36)
Behrens schuf eine Reihe von Monumentalbauten, z.B. die Verwaltungsgebaude der Mannesmann AG in Diisseldorf und die Deutsche Botschaft in St. Petersburg.
Figure 4.22 The system of NOMINAL-GROUP-COMPLEXITY
132 Systemic Functional Grammar in Natural Language Generation (Behrens created a number of monumental buildings, e.g. the administration building of Mannesmann AG in Diisseldorf and the German embassy in St Petersburg.) (4.37)
Peter Behrens, der Architekt und Designer, lebte von 1899 bis 1903 in Darmstadt. (Peter Behrens, the architect and designer, lived in Darmstadt from 1899 to 1903.)
The syntagmatic structure of NG-complexes is the same as for clause complexes (univariate, hypotactic or paratactic) (see Fig. 4.23 illustrating the logical structure of example 4.36); and the syntagmatic structure of the NG-simplexes is multivariate, regarding the Thing as its core (see Fig. 4.24). Modification of the noun in the German nominal group in experiential terms is implemented as in the NIGEL grammar and based on Halliday's treatment of the NG (Halliday 1985a: 160-9), distinguishing between the five regions nountype, epithet, classification, numeration and qualification.
Figure 4.23 An example of the logical structure of the NG
Deictic diese (these
Numerative
Epithet
Classifier
drei
wunderbaren
three
wonderful
Thing
roten
Rosen
red
roses)
Figure 4.24 Experiential structure of the NG
A Systemic Functional Grammar of German 133
4.2.5.2 Nountype The nountype region encodes the inherent properties of the Thing. Most of these are rather lexically determined, such as the features of the INDIVIDUATION system (see Fig. 4.25) accounting for whether the Thing in question denotes a class or an individual (e.g. 'Haus' ('house') versus 'John'). This, while being actually a feature of the noun as lexeme, is important to know at nominal group rank because it has an impact on determiner (non)selection. Another set of systems, or rather gates, that relate to properties lexically inherent to the Thing accounts for the boundedness of entitities (ZelinskyWibbelt 1987), i.e. their conception as mass or countable (see Fig. 4.26). This is important because the boundedness of the Thing constrains the kinds of potential quantification and determination of the Thing. See examples 4.38 (taken from Zelinsky-Wibbelt 1991: 13-14) and 4.39: (4.38)
Wasser ist unverzichtbar fur die Menschheit. - Das Wasser in der Tasse ist schmutzig. (Water is indispensable for mankind. - The water in the cup is dirty.)
(4.39)
50 t Kase - 5 Wiirste (50 tonnes of cheese - 5 sausages)
For instance, whether a noun is countable or a mass noun depends on a language's conceptualization of entities, i.e. it is dependent on the semantic system of entities (Zelinsky-Wibbelt 1987), e.g. 'die
Figure 4.25 Some systems for noun type
134 Systemic Functional Grammar in Natural Language Generation (gate :name COUNTABLE-NOUN :inputs (AND (OR SINGULAR PLURAL) LEXICAL-THING) :outputs ((1.0 COUNTABLE-NOUN (CLASSIFY THING COUNTABLE))) :region COUNTNUMBER :metafunction IDEATIONAL) (gate :name UNCOUNTABLE-NOUN :inputs (AND NONSINGULAR NONPLURAL LEXICAL-THING) :outputs ((1.0 UNCOUNTABLE-NOUN (CLASSIFY THING UNCOUNTABLE))) :region COUNTNUMBER :metafunction IDEATIONAL) (gate :name PLURAL-THING :inputs (AND PLURAL LEXICAL-THING) :outputs ((1.0 PLURAL-THING (INFLECTIFY THING PLURAL-NOUN)) :region COUNTNUMBER :metafunction IDEATIONAL) (gate :name SINGULAR-THING .•inputs (AND SINGULAR LEXICAL-THING) :outputs ((1.0 SINGULAR-THING (INFLECTIFY THING SINGULAR-NOUN))) :region COUNTNUMBER :metafunction IDEATIONAL)
Figure 4.26 Gates for noun-inherent boundedness Schere' (countable, singular), 'the scissors' (plural). We can give the main grammatical options only for German nouns here, which are:21 • [countable-noun] (singular, plural), e.g. 'Haus', 'Baum', 'Film' ('house', 'tree', 'film'); • [uncountable-noun] (nonsingular, nonplural), e.g. 'Sand', 'Papier' ('sand', 'paper'); • [plural-thing] (nonsingular, plural), e.g. 'seine Papiere' ('his documents'); • [singular-thing] (nonplural, singular), e.g. 'der Mond' ('the moon').
A Systemic Functional Grammar of German 135 While nountype is thus concerned with inherent properties of nouns and is lexically realized, the regions of epithet, classification, numeration and qualification, which are described below, are concerned with modification of the noun. They have structural realizations (insertion of functional constituents in the NG's syntagmatic structure) rather than lexical ones. 4.2.5.3 Epithet With modification by an Epithet the Thing is attributed a quality (e.g. alte Manner - old men). Typical examples of Epithets are size, colour, age. For an example system see Fig. 4.27. In English, the Epithet is linearly ordered between the Numerative and the Classifier (which immediately precedes the Thing), e.g., these three wonderful red roses. The same is true for German. 4.2.5.4 Classification With modification by a Classifier, the Thing is classified as being part of a class or set, e.g. elektronische Datenverarbeitung (electronic data processing} (see Fig. 4.28). As opposed to Epithets, Classifiers are usually not gradable; nor can they be intensified (*elektronischere/elektronischste Datenverarbeitung (more/most electronic data processing), *sehr elektronische Datenverarbeitung (very electronic data processing}}. The only realization a Classifier can have according to the classification systems in the KOMET grammar currently is as an adjective. However, Classifiers can also be realized as nouns, e.g. 'Zinngiesser', 'Lattenzaun' ('tin man', 'picket fence'). In the NIGEL
Figure 4.28 An example of Classifier
136 Systemic Functional Grammar in Natural Language Generation grammar, this is actually the congruent realization given for Classifiers. For German, the realization of a Classifier as noun would have to result in a compounding operation. 4.2.5.5 Numeration A Numerative modifying the Thing can be [quantitative] or [ordinative] and [indefinite] or [definite] (see Fig. 4.29), e.g. zwei Bucher (two books) ([definite&quantitative]), einige Bticher (a few books) ([indefinite&quantitative]), die zweite Reihe (the second row} ([definite&ordinative]), die folgende Nummer (the following number) ([indefinite&ordinative]). 4.2.5.6 Qualification With modification by a Qualifier, further, restrictive information on the Thing is provided. Qualifiers in German are typically, but not exclusively, ordered after the Thing (e.g. das Haus im Wold - the house in the woods). They are usually embedded (i.e. rankshifted) clauses and phrases. For an example see 4.40.22 Hie generation of Qualifiers as relative clauses embedded in nominal groups (as in example 4.40) proceeds in the following way. A system PROCESS-QUALIFICATION in the NG-grammar accounts for the possibility of a relative clause within the NG. A constituent Event is inserted in syntagmatic structure and preselected to be a dependent, elaborating clause of type [elaboration-of-thing] (versus
Figure 4.29 The systems of DEFINITENESS and QUANTITY in numeration
A Systemic Functional Grammar of German 137 [elaboration-of-proposition]). In the cycle through the elaborating clause, given these preselections, the relative pronoun is determined to be of d-type (i.e. 'der', 'die', 'das').23 The choice of relative pronoun in English works according to whether the entity referred to by the pronoun is a conscious being or not ('which' versus 'who'), whereas in German there is agreement between the relative pronoun and the entity referred to in grammatical gender and number (but not in case). Further, the distinction between restrictive and nonrestrictive relative clauses does not have a formal reflex in German, whereas in English it can be partly realized in the choice of 'that' versus 'which' and more by intonation. In principle, all the types of circumstantials we find at clause rank can also act as Qualifiers at nominal group rank. See the example system in Fig. 4.30. For English, the Qualifier is always realized as linearly ordered after the Thing. In German, a Qualifier realized as relative clause or prepositional phrase also follows the Thing; however, a Qualifier realized as a participle (see examples 4.41 and 4.42), which can be analysed also as rankshifted clause, precedes the Thing. (4.40)
Einen giinstigen Verlauf nahm der inlandische Tafelkaseabsatz, der um 3,9 Prozent auf 22100 t stieg. (A good outcome took the national cheese sales, which rose by 3.9 per cent to 22 100 tonnes.)
(4.41)
der blau angestrichene Zaun (the fence painted blue)
(4.42)
die im Sessel sitzende Katze (the cat sitting in the armchair)
Another system in the NG that gives rise to Qualifiers is THINGTYPE. If the head-noun of the NG is deverbal ([processual]), almost the whole range of TRANSITIVITY options are opened up for the NG (see Fig. 4.31). However, the realization statements for nominal transitivity are different, the participant roles which are obligatory in clause transitivity being only optional for nominal transitivity. This is
Figure 4.30 An example of Qualification
138 Systemic Functional Grammar in Natural Language Generation
Figure 4.31 The systems of THING-TYPE and PROCESSUAL-THING-TYPE formulated in subsystems of PROCESSUAL-THING-TYPE. The only part that can be partially shared among clause and nominal group is the inquiries, so that the commonality of noun transitivity and clause transitivity is reflected semantically in the fact that the same Upper Model concepts are used. The regions of nountype, epithet, classification, numeration and qualification taken together account for the experiential function of theNG. 4.2.5.7 Determination The region of determination is mainly concerned with aspects of the textual metafunction of the NG. Here again, we have adopted the NIGEL implementation (see Fig. 4.32 for a network of DETERMINATION for German, including the lexical realization of features).24 A nominal group can be a [specific] (e.g. das Haus - the house) or a [nonspecific] instantiation (e.g. ein Haus - a house). Textually, this is essentially a matter of the NG's identifiability from the cotext, from the context or from general world knowledge. 4.2.5.8 Nominal-person and Pronoun The region of nominal-person encodes part of the interpersonal function of the NG and covers the grammatical person of pronouns (see Fig. 4.33), [speaker] meaning first person, [speaker-plus] meaning first person plural, [addressee] meaning second person. Third person is included in the more delicate options of the [noninteractant] feature of the NOMINAL-PERSON system. For
A Systemic Functional Grammar of German 139
Figure 4.32 The systems of DETERMINATION for German
140 Systemic Functional Grammar in Natural Language Generation
Figure 4.33 Nominal person systems (gate :name SIE :inputs (AND PRONOMINAL-SPECIFIC NONPLURAL FEMININE (OR NOMINATIVE ACCUSATIVE)) :outputs ((1.0 SIE-PRONOUN (LEXIFY THING SIE))) :region PRONOUN :metafunction INTERPERSONAL)
Figure 4.34 An example of a gate for pronouns
[noninteractant] pronouns, the more delicate systems are largely the same as for determination, e.g. [demonstrative] versus [possessive], [determinative] versus [interrogative]. All pronouns are accounted for in gates with conjunctive entry conditions that specify case, number and gender features of a specific pronoun (see Fig. 4.34 for an example). 4.2.6 NGrank: classification of problems In the computational account of the German NG, two major problems with the representational means available in the NIGEL SFG-implementation have been encountered. One concerns the representation of a rather German-specific phenomenon, which is agreement within the NG. Compared to English, this is rather complex. The other one is again the problem of the nonrepresentation of logical/dependency relations for the simplex unit. The present section discusses these two problems, starting with a description of the agreement relations in the German NG and its current treatment in the KOMET grammar. Agreement features in the German NG are primarily deployed in the domain of premodification (Deictic, Numerative, Epithet and Classifier). There is agreement in case, gender and number between
A Systemic Functional Grammar of German 141 premodifiers and Head (assuming that the Thing is the Head), and with regard to definiteness between the Deictic and the other premodifiers of the Head. With postmodifiers (experientially Qualifiers) realized as relative clauses, there is agreement between the relative pronoun and its antecedent in terms of number and gender (see above, Section 4.2.5.6). The English nominal group, in contrast, only shows agreement in number, which is restricted to certain kinds of Deictic and the Head, and it does not cover the whole range of premodifiers (e.g., all kids - a nice kid - all nice kids). Here, agreement can be semantically motivated. In German, however, this is different. Let us consider the following examples for illustration of the German NG's agreement phenomena: • Number agreement: singular versus plural (4.43) der schone Wald - die schonen Walder (the beautiful forest - the beautiful forests) • Gender agreement: masculine versus feminine versus neuter (4.44) ein schoner Wald - eine schone Frau (a beautiful forest - a beautiful woman) • Case concord: nominative versus genitive versus dative versus accusative case (4.45) der schone Wald - des schonen Waldes - dem schonen Wald - den schonen Wald • 'Identifiability' agreement: identifiable versus notidentifiable (4.46) der schone Wald - ein schoner Wald (the beautiful forest - a beautiful forest) In terms of implementation, a semantically motivated solution for number agreement would be to have muhiplicity-q (i.e. the inquiry that asks about the number of the concept to be expressed) posed to the semantics for each single element in the nominal group independently of each other. One could argue that number agreement in the nominal group is in fact a semantic phenomenon and that simply for all its elements choices with regard to a set of the same features have to be made because the referential object is the same. However, there are some counter arguments:
142 Systemic Functional Grammar in Natural Language Generation • the Deictic element does not have a corresponding ideational concept, its number cannot be inquired via semantic type; • one would lose the information that there is a grammatical relation between elements of one unit with overt morphological reflexes. Therefore, number agreement must be specified in the grammar itself. We describe number agreement between noun and determiner (Thing and Deictic) by gates that have as input conditions the number value determined for the noun and that systemic feature that results in the insertion of a Deictic. With the output features we specify that the Deictic has the same number value, e.g., if plural is one of the input conditions we write a realization statement about the determiner, e.g. (classify Deictic g-plural) where [g-plural] stands for 'german-plural'. An example of a gate is displayed in Fig. 4.35. Gender agreement is yet a different case, in that gender is an idiosyncratic feature of nouns as lexemes. The noun being (usually) the head of the NG, the dependents (determiner, adjective) reflect the gender agreement with the head in their morphological shape. The gender information is inquired about the NG's Head (i.e. the Thing) from the lexicon by two inquiries that are asked by the chooser of the LEXICAL-GENDER system (see Fig. 4.36). Licensing this kind of agreement from the semantics is not possible because we are not dealing with natural, but grammatical gender. The actual agreement is then specified in the system DEICTIC-GENDERAGREEMENT, the input condition of which is that a Deictic be inserted ([nominalspecific] or [deictic-indefinite]). The parallel realization for Deictic (classify Deictic masculine-determiner) then specifies the actual agreement in gender between Thing and Deictic (see Fig. 4.37). (gate :name DEICTIC-GENCASE-PLURAL-SPECIFICATION :inputs (AND GENITIVE-CASE PLURAL (OR NOMINAL-SPECIFIC DEICTIC-INDEFINITE)) :outputs ((1.0 GENITIVE-DEICTIC-PLURAL (CLASSIFY DEICTIC G-GENITIVE G-PLURAL))) :region AGREEMENT :metafunction INTERPERSONAL) Figure 4.35 Deictic - Head-Noun case concord and number agreement
Description: A Systemic Functional Grammar of German 143
Figure 4.37 Agreement in gender
Gender agreement between premodifiers (Classifier, Epithet) and noun (Thing) is then described in a separate system, which has those features as input conditions that result in the insertion of a premodifier (Classifier or Epithet or Numerative). This has to be done separately for Classifier, Epithet and Numerative because they can occur together in the nominal group. Furthermore, there is case concord. As described above, in the KOMET grammar, case is assigned to participants on clause rank on the grounds of transitivity and diathesis (or on PP rank, on the grounds of minor process type) as preselection on the NG of the case a verb or a preposition governs. Again, this is formulated as gates. For an example see again Fig. 4.35 (including number
144 Systemic Functional Grammar in Natural Language Generation agreement). These gates have the following input conditions: • features from systems the realization of which results in the insertion of some kind of Deictic (determiners, demonstratives, possessives); • the case preselected from clause or prepositional phrase rank; • the number determined as feature of the Thing. Finally, there is agreement between determiner and premodifying adjectives (i.e. between Deictic and Epithet, Numerative, Classifier). Here, depending on the identifiability and other features of the NG that have an impact on determination, the morphology of the adjective realizing a Classifier, Epithet or Numerative will be different, i.e. declension will be weak, strong or mixed (see example 4.46 above). This behaviour reflects the principle of monoinflection ('Monoflexion'; Helbig and Buscha 1988: 299-304), according to which the inflectional categories of gender, number and case always show only once - either on the determiner or on the adjective. This kind of agreement is also handled by gates which enforce a particular inflectional realization of the adjective. See an example of such a gate for a Colour modifier in Fig. 4.38. Since a adjectival modification of a noun is generated in a separate cycle (the cycle through the adjectival group, which is embedded in the NG), this information is preselected for the adjectival-group (see realization statement in sample system 4.38: [adj-gr-genitive] for case, [adj-gr-plural] for number, [adj-gr-weak] for declension type). (gate :name DEICTIC-GENCASE-PLURAL-SPECIFICATION :inputs (AND COLOUR-MODIFIED GENITIVE-CASE SINGULAR NOMINAL-SPECIFIC) :outputs ((1.0 COLOUR-GENITIVE-SINGULAR-IDENTIFIABLE (PRESELECT COLOUR ADJ-GR-GENITIVE ADJ-GR-SINGULAR ADJ-GR-WEAK))) :region AGREEMENT :metafunction INTERPERSONAL) Figure 4.38 Deictic - adjectival-modifier agreement
A Systemic Functional Grammar of German 145
There are a number of problems with this treatment of agreement in the NG. First, the specification of agreement is forced into the experiential organization - the only one that is provided by the representational means in NIGEL - when agreement is actually a reflex of the interpersonal metafunction with a prosodic syntagmatic organization. Second, the specification of agreement has to be formulated on what is actually there in the NG, i.e. on which elements of structure are actually realized. The individual experiential function of the premodifiers in the agreement domain must always be known, i.e. it is not possible to say that whatever premodifiers there are, they must agree in number, gender and case with the noun. The implementation of agreement operates on what is actually there, not on the elements that can potentially occur. For instance, whereas case can be assigned to the NG already on clause rank, this cannot be done for its dependents because the information about which constituents the nominal group will have is not existent at clause rank. As a consequence, every instance of agreement must be specified separately, resulting in an explosion of gates like the ones we have shown. The problem of representing agreement in the German nominal group relates to the more general problem mentioned earlier of lacking possibilities of expressing (morphosyntactic) generalizations about a grammatical unit. Again, one would need to be able to state that whatever constituents an instantiation of the nominal group has, they exhibit case concord. If one could state that all premodifying elements of structure have an attribute 'case' that has the same value as, say, the (head)noun, this would already be more general than the gate solution. In principle, of course, agreement is not a dependency relation in the strict sense, but rather calls for a representation as prosody. However, an attribute-value sharing could be one way of representing it, namely as a kind of sister dependency. Again, attempting a sister dependency solution, where head and premodifiers constitute the 'domain of agreement', calls for the representation of what the head is in the structure. As we noted at the beginning of Section 4.2.5, while the NIGEL grammar does provide a dependency analysis for NG-complexes, the logical aspect of the NG-simplex is not accounted for. But the NG-simplex can be analysed by means of dependency and can be shown to have a dependency (univariate) structure complementary to the multivariate constituency structure that results from the experiential analysis; see Halliday (1985a: 170-3) and the sample analysis in Fig. 4.39, where a is the head.
146 Systemic Functional Grammar in Natural Language Generation
diese drei wunderbaren roten Rosen Figure 4.39 A Hallidayan dependency analysis of the simplex NG
The only information given by this kind of representation, however, is that a is the head and that (3, -y, 8, etc. are the dependents. There is much more information that can be subsumed under dependency, which is generally not considered - neither in SFG theory nor in the NIGEL implementation. See again the problem areas of the clause rank we have discussed Section 4.2.4. There is a regular relation between features of a higher-ranking unit and the units that expound elements of structure of these higher-ranking units, e.g. the classify relation between a Process and the lexical verb realizing the Process. This also applies to the nominal group. Almost every system we have presented in the description of the German NG includes classify or inflectify realizations for elements of structure of the NG. And here, too, some of these classify and inflectify features have always to be preselected because they are surface-syntactically rather than functionally motivated (e.g. case, gender) and the only features necessary for the unit's syntactic wellformedness. Again, introducing a head can assist the formulation of the general properties of a unit, such as that nominal groups have a head which is realized as a noun (if we take the Thing as head); or that both NG and Head have a property 'case' which has one of the four values nominative, genitive, dative or accusative. With the current representational means of NIGEL (system networks, function structures) it would be possible to introduce a functional element 'Head' as a logical kind of element of structure in addition to Thing, Epithet, Classifier, etc., but it is not possible to formulate general surface-syntactic properties in the way mentioned above as attributes and values that can be common across a higher-ranking unit and its head. As will be discussed in detail in Chapter 5, this is due to the usage and motivation of features in SFG - which has to be reconsidered if we want to include dependency information in the grammar in the way sketched above.
A Systemic Functional Grammar of German 147
4.2.7 The adjectival group
The adjectival group is preselected from the nominal group when there is an Epithet or a Classifier inserted and from clause rank for predicative adjectives (subject or object complements). As for the other units we have discussed, the primary system for adjectival groups is ADJECTIVAL-GROUP-COMPLEXITY. Furthermore, there are systems accounting for ADJECTIVAL-SCALABILITY and INTENSIFICATION.25
ADJECTIVAL-SCALABILITY systems build the region of adjectivalcomparison. Scalability features are preselected from the higher unit (NG): Classifiers are generally nonscalable, whereas Epithets are scalable. The primary system distinguishes between [nonscalable] and [scalable]. If an adjective is [scalable], it can be [nonscaled] or [comparative-scaling] (e.g. schoner-(more beautiful)} or [superlativescaling] (e.g. schonst-(most beautiful}, see Fig. 4.40. Scaling in German is realized morphologically by suffocation (e.g. das schone Haus - das schonere Haus - das schonste Haus). All the above options also exist in the NIGEL grammar of the adjectival group. An additional, German-specific system is ADJECTIVAL-GROUP-DECLENSION, a rather formally motivated system
Figure 4.40 Scaling of adjectives
Figure 4.41 The system of ADJECTF/AL-GROUP-DECLENSION
148 Systemic Functional Grammar in Natural Language Generation with three features [adj-gr-weak]3 [adj-gr-strong] and [adj-gr-mixed] (see Fig. 4.41). These features are always preselected since their choice depends upon the type of Deictic that modifies the NG (see the agreement discussion in Section 4.2.6).
4.2.8 The adverbial group
The adverbial group is first of all subclassified according to metafunction (Matthiessen 1992: 597-600); see Fig. 4.42. Examples of the three types of adverbial are given in 4.47 (experiential)j 4.48 (interpersonal) and 4.49 (textual). (4.47)
Das Geschaft geht gut. (Business is going well.)
(4.48)
Vermutlich ist sie schon gegangen. (Probably she has already left.)
(4.49)
Erich hingegen ist gegangen. (Erich, in contrast, has left.)
Features of ADVERBIAL-TYPE are preselected notably from the clause; they realize Circumstantials. Like adjectives, adverbials can potentially be intensified, e.g. Das Geschaft geht sehr gut. (Business is going very well.} (see Fig. 4.43). Further, experiential adverbials and some types of interpersonal adverbials can be scalable (e.g. Das Geschaft geht gut. - Das Geschaft geht besser. - Das Geschaft geht am besten.} (See the ADJECTIVALSCALABILITY system in Fig. 4.40 above - a similar system accounts for scalability of adverbials.)
4.2.9 The prepositional phrase: description
The prepositional phrase is divided into two regions: PPother and PPspatiotemporal. PPspatiotemporal covers prepositional phrases with spatial and temporal meaning, PPother covers all other kinds.
Figure 4.42 The system of ADVERBIAL-TYPE
A Systemic Functional Grammar of German 149
Figure 4.43 The system of ADVERBIAL-INTENSIFICATION The basis for the PP account in the KOMET grammar is (Grote 1993). Here, we describe some of the major options only. For details see also Grote (1994). 4.2.9.1 PPother The major types of prepositional phrases are [spatio-temporalprocess] (which in itself builds the region of PPspatiotemporal), [instrumental-process], [comparative-process], [causal-process], [accompaniment-process], [matter-process] and [role-process] (see Fig. 4.44). The realizational consequence of each of these types is the choice of a particular preposition (the Minorprocess), which governs the embedded nominal group (the Minirange),26 i.e. it assigns case to it. Again, case assignment is formulated as preselection. This is accounted for mostly by gates with multiple entry conditions under which a particular case is required on the embedded NG (for an example see Fig. 4.45).
Figure 4.44 The system of MINOR-PROCESS-TYPE
150 Systemic Functional Grammar in Natural Language Generation (gate :name BEI-MINORPROCESS :inputs (AND FULL-MINORPROCESS MATTER-PROCESS) :outputs ((1.0 BEI-MINORPROCESS (LEXIFY MINORPROCESS BED (PRESELECT MINIRANGE DATIVE-CASE))) :region PPOTHER :metafunction IDEATIONAL)
Figure 4.45 A gate for Minorprocess 4.2.9.2 PPspatiotemporal The region of PPspatiotemporal is further divided along two axes: extent/location and spatial/temporal. Differences between English and German in the area of spatiotemporal prepositional phrases lie mainly in the potential ambiguity of German prepositions between [motion-process], i.e. PPs with a dynamic meaning, e.g. in die See (into the sea), and [rest-process], i.e. PPs with a static meaning, e.g. (in der See) (in the sea)., in the area of spatial PPs (see Fig. 4.46). In English, the difference between [motion-process] and [rest-process] can be reflected in the preposition itself ('into versus 'in') depending on the dimensionality of the object that is the goal of the movement; in German, the disambiguating factor is the case a preposition governs (usually accusative versus dative). Furthermore, with motion processes the case a preposition
Figure 4.46 Subtypes of [spatial-process]
Figure 4.47 Subtypes of [motion-process]
A Systemic Functional Grammar of German 151 governs depends on whether the motion is away from or towards an object (e.g. Wir laufen vom Hafen) (dative) ins Stadtzentrum (accusative) (We walk from the port to the centre.} (see Fig. 4.47). However, the case governed by a preposition cannot always be exhaustively motivated by minor process type - the same as for process type of the clause. There will always be some arbitrariness, which has to be accounted for as determined rather idiosyncratically (Hawkins 1986). Another property that distinguishes the German PP from the English PP is the possibility of pronominalization, e.g. (4.50)
Damit ging der Absatz um 2,6 Prozent zuriick. (With this, the sales went down by 2.6 per cent.)
Formally, 'damit' is a pronominal adverb which stands for a prepositional phrase (or for an adverb or even for a whole proposition; in the latter case it would better be called a pronominal proposition)., e.g. (4.51)
Wir haben dieses Jahr gute Resultate erzielt. Damit sind wir sehr zufrieden. (We have achieved good results this year. With this, we are very satisfied.)
Pronominal adverbs are pro-words that are used alternatively to personal, demonstrative, interrogative or relative pronouns. This has the following pattern: • 'da-' for personal and demonstrative pronouns; • 'wo-' for interrogative and relative pronouns. The grammatical possibility of pronominalizing Circumstantials that are realized by PPs is accounted for in the system PREPOSITIONAL-PHRASE-DEICTICITY (a parallel system to MINORPROCESS-TYPE). PREPOSITIONAL-PHRASE-DEICTICITY has tWO Options [full-minorprocess] and [deictic-minorprocess]. The realization statement for [deictic-minorprocess] is (conflate Minorprocess Minirange). Generally, the choice of one or the other has (co)textual reasons. A pronominalized PP can then be pronominalized either by a w-pronominal-adverb (e.g. 'womit', 'woher') or a d-pronominaladverb (e.g. 'damit', 'daher'). See these two systems in Fig. 4.48. For examples 4.50 and 4.51 we gather the following features for 'damit': [accompaniment-process&deictic-minorprocess:d-
152 Systemic Functional Grammar in Natural Language Generation
Figure 4.48 Pronominalization of Minorprocesses minorprocess]. A gate with these input features leads to the realization of the pronominal adverb as 'darnit' - if w-minorprocess was chosen, 'womit' would be generated.
4.2.10 PPrank: classification of problems
According to Halliday, the PP can be seen as a mini-clause. This shows, for instance, in the fact that both the clause and the PP are governing units, i.e. their heads are governors that assign case to its participants. Consequently, the same kinds of problems we have noted for the clause recur for the prepositional phrase. Generally, there are features that have always to be preselected (which implies that they are essential for the unit's syntactic well-formedness) and features that do not always have to be preselected. More particularly, the features of the Minirange that are always preselected from PP rank are category and case (subcategorization and government); and the feature that is always preselected for the Minorprocess is the preposition. It is these features that account for the phrase's morphosyntactic shape. Again, we face the same problem of these preselections being distributed over various places in the PP network and having to formulate them on what is actually there as instances of the PP instead of being able to formulate them as general attributes independently of the semantico-functional type of PP. 4.3 Summary and Outlook This chapter has presented a small fragment of the grammar of German represented in the NiGEL-style formalism. The presentation has been biased in two ways. First, since we have used the NIGEL grammar of English as a basis, our description has been contrastive to English in many places. Second, throughout the descriptive sections, we have paid particular attention to the realizational
A Systemic Functional Grammar of German 153
operator of preselection (including classify and inflectify), which can be looked at as encoding a dependency relation between higherranking units and their elements of structure in general, and their heads and dependents in particular. The general problem we have encountered here is a lack of representational means for a general expression of a unit's morphosyntactic properties. This includes the formulation of subcategorization, government and agreement and of the general constraints between a phrasal (i.e. higher-ranking) unit (such as clauses, groups and phrases) and the units that expound its elements of structure. Subcategorization and government could be rather generally formulated with the help of a notion of dependency: a head - as subcategorizand - constrains the kinds of categories it combines with, and, if it is a governor at the same time, it assigns case to its dependents. Further, since a phrasal unit is generally a projection of the type of lexical category of its head, the two are bound to show some sort of congruence or 'match' (see again Section 4.2.2).27 It is feasible to formulate all these properties in NiGEL-style SFG (by preselect, classify, inflectify), but not in a general way (see again the discussion in Sections 4.2.2, 4.2.4, 4.2.6 and 4.2.10). This has basically two reasons. The first lies in the theory of SFG, the second lies in representation. In SFG, dependency relations are only acknowledged for complex units, with recursive systems and univariate syntagmatic structures. Characteristic of logical relations is the recurrence of one and the same function (or element of structure) in one unit. But simplex units can be shown to exhibit dependencies very similar to the ones in complex units (see Halliday's (1985a) sample analysis of the simplex NG as univariate, hypotactic structure). However, what the systemic analysis of the simplex unit in logical terms, which focuses on mrerdependency between sisters in a structure, does not reveal is the dependency between the higher-ranking unit and its head and dependents (roughly, Hudson's (1976) daughter dependency): the 'match' between a higher-ranking unit and its head daughter, on the one hand, and the subcategorization and government properties of a higher-ranking unit, which are reflected on the dependent daughters (i.e. the number and kinds of dependents), on the other hand. It is these features that to a large degree make up the necessary conditions for a structure to be morphosyntactically well-formed. The second reason that impedes a general formulation of these basic dependencies is that the available representational means in
154 Systemic Functional Grammar in Natural Language Generation SFG and NIGEL are limited. Compared to other modern grammar theories (e.g. GPSG, HPSG), the motivation and usage of features as the basic representational means is rather different in SFG. In SFG, first, features are primarily functionally motivated (see Chapter 2) in most other theories they are formally, i.e. surface-syntactically, motivated. While this functional motivation is potentially advantageous, for instance, for application in natural language generation (see Chapter 3), it inhibits the formulation of generalizations about syntactic structure (s). Second, in SFG, a feature is a primitive which is not further decomposable. In contrast, in most of the modern computational grammar models operating with unification and using typed feature hierarchies one speaks of feature structures. Here, a feature consists of an attribute and a value, which can be used recursively. While the types in a typed feature hierarchy can be considered to be equivalent to features in system networks, there is no equivalent in SFG'S representation to attribute-value pairs. It is this more diversified conception of 'feature' used in modern unification-based grammars that contributes to the possibility of formulating the 'match' between, say, a head and its phrasal projection as so-called feature sharing (see Chapter 5), where one and the same attribute can be formulated for more than one type, and its value (s) (which are themselves types) can be declared to be the same. Applying this to the SFG problem of not being able to express dependencies of the kind sketched above, a representational means like the feature structure opens up the possibility of conceiving of dependency in simplex units as the recurrence of the same 'feature'28 on a head and its phrasal projection (the higher-ranking unit). With the current representational means of SFG and its computational realization in NIGEL and KOMET this conception is not possible (see Chapter 2, Section 2.4 on the postulate of polysystemicity). In order to alleviate the deficiencies discussed in the present chapter - which become rather prominent when dealing with the grammar of German, but also occur with the grammar of English implemented in PENMAN - are a number of issues that have to be dealt with. As mentioned above, the problem of lacking syntactic generalizations has a linguistic-theoretical aspect and a linguisticrepresentational aspect (which are perpetuated in the computational representation in NIGEL) . In the discussion of phenomena for which generalizations would be possible, one particular linguistic concept has been referred to again and again. This is the concept of
A Systemic Functional Grammar of German 155
dependency. Further, it has been noted that there are other kinds of feature-based representation that offer some representational means that system networks do not provide. Given this, the following questions are posed: • Can we make use of some of the linguistic-theoretical insights made in other grammar theories which employ dependency as a central notion? • Can we incorporate some of the representational means of other linguistic models that employ features as general representational device, but offer representational possibilities that are not available in system networks? In order to investigate this possibility for alleviating the problems discussed in this chapter, two more general questions have to be asked: • How is the notion of dependency employed in different grammar theories? • What can be achieved by the use of features for linguistic representation in different grammar models? These are the questions the following chapter is concerned with, on the basis of which we propose to include a stronger notion of dependency in SFG. In terms of metasemiosis, this illustrates a second cycle through theory and linguistic representation, and makes an explicit step towards computational representation, thus concluding our illustration of methodology in computational systemic functional linguistic theorizing. Notes 1. Requoted from Macheiner (1991: 127). 2. The KOMET grammar of German is one of the language resources integrated in the KPML (Komet-Penman MultiLingual) development environment for multilingual tactical generation resources (Bateman 1997). The system is currently available at ftp.darmstadt.gmd.de and accessible from the URL http: //www. dar mstadt. gmd. de/publish/komet/kpml. html. 3. Note that while the RANK system includes verbal-like-groups, the verbal group as a unit is not yet implemented, neither in the NIGEL nor in the KOMET grammars. Features that in a full treatment belong in the account of the verbal group are specified at clause rank directly, e.g.
156 Systemic Functional Grammar in Natural Language Generation
4.
5.
6. 7.
8. 9.
10.
11.
12.
the Process element of structure is preselected to be of the lexical type 'verb' directly without an intermediate verbal group. A note about typographic conventions might be in place here again: system names are given in SMALL CAPITALS; system features are given in square brackets and lower case; function names are given with the first letter as capital. Paths through system networks have the following notation: [a:x] where [x] is a subtype of [a], and [a&x] where [a] and [x] stem from two simultaneous systems. Furthermore, realization statements are given as follows: ' + ' stands for the insertion of a constituent in structure (e.g. + Subject), ' A ' stands for linear ordering of two constituents, '/' signifies conflation of two functions into one constituent, ':' stands for input conditions from other systems, 'V signifies preselection of a feature for a function (e.g. Finite \ plural), where the feature preselected can also be a lexical or a morphological one (this is classify and inflectify preselection). Note that the classification given here is a simplification in that VERBMOOD, VERB-TENSE, VERB-NUMBER and VERB-PERSON are presented as cross-classes. Strictly speaking, only for indicative and conjunctive verb mood, but not for imperative, is the full inflectional paradigm in terms of tense, number and person realized for German verbs. Sample texts 4.1 and 4.13 are taken from: Alfred Andersch, Die Rote. Items in focus of an analysis are given in typewriter-like face. English: Having come back, she looked out the window. In the southwest, there was still light on the horizon, a last gleam that made the plains dusky; the rain had stopped, but it must have been cold, since every once in a while Franziska had to wipe the window, which would then mist again. A hypotactic enhancing relation would be expressed with the conjunction 'weil' ('because'). The example is taken from a corpus of Swiss financial reports (made available by ISSCO (Institut Dalle Molle pour les Etudes Semantiques et Cognitives, Geneva). English: In the business year 1986/87 the 40 cheese trading firms of the Swiss Cheese Trade Union sold altogether 79 035 tonnes of cheese, which corresponds to a decline of 2.6 per cent. The alternation effective-middle (i.e. detransitivization) is not as pervasive in German as it is in English. A more productive pattern in German is that of decausativization (i.e. Agent-Affected - Affected), which we account for in the diathesis region (see Section 4.2.3.4). In a more complete analysis, this must be explained on the grounds of an interaction between transitivity and Aktionsart. Aktionsart accounts to a large degree for prefixation of verbs in German (similar to one of the functions of prefixation with Russian verbs). English: Sitting at his feet, she told her story to the night and to a stranger, who was a man in such an overstated way, that she did not
A Systemic Functional Grammar of German 157
13.
14. 15. 16.
17. 18. 19. 20.
21. 22. 23. 24. 25. 26. 27.
28.
have to fear him. She made the story short; even though she did not hide anything from him, not even her fear of a pregnancy, she had ended when her cigarette was finished. She rose. :name specifies the system's name; : inputs gives the entry conditions; : outputs specifies the system's features; with the features the realization statements are annotated (here INSERT, CONFLATE, PRESELECT); : region specifies the grammar region, and : m e t a f u n c t i o n the metafunction to which the system belongs. Gates are in fact often used for this purpose, i.e. for specifying a certain realization holding for various different features or combinations of features. Sample texts 4.27, 4.30 and 4.33 are taken from Kurt Tucholsky, Ein Pyrendenbuch. English: To look out from the terrace of the Palace Royale in Pau over the plains on to the mountain chain of the Pyrenees: that is like a symphony in A-major. With ridges and peaks, high noses and straight lines, with swinging foothills, stands the great wall of the mountains, and before it, bolt upright, the poplar trees. Down from the mountains, blows the wind. That is beautiful. Section 4.2.9 elaborates on this rather German-specific option of pronominalizing prepositional phrases. English: But because nevertheless a rolling stone likes to carry the whole universe in his backpack . . . English: By a pilgrimage to Lourdes one is able to cure physical diseases. This is the fundamental credo of the believers. Explained it is not. Proved it is supposed to be by the Bureau des Constatations Medicales. From Christa Wolf, Der geteilte Himmel. English: 'How is the Meternagel brigade doing?' he asked. Rita had to laugh, as he knew so exactly, who set the tune in her brigade . . . 'They quarrel', she said. Wendland understood immediately what she meant. 'Meternagel has put the screws on, hasn't he?' 'He's perfectly right', said Rita, 'Why don't you just believe him?' See also Fig. 4.26. Taken from ISSCO corpus of economic reports. Choice between 'der' and 'welcher' is accounted for in a system, but a functional motivation of the choice has not yet been formulated. For a complete account of NIGEL'S textual potential of the nominal group in English see Bateman (1992b) and Matthiessen (1992). These are paralleled in the adverbial group; see Fig. 4.43. Note that the Minirange can also be realized by an adverb or by a prepositional phrase (Grote 1994). Since there is no separate verbal group unit (features of the verbal group are accounted for at clause rank), we can assume for the time being that the heads of the major higher-ranking units (clause, NG, PP) are all lexical. See above: dependency in complex units being the recurrence of the same function.
5 Computational Representation: A Proposal for Dependency in Systemic Functional Grammar
It is not true that only one model can represent the nature of language; language is much too complex for any one model to highlight all its different aspects equally clearly. (Halliday et al. 1964: 18)
5.1 Introduction The previous chapters have moved through the 'metasemiotic space' of SFL theory and its linguistic representation, and have illustrated the use of SFG in computational application, taking the example of grammar in NL generation. A number of problems have been discussed in the specification of the general properties of grammatical units in a computational SFG of German which relate t the lack of a representation of dependency information as reflected in, for instance, subcategorization properties and government patterns. The concern of the present chapter is twofold. First, it will take us one step further in the illustration of computational linguistic theorizing in SFL to explicitly computational representation. With this, the illustration of computational metasemiosis will be concluded here. However, in the light of the 'syntagmatic gap' we have encountered in systemic grammatical specification, it is necessary to re-examine the theory, try to find a solution to the problems found and feed this back to the theory. This is the second point of concern of the present chapter. 5.1.1 The problems: towards a solution There are basically two reasons why the difficulties noted in Chapter 4 arise: a linguistic-theoretical one and a representational one. The first relates to the lack of dependency information in SFG - mor precisely, in simplex units - between a unit and its head, and between a head and its dependents. The second concerns systemic
Computational Representation: A Proposal for Dependency 159
representation, the motivation and the usage of features in the system network. Accordingly, there are two issues that need to be investigated to lay the grounds for a solution to the problem of lacking generalizations over syntactic structure in SFG and its computational instantiation in NiGEL-style grammars: • the linguistic notion of dependency as widely used for grammatical description in a variety of approaches; • the use of features and their motivation as a central representational means for grammatical description. Among the theories and approaches that have provided input to our proposal for a solution - mainly dependency models (Hudson 1976, 1987b; Zwicky 1985; Pollard and Sag 1987, 1994) - there is one that is particularly relevant given the two-sided nature of the problem that we are facing. This is Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1987, 1994). What is it that makes HPSG especially relevant? In general, looking at other frameworks makes sense only if, first, there is some commonness among the frameworks under consideration; and, second, the other framework has a solution to the problem that is to be solved, which could be, given the commonness, worth considering to be recast in one's own framework. The commonness of HPSG and SFG lies in the fact that HPSG can be classified along with SFG in the class of feature-base approaches to the description of grammar (Bateman 1991). There are two areas in which HPSG has advantages over SFG that are wort considering as input to a solution to the 'syntagmatic gap' of SFG. First, for HPSG, as in generative grammar, the generalization over structures and the description of their regular interrelations has been the primary goal. In contrast, SFG, which looks for functional rathe than structural generalizations, suffers from an under-specification in this domain, and consequently from a lack of representational means.1 HPSG achieves these kinds of generalizations with the help of a notion of dependency, which has its predecessor in dependency formulations used by, for instance, GPSG, such as the Head Feature Convention (Gazdar et al. 1985). It is this formulation of dependency for the purpose of generalization over the internal make-up of syntactic structures that is of interest here. The second property of HPSG that is highly relevant to our problem is the means of representation employed in the HPSG model. As pointed out above, similarly to SFG, HPSG uses a feature-based representation.
160 Systemic Functional Grammar in Natural Language Generation However, the representation in the form of typed feature structures (e.g. Emele and Zajac 1990; Carpenter and Penn 1994; Dorre et al. 1996; Krieger and Schafer 1993) as used in HPSG has some properties beyond those commonly employed in SFG (system networks and associated realization rules). For instance, one of the main expressions of dependency in HPSG, the Head Feature Principle, is formulated as a special kind of feature co-occurrence constraint between a phrasal unit (a higher-ranking unit) and its head, namely as feature or structure sharing. Such formulations are representationally not possible in SFG and NlGEL-style implementations. And they are linguistically not possible with the common motivational criteria for features in system networks and the nonacknowledgement of dependency relations. Given a common ground between HPSG and SFG in that both are featurebased approaches to grammar, the central question we ask here is: Could dependency and its formulation in HPSG be a useful concept to be incorporated into SFG for filling its 'syntagmatic gap'? Even though the HPSG view has influenced our proposal for a 'Dependency Systemic Functional Grammar' to a large extent, we have sought not to compromise the major systemic notions of functional diversification, modes of expression, axiality, realization, rank, minimal bracketing of structure, etc. Our proposal has to be seen as a contribution to the representation of metafunctional diversification, on both the paradigmatic axis and the syntagmatic axis, attempting a concretization of the modes of expression - for which there do not exist sufficiently elaborated representational means that are also computationally usable (Matthiessen 1988a) - in the domain of logical relations, which receive here a reinterpretation as encoding dependency in simplex units too.
5.1.2 Overview of Chapter 5
There are two important issues to be clarified before embarking on the attempt to specify a solution: • What are features used for in grammar theories, in particular in modern unification-based grammars such as HPSG, compared to their use in SFG? • What is the use of dependency in grammar theory, particularly in formulations used in HPSG?
Computational Representation: A Proposal for Dependency 161 The chapter is organized as follows. Section 5.2 discusses the motivation and the use of features in grammar theory, covering the development of feature use from early Generative Transformational Grammar (Chomsky 1957) to modern unification-based grammar formalisms, such as HPSG. These kinds of feature usage are compared to that of SFG; in particular, the properties of typed feature structures above and beyond those of system networks are pointed out, and a typed feature structure reformulation of SFG is discussed. Section 5.3 deals with the motivation for using a notion of dependency and for employing the concept of head as a primitive for grammatical description, uncovering a number of existing readings of 'head' (Zwicky 1985; Hudson 1987b) and determining which of these readings are the ones that could be helpful for filling SFG'S 'syntagmatic gap'. Section 5.4 presents a proposal for a formally motivated classification hierarchy including dependency information that is to complement the functionally motivated system network. Since the kinds of constraints we maintain are needed in SFG cannot be accommodated in system network representations, the specification of this feature hierarchy takes as a basis a typed feature structure reformulation of grammatical system networks as suggested by Bateman et al. (1992) and extended by Henschel (1994), in which these kinds of constraints can find straightforward expression. 5.2 Features for Grammatical Representation The present section provides a brief overview of the use and usage of features as representational means for grammatical description. It begins with a brief historical overview, starting from early generative grammar (Chomsky 1957) and moving via X-bar syntax Qackendoff 1977) to modern unification-based grammars, such as Generalized Phrase Structure Grammar (GPSG) (Gazdar et al. 1985) and Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1987, 1994) (Section 5.2.1). Then the ways in which features are used in SFG and HPSG are compared (Section 5.2.2). Even though there is a basic commonality between the two in that features serve as the basic representational means, there are some major differences in their motivation and in their usage. Section 5.2.3 then shows how system networks can be reformulated as (typed) feature structures. Section 5.2 concludes by stating some desirable representational properties for SFG from the point of view of modelling both grammatical systems and grammatical structures.
162 Systemic Functional Grammar in Natural Language Generation
5.2.1 From features to feature structures and typed feature structures
Taking structuralist phonology as an example, where features were used for the first time in linguistics to describe the articulatory characteristics of phonemes, features were also introduced to be used for the definition of grammatical categories in grammar theory, more particularly in the paradigm of generative grammar (Chomsky 1957, 1965). Here, features consist typically of an attribute and a value, where the value is boolean, e.g. [v +/—] for verballike/nonverbal-like categories. Essentially, features are interpreted as functions from categories (V, P, N, Adj) to features (V, N), e.g. P applied to N is '-' (P(N)='-')• The traditional system of features used in Generative Grammar was the following: +/-V V
P
N ADJ
+ +
+/-N -
+ +
The motivation for introducing features and using them in this way was the goal of having syntactic rules generalize across categories. An example of a generalization thus made possible is about the focus position in cleft sentences: only [v—] maximal projections (NP, PP) can appear in the focus position (see examples (5.1) and (5.2)). (5.1)
It is pasta that we all want for dinner.
(5.2)
It is in acting that Jodie made her reputation.
Thus, transformations for clefting could be specified as operating on the feature [v-] rather than on NP and PP, which are both characterized by [v—], formulating the domain of rule application on partial feature specifications of the categories involved rather than on the disjunction of those categories. Further, the introduction of features was a step in reducing the complexity of grammars which contained hundreds of phrase structure and transformational rules at that time.
Computational Representation: A Proposal for Dependency 163 The need for such generalization possibilities arose within the discussion of 'lexicalist' versus 'transformationalist' hypotheses, having as topic the question of whether derived nominals should be accounted for by lexical redundancy rules in the lexicon or by transformational rules in syntax (see Chomsky 1970). In the transformationalist position, as Horrocks (1987: 62) summarizes, the conclusion that derived nominals are not sentence transforms but deep structure NPs has led to a position in which notions such as 'subject of and 'direct object of have to be definable across S and NP. Since Chomsky's definitions of these grammatical relations are configurational, it follows that S and NP will have assigned similar 'geometric' properties.'
These observations called for two properties of phrase structure representations (Horrocks 1987: 62-3): • a mode of representation that was able to impose a certain degree of uniformity upon the potentially possible configurations of categories; • a theory of the internal structure of categories that allowed the formulation of constraints on rule application in terms of the common characteristics of categories rather than in terms of the categories themselves. Such a theory of phrase structure was provided by X-bar theory (Chomsky 1970; Jackendoff 1977), which became the central core of phrase structure representation in many theories, having developed within the paradigm of generative grammar since. X-bar theory's phrase structure model maintains that most phrasal categories have (lexical) heads, called X° or simply X, which the other elements in the structure are dependent on. Nonlexical categories (X") are categories 'projected by an X category' (see below). The largest category that must be set up to account for the distribution of the dependents of lexical categories is called a maximal projection. This is usually X2 (or X"), so that the general X-bar scheme appears as follows:2 This general scheme is useful only if there is a feature system behind it, and if there are conditions for these features to be distributed over the scheme in an instantiation of it, in order to arrive at correct representations of syntactic structure. Otherwise, the scheme would be overgeneral and not descriptively adequate.
164 Systemic Functional Grammar in Natural Language Generation
For example, X must be constrained to be a major category, i.e. one that can act as head; the category feature can then be defined to be shared by head daughter and mother to ensure the correct instantiation of X-bar trees. See the example below, which shows the distribution of the category expressed as features over an instantiated tree representation.3
In recent developments within generative grammar, such as Government and Binding theory (GB) (Chomsky 1981), and also in grammar approaches that have moved away from generative grammar, such as GPSG (Gazdar et al. 1985) and HPSG (Pollard and Sag 1987, 1994), such conditions on well-formedness are formulated as principles that account for the possible links between the highly generalized phrase structure scheme and the features (e.g. encoded with lexical entries in the lexicon). One of the major principles of GB, for example, is the projection principle, which requires lexical properties to be 'projected' at all levels of syntactic representations: 'Representations at each syntactic level are projected from the lexicon, in that they observe the
Computational Representation: A Proposal for Dependency 165
subcategorization properties of lexical items.' (Chomsky 1981: 29). In general, more sophistication in terms of features and how they were used was achieved owing to two developments. First, more descriptive responsibility was attributed to the lexicon in the move away from transformations. In GB, for instance, the reason for formulating the projection principle was the insight that phrase structure rules and subcategorization features of lexical entries both specifying the same information (i.e. the possible contents of lexical insertion) created a lot of redundancy. With the projection principle at work, this information is expressed only once, i.e. in the lexicon. Given the general X-bar scheme, for example, a verb 'meet' with the subcategorization frame [_ NP], but not a verb 'walk' with the subcategorization frame [ _ ] would fit in with a structure like the following:4
Second, more complex features, called feature structures, were introduced with the advent of unification-based grammars (Kay 1979, 1985; Shieber 1986). Unification is a mechanism based on informational domains being organized in a subsumption hierarchy, which allows checking for set inclusion, set intersection, etc. In linguistic applications, a node in a subsumption lattice is typically described by a (partial) feature structure. A feature structure typically consists of a set of attributes and values, where each attribute is associated with exactly one value or a specified set of values. At this stage of development in feature usage, features are no longer boolean in their values, but can also be atomic (e.g. [VFORM finite]), strings (e.g. [PFORM before]) or numeric (e.g., [NUMBER 3]), or themselves be feature structures. A simple example of how unification works is given in Fig. 5.1. Assume two partial descriptions with the attribute-value pairs [PERSON 3] for the verb
166 Systemic Functional Grammar in Natural Language Generation
Figure 5.1 An example of unification
(1) and [NUMBER plural] for the noun (2). Given the rule formulated in (3) that a subject and a predicate share their agreement features, (1) and (2) can be unified to (4). Unification is the major prerequisite for formulating the rather complex interaction between phrase structure rules and principles set up in feature-structure based grammars, such as GPSG (Gazdar et al. 1985). Here again, the phrase structure rules (ID(immediate dominance)-rules and meta-rules) are very general and too unconstrained to be descriptively adequate. For constraining the projection from rules to trees, a number of general principles specifying feature co-occurrence constraints in tree structures were introduced, notably the Head Feature Convention (HFC) and the Foot Feature Principle (FFP), applying to partial or local tree representations, and the Feature Co-occurrence Restrictions (FCR) and Feature Specification Defaults (FSD) applying to categories. These are essential for the definition of the relation between the highly schematic rules and fully specified phrase structures; see Fig. 5.2 for an overview of GPSG'S descriptive apparatus (derived from Sells 1985: 79). The HFC specifies constraints between head daughter and mother in the form of inheritance, i.e. the head daughter will inherit its head features from the mother. For example, for VP and V, the head features are category ([V+],[N-]), [VFORM)
Computational Representation: A Proposal for Dependency 167 finite/nonfinite] and tense [PAST +/—] (if the verb is finite). The FFP specifies constraints between nonhead daughters and mother, e.g. the feature [Q +/—] for wh-pronouns is a feature of nonhead daughters and shared with a mother by parallel instantiation. An example of a FCR is that if a category has a VFORM feature specification, it must be a verb. FSDs apply for so-called unmarked grammatical options, e.g. for a lexical head verb it is the unmarked case to be active. Marked options must be specified by meta-rules, e.g. the passive meta-rule. Another typical FSD is that S(entence) is normally [INV—] (noninverted), i.e. clauses normally have the subject ordered before the finite in declarative clauses; [INV+], for example for interrogative clauses, is introduced by meta-rules and will override the default. For illustration, consider an ID-rule: VP^H[1],NP 5 This rule results in the 'spelled-out' tree structure shown in Fig. 5.3 after application of the HFC.6 In the most recent developments in feature-structure-based grammar, such as HPSG, features are furthermore explicitly classified and cross-classified as types in an inheritance hierarchy. A feature structure type is characterized by a specific set of features (attributes) with a specific set of possible values that constitute a type's typical attributes. In the content of features, there are only slight differences from previous formulations of feature-
Figure 5.2 GPSG
168 Systemic Functional Grammar in Natural Language Generation
Figure 5.3 A GPSG representation of syntactic structure
structure-based grammar. The major difference lies in the fact that given the explicit classification, all well-formedness conditions on syntagmatic structure in terms of feature distribution in the representation are now formulated on or as types, and inheritance mechanisms can be made use of in formulating and processing these grammars. For instance, general principles such as the Feature Cooccurrence Restrictions, the Feature Specification Defaults and the Head Feature Convention of GPSG are defined as typical attributes of types in HPSG, so that it becomes possible to refer to types in formulating rules and constraints on rule application, where information from supertypes is inherited. For example, the top type 'sign' of HPSG'S syntactic classification hierarchy has two subtypes, 'nonlexicaP (i.e. higher-ranking units) and 'lexical'. Nonlexical types are classified as 'headed-structure' (having a head with head features) and 'coordinate-structure' (not having a head); headed structures are further classified in 'head-complement' and 'headadjunct' structures. The type 'head-complement-structure' has the typical attributes [HEAD-DAUGHTER [ sign ]] and [COMPLEMENTDAUGHTERS < list-of-signs >] (see Fig. 5.4 for the top layers of
Computational Representation: A Proposal for Dependency 169
Figure 5.4 HPSG: Nonlexical types
classification of nonlexical types, also giving the typical attributes of the head-daughter [SYN(TAX) [LOG(AL) |HEAD [ ]] and [SYN(TAX) | LOC(AL) | SUBCAT) < >]).7 The equivalent to GPSG'S FCRs in HPSG belongs to a type definition as head features, e.g., [VFORM finite] is part of the type definition of the VP:
FSDs are formulated as head features, e.g. [INV—] for noninverted structures is a head feature specified by one of the few grammar rules. The corresponding formulation of the HFC in HPSG is the Head Feature Principle (see below), which is valid for the type headed-structure and essentially says that if a phrase has a head daughter, then they share the same head features (Pollard and Sag 1987: 58). Furthermore, in HPSG some of the descriptive burden is taken away from grammar rules (such as GPSG'S ID-rules and meta-rules) and shifted to lexical types and lexical rules. For instance, the
170 Systemic Functional Grammar in Natural Language Generation Head Feature Principle (HFP)
subcategorization information is tied to lexical heads in HPSG instead of being formulated by grammar rules. How this is to be related to syntactic structure is subject to the Subcategorization Principle (or Subcat Principle; see below). The value of the SUBCAT feature encodes the valence of a lexical sign. The subcategorization principle states that in any headed structure, the SUBCAT value is a list obtained by removing those specifications from the SUBCAT value of the head that have been satisfied by the complement daughters (Pollard and Sag 1987: 71). This ensures completeness of the headed structure in terms of valence. Subcat (egorization) Principle
GPSG'S meta-rules are largely replaced by lexical information, which is, however, not formulated as types, but as lexical rules. This leaves a very small number of actual grammar rules based on an X-bar scheme, which are conceived of as partially specified phrasal signs (immediate dominance (ID) schemata). To be completed, these schemata are unified with the principles, e.g., in the case of headed structures they are unified with the head feature principle and the subcategorization principle. For illustration see Fig. 5.5: the feature structure (2) is obtained by unifying the feature structure (1) with the HFP and the subcat principle. Feature Structure (1) describes a saturated sign (SUBCAT = < >) which has one complement daughter
Computational Representation: A Proposal for Dependency 171
Figure 5.5 Unification of an ID-schema (1) with the HFP and the Subcategorization Principle (2)
and a head daughter that is nonlexical. This covers most standard PS-rules, such as [S -> NP + VP], or, [NP -» Det + N]. The use of features compared to earlier unification-based grammars remains the same: the main purpose is the nonredundant formulation of generalizations over syntagmatic structure and the use of this as well-formedness conditions on phrase structures. The progress over generative grammar and its usage of features lies in the organization of features as feature structures in a type hierarchy, thus making it possible to exploit inheritance for the representation and the processing of grammars formulated in this way (see, for example, Zajac 1992). The features encode not only syntactic category, but also other kinds of information about the constitutional properties of phrases (e.g. subcategorization exhibited by heads), making it possible to dispense with a large number of phrase structure rules. Thus, the motivation of features is not only the cross-categorial generalization, but also lies in the description as such of types of grammatical units by means of features. How does this kind of feature use compare to the one in SFG? As has been mentioned before, there is a basic commonality between SFG and HPSG in that both use features as basic representational means for linguistic description. However, as can be seen from this brief sketch of the use of features in other modern grammar models, there are some major differences in the way features are employed
172 Systemic Functional Grammar in Natural Language Generation and in the way features are motivated. Let us briefly compare the 'meaning' of features in SFG and HPSG (i.e. feature motivation) and then compare the 'syntax' of features, i.e. the kinds of metainformation that are made available in system networks and (typed) feature structures.
5.2.2 The 'meaning' of features .'feature motivation in SFG and HPSG
The overarching motivations for the way features are used in HPSG are, first, to show what is common in the structure of phrases (Xbar) and, second, to describe the constitutional properties of types of structures as generally as possible. X-bar brings out what is common in the structure of grammatical units - it is the common denominator of all phrase structures. It is then on the grounds of the linguistic notion of head as distributional equivalent (see Section 5.3.1) that feature co-occurrence constraints between a phrase and its head can be formulated as head feature principle. Moreover, given the uniform syntagmatic motivation of types in HPSG'S classification, the attributes associated with a type are general, surface-syntactic constraints. Traditionally, in SFG the conception of the structure of grammatical units is very different from that of X-bar syntax. First, the top layer of grammatical classification, the rank scale, is already diversified according to syntactic category (clause, PP, NG, etc.) Feature co-occurrence constraints can thus only be formulated between two or more particular features or units, but not as generalizing over different kinds of units. Second, grammatical system networks are usually mediated networks (see Chapter 2): they are both semantically and formally motivated (Martin 1987), motivations we have termed 'motivation from above' (in stratum and in rank) and 'motivation from below' (in axis), respectively (see Chapter 2). As a tendency, the features motivated from below are the ones with which realization statements are associated: 'the use of features to encode semantic distinctions whose realization is mediated in turn by more delicate features encoding formal meaning' (Martin 1987: 30). Let us consider an example. The German PROCESS-TYPE system at clause rank at the primary level of delicacy distinguishes between [action], [relational], [communicative] and [mental] processes. This level marks a functional generalization (the motivation from above) and there are no immediate reflexes in syntagmatic structure resulting from these features. At one level higher in delicacy, such as
Computational Representation: A Proposal for Dependency 173 the ACTION-TYPE system that further subclassifies the action processes in [agent-only], [agent-centred], [affected-centred] and [natural-phenomenon], features begin to be motivated more from their consequences in syntagmatic structure (the motivation from below). There are clearly specifiable constraints on syntagmatic structure resulting from these features. For example, the feature [agent-only] means for syntagmatic structure that there will be a constituent labelled Agent, which will be a nominal group. Syntagmatic structure specifications are thus tied rather to the more delicate features in the system network than to the less delicate ones, i.e. the less delicate ones tend to express functional generalizations and the more delicate ones tend to be oriented towards realization: mediated networks are so named because the scale of delicacy tends to mediate the position of features so that non-formally motivated features make generalizations about more delicate, formally motivated ones through which they are realized. (Martin 1987: 37)
In general, features in HPSG are syntagmatically motivated and oriented towards structural generalization. An example illustrating the consequences of this is the feature INV which is a head feature of verbs and is marked [INV+] for all verbs of subtype aux (auxiliary). The classificational generalization attempted by INV is over a variety of clause constructions as functionally diverse as, for example, polar questions, wh-questions, topicalized negations and exclamations on the grounds that the auxiliary in all these cases is 'inverted', i.e. ordered before the subject. The feature INV is thus a syntagmatically motivated class. In SFG, in contrast, features are exclusively used for paradigmatic descriptions of the linguistic system and are mainly motivated by functional generalization. SFG would thus rather acknowledge the functional contrasts in mood, topicalization, negation, etc. and express what is a surface-syntactic generalization in HPSG in the realization of the functional features, here by the position of the auxiliary in the linear ordering of constituents in the clause. In terms of feature motivation, the grammatical classification hierarchies of SFG and HPSG show one major difference: HPSG'S classification is purely syntagmatically motivated (see the example of inverted/noninverted structures), and SFG'S classification is essentially functionally and paradigmatically motivated, with more delicate types tending to be motivated by realization in syntagmatic structure.
174 Systemic Functional Grammar in Natural Language Generation The problem for SFG is to achieve not only the functional generalization, but also a generalization over realization in syntagmatic structure. Clearly, if HPSG'S classification was motivated in the systemic fashion, the representation of syntagmatic, structureoriented constraints in a general way would be problematic too. Again, the question is whether there is a possibility of combining the advantages of both views. There is another aspect of divergence concerning features that has to be pointed out here. This concerns the form taken by features. Generally, a feature in SFG is an atom. The corresponding representational category in typed feature structures is the type. In the typed feature representation of HPSG, in contrast, a feature is composite, consisting of an attribute and a value, where the value may be atomic or itself have structure. These differences and the implications for what is representationally possible or not possible are discussed in the following section. 5.2.3 The 'syntax* of features: system networks and (typed) feature structures It has been shown that system networks (and systemic syntagmatic structure specifications) can be recast both in feature structures (Kasper 1988) and in typed feature structures (Bateman et al. 1992; Henschel 1994). So, the observation that there is some basic equivalence in descriptive means between SFG and HPSG is supported by the fact that a system network is re-expressable as a type hierarchy as used in HPSG. However, the transformation from a typed feature structure formulation of HPSG to an SFG-style representation is problematic, if possible at all (Bateman 1991). This shows that SFG is restricted due to representation and it suggests that there are additional means of representation in typed feature structures not existing in system networks that HPSG makes use of in specifying its linguistic descriptions. The present section tries to show - in a rather informal way - what these additional representational means in feature structures in general, and in typed feature structures in particular, are and what one can do with them. Based on this, we can then question whether this additional metainformation could be useful for solving some of the SFG problems of concern here. The first formulation of an SFG in a feature-structure-based formalism is by Kasper (1988). FUG being a general-purpose unification-based formalism for making linguistic descriptions,
Computational Representation: A Proposal for Dependency 175 Rasper's goal was to be able to make use of the computational mechanisms developed for FUG, notably unification. Since 'both [FUG and SFG] organize a grammatical description around feature choices, stressing the paradigmatic aspect of language structure' (Kasper 1988: 177), the translation of an SFG to FUG is generally quite straightforward. The basic representational unit in FUG is the functional description (FD), which takes the form of a feature structure. What are more precisely the characteristics of FUG representation that allow a uniform representation of system networks, realization rules and syntagmatic structure representations? Since the values of attributes in a feature structure may themselves be instantiated by FDs, a functional description can be seen as a hierarchy of attribute/value pairs. A system network can then be modelled by representing all possible alternatives as values of an attribute. For instance, a mood network can be represented in the following way (Kasper 1987: 7):8
Realization is generally handled by introducing attributes that are associated with the designated attributes. Insertion is done via the FUG-PATTERN attribute (see Chapter 3, 3.4.2). Linear ordering is also handled in this way. Preselection and classification (i.e. interrank realization) are specified by introducing an FD for a given grammatical function. For instance, with indicative, the Subject has a CASE attribute with the value nominative. Finally, conflation is encoded by unification itself (see example in Fig. 5.6). There is one particular aspect to this reformulation of SFG in FUG that is of interest here. In some of the areas that are difficult to handle with the common representational means of SFG, FUG offers a straightforward solution. For instance, as noted by Kasper, in SFG
176 Systemic Functional Grammar in Natural Language Generation
Figure 5.6 Conflation by unification
there is no way to specify that two constituents must have some properties in common other than preselection. However, preselection can only specify top-down feature propagation. In a unification-based representation such as FUG, unification itself supports the expression of a more general way of feature propagation as feature co-occurrence constraints in the form of feature or structure sharing. This is not only a more declarative way of stating the preselection relation, which is the kind of interpretation of preselection and of all other kinds of realization statements intended by the theory of SFL, but it is not restricted to constraints between two grammatical functions of different ranks; rather, it allows the formulation of such constraints between two grammatical functions of one and the same rank. An example where this is needed is agreement - the representation of which is problematic with the common representational means of SFG (Matthiessen 1988a; and Chapter 4, Section 4.2.6). The solution FUG offers is a representation as an FD similar to the one displayed in Fig. 5.1, with AGREEMENT as an attribute that the grammatical functions involved in the agreement relation have and whose value they share. This kind of feature or structure sharing is not possible to specify in standard SFG; nor is it possible to introduce an attribute that is not a grammatical function because there is no realization operator that allows this. In general, the ways in which system networks and syntagmatic constraints are specified in SFG and FUG are very similar. However,
Computational Representation: A Proposal for Dependency 177
feature-structure, unification-based formalisms have the additional representational means of declarative and less restricted means for expressing feature co-occurrences. Employing feature structures opens up the possibility of representing modes of expression other than the experiential, e.g. the interpersonal mode as exemplified by agreement as a prosodic structure. Let us now turn to typed feature structures and see what a reformulation of SFG in a typed feature structure formalism has to offer compared to the feature structure formalism just described. As we pointed out before, in a typed feature structure formalism, information is organized as a multiple inheritance hierarchy (see Zajac 1992), the categories being ordered in a classification network by subsumption. In a similar way to Al-style representation languages such as KL-ONE (Brachman and Schmolze 1985) or LOOM (MacGregor and Bates 1987), in a typed feature structure representation a type inherits all the information associated with its supertype, or all its supertypes, if cross-classification is supported. Applied to SFG, for example, both the types [agent-only] and [agentcentred] inherit the constraint (\ Process lexical-verb) that is associated with their supertype [action-process]. The characteristic of multiple inheritance is thus equivalent to the 'behaviour' of system networks as implemented in NiGEL-style grammars, where this kind of type inference is also made. For illustration see Fig. 5.7 which displays a fragment of a system network as a type hierarchy (where Q] denotes cross-classifying types following; see Pollard and Sag 1987). Compared to untyped feature structures, in typed feature structures it is possible to name complex feature structures with associated constraints, thus defining a type, and making use of
Figure 5.7 A fragment of a clause system network as type hierarchy
178 Systemic Functional Grammar in Natural Language Generation inheritance in the way exemplified above (Emele and Zajac 1990). A specification of grammatical system networks in typed feature structures is thus even more straightforward than the one in untyped feature structures; and syntagmatic constraints can be specified just as in untyped feature structures; for a syntagmatic representation as a feature structure of the sentence Kim verschlingt Kekse (Kim devours cookies} see Fig. 5.8 (where the indices mark conflation). Consequently, the advantages of specifying syntagmatic constraints as attributes of types are the same as described for feature structures, i.e. they open up new possibilities of representation not available with the common representational means of SFG. Notwithstanding the limits that Henschel (1995) notes for the kinds of typed feature structure implementations she has experimented with for SFG,9 there are some obvious advantages over the common SFG representational tools:
Figure 5.8 An SFG syntagmatic structure as typed feature structure
Computational Representation: A Proposal for Dependency 179
• typed feature structure representations offer a declarative way of making computationally usable linguistic specifications (which is very much in the spirit of SFL theory); • typed feature structure representations offer additional means of representation that are highly relevant for filling some of the representational gaps of SFG (see the example of agreement). As implied at the beginning of this section, HPSG - which is generally formulated in typed feature structures - makes use of these additional properties for the representation of its linguistictheoretical concepts. One such property exploited by HPSG is structure sharing. As Pollard and Sag (1994: 19) point out, 'It is not going too far to say that in HPSG structure sharing is the central explanatory mechanism, much as move-a is the central explanatory mechanism in GB theory', thus making it clear that structure sharing is one of the major means to realize HPSG'S basic linguistic assumptions. One such basic assumption is that of heads characterizing the units they act in as head, where 'head' encodes notions such as distributional equivalent, subcategorizand and governor (see Section 5.3). This linguistic-theoretical assumption is expressed by two of the principles of HPSG, the head feature principle and the subcategorization principle, which account to a large degree for the well-formedness of a phrasal unit (if the unit is a headed structure). Again, there is no such notion in SFG, and we have suggested that this is one of the reasons why some of the problems discussed in Chapter 4 arise. 5.2.4 Some desiderata for SFG In the discussion of features in grammar theory some major differences have been pointed out between the use of features in SFG and in other modern grammar theories. A theory very similar to SFG in terms of how features are used for linguistic descriptions is HPSG: For both SFG and HPSG, features and their organization in classification hierarchies are the basic descriptive means. Table 5.1 gives a synoptic overview of differences and commonalities of SFG and HPSG along the lines of feature organization, feature motivation, representation, and interaction among features. HPSG'S strong points are in exactly the areas that are problematic for SFG. First, HPSG provides rather general constraints on syntagmatic structure; second, it uses a uniform representation as
180 Systemic Functional Grammar in Natural Language Generation SFG (NIGEL-style)
HPSG
feature organization
lexico-grammatical types
syntactic and lexical types, principles
feature motivation
functional, paradigmatic
surface-syntactic, syntagmatic
representation
system networks, realization rules, function structures
feature structure types, lexical rules
interaction among features
diverse according to strata/ranks
uniform (unification)
Table 5.1 Feature usage in SFG and HPSG
typed feature structures which offer a declarative way of expressing feature co-occurrence constraints. The following points can then be established as desiderata for representation in SFG: • a uniform representation of all linguistic knowledge as typed feature structures; • a specification of feature co-occurrence constraints in the form of structure sharing; • and, in terms of linguistic-theoretical concepts, a definition of the general syntactic properties of syntagmatic units employing typed feature structures. A notion of dependency can assist such a definition. A closer look at the notion of head and the uses of the concept of dependency in grammar theory is needed to validate this claim. 5.3 The Notion of Dependency in Grammar Theory The originator of describing syntactic structure in terms of dependency is Tesniere (1959), who established three kinds of dependency relations that grammatical units (here words) can participate in: • 'connexion' is the basic dependency relation holding between two units (Tesniere 1959: 11-14);
Computational Representation: A Proposal for Dependency 181
• 'jonction' is a relation pertaining to units that are in a coordination relation (Tesniere 1959: 323); • 'translation' is a transformational operation that changes category, if this is necessary to adhere to the well-formedness conditions in the syntactic tree (Tesniere 1959: 361). Of these, the relation of 'connexion' has been taken up most frequently; it underlies what has come to be generally understood by dependency in dependency grammars: La phrase est un ensemble organise dont les elements constituants sont les mots. Tout mot qui fait partie d'une phrase cesse par lui-meme d'etre isole comme dans le dictionnaire. Entre lui et ses voisins, 1'esprit apercoit des connexions, dont 1'ensemble forme la charpente de la phrase . . . Les connexions structurales etablissent entre les mots des rapports de dependance. Chaque connexion unit en principe un terme superieur a un terme inferieur. Le terme superieur recoit le nom de regissant. Le terme inferieur recoit le nom de subordonne. (Tesniere 1959: 11, 13)10
The term 'regissant' corresponds to 'head'; the term 'subordonne' to 'complement'. A typical dependency structure displays the part-part relations of syntactic structure, and only these, e.g.
More often than not, in dependency-based models of grammar, dependency goes together with the notion of valency. Valency is a term characterizing the property of certain words or word classes to select their participants (in Tesniere's termininology 'actants') and circumstances (in Tesniere's terminology, 'circonstants'). This property is predominantly attributed to the verb, the verb being the governing element of the clause (see also Kunze 1975; Mel'cuk 1988). Even if valency has not been kept in its original form, there are many theories that have been elaborated based on valency (e.g. extending the notion to a semantic dimension, as in case grammar (Fillmore 1968; Anderson 1971b) or the semantic emphasis model (Kunze 1991)). Valency is also encoded in generative grammar, as subcategorization, i.e. the selection of the categories with which a unit combines to form a phrase, also known as strict
182 Systemic Functional Grammar in Natural Language Generation subcategorization (Chomsky 1965). More recent grammar models have retained this term, e.g. the subcategorization list in HPSG, which is associated with lexical units that are heads.
5.3.1 Dependency versus constituency or dependency and constituency?
The dependency model and the constituency model developed from two rather separate traditions of linguistic theorizing. The dependency model developed within (continental) European structuralism, the constituency model within American structuralism. Consequently, in terms of formal grammars, English tends to be described mostly by constituency grammars but hardly by dependency grammars, whereas Slavic languages and German are more often described by dependency grammars." To date, it can still be observed that the two traditions are rather disjoint; however, they have influenced each other. In fact, most models of syntax make use of a notion of dependency, even if they employ constituency as the major mode of expressing syntactic relations. One crucial difference between strict dependency grammar and phrase structure grammar using a notion of dependency is that in the former the word is the only unit operated on, whereas in the latter predominantly relations between higher units (phrases) are considered. There are arguments both for the complementarity of dependency and constituency and for dependency and constituency as alternative models for the description of syntactic structure. At one extreme, there are grammar models that rely on dependency only; at the other extreme, there are grammar models that rely on constituency only. To the former belong, for instance, Mel'cuk's dependency grammar (Mel'cuk 1988), Kunze's dependency grammar (Kunze 1975) and Hudson's Word Grammar (Hudson 1984). The argument here is that it is sufficient to account for the relation between words for a syntactic description to be adequate; higher nodes are not necessary. Critical points often put forward against the dependency-only approach are in the following areas: linear sequencing (e.g. Baumgartner 1970); features and categorization of higher nodes; and headless constructions. Linear order was considered a problem for dependency grammars at a time in the development of grammar theory when in constituency-based grammars linear order was reflected in the surface-syntactic tree. With the removal of linear ordering from tree representations and the formulation of sequencing rules separately, this was no longer
Computational Representation: A Proposal for Dependency 183
considered a problem for dependency grammars (Matthews 1981).12 Furthermore, higher nodes as domains for rule application have been shown not to be necessary because they can equally well be formulated on words, e.g. gapping rules can be formulated on verbs (Hudson 1989). Furthermore, headless constructions can be circumvented if the notion of category is broadened so that there will be no headless constructions (see Hudson 1980a: 194-5).13 At the other extreme, arguing for heads not being necessary for syntactic description, if constituency relations are accounted for, is, for example, Zwicky (1985). Zwicky presents five candidates for the concept of head: the subcategorizand, the semantic argument, the morphosyntactic locus, the determinant of concord and the constituent determining government.14 These notions have to be included in any grammar model if it is to interface with semantics, the lexicon and morphology. Each of them is a 'head-like' notion, so it should not be necessary to introduce a separate category 'head', unless it can be shown that the head-like notions can be generalized into one category that one could then call 'head'. Analysing six syntactic constructions (Det+N, V+NP, Aux+VP, P+NP, NP+VP, and Comp + S), Zwicky shows that the various head-like notions represent different, or actually competing, analyses of syntactic structure. There is identity only between the semantic functor, which he has not listed as a head candidate, the subcategorizand and the governor. Further, three other head-like notions are considered - two of which are often quoted as providing operational criteria for headship, the distributional equivalent and the obligatory element, the other one representing the head concept of ruler used in true dependency grammar. These are completely new concepts that do not harmonize with the other five. In conclusion, a head is not only superfluous, but it would be a completely different additional category whose use for a grammar model is doubtful. Even though it has been shown that dependency representations and constituency representations are weakly equivalent - i.e. that for any dependency grammar there is a constituency grammar that describes the same set of sentences, e.g. by Hays (1964) referring to a proof by Gaifman (1965) - many approaches to grammar use both constituency and dependency. However, it has also been shown that dependency and constituency grammars are not strongly equivalent; in other words, one cannot be translated into the other. For example, in the constituency-dependency direction, for a bracketed structure (x (y z)), it is not possible to determine whether the
184 Systemic Functional Grammar in Natural Language Generation corresponding dependency i s x - » y - » z o r x < - y - > z , etc. (Matthews 1981: Chapter 4). In the same way, it is not always possible to derive a constituency structure from a dependency structure. The direction of influence among constituency and dependency approaches in amalgamate grammar models is in most of the cases constituency incorporating a notion of dependency.15 The fact that many grammar models based on constituency do incorporate one or the other notion of head and dependency suggests that these concepts fulfil a task that cannot be accomplished without them. This view is supported by Hudson (1987b) in a reply to Zwicky (1985). Hudson shows that a notion of dependency is indispensable for a grammar model, arguing for a different analysis of Zwicky's sample constructions which reveals that 'head' can be considered a unifying category of most of the head-like notions brought forth by Zwicky. His analyses yield a rather uniform picture of what is involved in headship; he concludes that 'head' as a generalizing category is indeed useful to have in a syntax model. To illustrate this, a table that summarizes Zwicky's results from Hudson (1987b: 111) has been reproduced (see Table 5.2) and another that Hudson himself arrives at (Hudson 1987b: 125) with his own analyses (see Table 5.3).16
(1) semantic functor
V + NP V
(2)
P+NP NP+VP P VP
(A) semantic argument
*
*
(B) determinant of concord
(*)
(C) morphosyntactic locus
Aux+VP Aux
*
*
=
=
*
==
*
=
.
=
. = .
*
Comp+S Comp
*
(F) distributional equivalent
Table 5.2 Zwicky's analysis
(6)
.
=
(H) ruler
Det+N Det
(5)
*
(E) governor
=
(4)
*
(D) subcategorizand
(G) obligatory element
(3)
Computational Representation: A Proposal for Dependency 185
semantic functor
(1)
(2)
(3)
(4)
(5)
V + NP
P+NP
NP+VP
Det+N
Aux+VP
V
P
VP
Det
Aux
(6)
Comp+S Comp
( C ) morphosyntactic locus (D) subcategorizand (E) governor ( F ) distributional equivalent ( G ) obligatory element ( H ) rule
Table 5.3 Hudson's analysis
As an additional head-like notion Hudson puts forward the semantic functor - rather than the semantic argument - because it is the semantic functor, in his view, that must be taken as 'semantically characterizing' (Hudson 1987b: 115). Therefore, the semantic argument is taken away from the list of candidate heads; and the determinant of concord is removed because there is no dependency involved in concord, as Hudson maintains. On the basis of these a priori alterations, Hudson argues that if all the entries in Table 5.2 were either '=' (identical with semantic functor) or '.' (not applying), then one could claim that most of the head-like notions are the same category in fact, and therefore a generalizing supercategory 'head' could be established that embraces them all. The critical points in Table 5.2 (the columns and rows with a majority of '*' (contradicting): (A), (B), (4), (5), and (6)) are eliminated or changed to '=' or '.' by Hudson's analysis, so that the original table is changed to Table 5.3, with no contradictions remaining. The result of Hudson's argumentation is that there is a supercategory 'head' that generalizes over most of the relevant headlike notions. He concludes that 'head' is the name of a grammatical relation category, on a par with categories like 'subject' and 'object', but on a higher level of generality than these. Like them it serves the essential function of allowing
186 Systemic Functional Grammar in Natural Language Generation generalizations across rules which could not otherwise be made. (Hudson 1987b: 131)
What Zwicky (1985) does not realize with his starting point and analysis results is that the convergence of semantic functor, subcategorizand and governor can already be of advantage. Creating a supercategory for these converging notions and calling it 'head' can provide us with a general category which can actually be used as an anchor for both valence (subcategorization) and government, as well as for semantic role assignment, thus providing a straightforward way of interfacing semantics and syntax. One of the attractions of a notion of head lies in exactly this kind of generalization. The concepts of head and dependency had actually already been taken up in early transformational grammar (e.g. by Hays 1964; Robinson 1970; Anderson 197la; Vater 1975, and incorporated in the deep structure representation. Most clearly, however, a concept of head received a special status in X-bar syntax (Chomsky 1970; Jackendoff 1977), which has become the phrase structure model underlying most current syntactic models within the generative tradition. X-bar theory was a response to the need for a crosscategory generalization of phrase structure that could account for the commonality between derived nominals and their associated clausal expression (see Section 5.2). The central insight underlying X-bar theory is that most phrasal categories have heads on which the elements of a constituent in question are dependent. Categories are distinguished according to whether they are lexical (X or X°) or not (X', X", . . . , or X"; n > 0), and if they are lexical whether they are subcategorizands or not. The items involved in subcategorization, i.e. the head as subcategorizand and the subcategorized element, appear together in a phrasal category X', where X' is called a phrasal projection of X. This sets up a generalized scheme of phrase structure, making the statement that X will always appear in the same basic configurational pattern irrespective of what its actual categorial value is. The largest category that has to be set up to account for the distribution of the dependents of a lexical category is called the maximal projection of that category. As mentioned before, X-bar syntax has become the most widespread of phrase structure models. It is used in Government and Binding (GB) theory (Chomsky 1981) as a separate subtheory (X-bar Theory) on a par with Binding Theory, Theta Theory, etc. In GB, X-
Computational Representation: A Proposal for Dependency 187
bar theory is one part where dependency relations are expressed. Another part is case theory, according to which abstract case is assigned under the relation of government, where government is an abstract relation that is separate from actual case assignment, but acts as a precondition for case assignment. A lexical head X may be said to govern its sister in X', where sisterhood is defined by mutual c-command, which is in turn defined on dominance relations in the tree (Haegeman 1991: 122-6); and certain lexical heads have in addition the power to case-mark certain of their complements. Dependency relations in GB are thus given in various related ways based on syntactic dominance in an X-bar-based phrase structure representation: by c-command, government and case assignment. Other modern theories of syntax, in a move away from transformational grammar, such as LFG and GPSG, incorporate the X-bar model of phrase structure. In LFG, c-structure representations are based on X-bar, the only change to regular X-bar models being that major categories are defined by [PRED +/—] (predicative; corresponding to [SUBJ +/—]) and [TRANS +/—] (transitive; corresponding to [OBJ +/—]) instead of by [N +/—] and [v +/—]. In GPSG, the X-bar model is underlying the ID-schemata that describe the basic syntactic structures. Thus, even though all these models are in the phrase structure (i.e. constituency) tradition, by employing X-bar syntax, they make use of a very basic expression of dependency that is integrated within the constituency approach.17 This allows generalizations that are not possible, if constituency is used as the only mode of representing syntactic structure. It is always possible to refer to the head of a construction with which various general morphosyntactic properties can be associated, most importantly the functions of a head as distributional equivalent, morphosyntactic locus, subcategorizand and governor. This is very clearly demonstrated in HPSG, where general wellformedness conditions are formulated, operating with a notion of head in this sense. These general well-formedness conditions are notably expressed by the head feature principle and the subcategorization principle (see Section 5.2). Specifying syntactic structures as types of headed-structures attributes the notion of dependency an important status in the overall model of HPSG. Again, the motivation for introducing the concept of head is the (crosscategorial) generalization; heads are used for expressing generalizations over syntactic structures (here syntactic structure types) that cannot be expressed without them.
188 Systemic Functional Grammar in Natural Language Generation
5.3.2 Hudson's ANTG revisited: heads and features
The first and only model to incorporate a notion of dependency into systemic grammar was Hudson's ANTG (Hudson 1976). The notion of dependency Hudson uses here is not strictly the one of dependency grammar; rather, it is similar to the one used in X-bar syntax. Further, Hudson concedes some of the goals and notions postulated and used within generative grammar, such as generativeness, the mentalistic view of language and the evaluation criteria of generality, comprehensiveness, simplicity and economy (Butler 1985: Chapter 6.3). Given this orientation, Hudson's ANTG model has a number of features that group it together with Generative Transformational Grammar rather than with SFG. Some other properties of ANTG are rather similar to SFG, but there are some advantages over SFG; and there are some advantages over transformational grammar. The main advantages over transformational grammar are the following: there is only one structural representation, higher nodes are classified in addition to the classification of terminal nodes, daughter dependency is recognized and daughter/sister insertion is separated from sequence rules. It is the classification of higher nodes that makes it possible to have only one structural representation because the context in which a certain structural property holds is given with sufficient specificity. The same effect is indeed achieved by Systemic Functional Grammar, where all the conditions for a realization to apply are given in the system network. So, the fact that there is only one structural representation and that higher nodes are classified places the ANTG model with SFG rather than with transformational grammar. One major difference to SFG, however, lies in the motivation of classificational features. Features in Hudson's ANTG are motivated solely by distributional criteria and by those internal constituency properties which are relevant to distribution. That is, features have a very restricted motivation compared to SFG, where they are motivated both 'from above', i.e. as grammatical generalizations over semantics, and 'from below', i.e. from realization in syntagmatic structure. Another aspect concerning classification in ANTG is that Hudson gives up the polysystemic principle according to which features or systems are set up for specific places in structure, realized in Hallidayan SFG by the rank scale. In ANTG even the grammatical units are treated as being in a paradigmatic relation. Hudson's reason for abandoning the polysystemic principle is that it prevents generalization across
Computational Representation: A Proposal for Dependency 189
categories.18 Another difference between ANTG and SFG is the abandonment of the microfunction as a central notion, functions being reduced to only a few (see Hudson 1976: 98). The crucial property of ANTG that is relevant for our present discussion, however, is Hudson's inclusion of dependency in linguistic description or, more particularly, his reasons for doing so. Hudson distinguishes between daughter dependency (dependency between a mother and a daughter in a phrase structure) and sister dependency (dependency between sisters in a phrase structure), i.e. he differentiates between structural properties that reflect dominance relations and properties that reflect head-dependent relations. Most of the sister dependency rules that are given in the partial grammar of English in Hudson's ANTG specify subcategorization properties. All of these are generally handled on higher ranks in SFG, daughter dependency relations being expressed by preselection. One could argue therefore that if SFG is able to express these two types of relations, not very much is gained by Hudson's ANTG approach compared to SFG. However, the clear advantage is the crosscategorial generalization allowed by the nonpolysystemic classification approach of the ANTG model - which is notably what makes these generalizations possible (e.g. the feature co-occurrence restrictions between a phrase and its head and complement constituents). This is in fact very similar to HPSG'S head feature dependency (see Section 5.2). 5.3.3 Towards a notion of dependency for SFG As pointed out in several places in the discussion of SFL theory, its linguistic representation and computational application, there is generally a nonacknowledgement of the necessity of making syntactic generalizations as they are made in X-bar-based grammar models or other surface-syntactically oriented approaches. It was pointed out earlier that this kind of generalization is desirable from a linguistic-theoretical point of view and also with a view to computational-linguistic representation (see Chapters 3 and 4). Given that with surface-syntax oriented grammar approaches that are based on X-bar theory, generalizations over structures and their internal make-up are achieved by attributing dependency a major role in grammatical description, then the introduction of a similar notion of dependency to SFG is suggested for this purpose. None of the dependency notions discussed in this section exactly coincides with the notion of dependency (or interdependency) used in SFG.
190 Systemic Functional Grammar in Natural Language Generation
Interdependency in SFG is the expression of logical relations pertaining to complex units only (clause complex, nominal group complex, etc.) Dependency relations in simplex units, which all other approaches focus on in discussing dependency, are not generally acknowledged in SFG, and consequently there are no means of representation offered by the theory. Focusing on interdependency and not attributing the common notion of dependency as much importance may also be a historical accident, as it were. Halliday's first model (scale and category grammar) was probably influenced by his investigations of Chinese (Halliday 1959) and later developments of the model were for modern English. Since Chinese is a noninflecting language, and English has developed to be a basically noninflecting language, there might not have been an immediate need for a dependency notion for the purpose of accounting for dependency relations as encoded in, for instance, case government.19 For an enhancement of SFG in this respect, it would be desirable to be able to formulate syntactic generalizations using dependency to get a hold of the features that are the basic conditions for a structure to be well-formed. At the same time, however, we do not want to abandon the functional motivation of grammatical classification, which is the major distinctive feature of SFG. Hudson (1976), for instance, escaped this problem somewhat because he reduced the feature inventory to those features that are distributionally and constitutionally motivated. Our proceeding in this way is, then, a departure from Hudson (1976) in two respects: first, polysystemicity is essentially retained; second, the functional motivation of grammatical types is not abandoned. In order to do so, it is necessary to reinterpret the logical metafunction and acknowledge a logical organization for simplex units as providing some general well-formedness conditions on syntagmatic structure. 5.4 A Proposal for a Dependency Systemic Functional Grammar It has been argued that what is lacking in SFL theory and its linguistic representation - and consequently also in computational grammars such as NIGEL or KOMET - are generalizations over syntactic structures and their internal make-up and the representational means to express them. Based on our findings in looking at the use and usage of features
Computational Representation: A Proposal for Dependency 191 for grammatical description on the one hand, and at the use of a notion of dependency in grammar theory on the other, we suggest taking the following steps towards providing SFG with the linguistic concepts and the representational means that assist such generalizations: • a reformulation of SFG in a typed feature structure formalism; • an augmentation of SFG'S grammatical classification by a type ' dependency-structure'; • the determination of those general syntactic well-formedness conditions for SFG that can be made with the help of a notion of head; • the formulation of these as general type definitions and type constraints. Accordingly, the present section is organized as follows. First, a Systemic Functional Grammar Fragment of English and German in the typed feature structure language TFS (Typed Feature System) (Emele and Zajac 1990) (Section 5.4.1) will be specified. This classification hierarchy will be adapted to include dependency information. Some general properties of grammatical units are determined, motivated by the preselection relation and by the notion of head as distributional equivalent, subcategorizand and governor (Section 5.4.2). Based on this is a formulation of these properties as general feature co-occurrence constraints. An executable specification is thus achieved, i.e. the grammar fragment defined in this way can be used for generation or parsing. The newly introduced types are shown to hold for both the grammar of English and that of German, with only minor differences. Finally, some examples are given of the token objects (both English and German) that this classification yields (Section 5.4.3). Section 5.4.4 concludes Section 5.4, summarizing the achievements of the proposal and pointing out its limitations. 5.4.1 A fragment of SFG as typed feature structures For the experimental implementation of dependency for SFG presented here, two small grammar fragments of English and German, which include a restricted set of units, have been implemented. Only clauses, nominal groups and prepositional phrases are considered. Moreover, the paradigmatic classification of the clause only contains the basic transitivity types, and mood is defaulted to declarative. The textual aspects of paradigmatic
192 Systemic Functional Grammar in Natural Language Generation classification are left out. Similarly, the classification hierarchy of the prepositional phrase contains only the less delicate types; and the nominal group classification contains only a few systems. The input system of the grammar fragment used here for our experimental implementation in TFS is a modified RANK system that distinguishes between LEXICAL and NONLEXICAL grammatical units at primary delicacy.20 The distinction between clause, prepositional phrase and nominal group is made only at the next level in delicacy (see Fig. 5.9 for the grammar's RANK type hierarchy). This departure from the regular SFG rank scale is motivated on the following grounds. Introducing the types NONLEXICAL and LEXICAL makes it possible to generalize over phrasal units on the one hand, and lexical units on the other. For instance, assuming that we want to describe all simplex phrasal units in terms of their dependency relations, we can associate a general type definition with the type NONLEXICAL in the following way: Every dependency structure (nonlexical) has a head daughter and dependent daughters. So, similar to HPSG'S definition of headed structures, we can define a systemic dependency structure as follows:21
Figure 5.9 A fragment of an SFG as a type hierarchy
(defl)
Computational Representation: A Proposal for Dependency 193 Furthermore, we must be able to distinguish between words that can act as heads and those that cannot. This is reflected in the subtypes of the LEXICAL type, LEX-HEAD and LEX-NONHEAD (see Fig. 5.9). Given the rather flat constituency organization in SFG and, for the purpose of simplification, eschewing the problem of the eventual necessity of a verbal group (VG) here, we can stipulate that the head daughter must be of type LEXICAL and, more specifically, that it must be of type LEX-HEAD.22 This further constraint is added in (defl'). (defl')
In TFS, this definition is formulated as follows: NONLEXICAL[head-dtr: LEX-HEAD, dependent-dtrs: list]. All subtypes of NONLEXICAL (CLAUSE, PREPOSITIONAL-PHRASE, NOMINAL-GROUP) will inherit this constraint. We can then further define the types CLAUSE, PREPOSITIONAL-PHRASE and NOMINALGROUP as follows: (def2)
(deO)
(def4)
194 Systemic Functional Grammar in Natural Language Generation The further classification of the clause, the prepositional phrase (PP) and the nominal group (NG) follows the functional classification as implemented in NIGEL and KOMET. For instance, the clause is classified in terms of PROCESS-TYPE, the PP is classified in terms of MINORPROCESS-TYPE and the NG is classified in terms of NOUN-TYPE. For some examples of how these common systemic types are defined as typed feature structures see (def5) and (def6):23 (defS)
(def6)
This means that the supertypes NONLEXICAL and LEXICAL are simply an add-on and do not compromise the functional part of the system network. Now, to bring these two perspectives together it is necessary to specify with which functional constituents (Process, Minorprocess, Agent, etc.) the Head-daughter is to be conflated. This is done by three type constraints that specify structure sharing between the Process and the Head-daughter in the clause (def7), the Minorprocess and the Head-daughter in the prepositional phrase (defS), and the Thing and the Head-daughter in the nominal group (def9). (def 7) Function-Head-Association-Principle-Clause (FHAP-C1)
Computational Representation: A Proposal for Dependency 195 (def8) Function-Head-Association-Principle-PP (FHAP-PP)
(def9) Function-Head-Association-Principle-NG (FHAP-NG)
In TFS, principles of this kind take the following form (see Zajac 1991): FHAP-CL := [process: #1, head-daughter: #1]. This is a brief summary of what has been done so far. With (def') and (def2) to (def4) we have covered the following information: • nonlexical units (clauses, prepositional-phrases, nominal-groups) have a head and dependents; • the heads of the clause, the PP and the NG are the verb, the preposition and the noun, respectively. • the dependents can be either nonlexical or lexical. Moreover, by (def 7) to (def9) we have made sure that interaction between the newly introduced types and their attributes and the regular systemic elements of structure (Process, Agent, Thing, etc.) is given.
196 Systemic Functional Grammar in Natural Language Generation Thus a basic skeleton of dependency structure for all subtypes of nonlexical now exists which can be further operated on. The following section will work out further specifiable constraints using this skeleton. For instance, not much information has yet been given about the head, nor about the internal structure of the dependants. What are their general properties? To answer this question, it is necessary to turn back to systemic networks and look more closely at the preselection relation. There is a certain subset of preselection features that always have to be preselected, rather independently of the functional type of a unit (see Chapter 4). This suggests that this set of features is special in the sense that it constitutes those features of a syntactic structure that are absolutely necessary for its wellformedness. As will be shown, this set of preselection features encodes three of the major notions of headship: distributional equivalence, subcategorization, and government.
5.4.2 From preselection to designated type constraints
Among the various constraints between features in the system network - network connectivity by input conditions, wiring of features in gates, interrank feature constraints - the ones of interest here are the interrank constraints expressed by preselection. Of all the features that are preselected for a particular grammatical function of a given unit, there is a small subset that is always preselected (see Chapter 4). For any unit, the syntactic categories that realize their elements of structure must always be preselected, e.g. (\ Agent nominal-group), and, if the unit realizing an element of structure is governed, the case of that unit must also be preselected. Preselection handles this kind of interrank constraint to the extent that it is applied for expressing the co-occurrence of all kinds of features, be they functionally or surface-syntactically motivated. However, the features just noted, which are always preselected, are special in that they express three aspects of headship: distributional equivalence, subcategorization and government. While subcategorization and government are marked on the dependent daughters of a unit, distributional equivalence holds between a head and the mother. So, this special set of preselection features is most generally to be expressed on heads and dependent daughters. And, since it encodes some of the basic features of a well-formed structure, it seems justifiable on these grounds to attribute to it a distinguished status among the preselection features. A more explicit recognition shall be given to headship in terms of distributional equivalence, subcategorization
Computational Representation: A Proposal for Dependency 197 and government. A generalization over this distinguished set of preselection features can thus again be achieved referring to dependency: • case is always marked on the category subcategorized by the head of a unit (if the category is case-bearing and if the head is a governor); • the dependents and the head will always be assigned a particular category, independently of the semantico-functional type of that unit. Based on this, it is then possible to proceed to define these properties in the form of type constraints. Acknowledging the notion of head as distributional equivalent, the following general constraint can be formulated on dependency structures - similar to HPSG'S head feature principle: In a dependency structure, the head-daughter shares the head features with its mother. (def 10) Head-Feature-Principle (HFP)
The head features are defined for each type of LEX-HEAD as follows: all LEX-HEAD have 'class' (category) as head feature (VERB has 'verbal', NOUN has 'nominal' and PREPOSITION has 'prepositional'); NOUN has in addition [case: CASE], case being assigned structurally to the whole nominal-group and being an attribute of the category noun at the same time: (def11)
198 Systemic Functional Grammar in Natural Language Generation (def!2)
(def!3)
Accordingly, we have to modify our definition of dependency structure again to include the attribute head-features:24 (defl")
In TFS, (defl") takes the following form: NONLEXICAL[head-features: SYNTAX, head-dtr: LEX-HEAD, dependent-dtrs: list]. The unification (U) of, for example, the type clause with the HFP (deflO) yields the following feature structure: CLAUSE U HFP
Computational Representation: A Proposal for Dependency 199 SUBCATEGORIZATION-TYPE
Figure 5.10 A surface-syntactically motivated transitivity hierarchy In order to accommodate subcategorization and government patterns too, we introduce four types that are motivated by the number of dependent daughters required, and by the values of case that are assigned to the dependent daughters. For English, these types build almost a cross-class to PROCESS-TYPE: they are a more surface-syntactically oriented reformulation of the AGENCY system. For German, they have to be subtypes of PROCESS-TYPE because they do not generalize over all kinds of process types (see Fig. 5.10 for this classification; the TFS formulation is given below). SUBCATEGORIZATION-TYPE := (INTRANSITIVE | TRANSITIVE DITRANSITIVE2) . (def!4)
DITRANSITIVE1
200 Systemic Functional Grammar in Natural Language Generation (def!6-E)
The corresponding definitions for German differ from the English ones in the values of the case attribute:
Computational Representation: A Proposal for Dependency 201
202 Systemic Functional Grammar in Natural Language Generation Note that subcategorization is not treated as an inherent property of lexemes (or, more specifically, of heads) because this, as is known from other theories such as LFG and HPSG, causes a whole range of problems for relation changes which, if treated in the lexicon, either create a large number of verb readings or have to be handled by lexical rules that are hard to control. Relation changes can be controlled in a satisfactory way in SFG as it is now, where in the KOMET grammar they are represented by the interaction of processtype and diathesis and in the NIGEL grammar they are represented by cross-classes of process-type and agency plus voice. For the PP, there is only one subcategorization/government type in English; for German, there are three, based on which case value the phrase/preposition requires.
Computational Representation: A Proposal for Dependency 203
For the NG, only two kinds of subcategorization in our small grammar fragments of English and German are distinguished, namely subcategorization for a determiner and zerosubcategorization (as with individual names).25
Note that in all of these definitions of subcategorization types syntactic functions have been included. Again, this is to ensure that the dependent daughters are related to the common SFG elements of structure, which is again done by specifying structure sharing, here of the individual members of the dependent-daughters' list with elements of the function structure. With the definitions given in this section the following has been
204 Systemic Functional Grammar in Natural Language Generation achieved: • the specification of subcategorization and government patterns separate from the common function structure of SFG; • the expression of the notion of head as distributional equivalent (which partly encodes the observation of phrasal (higher-ranking) unit being the projections of their heads). It is now time to look at the syntagmatic structures our small grammars of English and German yield.
5.4.3 ResultsIexamples
The results that our small grammars of English and German produce when a 'generation query' (i.e. one that asks about a particular type) is posed to the TFS system are exemplified here by the structures of the following sentences: (5.3)
Kim devours a cookie. - Kim verschlingt einen Keks.
(5.4)
Kim sings. - Kim singt.
(5.5)
Kim gives Jodie a cookie. - Kim gibt Jodie einen Keks.
(5.6)
Kim gives a cookie to John. - Kim verschenkt einen Keks an John.
The feature structure results are given in Fig. 5.11 to 5.14 for English and in Fig. 5.15 to 5.18 for German.26 Comparing these to an output produced by a grammar that does not include dependency (see Fig. 5.19; cf. Bateman et al. 1992), it is obvious that the dependency one carries more metainformation, but the features that are spread out over the representation in Fig. 5.19 are now more uniformly organized by the grouping under the newly introduced attributes head-daughter and dependent-daughters. That is, there are no new linguistic features other than the ones with the head-features attribute, but the existing features have been given an organization that acknowledges their being generalizable over.
5.4.4 Summary and conclusions
A small SFG fragment of English and German in the typed feature structure language TFS that includes dependency information for simplex units has been presented. Dependency information has
NONRECIPIENT [agent:
#6=INDIVIDUAL-NAME [dependent-dtrs: , #1=LEX-KIM thing: [common: kirn, name: head-features: *2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]], #1, head-dtr: head-features: #2], dependent-dtrs: , #6, subject: directcomplement: *7. head-dtr: #9=LEX-DEVOURS [finite: name: devours, head-features #8=V-SYNTAX[class: VERBAL]], head-features: #8, theme: *6, finite: #9, process: #9 #7, goal: #7] medium:
Figure5.ll Kim devours a cookie.
MIDDLE [agent:
#4=INDIVIDUAL-NAME [dependent-dtrs: . thing: #1=LEX-KIM
[common: kirn, name: head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]],
head-dtr
head-features: theme: finite: dependent-dtrs subject: process: Figure 5.12 Kim sings.
head-dtr: head-features #5=LEX-SINGS [finite: name: head-features; #3, #4, #5, .
#4, #5]
#1. #2],
sings, #3=V-SYNTAX[class: VERBAL]],
RECIPIENT-NONFOCUSED
[agent:
#4=INDIVIDUAL-NAME [dependent-dtrs: , thing:
head-dtr:
head-features: beneficiary:
medium:
#1=LEX-KIM [common: name: kirn, head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]] head-features: #2],
head-dtr: #1 #5=LEX-GIVES [finite: name: gives, head-features: #3=V-SYNTAX[class: VERBAL]] #3, #8=INDIVIDUAL-NAME [dependent-dtrs . thing: #6=LEX-JODIE [common: name: Jodie, head-features #7=NG-SYNTAX [class: NOMINAL, case: OBLIQUE]], head-dtr: #6, head-features: #7],
#12=CLASS-NAME [deictic:
#9=LEX-A [name: definite: number:
dependent-dtrs: , thing: #10=LEX-COOKIE [common: name: number:
a,
[countable: singular: +]],
+, cookie, [countable: +, singular: +], head-features: #11=NG-SYNTAX [class: NOMINAL, case: OBLIQUE]] head-dtr: #10, head-features: #11], goal: #12, theme: #4, finite: #5, dependent-dtrs: , subject: #4, indirectcomplement: #8, directcomplement: #12, process: #5]
Figure 5.13 Kim gives Jodie a cookie.
RECIPIENT-FOCUSED [agent: #4=INDIVIDUAL-NAME[dependent-dtrs: , thing: #1=LEX-KIM [common: -, name: kirn, head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]], head-dtr: #1, head-features: #2], head-dtr:
#5=LEX-GIVES[finite: +, name: gives, head-features: #3=V-SYNTAX[class: VERBAL]], head-features: #3, medium: #9=CLASS-NAME[deictic: #6=LEX-A [name: a, definite: -, number: [countable: +, singular: +]], dependent-dtrs: , thing: #7=LEX-COOKIE [common: +, name: cookie, number: [countable: + , singular: •»•] , head-features: #8=NG-SYNTAX [class: NOMINAL, case: OBLIQUE]],
head-dtr: #7, head-features: #8], goal: #9, theme: #4, finite: #5, process: #5, beneficiary: *15=RECIPIENCY-MINORPROCESS [head-features: #13=V-SYNTAX[class: PREPOSITIONAL], dependent-dtrs: , minirange: #12, minorprocess: #14«LEX-TO[name: to, head-features: #13], head-dtr: #14], dependent-dtrs: , subject: #4, directcomplement: #9, indirectcomplement: #15]
Figure 5.14 Kim gives a cookie to John.
NONRECIPIENT [agent:
dependent-dtrs:
*6=INDIVIDUAL-NAME [dependent-dtrs: , thing: #1=LEX-KIM [common: -, name: kirn, head-features: »2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]], head-dtr: *1. head-features: *2], ,
subject: #6, directcomplement: 97, head-dtr: «9=LEX-VERSCHLINGT [finite: +, name: verschlingt, head-features: #8=V-SYNTAX[class: VERBAL]], head-features: theme: finite: affected: process:
88, *6. *9, *7, #9]
Figure 5.15 Kim verschlingt einen Keks.
MIDDLE [agent:
head-dtr:
head-features: theme: finite: dependent-dtrs subject: process: Figure 5.16 Kim singt.
#4=INDIVIDUAL-NAME [dependent-dtrs: , thing: #1=LEX-KIM [common: name: kim, head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]], head-dtr: *1. head-features: #2], #5=LEX-SINGT [finite: +, name: singt, head-features: #3«V-SYNTAX[class: VERBAL]], #3, #4, *5, , #4, #5]
RECIPIENT-NONFOCUSED [agent: #4=INDIVIDUAL-NAME [dependent-dtrs: . thing: #1=LEX-KIM [common: name: kirn, head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATT head-dtr: #1, head-features: #2], head-dtr: #5=LEX-GIBT [finite: +, gibt, name: head-features: #3=V-SYNTAX[class: VERBAL]], head-features: #3, theme: #4, finite: #5, beneficiary: #8=INDIVIDUAL-NAME [depeadent-dtrs , thing: #6=LEX-JODIE [common: name: jodie, head-features: #7=NG-SYNTAX [class: NOMINAL, case: DATIVE]], Lead-dtr: *6, head-features: #7], #12=CLASS-NAME [deictic: #9=LEX-EINEN einen, [name: definite: [countable: number: singular: +3], dependent-dtrs: , thing: #10=LEX-KEKS [common: name: keks, number: [countable: +, singular: +], head-features: #11=NG-SYMTAX [class: NOMINAL, case: ACCUSATIVE]] head-dtr 910, head-features: #11], dependent-dtrs: , subject: #4, indirectcomplement: #8, directcomplement: #12, process: #5] affected:
Figure 5.17 Kim gibt Jodie einen Keks.
RECIPIENT-FOCUSED [aRent: #4=INDIVIDUAL-NAME[dependent-dtrs: , thing: #1=LEX-KIM [common: ~> name: kim, head-features: #2=NG-SYNTAX [class: NOMINAL, case: NOMINATIVE]], head-dtr: #1, head-features: #2], head-dtr: #5=LEX-VERSCHENKT[finite: +, name: verschenkt, head-features: #3=V-SYNTAX[class: VERBAL]], head-features: #3, affected: #9=CLASS-NAME[deictic: #6=LEX-EINEN [name: einen, definite: -, number: [countable: +, singular: +]], dependent-dtrs: , thing: #7=LEX-KEKS [common: +» name: keks, number: [countable: +, singular: +], head-features: #8=NG-SYNTAX [class: NOMINAL, case: ACCUSATIVE]] head-dtr: #7, head-features: #8], beneficiary: *15=RECIPIENCY-MIHORPROCESS [head-features: »13=V-SYNTAX[class: PREPOSITIONAL], dependent-dtrs: , minirange: #12, minorprocess: #14=LEX-AN[name: an, head-features: #13], head-dtr: #14], dependent-dtrs: , theme: #4, finite: #5, process: #5, subject: #4, directcomplement: #9, indirectcomplement: #15]
Figure 5.18 Kim verschenkt einen Keks an John.
[agent:
#2=INDIVIDUAL-NAME [thing: LEX-KIM [common: -, name: kirn, noun: +, case: NOMINATIVE]],
direct complement: #3=CLASS-NAME [deictic: LEX-A [name:
thing:
name:
subject: finite:
#2,
a,
number: [countable: definite: - +, number singular: +]], sigular; +]], LEX-COOKIE [common: + , cookie,
number: [countable: +, singular: +], noun +, case: OBLIQUE]],
#1=LEX-DEVOURS
[finite: +, name: devours, verb: +]], Figure 5.19 An SFG typed feature structure without dependency
theme:
#2,
medium: goal: process:
#3, #3, #1]
214 Systemic Functional Grammar in Natural Language Generation been introduced in the form of a type NONLEXICAL as a supertype of clauses, PPs and NGs, which is defined by the attributes HEADFEATURES, HEAD-DAUGHTER and DEPENDENT-DAUGHTERS. Furthermore, some of the basic syntactic properties of grammatical units have been defined in the form of type constraints on the values of these attributes. These alterations to SFG classification do not compromise the common systemic functional classification, however. The functional classification is retained, the dependency-based, syntactically oriented types being simply an add-on. What has been achieved by separating some of the surfacesyntactically motivated constraints on syntagmatic structure out from the functionally oriented system network and a specification of this in typed feature structures? Linguistically, the following has been achieved: • some generalizations about what is common in the structure of simplex grammatical units; • a linguistic representation of this as dependency relations. Furthermore, most of the newly introduced types hold for both English and German. This is interesting, since the new types are rather surface-syntactically motivated, but perhaps not surprising, since the representation we have employed is flexible enough to accommodate language-specific variation, e.g. the different government patterns of English and German due to the differences in morphological case options. Again, it needs to be emphasized that while the lack of syntagmatic generalizations due to a nonacknowledgement of dependency is a general deficiency of SFG and can also be noted with the grammar of English, it becomes especially prominent with the grammar of German, because here dependency relations are clearly grammaticized (e.g. in case government). Representationally, the advantages of typed feature structures as a metalanguage for linguistic description have been made use of. These are notably: • the uniform representation of all grammatical information (paradigmatic and syntagmatic) as typed feature structures; • the means of structure-sharing to express certain linguistictheoretical concepts, e.g. the dependency between a higherranking unit and its head and dependents.
Computational Representation: A Proposal for Dependency 215
Further, all constraints can be specified rather locally for each type, and can be kept separate from the specification of the interaction between types. For instance, the type definition of dependency structure (defl"), is separate from the constraint definitions between head daughters and functional elements of structure, e.g. (def7) to (def9). As a convenient side-effect, in TFS, the grammar can be checked independently of semantics (NIGEL cannot run without a semantic input). Thus, all grammatically allowable subtypes of a type that is queried are produced - which is valuable for the grammar writer to see whether the constraints she has specified are sufficient and yield the expected results (this is not possible with NIGEL). Besides the general limit of being a small-scale implementation, the more specific limitations of our proposal are the following. Only some of the specifiable surface-syntactic constraints have been covered, namely those that can be specified with the help of a notion of head as distributional equivalent, subcategorizand and governor. Constraints that can be covered if the notion of head as morphosyntactic locus is acknowledged have not been systematically covered (the only one is case for nouns/nominal groups). This would include, for instance, finiteness and tense of the verb - which are treated as head features in, for example, HPSG. Further, the head as semantic functor has remained implicit. Commonly, in SFG this is encoded in the process-type and the minorprocess-type of the clause and the PP, respectively. Moreover, those morphosyntactic constraints that could be made specifiable in general terms by the recognition of prosodic structure, as, for example, in agreement, have not been included. Since the focus has been on dependency structure and since, as Hudson (1987b) points out, the 'determinant of concord' does not belong to the head-like notions, the problem of representing agreement remains untackled. Finally, there is a flaw in our specification of head features. In general, in SFG the sets of features of any two ranks must be pairwise disjoint (polysystemicity). In our definition of head features we have ignored this postulate and declared the type CLASS = VERBAL | PREPOSITIONAL
NOMINAL.
as a rather general class that is valid for all kinds of units and expressed this information as shared among a higher-ranking unit and its head by (defl 0) to (defl 3). For this not to be a violation of SFG, we have to require that polysystemicity be relaxed in order not
216 Systemic Functional Grammar in Natural Language Generation to hold for this distinguished set of features. Proceeding this way, and extending the mother-head dependency to include other kinds of information that are handled in NiGEL-style grammar by classify and inflectify, one has to declare this set of features only once and express the dependency solely as feature sharing via the headfeatures attribute. This would mean taking more seriously the observation that heads characterize their phrasal projections. Thus it is not claimed that the potential of generalizations that are possible to be made by means of dependency have been exhausted, and a comprehensive account of the generalizable morphosyntactic properties of grammatical units that a grammatical system network typically contains has certainly not been given. The main goal has been to find one possible method of specifying some of the surfacesyntactically oriented constraints on systemic syntagmatic structure - without having to abandon the major characterizing properties of SFG, such as functional diversification, axiality, classification and realization. 5.5 Summary This chapter has concluded the illustration of the 'metasemiosis' of computational systemic linguistic theorizing. On the grounds of the problems encountered in the specification of the general morphosyntactic properties of grammatical units in NiGEL-style computational SFG, an attempt has been made to introduce some linguistic concepts and representational means that allow us to make more general statements about these properties. A central concept in the approach to SFG'S syntagmatic gap is the relation between higher-ranking units and their heads and dependents encoded commonly in SFG by the preselection realization statement. The chapter has tried to filter out from the set of preselection features those that are always preselected. They encode three major notions of head: the distributional equivalent, the subcategorizand and the governor. The representational means that has allowed the formulation of these has been a typed feature structure language (TFS; Emele and Zajac 1990). In terms of systemic functional theory, the proposal must be seen as a reinterpretation of the logical metafunction in terms of a notion of dependency as it is widely used in grammar theory. The interpretation that the logical metafunction has received here is as locus of some general constraints on the internal make-up of grammatical units, in particular of simplex units. As in complexes,
Computational Representation: A Proposal for Dependency 217 where logical relations are characterized by interdependency structures with the recurrence of one and the same function in one unit, for simplexes the formulation of logical relations we have proposed here is as dependency structure with the recurrence of the same features across the higher-ranking unit and its head. This is a viable solution under the condition that the polysystemicity postulate be relaxed. The proposal has mainly been inspired by the notion of dependency as it is used in a variety of grammar models, and more particularly it has been influenced by HPSG'S formulation of dependency relations in typed feature structures. The means of representation used here has opened up the way of conceiving of logical relations in the way described above. Notes 1. Recall that this gap in SFG was one of the major motivations for the ANTG version of SFG (Hudson 1976), in which the motivation of features was reduced to purely distributional and constitutional criteria (so that the system network became more surface-syntax oriented than in traditional SFG), and a notion of dependency was used to assist the representation of syntagmatic relations (see Chapter 2). 2. Specifier is a variable for categories that can be immediately dominated by X", and Complement stands for categories subcategorized by lexical heads. 3. For a detailed discussion of feature use in X-bar syntax see also (Muysken and van Riemsdijk 1986). 4. In current GB theory, T is said to be the head of clauses, carrying the inflectional properties of the head-verb. IP is the maximal projection of I, i.e. the clause node S. 5. Where the category names are just abbreviations for feature structures, 'H' stands for head, and [1] marks the bar level of the head. 6. The features affected by the Head Feature Convention are shown in bold face. 7. This notation is an attribute path., which gives all the attributes that are inherited from the supertypes of the type defined. A type 'headcomplement-structure' inherits the attributes SYNTAX from the type 'sign', it inherits the attributes LOCAL and HEAD from the type 'headedstructure', and is defined itself by the attribute SUBCAT. 8. ':=' is used to designate all attributes corresponding to systemic features, '{' stands for the systemic '[', i.e. denotes disjunction (Kasper, 1988: 183.) 9. Notably these are: in general, all of the typed feature structure formalisms with which she has tested the NIGEL grammar - TFS (Emele
218 Systemic Functional Grammar in Natural Language Generation
10.
11.
12.
13.
14. 15. 16. 17. 18. 19. 20.
21.
and Zajac 1990), CUF (Dorre et al. 1996), ALE (Carpenter and Penn 1994) - are limited in capacity (NIGEL contains a few thousand types) and difficult to handle in practice; TFS is weak in cross-classification; and CUF is weak in structure sharing. English: The sentence is an organized whole in which the constituting elements are the words. Every word that is part of a sentence is not isolated as it is in the dictionary. Among a word and its neighbours, relations ('connexions') are built up, which form the skeleton of the sentence. The structural relations ('connexions') establish dependency relations between the words. A 'connexion' relates a superior term to an inferior term. The superior term is called governor. The inferior term is called subordinated. (Translated by Kai Stolzenburg and myself.) This may just be a historical coincidence, but one could also hypothesize that since the Slavic languages and German are inflecting languages, they are more liable to a dependency analysis than English (as an essentially noninflecting language). Quite on the contrary, there is a tendency in language typology studies to classify languages according to their ordering of heads and modifiers (Hawkins 1983; 1984). In a similar vein, i.e. using dependency relations for typological insights, is NichoPs (1986) proposal for a classification of languages according to their morphologically marking heads, dependents, or both. The only case where Hudson concedes the necessity of constituency representations is for coordinate structures: there is no head-dependent relation involved here. See Hudson (1980, 1988) in a reply to Dahl (1980), who criticizes Hudson (1980a). He leaves aside the notion of head for syntactic percolation, which is coexistent with the morphosyntactic locus. See also Nichols (1986) for a similar observation about European structuralist linguistics and American structuralist linguistics in general. In Tables 5.2 and 5.3, the places marked '*' mean 'contradictive', '=' means 'the same', and '.' means 'not applying'. See also Kornai and Pullum (1990), who argue for focusing on dependency relations in X-bar syntax. It was around that time that the generalization across categories was becoming possible in transformational grammar by the introduction of the generalized phrase structure scheme of X-bar syntax. Erich Steiner, personal communication. The typographic conventions here are the following: TYPES are given in small capitals, and attributes are given in noncapital letters. All the definitions presented here are given in attribute-value matrices as they are commonly employed for feature structure representation. The attribute 'dependent-daughters' takes as value a list of RANK types. Note that the definitions we present here hold for both English and
Computational Representation: A Proposal for Dependency 219
22. 23. 24. 25. 26.
German, unless marked otherwise by '-E' (English-specific) or '-G' (German-specific). With a VG, more layering in structure would be created and the headdaughter of the clause would be the whole VG. In (def5) and (def6), the indices are simply to mark that Agent and Goal and Thing and Deictic are two distinct elements of structure. Here, SYNTAX is a type that has been introduced simply for easier reference to all the attributes embedded in head-features. ELIST denotes the empty list. The types NG-SYNTAX and V-SYNTAX (subtypes of SYNTAX) have been introduced simply for grouping the attributes embedded in headfeatures together and giving a name to them for easier reference. Note that the feature structures presented here are only a selection of all the structures the grammars specify: asked for a particular type, the TFS system spells out all the subtypes and the constraints specified for them, thus producing a whole range of features structures.
6 Summary and Conclusions
Theories can always be replaced by better theories, new facts elicited and new syntheses made. The applications themselves are an important source of feedback: a theory is constantly reexamined in the light of ideas suggested in the course of its application. If a theory is allowed to stand still, it soon ceases to be useful. (Halliday et al. 1964: 139)
6.1 The Theme of this Book The overarching notions that have built the thematic thread of the work presented here are: • linguistic theory as metasemiotic activity; • linguistic theory as varying according to the use to which it is put. In Systemic Functional Linguistics, theory, like natural language, is considered a means of doing. Systemic Functional Linguistics is thus inherently user-oriented. Its metalanguage must therefore be flexible and able continuously to accommodate the needs of its users (the linguists). This is how the notion of metasemiosis has been interpreted here: the continuous readjustment of theory in the light of the uses to which it is put. Being oriented this way, the role application plays in the theory is a crucial one. It is the source of refinement of the theory, of feedback to the theory according to contexts of use. So, in the same way as theory is for application, so is application for theory. An illustration of this methodology has been attempted by taking the example of computational systemic linguistic theorizing. Of particular concern here has been one specific problem area that has been revealed in the implementation of a fragment of the grammar of German in the NiGEL-style. The following section briefly summarizes this problem and the
Summary and Conclusions 221 steps taken towards a solution. 6.2 The Train of Thought in this Book Comparing a number of grammar approaches employed in tactical generation to Systemic Functional Grammar as used in PENMAN and its grammar of English NIGEL, it has been found that generally the tasks involved in generation are supported in SFL-based generation, and even that the systemic model is suggestive of the kinds of linguistically relevant resources needed in generation (see Chapter 3). However, some problems arise with SFG that do not arise in the other approaches discussed. The general difficulty we have found with linguistic specification in implementing a grammar fragment of German in NiGEL-style is a lack of means to express generalizations about syntagmatic structures and their internal morphosyntactic properties. This was termed 'syntagmatic gap' in Chapter 1. Let us recall the basic reasons why this gap arises and summarize our proposal for bridging it. First, in general, according to systemic functional theory, all linguistic variation is functionally motivated. In linguistic description, forms have to be mediated by functions, or, in other words, the functional diversification has a priority over the surface-syntactic generalization. While the functional motivation is potentially advantageous, particularly in generation where the major question is how to relate function to form, this becomes an impediment when it comes to linguistic paradigms that are not strictly motivatable by function, but reflect some rather idiosyncratic morphosyntactic property of the language under consideration, which then has to be forced into the functional organization. As suggested, one way of bridging this gap is to re-examine the theory and see whether a place can be found in which surfacesyntactic generalizations can be accommodated. Reinvestigating the logical metafunction and reinterpreting it as the source of some general constraints on syntagmatic structure has been proposed. The notion that assists such an interpretation is that of dependency, as it is widely used in grammar theory. Further, the preselection relation as a source of constraints on syntagmatic structure has been re-examined. Here, it has been found that there is a specific subset of the preselection features that implicitly encodes a head notion (see Chapter 4), which has then been explicitly
222 Systemic Functional Grammar in Natural Language Generation declared to be effected via a head in the sense of distributional equivalent, subcategorizand and governor (see Chapter 5). Second, pursuing this approach it has been noted that there are some problems with two of the major linguistic representational concepts of SFG, namely with the rank scale and with polysystemicity. The rank scale defines the origin of both structures and systems. It is a hypothesis about the structure of grammatical units according to syntactic category and it is a hypothesis about what the different populations are for which disjoint sets of features hold (polysystemicity). The rank scale, together with the polysystemicity postulate, thus prevents generalizations across different types of units. Thus, the introduction of two more general categories to the rank scale that distinguish between higher-ranking (phrasal) units and nonhigher-ranking (lexical) units. With these newly added supertypes of the regular rank scale, have been associated some general syntactic properties (see Chapter 5, Section 5.4). Polysystemicity has then been relaxed for a small set of features (the head features), which have been declared to be shared by a head and its phrasal projection, thus taking more seriously the concept of phrasal units being characterized by their heads. Another deficit of SFG that becomes obvious in computational application is a general lag of representation behind theory, both linguistic and computational (Matthiessen 1988a; Bateman et al. 1994). While, for instance, in the domain of syntagmatic structure, SFL theory has a strong hypothesis about modes of expression that coheres with the general notion of functional diversification, in terms of representation, the modes of expression are not explicit enough to be computationally usable (e.g. agreement as prosody, hypotaxis as interdependency). In order to be able to formulate our proposal of dependency, we have used another feature-based representation language, typed feature structures. Finally, and this is specific to the PENMAN implementation, which - as noted in Chapter 1 - collapses the metastrata of computational representation and implementation, the control over linguistic specifications is solely left to the grammar writer. With a computational representation language based on a feature logic, such as typed feature structures, in contrast, consistency of the linguistic specification is checked by the properties of the computational representation, here, by the underlying relation of subsumption, and by the unification machine that is defined for the typed feature system. Thus, for example, only those types are recognized that have been correctly specified in the type lattice. This kind of control
Summary and Conclusions 223
is not given in the PENMAN implementation (Bateman et al. 1994).' Using typed feature structures as (computational) representation language for SFG, as originally proposed by Bateman et al. (1992), is thus one important step towards a sounder representation of SFG in explicitly computational terms. The linguistically relevant property of typed feature systems that has been in the focus of our interest and that we have made use of in specifying dependency types for SFG has been the additional means of linguistic representation that they offer, notably those of feature or structure sharing (see Chapter 5, Section 5.4). Trying to exploit these means for achieving 'more-theory-in-implementation' has concluded the cycles of metasemiosis we have gone through in this book. All of this is then the feedback to SFL theory we can offer here. A few additional remarks about Head-Driven Phrase Structure Grammar (HPSG) are in place at this point, since it has been the main source of inspiration for the inclusion of a stronger notion of dependency in computational SFG. While our perspective has been input from HPSG to SFG, there is the opposite perspective, too: what are problems typical of HPSG for which it might look for solutions in SFG? Let us briefly elaborate on this question.
6.3 HPSG and SFG revisited One area that proves problematic in HPSG is dealing with linguistic information that calls for functional rather than surface-syntactic generalizations. HPSG'S grammatical classification hierarchy is exclusively surfacesyntactically motivated. The information that does not fit in this hierarchy is relegated to the lexical classification and lexical (or: redundancy) rules (the latter being the received mechanism in the tradition of generative grammar for dealing with syntactically nongeneralizable information). The lexical sign hierarchy accounts for vertical redundancy, covering information that is considered lexically rooted, but still predictable to a large extent. The lexical sign hierarchy typically covers a word's category (part of speech), a word's potential of being a head (major-lexical-sign), and, if it acts as a head, its subcategorization type. Again, this is information that can be expressed as shared with whole word classes, i.e., a generalization in the form of a type lattice is possible (see Pollard and Sag (1987: 206) for the lexical sign hierarchy). However, Pollard and Sag
224 Systemic Functional Grammar in Natural Language Generation (1987) note a second kind of redundancy, horizontal redundancy, which cannot be handled this way because here information is involved that is considered rather nonpredictable. Horizontal redundancy is thus relegated to lexical rules, which have the lexical sign hierarchy as domain of application. Horizontal redundancy occurs in types of linguistic patternings as diverse as inflection, relation changes of verbs, such as passivization, dative shift and detransitivization, and extraposition. The lexical rule treatment of inflection, relation changes and extraposition thus falls out of the mechanism of type inheritance used normally in HPSG. Moreover, with this treatment, it is not clear how the application of lexical rules can be constrained. There are two sources to the problem HPSG faces. First and foremost, the problem is caused by disregarding the paradigmatic axis, i.e. by under-axialization. In SFG, inflection, relation changes and extraposition are uniformly treated on a paradigmatic basis within the system network, thus giving the various options of the paradigm a natural context. Choice and application of realizational rules is thus constrained by the higher rank, which is again constrained by the higher stratum (semantics) and ultimately by the context of situation (see Chapter 1). This kind of control is not given in the HPSG treatment. It is not obvious how rule application can be constrained, when the only context that is given is the type to which the rule potentially applies and the type the rule results in. First attempts to move towards a more paradigmatic view within HPSG have been made recently in the area of morphology, by Henschel (1991) for Russian morphology and by Krieger and Nerbonne (1993) for German morphology. The type lattice augmented by morphological information looks very similar to morphologically relevant information included in a system network of word and morpheme ranks. Second, as Bateman (1991) points out, HPSG tries to cover too many kinds of information by the two-fold top classification of grammatical units in lexical and nonlexical signs. This turns out to be difficult when information that is more varied must be accommodated (such as relation changes of verbs, inflection, extraposition). This problem can be called one of underdifferentiation of linguistic types. There seems to be generally no awareness in HPSG that 'idiosyncratic' lexical information could be more uniformly handled by grammatical classification proper, if the motivation of types included a functionally diversified perspective as it does in SFG. Such a perspective is beginning to be introduced to
Summary and Conclusions 225 HPSG, e.g. in work on information structure (Engdahl and Vallduvi 1994). In summary then, with HPSG we encounter exactly the complementary problem to SFG'S lack of syntactic generalizations, namely a lack of functional generalization. 6.4 Conclusions Clearly, not any one linguistic model can cover all aspects of language. However, for a fully explicit generative model of grammar, ultimately both the surface-syntactic and the functional perspective are needed. The proposal for a 'Dependency Systemic Functional Grammar' can be seen as one possible step towards intertwining the functional and the surface-syntactic view, as exemplified by SFG and HPSG respectively. In terms of systemic functional theory, the proposal put forward here is one step towards giving more equal weight to syntagmatic relations in Systemic Functional Grammar (and metagrammar), which has, as a tradition, focused on paradigmatic patterning and functional relations. At the same time it is important not to compromise the major notions of SFL, such as language in context, functional diversification, stratification, axiality and the like. On a more general level, an attempt has been made to illustrate the process of metasemiosis in computational SFL, moving from theory to computational application - where our concrete contribution is the implementation of a fragment of the grammar of German in the NIGEL style - and back to theory, and from there through another cycle of metasemiosis to explicit computational representation together with a proposal for employing a stronger notion of dependency in SFG. There are many issues that could not be dealt with here, and which will have to be left for future work. One immediate follow-up question is about the relation of dependency as introduced to SFG and the systemic notion of interdependency. Clearly, there is some overlap between the two. For instance, the dependency notion introduced here could possibly also be applicable to some types of univariate, hypotactic structure in complex units. For true integration of dependency and interdependency in one model, however, a systematic, theoretical investigation is needed. Another question that is closely related is the one about the representation of the other modes of expression, the interpersonal one of prosody and the textual one of pulse and prominence. While prosody in the form
226 Systemic Functional Grammar in Natural Language Generation of agreement could also be represented by structure sharing in typed feature structures, there is one notorious problem in the syntagmatic reflex of the textual metafunction. This is the representation of word order. The ordering realization operators provided in NIGEL are unsatisfactory for dealing with a 'free word order' language such as German. This leads to a third issue (another 'meta' issue) which could be interesting to investigate. This is the question about the precise nature of the correlation - assuming there is one - between the representational constructs of a linguistic model and the type of language it has originally been designed for. We have first encountered the problem of lacking means to express dependency relations in the context of implementing a grammar fragment of German. In the grammar of English, dependency is not so prominent, and it may suffice to account for interdependency relations in complexes. Moreover, the fact that modern English is a rather analytic language may have motivated the rank scale and the polysystemicity postulate. For German, however, they cause difficulties. So, ultimately, it is not only the application of systemic theory as such that can provide feedback to the theory, but also the application of the systemic model to a typologically different language. 6.5 Envoi Notwithstanding the alterations proposed to Systemic Functional Grammar here, the contention expressed at the beginning of this book that a linguistic theory, if it is not to fall short of explaining how language works, must acknowledge language as serving communicative needs in social contexts, remains true. This becomes especially clear in applications where the task posed by the application calls for a representation of this relation, i.e. the relation of a communicative context and the linguistic forms appropriate in that context. One such application that cannot do without a representation of this relation is natural language generation. This book has focused on grammar for the purpose of NL generation and its computational representation. But grammar is only one of the foci of research in computational Systemic Functional Linguistics. Current computational linguistic research using SFL attempts (a) to take more of the theory into computational application, (b) to extend the theory and computational representation towards multilingual processing and (c) to push forward methods of explicit computational representation.
Summary and Conclusions 227 Realizing more of SFL theory in computational applications comprises work on register and genre (see e.g. Bateman and Paris 1991; Zeng 1995; Hartley and Paris 1996), extending the KOMETPENMAN generator to include the more abstract levels of linguistic organization, such as textual semantics, text and context (see e.g. Teich and Bateman 1994; Bateman and Teich 1995; Ramm and Teich 1995; Teich et al. 1996), and employing SFL theory to motivate representations for multi-modal presentation generation in information systems, such as co-generation of graphics and text (Matthiessen et al. 1995; Alexa et al. 1996) and text-to-speech technology (Grote et al. 1996; Teich et al. 1997a). Extending the theory and its representation towards multilinguality has originally been suggested by Bateman et al. (1991b) and is now pursued by a number of people in Europe and Australia involved in various joint or local projects (Teich 1995; Bateman et al. 1996; Bateman 1997). This includes both enhancements of the KPML system (Bateman 1997) and the further development of lexico-grammatical resources for English, German, Dutch, French (Delin et al. 1994), Greek, Japanese, Chinese (Zeng, 1993) and Russian, Czech and Bulgarian (Teich et al. 1997b). On a more general level, this offers new perspectives on the question of interlingual representations and promises to shed more light on functional typological questions. Finally, methods of explicit computational representation in terms of typed feature structure systems are being further worked on (see Henschel 1997). Unfortunately, typed feature formulations of large-scale grammars, such as NIGEL, are hardly usable in practical applications such as NL generation, because their runtime behaviour is far from optimal. For large typed feature structure grammars to be usable in practice, typed feature systems themselves have to be further developed. Experiments in this area with Systemic Functional Grammars can offer valuable feedback to typed feature systems when faced with large classification hierarchies, as they are given in grammars like NIGEL or KOMET.
Note 1. See also Yang et al. (1991), who have noted that SFG'S major problem is maintaining consistent the syntactic consequences of the choices made in the system network. This is left completely to the system network designer since there is no mechanism checking consistency. For a discussion of similar problems see Brew (1991).
Bibliography
Alexa, M., J. A., Bateman, R. Henschel and E. Teich 1996 'Knowledge-based production of synthetic multimodal documents'. ERCIMNews 26. 18-20. URL: http:IIw-ww-ercim. inria.fr Alshawi, H., ed. 1992 The Core Language Engine. The MIT Press (Cambridge., MA). Anderson, J. M. 1969 'A note on "rank" and "delicacy"'. Journal of Linguistics 5. 129-35. Anderson, J. M. 197la 'Dependency and grammatical functions'. Foundations of Language 7. 30-7. Anderson, J. M. 1971b The Grammar of Case: Towards a Localistic Theory. Cambridge University Press (Cambridge). Appelt, D. E. 1985 Planning Natural Language Utterances. Cambridge University Press (Cambridge). Bateman, J. A. 1988 'From systemic-functional grammar to systemic-functional text generation: escalating the exchange'. In E. H. Hovy, D. D. McDonald, S. R. Young and D. E. Appelt, eds. Proceedings of the 1988 American Association for Artificial Intelligence (AAAI-88) Workshop on Text Planning and Realization (St Paul, MN). Bateman, J. A. 1991 'Language as constraint and language as resource: a convergence of metaphors in systemic-functional grammar'. Technical report, GMD - Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany. (Written version of paper presented at the International Workshop on Constraint based Formalisms for Natural Language Generation, November 1990, Bad Teinach, Germany). Bateman, J. A. 1992a 'Grammar, systemic'. In S. Shapiro, ed. Encyclopedia of Artificial Intelligence, 2nd ed. John Wiley and Sons. Bateman, J. A. 1992b 'NIGEL: textual semantics documentation'. Technical report, GMD Forschungszentrum fur
Bibliography 229 Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany. Bateman, J. A. 1997 'KPML development environment: multilingual linguistic resource development and sentence generation (Release 1.1)'. Technical report, GMD - Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany URL: http:llwww. darmstadt.gmd. delpublish/kometlkpml. html Bateman, J. A., J. Calder, R. Henschel and E. Steiner 1994 'Specification of a discourse grammar development tool'. Technical report, GMD Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany, (ESPRIT Basic Research Action: Dandelion, EP6665; Deliverable R2.2.1). Bateman, J. A., M. Emele and S. Momma 1992 'The nondirectional representation of systemic-functional grammars and semantics as typed feature structures'. Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), Vol. Ill (Nantes, France). Bateman, J. A., R. T Kasper, J. D. Moore and R. A. Whitney 1990 'A general organization of knowledge for natural language processing: the PENMAN Upper Model'. Technical report, USC/Information Sciences Institute, Marina del Rey, California. Bateman, J. A., G. Kikui and A. Tabuchi 1987 'Designing a computational systemic grammar of Japanese for text generation: a progress report'. Technical report, Kyoto University, Dept of Electrical Engineering, Kyoto, Japan. Bateman, J. A., E. A. Maier, E. Teich and L. Wanner 199la 'Towards an architecture for situated text generation'. Proceedings of the International Conference on Current Issues in Computational Linguistics (Penang, Malaysia). Bateman, J. A. and C. M. I. M. Matthiessen 1993 'Uncovering the text base'. In K. Hao, H. Bluhme and R. Li, eds. Proceedings of the International Conference on Texts and Language Research. Xi'an Jiaotong University Press. Bateman, J. A., C. M. I. M. Matthiessen, K. Nanri and L. Zeng 1991b 'The re-use of linguistic resources across languages in multilingual generation components'. Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-91), Vol. 2. Morgan Kaufmann Publishers. Bateman, J. A., C. M. I. M. Matthiessen and L. Zeng, 1996 'A general architecture of multilingual resources for natural language
230 Systemic Functional Grammar in Natural Language Generation general architecture of multilingual resources for natural language processing'. Technical report, GMD - Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssystem (IPSI), Darmstadt, Germany and Macquarie University, Sydney, Australia. Bateman, J. A. and C. L. Paris 1991 'Constraining the deployment of lexicogrammatical resources during text generation: towards a computational instantiation of register theory'. In E. Ventola, ed. Functional and Systemic Linguistics: Approaches and Uses. Mouton de Gruyter (Amsterdam). Bateman, J. A. and E. Teich 1995 'Selective information presentation in an integrated publication system: an application of genre-driven text generation'. Information Processing and Management 31(5). 753-68 (special issue on summarizing text). Baumgarter, K. 1970 'Konstituenz und Dependenz. Zur Integration der beiden Prinzipien'. In H. Steger, ed. Vorschldge fur eine strukturale Gramamtik des Deutschen. Wissenschaftliche Buchgesellschaft (Darmstadt). Berry, M. 1977 An Introduction to Systemic Linguistics 2: Levels and Links. Batsford (London). Block, R. 1988 'Can a "parsing grammar" be used for natural language generation? The negative example of LFG'. In M. Zock and G. Sabah, eds. Advances in Natural Language Generation, Vol. II. Pinter (London). Bourbeau, L., D. Carcagno, E. Goldberg, R. Kittredge and A. Polguere 1990 'Bilingual generation of weather forecasts in an operations environment'. Proceedings of the 13th International Conference on Computational Linguistics (COLING-90) (Helsinki). Brachman, R. J. and J. Schmolze 1985 'An overview of the KL-ONE knowledge representation system'. Cognitive Science 9(2). 171-216. Bresnan, J., ed. 1982 The Mental Representation of Grammatical Relations. MIT Press (Cambridge, MA). Brew, C. 1991 'Systemic classification and its efficiency'. Computational Linguistics 17(4). 375-408. Buchberger, E. and H. Horacek 1988 'VIE-GEN - a generator for German texts'. In D. D. McDonald and L. Bole, eds. Natural Language Generation Systems. Springer (Berlin, New York). Buseman, S. 1988 'Surface transformations during the generation of written German sentences'. In D. D. McDonald and L. Bole, eds. Natural Language Generation Systems. Springer (Berlin, New York).
Bibliography 231 Butler, C. S. 1985 Systemic Linguistics: Theory and Applications. Batsford (London). Carpenter, B. and G. Penn 1994 The Attribute Logic Engine: User's Guide, Version 2.0. Computational Linguistics Program, CMU (Pittsburgh, PA). Chomsky, N. 1957 Syntactic Structures. Mouton (The Hague). Chomsky, N. 1965 Aspects of the Theory of Syntax. MIT Press (Cambridge, MA). Chomsky, N. 1970 'Remarks on nominalization'. In R. Jacobs and P. Rosenbaum, eds. Readings in English Transformational Grammar. Ginn and Co. (Waltham, MA). Chomsky, N. 1981 Lectures on Government and Binding. Foris (Dordrecht). Dahl, O. 1980 'Some arguments for higher nodes in syntax: a reply to Hudson's "Constituency and dependency"'. Journal of Linguistics 11. 110-18. Danes F. 1974 Papers on Functional Sentence Perspective. Academia (Prague). Davey, A. 1978 Discourse Production: A Computer Model of Some Aspects of a Speaker. Edinburgh University Press. Degand, L. 1993 'Dutch grammar documentation'. Technical report, GMD - Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany. Delin, J., A. Hartley, C. L. Paris, D. Scott and K. van der Linden 1994 'Expressing procedural relationships in multilingual instructions'. Proceedings of the Seventh International Workshop on Natural Language Generation (Kennebunkport, Maine). Dorre, J., M. Dorna and J. Junger 1996 The CUF User's Manual. Institut fur maschinelle Sprachverabeitung (IMS), Universitat Stuttgart (Germany). Dorre, J. and S. Momma 1987 'Generierung aus f-Strukturen als strukturgesteuerte Abbildung'. In K. Morik, ed. GWAI-87. Springer (Berlin, New York). Elhadad, M. 1990 'Types in functional unification grammars'. Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics (ACL-90). Association for Computational Linguistics, University of Pittsburgh (Pittsburgh, PA). Emele, M. and R. Zajac 1990 'Typed unification grammars'. Proceedings of the 13th International Conference on Computational Linguistics (COLING-90), Vol. ///(Helsinki).
232 Systemic Functional Grammar in Natural Language Generation Engdahl, E. and E. Vallduvi 1994 'Information packaging and grammar architecture: a constraint-based approach'. Technical report, CLLI (Amsterdam). (ESPRIT Basic Research Project 6852 DYANA-2; Report R. 1.3.B). Fawcett, R. P. 1980 Cognitive Linguistics and Social Interaction Exeter Linguistic Studies 3. Exeter University and Julius Groos Verlag (Exeter and Heidelberg). Fawcett, R. P. and B. L. Davies 1992 'Monologue as a turn in dialogue: towards an integration of exchange structure theory and rhetorical structure theory'. In R. Dale, E. Hovy, D. Rosner and O. Stock, eds. Aspects of Automated Natural Language Generation. Springer (Berlin, New York). Fawcett, R. P. and G. H. Tucker 1989 'Prototype generators 1 and 2'. Technical report, COMMUNAL Report Number 10, Computational Linguistics Unit, University of Wales College of Cardiff. Feiner, S. and K. McKeown 1990 'Coordinating text and graphics in explanation generation'. AAAI-90: Proceedings of the 8th National Conference on Artificial Intelligence, Vol. I. AAA Press/MIT Press (Menlo Park, CA). Fillmore, C. J. 1968 'The case for case'. In E. Bach and R. T. Harms, eds. Universals in Linguistic Theory. Holt, Rinehart and Winston (New York). Firth, J. 1957 Papers in Linguistics 1934-51. Oxford University Press (London). Fries, P. H. 1981 'On the status of theme in English: arguments from discourse'. Forum Linguisticum 6(1). 1-38. Gaifman, H. 1965 'Dependency systems and phrase structure systems'. Information and Control 8. 304-37. Gazdar, G., E. Klein, G. Pullum and I. A. Sag 1985 Generalized Phrase Structure Grammar. Blackwell and Harvard University Press (Cambridge, MA). Grote, B. 1993 'Generierung in der systemisch-funktionalen Grammatik: die Behandlung von Prapositionen'. Technical report, Universitat Trier, Linguistische Datenverabeitung, Germany, MA thesis. Grote, B. 1994 'Grammatical revision of the German prepositional phrase in KOMET'. Technical report, GMD - Forschungszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany. Grote, B., E. Hagen and E. Teich 1996 'Matchmaking: dialogue modelling and speech generation meet'. Proceedings of the Eighth
Bibliography 233 International Workshop on Natural Language Generation (Herstmonceux, England). Haegeman, J. 1991 Introduction to Government and Binding Theory. Blackwell (Oxford). Halliday, M. A. K. 1959 The Language of the Chinese: 'Secret History of the Mongols'. Blackwell (Oxford). Halliday, M. A. K. 1961 'Categories of the theory of grammar'. Word 17. 241-92. (Reprinted in abbreviated form in G. R. Kress, ed. Halliday: System and Function in Language. Oxford University Press (London). Halliday, M. A. K. 1963 'Class in relation to the axes of chain and choice in language'. Linguistics 2. 5-15. (Reprinted in abbreviated form in G. R. Kress, ed. Halliday: System and Function in Language. Oxford University Press (London). Halliday, M. A. K. 1965/81 'Types of structure'. In M. A. K. Halliday and J. R. Martin, eds. Readings in Systemic Linguistics. Batsford (London) (reprinted version of original paper from 1965). Halliday, M. A. K. 1966 'The concept of rank: a reply'. Journal of Linguistics 11. 110-18. Halliday, M. A. K. 1967 'Notes on transitivity and theme in English -parts 1 and 2'. Journal of Linguistics 3. 37-81 and 199-244. Halliday, M. A. K. 1968 'Notes on transitivity and theme in English - part 3'. Journal of Linguistics 4. 179-215. Halliday, M. A. K. 1970 'Language structure and language function'. In J. Lyons, ed. New Horizons in Linguistics. Penguin (Harmondsworth). Halliday, M. A. K. 1973 Explorations in the Functions of Language. Edward Arnold (London). Halliday, M. A. K. 1974 'The place of "functional sentence perspective" in the system of linguistic description'. In F. Danes, ed. Papers on Functional Sentence Perspective. Academia (Prague) (Reprinted in abbreviated form in G. R. Kress, ed. Halliday: System and Function in Language. Oxford University Press (London). Halliday, M. A. K. 1976 'The English verbal group'. In G. R. Kress, ed. Halliday: System and Function in Language. Oxford University Press (London), pp. 136-58. Halliday, M. A. K. 1978 Language as Social Semiotic. Edward Arnold (London). Halliday, M. A. K. 1979 'Modes of meaning and modes of saying: types of grammatical structure and their determination by
234 Systemic Functional Grammar in Natural Language Generation different semantic functions'. In I. D. Allerton, E. Carney and D. Holdcroft (eds), Function and Context in Linguistic Analysis: Essays Offered to William Haas. Cambridge University Press (Cambridge). Halliday, M. A. K. 1985a An Introduction to Functional Grammar. Edward Arnold (London). Halliday, M. A. K. 1985b 'Systemic background'. In J. D. Benson and W. S. Greaves, eds. Systemic Perspectives on Discourse, Vol. 1. Ablex, (Norwood, NJ). Halliday, M. A. K., A. Mclntosh and P. Stevens 1964 The Linguistic Sciences and Language Teaching. Longman (London). Halliday, M. A. K. and C. M. I. M. Mattiessen 1997 Systemic Functional Grammar: A First Step into the Theory. International Language Sciences Publishers (Tokyo). Hartley, A. and C. L. Paris 1996 'Two sources of control over the generation of software instructions'. Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL-96) (Santa Cruz, CA). Hasan, R. 1987 'The grammarian's dream: lexis as most delicate grammar'. In M. A. K. Halliday and R. P. Fawcett, eds. New Developments in Systemic Linguistics, Vol. 1. Pinter (London). Hawkins, J. A. 1983 Word Order Universals. Academic Press (New York). Hawkins, J. A. 1984 'Modifer-head or function-argument relations in phrase structure? The evidence of some word-order universals'. Lingua 63. 107-38. Hawkins, J. A. 1986 A Comparative Typology of English and German. Croom Helm (London and Sydney). Hays, D. G. 1964 'Dependency theory: a formalism and some observations'. Language 40(4). 511-25. Heidolph, K., W. Flamig and W. Motsch 1980 Grundzuge einer deutschen Grammatik. Akademie Verlag (Berlin). Helbig, G. and J. Buscha 1988 Deutsche Grammatik: Ein Handbuch fur den Ausldnderunterricht. VEB Verlag Enzyklopadie (Leipzig). Henrici, A. 1965/81 'Notes on the systemic generation of a paradigm of the English clause'. In M. A. K. Halliday and J. R. Martin, eds. Readings in Systemic Linguistics. Batsford (London) (reprinted version of original paper from 1965). Henschel, R. 1991 'The morphological principle'. Technical report, Zentralinstitut fur Sprachwissenschaft (Berlin, Germany). Henschel, R. 1994 'Declarative representation and processing of systemic grammars'. In C. Martin-Vide, ed. Current Issues in
Bibliography 235 Mathematical Linguistics. Elsevier (Amsterdam). Henschel, R. 1995 'Traversing the labryinth of feature logics for a declarative implementation of large scale systemic grammars'. Proceedings of the Constraint Logic Programming andNLP (CLNLP95) (South Queensferry, UK). Henschel, R. 1997 'Compiling systemic grammar into feature logic systems'. In S. Manandhar, ed. Selected Papers from CLNLP 95. Springer (Berlin, New York). Hoberg, U. 1981 Die Wortstellung in der geschriebenen deutschen Gegenwanssprache. Hueber (Munich). Horrocks, G. 1987 Generative Grammar. Longman (London and New York). Hovy, E. H., D. D. McDonald, S. R. Young and D. E. Appelt (eds) 1988 Proceedings of the 1988 American Association for Artificial Intelligence (AAAI-88) Workshop on Text Planning and Realization. American Association for Artificial Intelligence (St Paul, MN). Huddleston, R. 1965 'Rank and depth'. Language 41. 574-86. Hudson, R. A. 1976 Arguments for a Non-transformational Grammar. Chicago University Press (Chicago). Hudson, R. A. 1980a 'Constituency and dependency'. Linguistics 18. 179-98. Hudson, R. A. 1980b 'Daughter-dependency grammar'. In H.-H. Lieb, ed. Oberfldchensyntax und Semantik. Max Niemeyer (Tubingen). Hudson, R. A. 1980c 'A second attack on constituency: a reply to Dahl'. Linguistics 18. 489-504. Hudson, R. A. 1984 Word Grammar. Blackwell (Oxford). Hudson, R. A. 1987a 'Daughter-dependency theory and systemic grammar'. In M. A. K. Halliday & R. P. Fawcett, eds. New Developments in Systemic Linguistics, Vol. 1. Pinter (London). Hudson, R. A. 1987b 'Zwicky on heads'. Journal of Linguistics 23. 109-32. Hudson, R. A. 1989 'Coordination and grammatical relations'. Journal of Linguistics 24(2). 303-42. Hudson, R. A. 1989 'Gapping and grammatical relations'. Journal of Linguistics 2 5 ( I ) . 57-94. lordanskaja, L. N., R. Kittredge and A. Polguere 1988 'Implementing a meaning-text model for language generation'. Proceedings of the 12th International Conference on Computational Linguistics (COLING-88) (Budapest), lordanskaja, L. N., R. Kittredge and A. Polguere 1991 'Lexical selection and paraphrase in a meaning-text generation model'. In
236 Systemic Functional Grammar in Natural Language Generation C. L. Paris, W. R. Swartout and W. C. Mann, eds. Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers (Boston/Dordrecht/London). Jackendoff, R. 1977 X Syntax: A Study of Phrase Structure. MIT Press (Cambridge, MA). Jackendoff, R. 1983 Semantics and Cognition. MIT Press (Cambridge, MA). Jackendoff, R. 1987 'The status of thematic relations in linguistic theory'. Linguistic Inquiry 18(3). 369-411. Joshi, A. K. 1987 'The relevance of tree adjoining grammar to generation'. In G. Kempen, ed. Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics. Kluwer Academic Publishers (Boston/Dordrecht/London). Kasper, R. T. 1987 Systemic Grammar and Functional Unification Grammar, Technical Report ISI/RS-87-179. USC/Information Licences Institute (Marina del Rey, CA). Kasper, R. T. 1988 'Systemic grammar and functional unification grammar'. In J. D. Benson and W. S. Greaves, eds. Systemic Functional Approaches to Discourse. Ablex (Norwood, NJ). Kasper, R. T. 1989a 'A flexible interface for linking applications to PENMAN'S sentence generator'. Proceedings of the DARPA Workshop on Speech and Natural Language. Available from USC/Information Sciences Institute (Marina del Rey, CA). Kasper, R. T. 1989b 'Unification and classification: an experiment in information-based parsing'. Proceedings of the International Workshop on Parsing Technologies. Carnegie-Mellon University (Pittsburgh, PA). Kay, M. 1979 'Functional grammar'. Proceedings of the 5th Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society (Berkeley, CA). Kay, M. 1985 'Parsing in functional unification grammar'. In D. R. Dowty, L. Kartunnen and A. Zwicky, eds. Natural Language Parsing. Cambridge University Press (London). Kittredge, R. L., L. N. lordanskaja and A. Polguere 1988 'Multilingual text generation and the meaning-text theory'. Conference on Theoretical and Methodoligical Issues in Machine Translation of Natural Languages. Carnegie Mellon University (Pittsburgh, PA). Kittredge, R., A. Polguere and E. Goldberg 1986 'Synthesizing weather reports from formatted data'. Proceedings of the llth International Conference on Computational Linguistics (COLING86) (Bonn, Germany).
Bibliography 237 Kittredge, R. and A. Polguere 1991 'Dependency grammars for bilingual text generation: inside FoG's stratification models'. Proceedings of International Conference on Computational Linguistics (Penang, Malaysia). Kornai, A. and G. Pullum 1990 'The X-bar theory of phrase structure'. Language 66(1). 24-50. Krieger, H.-U. and J. Nerbonne 1993 'Feature-based inheritance networks for computational lexicons'. In T. Briscoe, V. de Paiva and A. Copestake, eds. Inheritance, Defaults, and the Lexicon. Cambridge University Press (Cambridge). Krieger, H.-U. and U. Schafer 1993 'TDL - a type description language for unification-based grammars'. Technical report, DFKI (Saarbriicken, Germany). Kunze, J. 1975 Abhdngigkeitsgrammatik, Vol. XII of Studia Grammatica. Akademie Verlag (Berlin). Kunze, J. 1991 Kasurelationen und semantische Emphase, Vol. XXXII of Studia Grammatica. Akademie Verlag (Berlin). Kwee, T.-L. 1987 'A computer model of functional grammar'. In G. Kempen, ed. Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics. Kluwer Academic Publishers (Boston/Dordrecht/London). Levin, L. 1987 'Towards a linking theory of relation changing rules in LFG'. Technical report, Report No. CSLI-87-115, Center for the Study of Language and Information (Stanford, CA). MacGregor, R. and R. Bates 1987 'The LOOM knowledge representation language'. Technical report ISI/RS-87-188, USC/Information Sciences Institute, Marina del Rey, California. (Paper held at Knowledge-based Systems Workshop in St Louis, MO, April 1987). McDonald, D. D. 1980 'Natural language production as a process of decision making under constraint'. PhD thesis, MIT (Cambridge, MA). Macheiner, J. 1991 Das grammatische Variete. Eichborn Verlag (Frankfurt am Main). McKeown, K. 1985 Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press (Cambridge). McKeown, K., M. Elhadad, Y. Fukumoto, J. Lim, C. Lombardi, J. Robin and F. Smadja 1990 'Natural language generation in COMET'. In R. Dale, C. Mellish and M. Zock, eds. Current Research in Natural Language Generation. Academic Press (London).
238 Systemic Functional Grammar in Natural Language Generation Malinowski, B. 1935 Coral Gardens and Their Magic. Allen & Unwin (London). Mann, W. C. and C. M. I. M. Matthiessen 1983 'NIGEL: a systemic grammar for text generation'. Technical report ISI/RR-85-105, USC/Information Sciences Institute, Marina del Rey, California. Mann, W. C. and C. M. I. M. Matthiessen 1985 'Demonstration of the NIGEL text generation computer program'. In J. D. Benson and W. S. Greaves, eds. Systemic Perspectives on Discourse, Vol. 1. Ablex (Norwood, NJ). Mann, W. C. and S. A. Thompson 1987 'Rhetorical structure theory: description and construction of text structures'. In G. Kempen, ed. Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics. Kluwer Academic Publishers (Boston/Dordrecht/London). Martin, J. R. 1985 'Process and text: two aspects of human semiosis'. In J. D. Benson and W. S. Greaves, eds. Systemic Perspectives on Discourse, Vol. 1. Ablex (Norwwod, NJ). Martin, J. R. 1987 'The meaning of features in systemic linguistics'. In M. A. K. Halliday and R. P. Fawcett, eds. New Developments in Systemic Linguistics, Vol. 1. Pinter (London). Martin, J. R. 1992 English Text: Systems and Structure. Benjamins (Amsterdam). Matthews, P. H. 1981 Syntax. Cambridge University Press (Cambridge). Matthiessen, C. M. I. M. 1984 'Choosing tense in English'. Technical report ISI/RR-84-143, USC/Information Sciences Institute (Marina del Rey, C A). Matthiessen, C. M. I. M. 1988a 'Representational issues in systemic-functional grammar'. In J. D. Benson and W. S. Greaves, eds. Systemic Functional Approaches to Discourse. Ablex (Norwood, NJ). Matthiessen, C. M. I. M. 1988b 'Semantics for a systemic grammar: the chooser and inquiry framework'. In J. D. Benson, M. Cummings and W. S. Greaves, eds. Linguistics in a Systemic Perspective. John Benjamins (Amsterdam). Matthiessen, C. M. I. M. 1990 'Metafunctional complementarity and resonance in syntagmatic organization'. Technical report, Linguistics Department, University of Sydney (Sydney, Australia). Matthiessen, C. M. I. M. 1992 'Lexicogrammatical cartography: English systems'. Technical report, Linguistics Department, University of Sydney (Sydney, Australia).
Bibliography 239 Matthiessen, C. M. I. M. 1995 Lexico-grammatical Cartography: English Systems. International Language Sciences Publishers (Tokyo). Matthiessen, C. M. I. M. and J. A. Bateman 1991 Text Generation and Systemic-functional Linguistics: Experiences from English and Japanese. Pinter (London). Matthiessen, C. M. I. M., I. Kobayashi, L. Zeng and M. Cross 1995 'Generating multimodal presentations: resources and processes'. Proceedings of the Australian Conference on Artificial Intelligence (Canberra, Australia). Mellish, C. S. 1988 'Implementing systemic classification by unification'. Journal of Computational Linguistics 14(1). 40-51. Mel'cuk, I. A. 1988 Dependency Syntax: Theory and Practice. State University of New York Press (Albany). Meteer, M. W. 1991 'Bridging the generation gap between text planning and linguistic realization'. Computational Intelligence 7(4). 296-304. Meteer, M. W. 1992 Expressibility and the Problem of Efficient Text Planning. Pinter (London). Meteer, M. W., D. D. McDonald, S. Anderson, D. Forster, L. Gay, A. Huettner and P. Sibun 1987 'MuMBLE-86: design and implementation'. Technical report 87-87, COINS, University of Massachusetts. Muysken, P. and H. van Riemsdijk 1986 'Projecting features and featuring projections'. In P. Muysken and H. van Riemsdijk, eds. Features and Projections. Foris Publications (Dordrecht). Nichols, J. 1986 'Head-marking and dependent-marking grammar'. Language 62(1). 56-119. O'Donnell, M. 1990 'A dynamic model of exchange'. Word 41(3). 293-328. O'Donnell, M. and R. T. Kasper 1991 'Representing NIGEL in LOOM'. Working paper USC/Information Sciences Institute and Department of Linguistics, University of Sydney. Patten, T. 1988 Systemic Text Generation as Problem Solving. Cambridge University Press (Cambridge). Penman Project 1989 'PENMAN documentation: the Primer, the User Guide, the Reference Manual, and the NIGEL manual'. Technical report, USC/Information Sciences Institute, Marina del Rey, CA. Pollard, C. and I. A. Sag 1987 Information-based Syntax and Semantics, Vol. 1. University of Chicago Press (Chicago). Pollard, C. and I. A. Sag 1994 Head-Driven Phrase Structure
240 Systemic Functional Grammar in Natural Language Generation Grammar. University of Chicago Press and CSLI Publications (Chicago). Quirk, R., S. Greenbaum, G. Leech and J. Svartik 1985 A Comprehensive Grammar of the English Language. Longman (London). Ramm, W. and E. Teich 1995 'Grammatical choice in text and context: constraints between subject matter, text type and theme selection'. Tagungsband der DGfS-Computerlinguistik, Diisseldolf, October 1995. Robins, R. 1967 A Short History of Linguistics. Longman (London and New York), 3rd edn. 1990. Robinson, J. J. 1970 'Dependency structures and transformational rules'. Language 46(2). 259-85. Rosner, D. 1988 'The generation system of the SEMSYN project: towards a task-independent generator for German'. In M. Zock and G. Sabah, eds. Advances in Natural Language Generation: An Interdisciplinary Perspective, Vol. 2. Pinter (London). Rosner, D. and M. Stede 1992 'TECHDOC: a system for the automatic production of multilingual technical documents'. Technical report FAW-TR-92021, Forschungsinstitut fur anwendungsorientierte Wissenverarbeitung (FAW) an der Universitat Ulm, Ulm, Germany. Sells, P. 1985 Lectures on Contemporary Syntactic Theories. Center for the Study of Language and Information (CSLI), Stanford University, (Stanford, CA). Shieber, S. M. 1986 An Introduction to Unification-based Approaches to Grammar. Center for the Study of Language and Information (CSLI), Stanford University (Stanford, CA). Shieber, S. M. and Y. Schabes 1991 'Generation and synchronous tree-adjoining grammars'. Computational Intelligence 7(4). 220-8. Steiner, E. H., U. Eckert, B. Week and J. Winter 1988 'The development of the EUROTRA-D system of semantic relations'. In E. H. Steiner, P. Schmidt and C. Zelinsky-Wibbelt, eds. From Syntax to Semantics: Insight from Machine Translation. Pinter (London). Steiner, E. H. and U. Reuther 1989 Semantic Relations, EUROTRA Reference Manual, Version 6. Commission of the European Community (Luxembourg). Steiner, E. H. and W. Ramm 1995 'On theme as a grammatical notion for German'. Functions of Language 1(2). 57-93. Strzalkowski, T, ed. 1994 Reversible Grammar in Natural Language Processing. Kluwer Academic Publishers (Boston/Dordrecht/London).
Bibliography 241 Teich, E. 1991 'A systemic grammar of German for text generation'. In D. Noel, ed. Occasional Papers in Systemic Linguistics. University of Nottingham (Nottingham). Teich, E. 1992 'KOMET: grammar documentation'. Technical report, GMD - Forschugnszentrum fur Informationstechnik, Institut fur Integrierte Publikations- und Informationssysteme (IPSI), Darmstadt, Germany. Teich, E. 1995 'Towards a methodology for the construction of multilingual resources for multilingual text generation'. Proceedings of the IJCAI Workshop on Multilingual Generation. International Joint Conference on Artificial Intelligence (IJCAI95) (Montreal, Canada). Teich, E. and J. A. Bateman 1994 'Towards an application of text generation in an integrated publication system'. Proceedings of the Seventh International Workshop on Natural Language Generation (Kennebunkport, ME). Teich, E., L. Degand and J. A. Bateman 1996 'Multilingual textuality: experiences from multilingual text generation'. In G. Adorni and M. Zock, eds. Trends in Natural Language Generation: An Artificial Intelligence Perspective, Lecture Notes in Artificial Intelligence no. 1036. Springer (Berlin, New York). Teich, E., E. Hagen, B. Grote and J. A. Bateman 1997a 'From communicative context to speech: integrating dialogue processing, speech production and natural language generation'. Speech Communication 21(1-2) 73-99. Teich, E., R. Henschel, J. A. Bateman and E. H. Steiner 1997b 'AGILE: automatic drafting of technical documents in Czech, Russian and Bulgarian'. Technical report, Universitat des Saarlandes, Saarbriicken and University of Stirling. Project note, presented at Saarbriicken Workshop on Natural Language Generation, April 1997, DFKI (Saarbriicken, Germany). Tesniere, L. 1959 Elements de syntaxe structurale. Klincksieck (Paris). Thompson, H. S. 1977 'Strategy and tactics: a model for language production'. Papers from the Thirteenth Regional Meeting of the Chicago Linguistics Society. Chicago Linguistic Society (Chicago). Vater, H. 1975 'Toward a generative dependency grammar'. Lingua 36. 121-45. Winograd, T 1983 Language as a Cognitive Process, Volume 1: Syntax. Addison Wesley (New York). Yang, G., K. F. McCoy and K. Vajay-Shanker 1991 'From functional specification to syntactic structures: systemic grammar and tree adjoining grammar'. Computational Intelligence 7(4). 207-19.
242 Systemic Functional Grammar in Natural Language Generation Zajac, R. 1991 Notes on the Typed Feature System, Version 4. Project Polygloss, IMS-CIVIfl-AIS (Stuttgart). Zajac, R. 1992 'Inheritance and constraint-based grammar formalisms'. Computational Linguistics 18(2). 159-82 (special issue on inheritance: 1). Zelinsky-Wibbelt, C. 1987 'Semantische Merkmale fur die automatische Disambiguierung: ihre Generierung und ihre Verwendung'. Technical report EUROTRA-D Working Papers no. 4, Institut fur Angewandte Informationsforschung, Eurotra-D (Saarbriicken, Germany). Zelinsky-Wibbelt, C. 1991 'Reference as a universal cognitive process: a contrastive study of article use'. Technical report EUROTRA-D Working Papers no. 21, Institut fur Angewandte Informationsforschung, Eurotra-D, (Saarbriicken, Germany). Zeng, L. 1983 'Coordinating ideational and textual resources in the generation of multisentential texts in Chinese'. Proceedings of the Meeting of the Pacific Association of Computational Linguistics (Vancouver, Canada). Zeng, L. 1995 'Reasoning with systemic resources in text planning'. Proceedings of the Second Conference of the Pacific Association for Computational Linguistics (PacLing-II) (Brisbane, Australia). Zwicky, A. M. 1985 'Heads'. Journal of Linguistics 21. 1-30.
Index
adjectival group 147-8 adverbial group 148 agency 22, 100, 199 agreement 130, 141-2, 176 Alexa, M. 227 Alshawi, H. 59 Andersch, A. 156n Anderson,]. M. 34, 181, 186 Appelt, D. E. 55 Arguments for a nontransformational grammar (ANTG) 39-43, 188-9 artificial intelligence (AI) 54-5 auxiliary tree 77, 78 axiality 47, 65, 66 axis/axes 2, 27, 65 paradigmatic 17, 18, 21-7, 67, 86, 160,224 syntagmatic 17, 18, 25, 29-47, 160 Bateman, J. A. 1, 3, 4, 21, 49, 54, 55,56,61,67,69,87,88, 89n, 90, 92, 157n, 159, 161, 174, 204, 222, 223, 224, 227 Bates, R. 177 Baumgarter, K. 182 Berry, M. 30, 44 Block, R. 57 Bourbeau, L. 69 Brachman, R. J. 177 Bresnan, 6n, 5In Brew, C. 227n Buchberger, E. 56
Buscha,J. 91,99, 124, 144 Buseman, S. 56 Butler, C. S. 10, 5In, 188 Carpenter, B. 3, 87, 160, 218n case 27-8, 128, 141, 197 case theory 187 Chinese 190 choices, 9, 10, 12, 13,25,66 controlled choice 66-7 Chomsky, N. 19, 40, 47, 99, 161, 162, 163, 164, 165, 182, 186 chooser 61-2, 64, 74 circumstantial 94, 117-20 classification 36, 40, 130, 132 cross-classification 10, 167 explicit 167 nominal group 135-6 simultaneous systems and 10 sub-classification 10 clause 15, 16, 19, 36, 40, 44, 97, 191 circumstantial 117-20 classification of problems 128-30 clause-complexity 24, 38, 94, 97-9 gates 27-8, 111-17 mood 123-5 object-insertion 110-11 simplex 97 theme 16, 120-3 transitivity 100-10 COMET 59, 60, 74-7
244 Systemic Functional Grammar in Natural Language Generation complex units 31, 153, 190 complexity clause-complexity 24, 38, 94, 97-9, 190 nominal-group-complexity 130, 131-2 computational application 1, 3-5, 49, 52 etseq computational linguistics 1, 54 computational representation 4, 52, 158 concord case 141 determinant of 183, 215 conflation 16, 43, 44, 46, 175 congruence/congruent 36-7, 39 constituency 19, 30, 31, 40, 41 dependency and 182-7 structures 16 tree 20 context 2, 8, 9, 13, 16,69 situational 8, 13, 16 Dahl, O. 218n Danes, F. 120 daughter 167, 189 complement daughters 168-9, 170 see also head daughter daughter dependency 39-43, 153, 188, 192 see also dependent-daughters Davey, A. 56 Davies, B. L. 69 Degand, L. 1 deixis 28 delicacy 12, 21, 23, 172-3 lexis as most delicate grammar 19 primary 19, 172, 192 Delin,J. 1,227 dependency 2-3, 35-6, 153, 154-5, 158-60, 180-90, 221 ANTG model 39-43, 188-9 connexion 180, 181 constituency and 182-7
jonction 181 in simplex units 154 structure 20, 191, 215, 217 translation 181 valency and 181 dependency grammar 190-216 typed feature structures 191-6 dependency relations 35, 90 dependent-daughters 196, 199-203, 204, 214 see also daughter dependency determinant of concord 183, 215 determination 130, 138 diathesis 94, 102, 111-17 distributional equivalence 196 distributional equivalent 172, 183, 191, 197,216 D6rre,J. 3, 55, 87, 160 element of structure 31, 36, 38 elementary tree 77, 79 Elhadad, M. 74 Emele, M. 3, 87, 160, 178, 191, 216, 217n Engdahl, E. 25 English 190, 191 epithet 130, 132, 135 expansion 43 experiential metafunction 15, 23, 25 constituency configurations 31 mode of expression 20, 31, 33, 35 multivariate structure 20 see also ideational metafunction Fawcett, R. P. 29, 30, 36-9, 40, 47, 48, 49, 5In, 56, 69, 100 feature assignment rules 46 feature sharing 154, 160, 176, 223 feature structures 165-72, 174-7 see also typed feature structures features 159-61 cooccurrence 176 for grammatical representation 161-80
Index 245 'meaning' of 172-4 motivation 159, 161, 172-4 'syntax' of 174-9 see also head features Feiner, S. 74 Fillmore, C. J. 181 Firth, J. 1,8 Fries, P. H. 120 function 9 see also metafunctions function assignment rule 42 function structures 16, 30-1 functional diversification 15, 47, 66, 67, 221 see also metafunctions functional generalization 172, 173 functional linguistics 56 Functional Unification Grammar (FUG) 56, 59, 74-7, 87, 174-7 Gaifman, H. 183 gates 12,27-8, 111-17 Gazdar, G. 6n, 47, 159, 161, 164, 166
gender agreement 141, 142-3 generalized phrase structure grammar (GPSG) 56, 161, 166, 169 generation 53-8 multilingual 56; strategic 53-4, 55 tactical 53, 54, 55, 58-85 see also natural language generation generation gap 53, 55-6, 86 generative grammar 40, 159, 161, 162-4,223 German 1, 90-157, 182, 191, 199-204, 226, 227 adjectival group 147-8 adverbial group 148 clause 97-127 clause rank 128-30 mood 123-6 nominal group 130-40 prepositional phrase 148-52
rank and ranking 92-7 tense 126-7 transitivity 100-10, 119 see also clause; nominal group GOSSiP 59, 60, 69-74, 82 government 128, 153, 158, 186, 196, 197, 199 constituent determining 183 governor 187, 191, 216 Grote, B. 1, 69, 90, 149, 157n, 227 Haegeman, J. 187 Halliday, M. A. K. 1, 2, 7-29 passim, 37, 38, 39, 40, 46, 47, 48, 50n, 5In, 74, 88, 90, 95, 97, 102, 121, 124, 127, 130, 132, 145, 153, 158, 190,220 limits of linguistic representation 33-6 modes of expression 30-3 Hartley, A. 227 Hasan, R. 19 HawkinsJ. A. 123, 151, 218n Hays, D. G. 183, 186 head 183-9, 191, 221, 222 head complement structure 168 head daughter 164, 166, 168-9, 171, 192-5, 197,204,214 Head Feature Convention (HFC) 166, 168
Head Feature Principle (HFP) 160, 166, 169, 172, 197 head features 197-8, 214, 215, 222 head-driven phrase structure grammar (HPSG) 40, 159 et seq, 192, 197, 202, 223-5 features of 179-80 means of representation 159-60 structural generalization 173 see also dependency Heidolph, K. 91 Helbig, G. 91,99, 124, 144 Henrici, A. 30, 44, 45, 46, 50n Henschel, R. 4, 87, 161, 174, 178, 227 Hoberg, U. 123
246 Systemic Functional Grammar in Natural Language Generation Horacek, H. 56 Horrocks, G. 163 Hovy, E. H. 55 Huddleston, R. 30, 46 Hudson, R. A. 29, 30, 35-6, 46, 47, 49, 51n, 153, 159, 161, 182, 183, 184, 185, 188-9, 190, 215, 217n, 218n ANTG model 39-43, 188-9 daughter dependency grammar 39-43 hypotaxis/hypotactic 31, 35, 38 ideational metafunction 15, 19, 21, 23 modes of expression 31-2 see also experiential metafunction; logical metafunction; metafunctions incongruence/incongruent 37 inheritance 168, 178 hierarchy 10, 167 multiple 10 inquiry 61-2 insertion 16, 17, 43, 46 interdependency 23-4, 31, 189-90, 217 interpersonal metafunction 15, 19, 24-5
mode of expression 20, 32-3, 35 prosodic structure 32-3 wave-like structures 20 see also metafunctions lordanskaja, L. N. 52, 55, 56, 69, 70 Jackendoff, R. 19, 20, 83, 95, 111, 161, 163, 186 Joshi, A. K. 53, 77, 78
transitivity 101, 102 Komet-Penman 1, 69, 90, 227 Komet-Penman Multilingual (KPML) 56 Kornai, A. 218n Krieger, H.-U. 3, 160 Kunze,J. 35, 181, 182 Kwee, T.-L. 56 Levin, L. 111 Lexical Functional Grammar (LFG) 56, 202 lexico-grammar 19, 29, 55 lexis: as most delicate grammar 19 linear ordering 16, 46-7, 175 linguistic representation 4, 7, 9-18, 49 axiality 65, 66 choice 66 functional diversification 66, 67, 221 motivation of categories 65-6 realization 16-17, 29-30 stratification 13-15, 66, 67 the system 10-12 locality 78-9 logical form 59 logical metafunction 1, 2-3, 15, 23,88 dependency structures 20 interdependency structure 31 linearly recursive 23 mode of expression 20, 31, 35 see also ideational metafunction logical organization 2-3
McDonald, D. D. 56 MacGregor, R. 177 Macheiner, J. 155n Kasper, R. T. 4, 59, 62, 87, 174-6, McKeown, K. 52, 53, 54, 56, 69, 217n 74, 86, 89n Kay, M. 6n, 53, 57, 74, 165 Malinowski, B. 1, 8 Kittredge, R. L. 1, 55, 56, 69 Mann, W. C. 1, 52, 54, 55, 56, 60, Komet 69, 90, 92, 119-20, 130, 99 135, 140, 143, 149, 190, 202, Martin, J. R. 13, 27, 28, 29, 34, 227 51n, 69, 172, 173
Index 247 Matthews, P. H. 183, 184 Matthiessen, C. M. I. M. 1, 2, 10, 15, 18,20,21,24,29,31,34, 44, 47, 50n, 52, 55, 56, 60, 61,67, 89n, 126, 131, 148, 157n, 160, 176,222,227 meaning 2, 8-9 construal of 9, 14 meaning potential 2 Meaning-Text-Theory (MTT) 59, 69-74, 86, 129 mediated networks 29, 172, 173 see also system networks Mel'cuk, I. A. 35, 53, 70, 74, 89n, 181, 182 Mellish, C. S. 87 metafunctions 8, 9, 10, 15-16, 21, 31,35-6,66 see also experiential metafunction; ideational metamnction; interpersonal metafunction; logical metafunction; modes of expression; textual metafunction metagrammar 4 metalanguage 3-4 metalinguistic 5 metaregisters 3-5 see also register metasemiosis 3, 5, 52, 158, 220, 223 metastratum 4, 7, 8 see also linguistic representation; theoretical perspective Meteer, M. W. 52, 53, 55, 73, 77, 81, 83, 85, 86, 89n microfunctions 30-1, 34, 42, 44, 189 minorprocess 194 modality 24 modes of expression 20, 30-3, 35, 49, 160,222 experiential metafunction 20, 31,33,35 ideational metafunction 31-2
interdependency 23-4, 31, 189-90,217 interpersonal metafunction 20, 32-3, 35 logical metafunction 20, 31, 35 prominence 33 prosody 32-3, 225-6 pulse 33, 35 textual metafunction 20, 33, 35 Momma, S. 55 mood 12, 14, 15-16, 17, 24, 62, 94, 123-6, 175 morphology 57, 224 morphosyntactic locus 153, 183, 187,215,221 mother 41, 164, 166-7, 189, 196 multifunctionality 4, 9, 10, 30 MUMBLE-86 60, 77-85 Muysken, P. 217n natural language generation (NLG) 52, 53-6 grammars for 56-60 see also generation networks see mediated networks system networks Nichols,J. 218n Nigel 1, 52, 60-1, 68, 69, 87, 190, 202,215,216,226 German and 90-157 passim object-insertion 110-11 transitivity 101, 102, 119-20 word class hierarchy 94, 96 nominal group (NG) 17, 130-40 classification 135-6 determination 138 epithet 135 gates 27-8 nominal-group-complexity 130, 131-2 nominal-person 138-40 nountype 133-5 numeration 136 pronoun 140 qualification 136-8
248 Systemic Functional Grammar in Natural Language Generation rank, classification of problems 140-6 nountype 130, 132, 133-5, 194 number agreement 141, 142 numeration 130, 132, 136
pulse 33, 35 pulse-like structures 20
obligatory element 183 O'Donnell, M. 4, 69, 87 ordering 43, 46 linear 16, 46-7, 175
Ramm,W. 121, 123,227 rank 19-20, 27, 91, 92-7, 176, 192, 224 rank hypothesis 19 rank scale 19, 34, 36, 37, 49, 95, 172, 222 realization 4, 10, 16-17, 25, 29, 43, 175 interaxial 8, 16-17, 19, 43-7 interrank 43-7, 175 interstratal 10, 14 plexal 45-6 realization rules 29-30, 44, 46 realization statements 8, 17, 29-30, 39, 43, 44, 100 recursion 12, 38 cyclical 32 linear 23, 32 recursive structures 20 recursive systems 12 region 91, 92-5 register 4, 16 see also metaregisters resource 2, 3, 4, 86 grammatical 53 linguistic 57 Reuther, U.91, 100 reversible grammar 57 Riemsdijk, H. van 217n Robins, R. 7 Robinson, J. J. 186 Rosner, D. 56 ruler 183
paradigmatic relations 2, 8, 10, 12, 18 parataxis/paratactic 31, 131 Paris, C. L. 54, 69, 227 partial surface functional description (PSFD) 60, 74, 75,76
Patten, T. 4, 55, 56 Penman 1, 2, 4, 5, 50, 52, 53, 59, 68,69,87,91, 129, 154 tactical generation in 60-9 Penn, G. 3, 87, 160, 218n phonology 13, 16, 40, 47, 162 phrase structure 163, 186, 187 Polguere, A. 56 Pollard, C. 6n, 40, 43, 89n, 159, 161, 164, 170, 177, 179,223 polysystemicity 19, 39, 49-50, 96-7, 188, 190, 215-16, 222 potential 9, 10, 12, 15 predicate-argument structure 60 prepositional phrase 17, 148-52 classification of problems 152 PPother 149-50 PPspatiotemporal 150-2 preselection 16, 44, 46, 69, 175, 196, 221 interrank constraints 196 process 194 process type system 22, 61-2, 63, 64, 100, 102, 172, 199 projection principle 164, 165 prominence 33 prosody 32-3, 225-6 Pullum, G. 218n
qualification 130, 132, 136-8 Quirk, R. 5In
Sag, I. A. 6n, 40, 43, 89n, 159, 161, 164, 170, 177, 179,223 Schafer, U. 3, 160 SchmolzeJ. 177 selection expression 43 Sells, P. 166
Index 249 semantic argument 183 semantic significance 47 semantics 13-14, 16, 29, 36, 38, 39, 40, 47-8 semiosis language as social semiotic 2, 4, 8 metalanguage and 3 semiotic space 16 Sentence Planning Language (SPL) 59, 62, 64 sequencing rules 41 Shieber, S. M. 83, 165 simplex units 31, 35, 49, 140, 153, 154, 160, 190,216 sister 41, 153, 187, 188 sister dependency 41-2, 189 situational context 8,13 field 13, 16 mode 13, 16 tenor 13, 16 see also context social semiotic: language as 2, 4, 8 SPOKESMAN 83 starting structure 38, 39 Stede, M. 56 Steiner, E. H. 91, 100, 109, 121, 123, 218n stratification 3-4, 5, 10, 13-15, 66, 67 under-stratification 86 stratum/strata 3-4, 13-15, 27 context 13, 16 grammar 13, 15-16 implementation 4 interstratal realization 10, 14 metastratum 4, 7, 8 phonology 16 relation between 13-15 semantics 13, 15 structural generalization 173 structure constituency 16 definition 30 dependency 20, 191, 215, 217 element of 31, 36, 38
function structure 16, 30-1 multivariate 20, 31, 145 paratactic 31 phrase structure 163 prosodic 32 pulse-like 20 recursive 20 sharing 160, 176, 214, 223 starting structure 38, 39 syntagmatic see syntagmatic structure univariate 31, 32, 145 wave-like 20 Strzalkowski, T. 57 subcategorizand 69, 153, 183, 187, 191,216 subcategorization 95, 153, 165, 181-2, 186, 189, 196, 199, 202-4, 223 Subcategorization Principle 170 subsumption 165 syntactic function 58 syntagmatic 17, 18 syntagmatic gap 2, 53, 86, 87, 158, 159, 160 syntagmatic relations 2, 3, 5, 8, 19, 20, 30-6 syntagmatic structure 2, 3, 8, 29-47,69, 173 modes of expression 30-3 multivariate 31-2 univariate 31, 32 syntagmatic under-specification 90 system networks 8, 10-12, 15, 18, 39-40, 46, 174-9, 196 conjunctive entry conditions 10-11 disjunctive entry conditions 10-11 first-level networks 29 mediated networks 29, 172 motivation of grammatical categories in 27-9 recursion 12, 38 second-level networks 29 simultaneous systems 10
250 Systemic Functional Grammar in Natural Language Generation taxis 24, 98, 99, 131 Techdoc 56 tense 12, 94, 126-7 Tesniere, L. 180-1 text 2, 9, 37 text plan 55, 67 Text Structure 83-4 textual metafunction 15, 19, 25, 33 mode of expression 20, 33, 35 prominence structures 33 pulse structures 20, 33 see also metafunctions theme 15, 16, 17, 25-6, 33, 42, 94, 120-3 theoretical perspective 4, 7-9, 49 Thompson, H. S. 54 Thompson, S. A. 54, 99 total accountability 37 transfer comparison 90 transitivity 15, 16, 17, 21-3, 61, 94, 100-10, 119, 137 Tree Adjoining Grammar (TAG) 59, 74, 77-85, 87 tree structure 33-4, 40, 77-85, 166 auxiliary tree 77, 78 constituency tree 20 elementary tree 79 initial tree 77-8 locality 78-9 Text Structure 83-4 Tucholsky, K. 157n Tucker, G. H. 56 Twain, Mark 90 type 10, 167, 168-9, 174, 191, 222 type constraints 191
type hierarchy 167-8 type-of-interdependence 23, 24, 98, 131 typed feature structures 4, 160, 161, 167, 168-9, 177-9, 191-6,214,216,222-3 Typed Feature System (TFS) 3, 87, 191-6,204,216 unification-based grammars 57, 161, 165-6, 171, 175-6 Upper Model 61, 69 valency/valence 170, 181, 186 Vallduvi, E. 225 Vater, H. 35 voice 111 wave-like structures 20 well-formedness 49, 164, 167, 171, 179, 187, 191 Winograd, T. 4 Wolf, Christa 157n X-bar theory 163-4, 172, 186-7 X-bar syntax 19, 20, 161, 186 Yang, G. 58, 84, 85, 86, 227n Zajac, R. 3, 87, 160, 171, 177, 178, 191, 195, 216, 218n Zelinsky-Wibbelt, C. 133 Zeng, L. 1, 227 Zwicky, A. M. 35, 159, 161, 183, 184, 186