Cognition, 5 (1977) 101-117. © Elsevier Sequoia S.A., Lausanne. Printed in the Netherlands.
Pauses and syntax in American Sign Language*
FRANÇOIS GROSJEAN and HARLAN LANE
Northeastern University
Abstract

Research on spoken languages has shown that the durations of silent pauses in a sentence are strongly related to the syntactic structure of the sentence. A similar analysis of the pauses (holds) in a passage in American Sign Language reveals that sequences of signs are also interspersed with holds of different lengths: long holds appear to indicate the ends of sentences; shorter holds, the break between two conjoined sentences; and the shortest holds, breaks between internal constituents. Thus, pausal analysis is a guide to parsing sentences in ASL.

*This research was supported in part by grant number 1 R03 MH 28133.01, Department of Health, Education and Welfare, and 768 253, National Science Foundation. The authors would like particularly to thank Ann McIntyre, Ella Mae Lentz and Marie Philip for their assistance in making and analyzing the videotapes, and R. Battison and the members of the New England Sign Language Research Society for their useful comments and criticisms. Reprints: Dr. F. Grosjean, Department of Psychology, Northeastern University, Boston, Mass. 02115.

Several studies have shown that the durations of silent pauses in a spoken sentence are strongly related to the syntactic structure of the sentence. Grosjean and Deschamps (1975), for example, analyzed English and French interviews and found that pauses at the ends of sentences were longer and more frequent than those within sentences; about 70% of all pauses occurred at major constituent breaks. With a reading task, where such grammatical pauses are not confounded with hesitation pauses, Brown and Miron (1971) report that "up to 64% of the pause time variance in an extended oral reading performance can be predicted from syntactic analyses of the message". If a person is asked to read or recite a known passage slowly, then it turns out that, with decreasing rate, pauses first appear between sentences, next
between major constituents (for example, NP and VP), and finally within these constituents. At any given rate, the pause durations are not equal; they reflect the importance of the syntactic breaks: for example, pauses within a major constituent are shorter than those between constituents (Grosjean, 1972; Lane and Grosjean, 1973). Indeed, L. Grosjean (1977) has shown that the surface structure tree of a sentence often can be reconstructed using only the record of pausing obtained from subjects reading the sentence at a reduced rate. To illustrate: one of the sentences in her study had the following surface structure tree:
[Surface structure tree for the sentence: Not quite all of the recent files were examined that day]
Five readers read this sentence at five different rates including two rates above and three below normal; the average pause durations (in msec) are shown below, as is a hierarchical clustering of the words in the sentence based on these pause data:
Not 0 quite 100 all 120 of 0 the 0 recent 50 files 250 were 20 examined 40 that 20 day

[Hierarchical clustering of the words based on these pause durations]
The two structural descriptions are very similar; one measure of their correlation is the agreement between the trees on the number of nodes dominating each successive pair of words. In this example, r = 0.89. In general, L. Grosjean found that there is substantial correspondence between the pause structure and the surface structure of a sentence, although deep structure and the length of the utterance may complicate the picture.

In the present study we are interested in the relationship, if any, between pauses and syntax in American Sign Language (ASL). It seems likely that sentence breaks in sign are correlated with semantic and syntactic information and also, perhaps, with facial expression, head tilt, body movement, raising of the eyebrows, decrease of signing speed, and pause duration (Fischer, 1975; Liddell, 1976; Baker, 1976). But can pause duration alone enable us to delimit ASL sentences, as Covington (1973) suggests? Further, can an examination of the pause durations separating signs within sentences produced at slow rate serve as a guide to obtaining the surface structure trees of the sentences? These are the questions we undertake to answer in the following paper.
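Parenthetically, the node-agreement measure used above is easy to state procedurally. The sketch below is ours, not the authors'; the two count vectors are invented purely for illustration. For each of the ten successive word pairs in the example sentence, one counts the nodes dominating both words in the linguistic surface tree and in the pause-derived tree, then correlates the two vectors of counts:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented counts of nodes dominating each of the 10 successive word pairs
# in "Not quite all of the recent files were examined that day":
linguistic_tree_counts = [3, 4, 2, 5, 6, 5, 1, 2, 2, 3]
pause_tree_counts      = [3, 4, 2, 5, 6, 6, 1, 2, 2, 3]

print(round(pearson_r(linguistic_tree_counts, pause_tree_counts), 2))
```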
Method

Subjects
The Ss were five adult native signers of ASL with deaf parents. Three Ss were congenitally deaf and two were hearing ASL-English bilinguals. Each S signed a presented passage at five different rates, four times each, in a session lasting 30 minutes.

Materials
An English-ASL bilingual signer was asked to sign a story that she had learned as a young child from her deaf parents, and a video recording was made (Sony AVC 3250 camera and VTR 3650). The first part of the story, Goldilocks, was transcribed literally into English, giving the following 52-sign passage*.
*Hyphenated glosses correspond to a single sign. It is important to recognize that the ASL passage reported here is not translated into English; in the absence of a writing system for ASL (but see Stokoe, Casterline and Croneberg, 1965), we have reported the passage by substituting an English gloss for each sign. The choice of English glosses is somewhat arbitrary; for example, the eighth sign might also be translated as GO-INTO. Our informants have also pointed out certain English influences on the sign passage, for example IT in REALLY DON'T LIKE IT.
LONG-TIME-AGO GIRL SMALL DECIDE WALK IN WOODS INTO WOODS SEE HOUSE INTO VERY HUNGRY THEN SIT-DOWN SEE BOWL BIG-BOWL EAT DON'T LIKE COLD MOVE-ON BOWL HOT REALLY DON'T LIKE IT MOVE-ON SMALLEST BOWL EAT-EAT PERFECT HUMM EAT ALL-GONE THEN SIT THREE DIFFERENT CHAIRS SAME THING HAPPEN ONE HARD ONE SOFT ONE PERFECT

The transcription was printed on a 70 × 55 cm panel; the letters were 12 mm high. The panel had a 9 × 9 cm hole at its center so it could be slipped over the lens of the video camera, located two meters from the subject.

Procedure
In order to avoid the variations in timing associated with spontaneous utterances (hesitation pauses, false starts and so on) and to obtain the identical passage at several rates of utterance, each S first practiced reading and signing the transcribed story. Once familiar with the story, S signed the passage at a normal rate. To the apparent rate of his signing E assigned the numerical value 10. A series of values (2.5, 5, 10, 20, 30) was then named in irregular order, four times each, and the signer responded to each value by signing the passage with a proportionate apparent rate. The signer was urged to use exactly the same signs at each rate. The 20 magnitude productions by each of the five Ss were recorded on videotape.
Data Analysis
With five signers producing the passage at five different rates, four times each, 100 recordings were made. We retained for analysis a representative sample of 25 by selecting for each signer, at each apparent rate, that recording whose signs per min (spm) was closest to the mean spm of the four replications of that apparent rate. Two native signers of ASL, one congenitally deaf, the other a hearing ASL-English bilingual, independently measured the durations of the pauses in the five recordings selected for the first signer. Each of the judges separately viewed the recording at normal speed (Sony CVM 950 monitor) and noted the locations of the pauses. Then the passage was played back at 1/16 normal speed (Sony 3650 VTR) and the judge pressed a telegraph key for the duration of each pause. This response supplied a 1000-Hz coding tone to an audio tape-recorder (Tandberg 1600X). The recording was subsequently analyzed with a frequency counter: each pause duration in msec was equal to the number of cycles of the coding tone, divided by 16*.

Although the two judges worked independently and were not coached on their criteria for a pause, both delimited pauses in the same way. By their account, they detected a pause between two signs when either (a) a sign executed with continuous or repeated movement was extended by holding the hand(s) without movement in the terminal position; or (b) a sign executed with such a hold was extended by sustaining the hold. This type of pause corresponds to the "single-bar juncture, 'sustain' /1/" proposed by Covington (1973): "During the pause ... the hands are held in the position and often the configuration of the last sign". The judges also included as part of the pause the out-transition of the first sign, giving the following segmentation of the signing stream around each pause:
in transition -- sign -- out transition -- hold -- neutral -- in transition ...
|<------ key up ------>|<--------- key down -------->|<------ key up ...
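The cycle-counting arithmetic reduces to a one-line conversion. A minimal sketch follows (ours, with an invented example value), assuming the 1000-Hz coding tone and 1/16 playback speed reported above:

```python
PLAYBACK_SLOWDOWN = 16   # video replayed at 1/16 normal speed
TONE_HZ = 1000           # coding-tone frequency (cycles per second)

def pause_duration_msec(tone_cycles: int) -> float:
    """Convert counted coding-tone cycles to real pause duration in msec."""
    playback_msec = tone_cycles / TONE_HZ * 1000   # key-down time, slowed
    return playback_msec / PLAYBACK_SLOWDOWN       # undo the 1/16 slowdown

# e.g., a key held through 3200 cycles (3.2 s of slowed playback) -> 200 msec
print(pause_duration_msec(3200))   # 200.0
```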
The intra-judge reliability was generally quite good (see Table 1), with a mean correlation of r = 0.89. The inter-judge reliability was slightly lower: the mean correlation between the durations reported by the two judges at each rate was r = 0.80 (they agreed on pause emplacements 88% of the time). Consequently, the recordings for the remaining four signers were analyzed by one judge, a congenitally deaf native signer of ASL.

Table 1. Intra-judge reliability in reporting pause duration in ASL. A passage signed at five different rates was analyzed twice by each of two judges. Shown are the rate of the passage, the average hold (in sec) between every pair of signs on the first and second measurement, and the correlation between these two sets of measurements for each of the judges. (There were no pauses reported at the highest signing rate.)

Rate          Judge 1                     Judge 2
(signs/min)   Eval. 1   Eval. 2   r       Eval. 1   Eval. 2   r
30            0.79      0.73      0.95    0.72      0.71      0.91
52            0.51      0.52      0.86    0.47      0.52      0.80
80            0.31      0.32      0.91    0.32      0.33      0.91
147           0.15      0.15      0.83    0.14      0.12      0.93
193           0.00      0.00      1.00    0.00      0.00      1.00

*The reduction in the speed of the video playback was calibrated as follows: a running chronoscope, graduated in centiseconds (Standard Timer S1), was videotaped at normal speed. The tape recording was played back at reduced speed and the same chronoscope was used to measure the time it took for the recording to show an elapsed time of one sec. There was an undershoot of about 5% early in the 0.5 inch reel and an overshoot of about 5% late in the reel. Consequently only the first 40% of the reels were recorded in the experiment, and the mean reduction was computed to be 1/16 ± 2%. The frequency of the recorded coding tone was calibrated with a frequency counter (Hewlett-Packard 204A), and a correction was applied to the pause durations measured by counting cycles of that tone so that the readings were expressed in msec.

A college student unfamiliar with ASL was also asked to analyze one passage (30 spm) to determine if a knowledge of ASL is required to identify and measure pauses. It turns out that it is not. Like the ASL judges, he viewed the passage first at normal speed to note pause emplacements, then at 1/16 speed to key in a coding tone concurrent with each pause. Our naive observer agreed on pause emplacements 86% of the time with judge 1 and
80% of the time with judge 2 (the two judges agreed with each other 86% of the time on this passage). He agreed on durations r = 0.85 with judge 1 and r = 0.70 with judge 2 (the judges' duration measures correlated r = 0.76). Although he knew no sign language, his duration measures yielded slightly higher test-retest reliability than those of the native judges (r = 0.97 vs. 0.95 and 0.91).

The durations of the measured pauses were pooled over the five rates by each signer and over the five signers to give a grand mean duration, based on N = 25, for each of the possible pause locations in the text. These pause data were used to partition the paragraph into sentences and then to make hierarchical clusters of the signs within the sentences, according to the following iterative procedure. First, find the shortest pause in the sentence. Second, cluster the two elements (signs or clusters) separated by that pause by linking them to a node situated above the pause, and delete the pause. (If three or more adjacent signs are separated from each other by the same pause duration, make one cluster of these signs: trinary, quaternary, etc.) Finally, repeat the process until all pauses have been deleted. The following tree illustrates the process for a sentence from the Goldilocks story by labelling each node for the iterative cycle in which it was derived (grand mean pause durations in msec are shown at the bottom of the tree):

[Clustering tree, with grand mean pause durations (msec) between signs: THEN 90 SIT 140 THREE 30 DIFFERENT 0 CHAIRS]

An examination of the pause frequencies at the 51 possible pause emplacements in the text showed them almost perfectly correlated with the mean pause durations at those emplacements (r = 0.97). Since the two judges showed high agreement on the presence or absence of pauses in the five signed passages used for the reliability check, and since this measure is much more readily obtained than pause duration, future studies of ASL syntax may prefer it to the temporal measure used in the following analyses.
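A minimal sketch of the clustering procedure just described (our reconstruction, not the authors' program) is given below; run on the grand mean holds shown in the tree above, it reproduces the cycle-labelled tree:

```python
def cluster_sentence(signs, pauses):
    """signs: n glosses; pauses: n-1 hold durations (msec) between them.
    Returns a nested tuple; each merge is labelled with its iterative cycle."""
    nodes = [(s,) for s in signs]            # each leaf is a 1-tuple
    gaps = list(pauses)
    cycle = 0
    while gaps:
        cycle += 1
        shortest = min(gaps)
        i = gaps.index(shortest)             # leftmost shortest pause
        j = i                                # extend over adjacent ties so
        while j + 1 < len(gaps) and gaps[j + 1] == shortest:
            j += 1                           # ties yield one n-ary cluster
        nodes[i:j + 2] = [(cycle,) + tuple(nodes[i:j + 2])]
        del gaps[i:j + 1]                    # the merged pauses are deleted
    return nodes[0]

# Sentence "THEN SIT THREE DIFFERENT CHAIRS" with the holds shown above:
print(cluster_sentence(["THEN", "SIT", "THREE", "DIFFERENT", "CHAIRS"],
                       [90, 140, 30, 0]))
```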
Results and Discussion

Demarcating sentences
Figure 1 presents the grand mean pause durations for the Goldilocks text, averaged over signers and rates. The distribution of pauses in the signed text is not random; the holds appear to cluster the signs together in an orderly manner: long holds appear to mark the ends of sentences, whereas shorter holds tend to occur within these sentences. Figure 2 is a frequency distribution of the 51 ASL holds, while Fig. 3 is the comparable distribution for the English version of the same text (6 speakers, 5 rates; Grosjean and Collins, 1977). Both distributions are approximately hyperbolic but contain significant peaks. In the case of English, we know from prior research on pausing in reading (see Grosjean, 1972) that the righthand peak is the mode of a distribution of long pauses occurring at the ends of sentences, whereas the first maximum reflects within-sentence pausing. In this particular English passage, all the pauses with durations greater than 445 msec were found at sentence breaks, whereas pauses whose durations ranged from 245 to 445 msec were associated with breaks between conjoined sentences, between NP and VP, or between a complement and the following NP. Pauses with average durations less than 245 msec corresponded to breaks within constituents.
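These English break-type cut-offs amount to a simple rule; the sketch below (ours) merely restates the thresholds reported above:

```python
def classify_english_pause(msec: float) -> str:
    """Break type implied by a grand mean pause duration (English reading)."""
    if msec > 445:
        return "sentence break"
    if msec >= 245:
        return "conjoined-sentence, NP-VP, or complement-NP break"
    return "within-constituent break"

for d in (600, 300, 120):
    print(d, "->", classify_english_pause(d))
```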
[Figure 1. English glosses for the Goldilocks passage in American Sign Language with the pause durations (holds) obtained after each of the 51 signs. Each pause is the grand mean of 25 signing productions: each of five Ss signed the passage at five different rates.]
[Figure 2. Frequency distribution of the grand mean durations of 51 holds in the 25 signing productions of the Goldilocks text. Abscissa: mean duration of holds (msec, class intervals); ordinate: frequency.]

[Figure 3. Frequency distribution of the grand mean durations of 116 pauses in 60 readings of the English translation of the Goldilocks text (6 Ss, 2 readings at each of 5 rates). Abscissa: mean duration of pauses (msec, class intervals); ordinate: frequency.]
Turning to the distribution for ASL (Fig. 2), we find that if we select once more all the pauses associated with the righthand distribution (≥ 215 msec), we obtain the following segmentation of the passage:

1. LONG-TIME-AGO GIRL SMALL DECIDE WALK IN WOODS
2. INTO WOODS SEE HOUSE INTO VERY HUNGRY THEN SIT-DOWN
3. SEE BOWL BIG-BOWL EAT DON'T LIKE COLD
4. MOVE-ON BOWL HOT
5. REALLY DON'T LIKE IT
6. MOVE-ON SMALLEST BOWL EAT-EAT PERFECT HUMM
7. EAT ALL-GONE
8. THEN SIT THREE DIFFERENT CHAIRS
9. SAME THING HAPPEN
10. ONE HARD ONE SOFT
11. ONE PERFECT
It appears that longer hold durations correspond to the ends of simple and complex sentences. The adjacent mode in Fig. 2, at somewhat shorter hold durations (160-190 msec), reflects three different phenomena: first, holds following stressed signs, e.g., after ONE HARD and ONE SOFT; second, holds where conjunctions might otherwise be expected, e.g., between HOT and REALLY DON'T LIKE IT; third, holds of intermediate duration corresponding to pause emplacements where some signers put a sentence break (and hence a long hold) and some did not (hence a short or zero hold). Sentences 2, 3 and 6 enter into this last category. The differences in segmentation strategies followed by Ss are illustrated below (the locations of the sentence breaks are shown):

2. INTO WOODS SEE HOUSE | INTO | VERY HUNGRY THEN SIT-DOWN
3. SEE BOWL | BIG-BOWL | EAT DON'T LIKE COLD
6. MOVE-ON SMALLEST BOWL | EAT-EAT PERFECT | HUMM

In sentence 2, for example, two Ss chose to segment the utterance after HOUSE; one S chose to do so after INTO; and two Ss chose two segmentations, after HOUSE and after INTO, thus producing three sentences: INTO WOODS SEE HOUSE, INTO, and VERY HUNGRY THEN SIT-DOWN.
For the following analysis of within-sentence structure, then, sixteen sentences were demarcated; a sign was considered to terminate a sentence if the following hold duration fell into the righthand mode of the pause distribution (Fig. 2) for at least two Ss. This criterion demarcated sentences 1, 4, 5, 7, 8, 9, 10, 11 and the following other sentences:

2a. INTO WOODS SEE HOUSE
2b. INTO VERY HUNGRY THEN SIT-DOWN
2c. INTO
2d. VERY HUNGRY THEN SIT-DOWN
3a. SEE BOWL BIG-BOWL
3b. EAT DON'T LIKE COLD
3c. SEE BOWL
3d. BIG-BOWL
6a. MOVE-ON SMALLEST BOWL
6b. EAT-EAT PERFECT HUMM
The mean sentence length was 3.25 signs; only one sentence is over 6 signs long (sentence 1) and two sentences are one sign long (sentences 2c and 3d). As will be seen below, several of the sentences in the passage are conjoined sentences (e.g., 2d). If we consider only the simple sentences, the average length is 2.82 signs, 0.43 signs less than the mean for the passage.

The structure of sentences in ASL
Although the end-of-sentence pauses appeared quite clearly at normal signing rate, the within-sentence pauses had to be provoked by asking the signers to sign at a rate slower than normal (more precisely, at half and at one quarter their normal rate). As can be seen from Fig. 1, this tactic was on the whole quite successful and almost every sign is separated from the next by a pause. Figure 4 represents sentences 7, 8, 9 as they were signed at each of the five rates. At the highest rate (173 spm), no holds occur between these sentences. At the next two rates (130 and 82 spm), the between-sentence breaks appear. Then, as the Ss sign the text at rates slower than normal, breaks start emerging between and within major constituents. At 59 spm, sentence 8 is already divided up into Conj-Vb-VP2 and at the slowest rate (39 spm) the other two sentences are also partitioned by holds. The question now is: Can these pauses be used as a guide in parsing the sentences just demarcated? The answer seems to be yes. With the clustering procedure explained earlier applied to the grand mean pause durations, the following tree diagram was obtained for sentence 1:
[Clustering tree for sentence 1, with grand mean pause durations (msec) between signs: LONG-TIME-AGO 40 GIRL 0 SMALL 60 DECIDE 30 WALK 0 IN 0 WOODS]
From this clustering solution, the breaks occur, in order of importance, between NP and VP (60 msec), between the Adv and NP (40 msec), and between Vb and S (30 msec). These terms imply a structural analysis such as the following:
[Immediate constituent tree: (Adv LONG-TIME-AGO) ((NP GIRL SMALL) (VP (Vb DECIDE) (S WALK (IN WOODS))))]
This immediate constituent analysis differs from the clustering of the signs in two ways. First, the verb phrase WALK IN WOODS remains a single cluster in the performance tree, whereas it has structure in the linguistic tree. This is not a serious problem: the inclusion of more subjects, more replications, or lower rates of magnitude production would in all likelihood introduce pause structure where it is lacking. Second, the main break in the analysis is between NP and VP, whereas in the structural analysis it is between Adv and S. This discrepancy may reflect narrative style in ASL
or a more general tendency to delimit units of equal length, which interacts with the tendency to delimit constituents, often of unequal length. We have found a similar tendency in our analysis of recitation pausing in English.

[Figure 4. The relation between hold durations and overall rate in an excerpt from a passage in ASL. As overall rate decreases, the syntactic structure of the sentences emerges in the pattern of hold durations (white spaces). Each hold is the mean of five productions, one by each of five Ss.]

Despite these complications, the pause data do prove to be a useful guide to parsing the ASL sentences previously identified. In the first place, they
clearly reveal breaks between major constituents. The second sentence is an example of the many conjoined sentences in this passage:

[Surface structure tree: (S (S (Vb INTO) (NP (N WOODS))) and (S (Vb SEE) (NP (N HOUSE))))]

The measures of pausing gave the following clustering:

[Clustering tree, with grand mean pause durations (msec) between signs: INTO 0 WOODS 60 SEE 40 HOUSE]
Here we have a case of conjoined sentences where the conjunction has been deleted, as opposed to sentences 2b and 2d, for example, where it is maintained. The deleted conjunction is replaced by a pause, shorter than the end-of-S pause but longer than within-constituent pauses.
The frequency of conjunction deletion (it is also found in sentences 5 and 11) may be related to signing economy; it takes less time and effort to replace a conjunction by a hold of set length than to sign it. Perhaps signing economy also motivates the NP-deletion observed in this text: the subject has been identified at the beginning of the story and is therefore not reiterated in each sentence. The durations of pauses indicate not only the breaks between simple sentences and between conjoined sentences, but also the boundaries between and within the major constituents of these sentences. For example, in sentence 9 (Fig. 4), a pause separates NP (THING) and the following VP (HAPPEN), and in sentence 5, the Adv and Vb (REALLY DON'T LIKE) are separated by a short pause from the following NP (IT). Within major constituents, the average pause duration drops to very short values, or zero, over the range of rates employed in this study. Examples in NP include sentence 1: GIRL SMALL; 6a: SMALLEST BOWL; 8: DIFFERENT CHAIRS; and 9: SAME THING; and in VP, sentence 5: DON'T LIKE, and 1: WALK IN WOODS. In general, long pauses mark breaks between sentences; somewhat shorter pauses, those between conjoined sentences; shorter pauses still, those between major constituents. The grand mean duration of pauses between sentences was 229 msec; between conjoined sentences, 134 msec; between NP and VP, 106 msec; within NP, 6 msec; and within VP, 11 msec. The higher the syntactic order of the break, the longer the hold that occurs at the break.

One reason for studying the relation between pause structure and sentence structure, whether in speech or in sign, is to discover units of sentence processing. But this study is highly motivated in ASL for two additional reasons. First, if the same relation applies to sign as to speech, then we may make a general statement about language processing that is founded in man's cognitive processes and not in any particular sensory modality. Second, there is as yet no reasonably comprehensive grammar of ASL (or any other sign language) that would assign structural descriptions to sentences (but see steps in that direction by McCall, 1965; Fischer, 1973, 1975; Kegl & Wilbur, 1976). Yet such descriptions are needed for many purposes, among them psycholinguistic studies of sign-language processing. To the extent that sentence processing units correspond to structural units, the analysis of sign-language pause structure can serve as a guide in assigning structural descriptions to sign sentences.

In taking this approach, from language function to language structure, we are, from a traditional point of view, driving the wrong way on a one-way street. Our research on the sublexical structure of signs provides another example. By examining sign confusions in production, perception and
memory, we are led to group the combining elements of signs into classes and to describe the shared features that determine class membership. It remains to be seen whether the constituents identified by analyzing pausing or the features identified by analyzing perceptual confusions provide convenient units when formulating a grammar of ASL. The point is that psycholinguistics has often been the handmaiden of linguistics, and all too subject to her mistress's whims. We think that an exchange of roles might be therapeutic all the way around.
References

Baker, C. L. (1976) What's not on the other hand in American Sign Language. Paper presented at the Chicago Linguistic Society, Chicago.
Brown, E., and Miron, S. (1971) Lexical and syntactic predictors of the distribution of pause time in reading. J. verb. Learn. verb. Beh., 10, 658-667.
Covington, V. C. (1973) Juncture in American Sign Language. Sign Lang. St., 2, 29-38.
Fischer, S. (1973) Two processes of reduplication in American Sign Language. Found. Lang., 9, 469-480.
Fischer, S. (1975) Influences on word order change in American Sign Language. In C. N. Li (Ed.), Word Order and Word Order Change. Austin, Texas: University of Texas Press.
Grosjean, F. (1972) Le rôle joué par trois variables temporelles dans la compréhension orale de l'anglais étudié comme seconde langue et perception de la vitesse de lecture par des lecteurs et des auditeurs. Unpublished doctoral dissertation, University of Paris VII.
Grosjean, F., and Collins, M. (1977) Breathing pauses and syntactic structure in an oral reading task. Working paper, Northeastern University.
Grosjean, F., and Deschamps, A. (1975) Analyse contrastive des variables temporelles de l'anglais et du français: vitesse de parole et variables composantes, phénomènes d'hésitation. Phonetica, 31, 144-184.
Grosjean, L. (1977) La structure syntaxique et les temps de pause en lecture. Working paper, University of Paris III.
Kegl, J. A., and Wilbur, R. (1976) Where does structure stop and style begin? Syntax, morphology and phonology vs. stylistic variation in American Sign Language. Paper presented at the Chicago Linguistic Society, Chicago.
Lane, H., and Grosjean, F. (1973) Perception of reading rate by listeners and speakers. J. exper. Psychol., 97, 141-147.
Liddell, S. (1976) Restrictive relative clauses in American Sign Language. Working paper, Salk Institute, San Diego, California.
McCall, E. (1965) A generative grammar of sign. Unpublished master's thesis, University of Iowa, Ames, Iowa.
Stokoe, W. C., Casterline, D. C., and Croneberg, C. G. (1965) A Dictionary of American Sign Language. Washington, D.C.: Gallaudet College Press.
Résumé

Research on spoken languages has shown that the durations of the silent pauses in a sentence are strongly tied to the syntactic structure of that sentence. An analysis of the same kind applied to a passage in American Sign Language shows that sequences of signs are likewise interspersed with pauses (holds between signs) of variable length: long pauses seem to indicate the ends of sentences, short pauses mark the boundary between conjoined sentences, and very short pauses indicate the boundaries of internal constituents. Pausal analysis is a guide to the segmentation of sentences in American Sign Language.
Cognition, 5 (1977) 119-132. © Elsevier Sequoia S.A., Lausanne. Printed in the Netherlands.
Toward a functional theory of reduction transformations*

DEAN DELIS and ANNE SAXON SLATER
University of Wyoming
Abstract The theory is advanced that reduction transformations function to provide speakers with the option of deleting redundant information when communicating to a topic-cognizant addressee and/or when using a written mode. To test the theory, an experiment was run in which subjects from an advanced cell physiology class were given a list of deep structure proximal sentences (base propositions), all pertaining to the topic of cellular energy, and were asked to communicate them, in either a written or oral mode, to either graduate students in biochemistry or freshman nonscience majors. An analysis of the subjects’ use of reduction transformations when communicating the base propositions supported the redundancy-deletion theory developed in the paper. The implications of these results for the perceptual complexity theory of reduction transformations (Fodor & Garrett, 1967) are discussed.
Psycholinguists have traditionally studied reduction transformations in terms of how they affect language users’ perceptions of sentences. According to the “standard” transformational theory of syntax (Chomsky, 1965), reduction rules have the formal syntactic effect of “distancing” deep structure propositions from their surface structure manifestations by pronominally substituting or deleting repeated deep structure constituents (see reduction rules, Langacker, 1972; substitution and deletion rules, Chomsky, 1965).
*We are indebted to Nancy Kerr and Hugh McGinley, who assisted us in developing the experimental design and procedure, and who made many valuable suggestions; to Virginia Valian, whose extensive comments greatly enhanced the paper; and to David Foulkes, William Pancoe, Laura Young, John Fleer and Jarvis Bastian for their help and comments. This study was supported by the Lillian G. Portenier Award in Psychology received by the first author. Requests for reprints should be sent to Dean Delis, Psychology Department, University of Wyoming, Laramie, Wyoming, 82071.
Accordingly, psycholinguists have reasoned that if there is to be a "psychological reality" to these rules, the degree of complexity in the perception of surface sentences should correlate with the degree to which deep structure constituents have been reduced (Foss & Cairns, 1970; Miller & Chomsky, 1963). Fodor and Garrett (1967) have extended this theory by postulating that reduction transformations have the added effect of increasing sentential complexity by deleting from the surface structure clues which explicitly mark deep structure boundaries. Empirical support for the notion that reduction transformations serve to make surface structures perceptually more complex has been provided by Fodor and Garrett (1967), who have shown that sentences like (1), which contain optionally deletable relative pronouns, are easier to paraphrase than sentences like (2), which do not.

(1) The pen which the author whom the editor liked used was new.
(2) The pen the author the editor liked used was new.
Hakes and Cairns (1970) have corroborated the perceptual complexity theory by showing that it takes less time for subjects to monitor a target morpheme in sentences like (1) than it does in sentences like (2). Valian and Wales (1976) have shown that when "talkers" are asked "What?" after they utter sentences to which an optional reduction transformation has been applied, they tend to change the sentences so that the reduction transformation is not employed. These researchers have interpreted their results as suggesting that "talkers" tacitly know that reduction transformations make sentences more difficult to perceive.

Although the perceptual complexity theory has received empirical support, it raises an important question. If speakers tacitly know only that reduction rules have the effect of making underlying deep structures perceptually more complex, then it is difficult to explain why they ever employ these rules in their speech. Although speakers use language for a variety of ends (Chomsky, 1975), the perceptual complexity theory would predict that language users, when attempting to be effective communicators, should generate only those sentences which are as close as possible to their deep structure descriptions. However, people frequently generate transformationally distant sentences when attempting to share information, a fact of language usage that psycholinguists have tended to overlook. The question thus arises of the reasons such constructs are actually used.

In order to explain why speakers generate transformationally distant sentences, the present study investigated the possibility that one frequently cited feature of reduction transformations, the deletion of redundant information, may serve a communicative function for language users
(Bach, 1974; Thorne, 1970). That reduction rules delete redundant information derives from the "standard" transformational grammar principle that transformational rules do not change the meaning of sentences (Chomsky, 1965; Liles, 1975); thus, only "recoverable" (redundant) deep structure constituents may be deleted. This property of reduction rules is shown in (3)-(8).

(3) Billy wants to leave town.
(4) (Billy wants (Billy leave town)).
(5) McGraw is as stubborn as a mule.
(6) ((McGraw is as stubborn) as (a mule is stubborn)).
(7) The cowboy who is in the bar is drunk.
(8) ((The cowboy (the cowboy is in the bar)) is drunk).
In sentence (3), a reduction transformation (equi-NP deletion) has deleted the repeated noun phrase Billy from a deep structure like (4). Sentence (5) illustrates the deletion of the repeated verb phrase is stubborn from a deep structure like (6). Sentence (7) shows that the relative pronoun who is substituted for the second occurrence of the noun phrase the cowboy in a deep structure like (8). Although these examples do not characterize many of the subtle and complex properties of reduction transformations*, they nevertheless illustrate that these rules delete redundant information from sentences while leaving the conceptual meaning of the sentences unchanged.

In order to determine if reduction transformations serve a communicative function for language users by enabling them to delete redundant information from their discourses, it is necessary to identify those communicative factors which would influence a speaker either to delete redundant deep structure constituents or to manifest the repeated information. Two factors which fulfill these requirements are (1) the topic-cognizance of the addressee, and (2) the mode of communication.

The topic-cognizance of an addressee will influence a speaker to include or delete redundant information in the following way. If a speaker judges the addressee to have little knowledge of the topic being communicated, then (s)he is likely to include more repeated information in his or her discourse. In doing so, the speaker is being considerate of the addressee's comprehension task by explicitly relating novel ideas to information to which
*In sentence (5), it is necessary to assume that there are dummy symbols in its deep structure for the deleted material; in the deep structure description of sentence (7), it is necessary to specify that a wh-morpheme is attached to the repeated noun phrase the cowboy in order to account for the relative pronoun.
the listener has already been exposed (Haviland & Clark, 1974). Employing this same strategy with an addressee who is cognizant of the topic matter would, however, result in a message containing unnecessary filler information. An example from Vygotsky (1962) illustrates this point:

Now let us imagine that several people are waiting for a bus. No one will say, on seeing the bus approach, "The bus for which we are waiting is coming". The sentence is likely to be an abbreviated "Coming" or some such expression, because the subject is plain from the situation. (p. 139)

Thus, the inclusion of redundant information when communicating to an addressee who is cognizant of the subject matter can be an awkward addition, if not an outright hindrance.

That the mode of communication is a determining factor in a speaker's decision to manifest more or less repeated information seems readily apparent. Since an oral mode results in only a fleeting presentation of information, any repetition of information from the ongoing discourse will help alleviate the perceptual difficulties this mode produces. A written mode, on the other hand, simplifies an addressee's comprehension task by presenting a permanent record of information, and the need for repeated information is therefore lessened. Another reason for the manifestation of less redundant information when communication takes place in a written mode is that writing usually entails making several drafts, and the writer thus has the opportunity to delete repeated information which may have been manifested in the initial construction of the sentences' syntax. It can now be postulated that a communicative function of reduction transformations is to provide language users with the option of deleting redundant information when communicating to a topic-cognizant addressee and/or when using a written mode.
Method

In order to test the redundancy-deletion theory of reduction transformations empirically, an experimental procedure was developed which afforded the systematic study of surface structure variations occurring in concrete acts of communication. The procedure calls for asking subjects knowledgeable in a specialized topic area to communicate a list of deep structure proximal sentences (base propositions), all of which pertain to a subject in that topic area, to addressees either familiar with the topic (the initiated addressees) or unfamiliar with it (the uninitiated addressees). Half of the subjects in each of the addressee conditions are asked to communicate in an oral mode,
and half in a written mode. The subjects are allowed to keep the list of sentences before them, even while they are communicating, to avoid introducing memory-straining factors.

For the present experiment, the dependent variable was the percentage of initial base propositions in a subject's communique which underwent one or more reduction transformations. Only base propositions in their initial presentation in a communique were analyzed, because a pilot study had indicated that when subjects are asked to communicate a list of technical propositions to uninitiated addressees, they tend to repeat some of the propositions in order to ensure that their discourses are adequately comprehended. If a proposition is repeated, then the entire proposition becomes redundant information for the addressees regardless of their previous topic-cognizant status. In a theoretical sense, all addressees become initiated addressees, and the initiated-uninitiated addressee variable should lose its effectiveness in influencing subjects to use reduction transformations differentially.

The percentage of base propositions which underwent reduction transformations was used as the measurement in the study because it was thought that some subjects, especially those communicating to initiated addressees, might omit entire base propositions from their communiques (e.g., for the present experiment, number 20: "Adenosine triphosphate is a compound". See Appendix A). A measurement of the total number of base sentences which underwent reduction transformations would therefore not be sensitive to the possibility that some subjects might use fewer base propositions in their communiques and thus would have fewer opportunities to employ reduction transformations.

The hypotheses for the present experiment were (1) subjects will apply reduction transformations to a higher percentage of initial base propositions when communicating to an initiated addressee than to an uninitiated addressee; (2) subjects will apply reduction transformations to a higher percentage of initial base propositions when communicating in a written mode than in an oral mode; and (3) there will be interactions of the variables in that the written-mode and initiated-addressee variables will combine to influence subjects to use the largest percentage of reduction transformations, while the oral-mode and uninitiated-addressee variables will combine to influence subjects to use the lowest percentage of reduction transformations.

Subjects
Fifty-six University of Wyoming students enrolled in an advanced cell physiology class were paid $2 each to participate. All subjects were native speakers of English.
Material
A list of simple sentences served as the material which subjects were to communicate (see Appendix A). The sentences, which approximated kernel representations of their deep structure descriptions, were constructed and serially organized to present a comprehensible discussion on the topic of cellular energy, a topic specifically chosen since it was the major subject matter on a class test administered 10 days before the experiment. Special care was taken to construct the list so that adjoining sentences contained repeated nominals and occasionally repeated verbals, to allow subjects to employ reduction transformations and embedding procedures. Although most of the sentences are "simple" sentences as defined by "standard" transformational grammar (i.e., they contain no embedded sentences), it was necessary to include a few complex sentences (e.g., sentences 1 and 10) so that the list presented a coherent series of sentences.
Procedure and Experimental Design
Subjects were tested in groups of four or five, each group randomly assigned to a communicative condition. At the start of the experiment, subjects were given the list of simple sentences and read the following instructions:

"This is an experiment on communication. You have been chosen to be the subjects of this experiment because, being in physiology 621D, you have a background in the material that you are to communicate tonight. Before you is a list of simple sentences, all pertaining to the topic of cellular energy. Your task is to study these sentences as long as you wish, and then communicate them as if speaking/writing to a freshman nonscience major/graduate student in biochemistry. You are to communicate the sentences so that you don't just read/write a list of sentences. Combine them together as if speaking/writing in a normal fashion. You may omit various words if you wish when combining the sentences, but you are not to add any new material or ideas to your communication except for maybe simple words such as 'and', 'this', 'which', etc."

Subjects were instructed not to add new substantive material to their communiques, since it was thought that what might be added in many cases would be simpler, nontechnical propositions, an addition which would ultimately interfere with our attempt to test the use of reduction transformations on material which would be difficult and unfamiliar to many of the addressees.
Reduction transformations
125
After listening to the instructions, subjects were given guidelines to use in communicating the sentences, the guidelines differing for the two mode-of-communication conditions. Subjects in the oral-mode conditions were told that a cassette tape deck was set up in each of the five side rooms in the laboratory, that they were to record their communiques individually, and that their communiques would later be played to the appropriate addressees. Subjects in the written-mode conditions were told that a pen and paper were in the side rooms, that they were to write their communiques individually, and that their communiques would later be read by the appropriate addressees. All subjects were allowed to keep the list of sentences before them to control for differential effects of memory. To simulate normal communicative situations, subjects in the written-mode conditions were allowed to make as many drafts as they wished, while subjects in the oral-mode conditions were allowed to make only one recording. Subjects in the oral-mode conditions were allowed, however, to operate an on-off switch attached to the tape recorder in case they became lost and needed to reorganize their thoughts. Finally, in order to ensure that they were aware of their designated addressees, the subjects were told that there would be a second part to the experiment: "The second part of the experiment will simply go as follows: your recorded/written versions of the sentences will be played/given to a group of freshman nonscience majors/graduate students in biochemistry to see if they can understand what you said/wrote".

A randomized, 2 by 2 (initiated vs. uninitiated, written vs. oral) between-subjects design was used in the experiment.
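The 2 by 2 between-subjects analysis reported under Results can be reproduced with any standard ANOVA routine. The sketch below is ours, not the authors' (who do not name their software), and the per-subject percentages are invented solely to make the example run (two subjects per cell here; the experiment had 14):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# pct = percentage of initial base propositions reduced, per subject (invented)
df = pd.DataFrame({
    "addressee": ["initiated"] * 4 + ["uninitiated"] * 4,
    "mode":      ["written", "written", "oral", "oral"] * 2,
    "pct":       [62.0, 58.5, 55.0, 51.0, 40.0, 37.5, 28.0, 25.5],
})

model = ols("pct ~ C(addressee) * C(mode)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```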
Scoring

Upon completion of the experiment, a secretary transcribed the communiques (both written and oral) into typewritten paragraphs, which were double-checked for transcription errors by two judges. The transcriptions were then recoded to blind the two scorers (the authors) to the communicative condition of each transcription. Photocopies of the transcriptions were given to each scorer, who independently recorded (1) whether or not a base proposition underwent one or more reduction transformations, (2) the types of surface structure constructions which resulted from the reduction transformations, and (3) whether a proposition was presented only once in a subject's communique or was repeated. The inter-judge agreement in scoring the percentages of base propositions which underwent reduction transformations was 97%; the agreement in scoring the types of surface structure constructions, 93%. All discrepancies were resolved by mutual agreement
between the two scorers while they were still blind as to the treatment condition of each communique. Two criteria were used in scoring a base proposition as having undergone a reduction transformation: (1) pronominal substitution; and (2) deletion of a repeated base structure constituent. Examples of the different surface structure constructions which resulted from the reduction of base propositions are shown in Appendix B. Propositions which may have been added to a communique (i.e., not taken from the original list) were not scored.
Results

The mean numbers of initial base propositions that subjects in the written-to-initiated, oral-to-initiated, written-to-uninitiated, and oral-to-uninitiated treatment conditions used in their communiques were 19.36, 22.00, 22.29, and 23.07 respectively. Table 1 shows the mean percentages of initial base propositions which underwent reduction transformations across the four communicative conditions. As can be seen, the trends were in the predicted direction.
[Table 1. Mean percentage of initial base propositions which underwent reduction transformations, by mode (written, oral) and addressee (initiated, uninitiated). Values with different superscripts differ at p < 0.025. Denominators indicate the mean number of initial base propositions used in the communiques; numerators, the mean number of initial base propositions which underwent reduction. Cell entries are not recoverable from this copy.]
An ANOVA performed on the data revealed that both main effects and the interaction were significant: for initiated versus uninitiated, F(1,52) = 47.10, p < 0.001; for written versus oral, F(1,52) = 6.182, p < 0.025. The results of two-tailed t-tests performed on all pairwise comparisons of the treatment means showed that all means differed significantly from one another with the exception of the written-to-initiated and oral-to-initiated treatment means, which differed at the 0.10 level.

From these results we may draw the following conclusions: (1) that language users employ a significantly higher percentage of reduction transformations when communicating to topic-initiated addressees than when communicating to topic-uninitiated addressees; (2) that language users, when communicating to uninitiated addressees, employ a significantly higher percentage of reduction transformations when using a written rather than an oral mode (in the initiated-addressee conditions, the same trend occurred, though not at the conventional level of statistical significance, p < 0.10); (3) that language users employ the highest percentage of reduction transformations when communicating in a written mode to topic-initiated addressees; and (4) that language users employ the smallest percentage of reduction transformations when communicating in an oral mode to topic-uninitiated addressees.

Table 2 shows the mean numbers of the different surface structure constructions which resulted from the reduction of initial base propositions.
Table 2. Mean number of different surface structures which resulted from reduction of initial base propositions

                                       Written                  Oral
Surface structure                      Initiated  Uninitiated   Initiated  Uninitiated
Simple pronominal substitutions        0.36ab     0.21a         0.57ab     0.71b
Conjoined sentences with deletions     2.78a      3.07a         2.80a      2.00a
Nonrestrictive adjectivals             1.57a      0.93a         1.21a      0.79a
Restrictive adjectivals                9.43ab     10.36a        10.86a     6.93b
Infinitives                            0.79a      0.29a         0.29a      0.21a
Adverbials                             1.57a      0.79ab        0.86ab     0.21b

For each surface structure, values with different superscripts differ at p < 0.05 protection level using Duncan's Multiple Range Tests.
The figures in Table 2 are not mutually exclusive in that some constructions occurred concomitantly (e.g., conjoined restrictive clauses). The Table shows that the most frequently used reduction transformations were those involved in the construction of restrictive adjectivals. The frequent use of this specific grammatical construction is attributable to the fact that many of the base
propositions were in the form of to be + Pred., a structure which invites reduction to restrictive adjectivals. Comparing the frequency of restrictive adjectivals across the four communicative conditions raises an important question: if restrictive constructions were the primary source of reduction transformations, why did the written-to-initiated condition, which was predicted to influence subjects to employ the highest percentage of reduction transformations, fail to produce the largest number of these constructions? The answer to this question is that subjects in the written-to-initiated condition omitted an average of about four base propositions from their communiques. The base propositions most commonly omitted were those that would have most likely been manifested as restrictive adjectivals had they not been omitted (e.g., instead of a cell absorbs a glucose molecule, subjects in this condition often wrote a cell absorbs glucose, thus omitting the information that glucose is a molecule). It is interesting to note that the subjects' decisions to omit base propositions from their communiques appeared to be influenced by the same factors that influenced them to employ reduction transformations; that is, subjects in the topic-initiated conditions omitted more base propositions than subjects in the topic-uninitiated conditions, and subjects in the written-mode conditions more than subjects in the oral-mode conditions.

Table 2 also shows that Duncan's Multiple Range Tests yielded few statistically significant differences between the means of each surface structure construction. However, the mean numbers of nonrestrictive adjectivals, adverbials, and infinitives followed the same general pattern across the four communicative conditions that was predicted for the overall occurrence of reduction transformations (restrictive adjectivals followed the same pattern except that the written-to-initiated condition failed to produce the largest mean number). This suggests that it was the combined effect of these reduced surface structures which resulted in the overall findings reported in Table 1.

Another analysis was performed to compare the percentages of optionally deletable relative pronouns that were included in the subjects' communiques. To include such a pronoun is to increase the use of redundant information in one's discourse. The mean percentages in the written-to-initiated, oral-to-initiated, written-to-uninitiated, and oral-to-uninitiated conditions were 2.9%, 9.2%, 3.1% and 25.7% respectively; an ANOVA yielded one significant main effect (written vs. oral, F(1,51) = 10.16, p < 0.005). These results suggest that language users manifest optionally deletable relative pronouns as a means of adding redundancy when communicating in an oral mode.

A final analysis was performed to determine the use of reduction transformations on those base propositions that were repeated in the subjects'
communiques. Again, if a proposition is repeated in a text, then a speaker or writer should regard the entire proposition as redundant information for the addressees despite their previous topic-cognizant status. A speaker or writer should, therefore, feel more inclined to apply reduction transformations to base propositions when they are repeated in a discourse. The mean numbers of repeated propositions in the written-to-initiated, oral-to-initiated, written-to-uninitiated and oral-to-uninitiated conditions were 1.29, 3.14, 1.79 and 6.36 respectively (an ANOVA of these data revealed no significant effects). The mean percentages of repeated base propositions which underwent reduction transformations across the four communicative conditions are shown in Table 3.
[Table 3. Mean percentage of repeated base propositions which underwent reduction transformations, by mode (written, oral) and addressee (initiated, uninitiated). Denominators indicate the mean number of repeated base propositions used in the communiques; numerators, the mean number of repeated base propositions which underwent reduction. Cell entries are not recoverable from this copy.]
Although an ANOVA yielded one significant main effect (initiated vs. uninitiated, F(1,29) = 4.89, p < 0.05), it is clear that subjects who were predicted to use a low percentage of reduction transformations on base propositions in their initial presentation in a communique used a high percentage of reduction transformations on propositions that were repeated in a communique. Thus, the notion that the initiated-uninitiated addressee variable should lose effectiveness in influencing subjects to employ reduction transformations differentially on repeated base propositions was supported.
Discussion

The findings of the present experiment supported the theory that reduction transformations serve a communicative function for language users by
enabling them to communicate the same propositions as when reduction rules are not employed, but with redundant information deleted. When the communicative setting of the experiment specified that the designated addressees would easily comprehend the base propositions, the subjects elected to use a much higher percentage of reduction transformations than when the setting specified that the addressees would have difficulty in comprehending the propositions. Thus, in general, the experiment demonstrated that a relationship exists between the transformational histories of sentences and (1) the conceptual background shared by language users, and (2) factors inherent in the way human beings communicate and comprehend information. In particular, the experiment supported the notion that language users tacitly know about reduction rules in terms of their redundancy-deleting function.

The redundancy-deletion theory of reduction transformations is closely related to the perceptual complexity theory, the difference between the two being that the latter predicts that reduction rules always increase perceptual complexity, whereas the former predicts that it is only when the material to be communicated is difficult to comprehend that reduction transformations increase perceptual complexity (when the material is easy to comprehend, reduction transformations function to delete unnecessary redundancy). Because of the similarity between the two theories, the redundancy-deletion theory can account for past findings which have supported the perceptual complexity theory. By using doubly self-embedded sentences in their experiments (see sentences (1) and (2) above), Fodor and Garrett (1967) and Hakes and Cairns (1970) presented their subjects with sentences which are difficult to understand (Kimball, 1973; Miller & Chomsky, 1963). With such sentences, the deletion of redundant information (the relative pronouns) should, and did, complicate the addressees' comprehension task. In the Valian and Wales (1976) experiment, "talkers" tended to state transformationally less distant sentences when asked "What?" The redundancy-deletion theory, which predicts the use of fewer reduction transformations the greater the addressee's difficulty in comprehending the message, thus accounts for their results.
References

Bach, E. (1974) Explanatory inadequacies. In D. Cohen (Ed.), Explaining linguistic phenomena. Washington, V. H. Winston and Sons.
Chomsky, N. (1965) Aspects of the theory of syntax. Cambridge, M.I.T. Press.
Chomsky, N. (1975) Reflections on language. New York, Pantheon Books.
Fodor, J. A., and Garrett, M. (1967) Some syntactic determinants of sentential complexity. Percept. Psychophys., 2, 289-296.
Foss, D. J., and Cairns, H. S. (1970) Some effects of memory limitation upon sentence comprehension and recall. J. verb. Learn. verb. Beh., 9, 541-547.
Hakes, D. T., and Cairns, H. S. (1970) Sentence comprehension and relative pronouns. Percept. Psychophys., 8, 5-8.
Haviland, S. E., and Clark, H. H. (1974) What's new? Acquiring new information as a process in comprehension. J. verb. Learn. verb. Beh., 13, 512-521.
Kimball, J. (1973) Seven principles of surface structure parsing in natural language. Cog., 2, 15-47.
Langacker, R. W. (1972) Fundamentals of linguistic analysis. New York, Harcourt Brace Jovanovich.
Liles, B. L. (1975) An introduction to linguistics. New Jersey, Prentice-Hall, Inc.
Miller, G. A., and Chomsky, N. (1963) Finitary models of language users. In R. D. Luce, R. R. Bush, and E. Galanter (Eds.), Handbook of mathematical psychology. New York, J. Wiley and Sons.
Thorne, J. P. (1970) Generative grammar and stylistic analysis. In J. Lyons (Ed.), New horizons in linguistics. Maryland, Penguin Books.
Valian, V., and Wales, R. (1976) "What's what": talkers help listeners hear and understand by clarifying sentential relations. Cog., 4, 155-176.
Vygotsky, L. S. (1962) Thought and language. Cambridge, M.I.T. Press.
Résumé

A theory is proposed according to which reduction transformations function to give speakers the ability to delete redundant information when they communicate with an addressee who knows the topic and/or when they use a written mode. To test this theory, an experiment was conducted in which students in an advanced cell physiology class were presented with a list of sentences related at the level of deep structure (base propositions), all concerning the topic of cellular energy. The students were then asked to transmit these sentences, either orally or in writing, to graduate students in biochemistry or to first-year humanities students. The analysis, which concerned the use of reduction transformations in communicating the base propositions, supports the redundancy-deletion theory presented in this article. The implications of these results for a perceptual complexity theory of reduction transformations are discussed.
Appendix A: The base propositions

Cellular Energy
1. Cells need energy to live.
2. A cell obtains energy from a molecule.
3. The molecule is glucose.
4. The molecule contains energy.
5. The energy is potential energy.
6. A cell absorbs the molecule.
7. The cell has enzymes.
8. The enzymes cause a reaction.
9. The reaction is chemical.
10. The reaction breaks down glucose to water.
11. The reaction breaks down glucose to carbon dioxide.
12. The reaction releases energy.
13. The energy is free energy.
14. The energy is in the form of heat.
15. The energy causes a reaction.
16. The reaction combines a phosphate compound with a molecule.
17. The molecule is adenosine diphosphate.
18. The reaction then synthesizes a molecule.
19. The molecule is adenosine triphosphate.
20. Adenosine triphosphate is a compound.
21. The compound is universal.
22. The compound is for storing energy.
23. The compound is for releasing energy.
24. This energy is cellular energy.
Appendix B: Examples of surface structures resulting from reduction of base propositions

Simple pronominal substitution (base propositions 3 and 4): The molecule is glucose. It contains energy.
Conjoined sentences with deletion (3 and 4): The molecule is glucose and contains energy.
Nonrestrictive adjectival (3 and 4): The molecule, which is glucose, contains energy.
Restrictive adjectival (7 and 8): The cell has enzymes which cause a reaction.
Infinitive (16, 17 and 18): The reaction combines a phosphate compound with an adenosine diphosphate molecule to synthesize a molecule.
Adverbial (2, 3 and 6): A cell obtains energy from a glucose molecule by absorbing the molecule.
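For concreteness, the first two reduction categories above can be mimicked mechanically. The sketch below is our own illustration, deriving surface sentences from base propositions 3 and 4 by simple string manipulation; it is not the procedure used in the experiment, only a demonstration of what the reductions do.

```python
# Illustrative sketch: two of the reduction categories above, applied to
# base propositions 3 and 4. The string surgery is a simplification for
# demonstration; it is not the authors' experimental procedure.

def split_subject(prop):
    """Split 'The molecule is glucose.' into ('The molecule', 'is glucose')."""
    words = prop.rstrip(".").split()
    return " ".join(words[:2]), " ".join(words[2:])

def conjoined_deletion(p1, p2):
    subj, pred1 = split_subject(p1)
    _, pred2 = split_subject(p2)
    return f"{subj} {pred1} and {pred2}."

def nonrestrictive(p1, p2):
    subj, pred1 = split_subject(p1)
    _, pred2 = split_subject(p2)
    return f"{subj}, which {pred1}, {pred2}."

p3 = "The molecule is glucose."
p4 = "The molecule contains energy."

print(conjoined_deletion(p3, p4))  # The molecule is glucose and contains energy.
print(nonrestrictive(p3, p4))      # The molecule, which is glucose, contains energy.
```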
Cognition, 5 (1977) 133-145
© Elsevier Sequoia S.A., Lausanne - Printed in the Netherlands

Discussion

Response to Dresher and Hornstein

ROGER C. SCHANK and ROBERT WILENSKY
Yale University
The field of Artificial Intelligence has been in existence for about twenty years. The task we have chosen to undertake - building programs that exhibit 'intelligence' - is admittedly a very difficult one. Thus it is hardly surprising that in our short history we have looked toward related fields for help. Our interactions with psychology, for example, have benefitted not only AI, but psychology as well. For example, AI methodology has been used by Abelson, Anderson, Norman, Rumelhart and Schmidt to test their models of cognition, memory, and belief systems. Many ideas from AI have found their way into psychological theories; the notion of conceptual primitives in the recent work by Miller and Johnson-Laird is a case in point. AI theories themselves have been found interesting enough to motivate experimental psychologists to test them in the lab. For example, Bower's group has done a number of experiments based on AI theories of language, Nelson on AI theories of child language learning, and Kosslyn on AI theories of visual processing.

Given all the interest psychologists have shown in AI, one might wonder how it is that Dresher and Hornstein's review of AI research is so condemning. It is not merely that they find AI theories flawed, or inferior to linguistic theories; they conclude that "AI does not in any way address the central questions that any scientific inquiry into language ought to address" and is "of no psychological... interest because it is totally devoid of any principles which could serve as even a basis for a serious scientific theory of human linguistic behavior" (p. 322).

The root of the problem lies in a paradigmatic difference between AI researchers and the linguistic theorists represented by the authors, who are sometimes referred to as the 'interpretive semanticists'. We shall argue that once these differences are made clear, Dresher and Hornstein's criticisms of AI become at best innocuous and at worst arrogant, since they are based largely on the assumption that the interpretivists' philosophical view of the
nature of language constitutes the only legitimate basis for a program of natural language research.

Before delineating these paradigms and their differences, it should be pointed out that the views of the interpretive linguists are not held universally by transformational linguists, let alone by all linguistic theorists. For example, 'generative semanticists', who include Lakoff and Fillmore, question many of the fundamental assumptions of the interpretivists, and also have a much more sanguine view of AI research. Thus when we speak of linguistic theorists here, our comments are directed primarily toward the interpretivists.

The dispute between the linguistic theorists and AI researchers is based largely on the following difference of aims: AI researchers are concerned with developing programs that are capable of intelligent behavior. When we study human cognition, therefore, we are interested in discovering the processes people have that enable them to behave intelligently. For example, an AI theory of language might include a step-by-step description of the mental processes that a person goes through upon hearing an utterance, an explanation of how memory is organized so that these processes can be carried out, a description of the machinery needed to effect them, and of what the result of all this work would be. Yet another piece of AI theory would describe the processes related to generating an utterance: Under what circumstances does a person initiate a discourse? Respond to another's remark? How does a person decide what to express, and how to express it? And so on.

Transformational linguistics, on the other hand, is concerned with the problem of how to characterize the grammatically correct sentences of a language, in particular, with how to characterize them in such a way as to discover characterizations that are true over all languages. Thus, the linguists don't concern themselves with how a person actually goes about understanding a sentence, or generating a sentence, but with abstract characterizations about language that are independent of how people use their language. That is, the transformational grammar of a language is not a psychological model of language use.

While the aim of the transformationalists is to produce a theory of language that is independent of the psychological makeup of the language user, the terminology used by the transformationalists has often led people to think otherwise. For example, in their own exposition of the goals of linguistic theory, Dresher and Hornstein argue that since language users can produce and understand an infinite number of sentences in their language,

What a speaker has learned cannot be a list of sounds or sentences, but some system of rules, which we call a grammar, which can then be used to generate the entire language. (p. 323)
While the topic of discussion has been a person's ability to produce and understand an unlimited number of sentences, the rules of grammar that the transformationalists refer to here have nothing whatsoever to do with rules that a person actually USES to produce or understand sentences. Rather,

When we speak of a grammar as generating a sentence with a certain structural description, we mean simply that the grammar assigns this structural description to a sentence. ...we say nothing about how the speaker or hearer might proceed... to construct such a derivation. The term 'generate' is familiar in the sense intended here in logic, particularly in Post's theory of combinatorial systems. (Chomsky, 1965, p. 9)

Thus grammar rules 'generate' a language in the mathematical sense that such rules are logically capable of characterizing grammatical sentences, not in the sense that a person would actually use such rules to generate a sentence. It is at this point that the linguists and AI theorists part company. The rules of language that AI researchers are interested in are rules about how to process language. Transformational grammar rules are explicitly not rules about processes. The motivation behind developing the latter kind of rules is at odds with most of the aims of AI:

Linguistic theory is concerned primarily with an ideal speaker-listener... who is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors... in applying his knowledge in actual performance. (Chomsky, 1965, p. 3)

This is Chomsky's famous 'performance-competence' distinction. The idea as stated is to abstract away certain 'irrelevancies' from the study of language, such as stuttering or forgetting. However, the transformationalists implicitly take this to mean that any aspect of using language lies outside the realm of a linguistic theory. According to Dresher and Hornstein, this excludes from study the

Problems of how language is processed in real-time, why speakers say what they say, how language is used in various social groups, how it is used in communication, etc. (p. 328, emphasis added)

These aspects of language are called performance aspects. By this terminology, AI strives to produce performance theories. Of course, in producing such theories, AI also tries to simplify the problems it studies. In fact, almost every AI model of language use is an ideal user. The basic difference is that we have thought that the problem of how people use language to communicate was too fundamental to be eliminated from the study of language, or to be relegated to some secondary role.
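The contrast can be made concrete with a toy example: the rules below 'generate' sentences only in the combinatorial sense - they characterize a set of strings - and say nothing about the processing steps a speaker or hearer performs. The grammar itself is our own minimal illustration, not one proposed by either side of the debate.

```python
# "Generate" in the combinatorial sense: these rules characterize a set of
# sentences; nothing here models how a person produces or parses one.
# The toy grammar is purely illustrative.

rules = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["linguist"], ["grammar"]],
    "V":  [["studies"]],
}

def generate(symbols):
    """Enumerate every terminal string derivable from the symbol list."""
    if not symbols:
        yield []
        return
    head, rest = symbols[0], symbols[1:]
    if head in rules:
        for expansion in rules[head]:
            yield from generate(expansion + rest)
    else:
        for tail in generate(rest):
            yield [head] + tail

for sentence in generate(["S"]):
    print(" ".join(sentence))  # e.g., "the linguist studies the grammar"
```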
Since the concept of performance covers so much of what cognitive scientists appear to be interested in studying, and since the transformationalists feel these to be secondary aspects of language study, it might not be clear exactly what it is that the linguists feel we should examine. This they call the area of linguistic competence: The linguistic competence of a speaker is "the tacit knowledge that a speaker has of the structure of his language" (p. 378).

The confusion here is about who 'knows' a transformational grammar: the language user, or the transformationalist? Apparently, Dresher and Hornstein are themselves caught up in this confusion. For example, they argue that any theory of performance must be based on a theory of competence because to assume otherwise would be

analogous to suggesting that a group of scientists possessing knowledge of the theory of rocketry... would not use it at all in the building of any particular rocket. (p. 328)

The theoretical knowledge of rocketry is presumably analogous to the knowledge of a transformational grammar. However, rocket scientists know this theory in a rather conventional sense; for example, they are quite capable of giving lectures about it. The only lectures on transformational grammar, however, are given not by language users, but by transformationalists. If a transformational grammar is a theory of language, then it is something known to transformational language theorists, not to the object of their study. If people 'know' a theory of their grammar, then by the same token, rockets 'know' the theory of rocketry, since they behave in accordance with it.

Since the kind of theory sought by linguists is of such a different nature from the theories sought by AI researchers, the question arises as to whether the criteria by which the theories can be evaluated should be the same for both groups. One of Dresher and Hornstein's major contentions is that the methodological issues applicable to developing a theory of grammar must also apply to AI, and that, by these criteria, AI is found to be deficient.

According to the transformationalists, a good theory of grammar must be explanatorily adequate. That is, for a theory of language

to be of scientific interest, it must address itself to the principles according to which languages are organized and learned. (p. 329)

The interpretivists call this set of principles a 'universal grammar'. Just as a transformational grammar is an abstract set of rules that characterizes an individual language, the universal grammar is an abstract set of rules that characterizes transformational grammars. That is, it describes which
transformational grammars may be constructed, and which may not. The transformationalists feel that such a set of rules is explanatory, because it explains how any given transformational grammar is possible.

Dresher and Hornstein argue that the theories of AI must be explanatory as well. Thus, for an AI theory to be adequate, it must

not be limited to the production of models which can process language, but aim to discover general principles from which such models will necessarily follow. (p. 328)

There are two problems with this criterion. First, whether something is interesting or explanatory is highly relative. For example, a theory of language that described every detail of how a person understood the sentences of a language, and every detail of how that person went about generating sentences, as well as all the cognitive processes in between, would certainly be interesting to AI researchers and psychologists, but apparently not to Dresher and Hornstein. In addition, a theory that is explanatory to a linguist may not be explanatory to a cognitive scientist. For example, the following is a universal principle, given as an example in the Dresher and Hornstein article, to which they attribute explanatory power:

In the case when no pronoun is left behind, it is not possible to relativize only one element of a coordinate structure. (p. 324)

This is some rule which no transformational grammar can violate. However, from the viewpoint of cognitive science, a theory that was composed of such rules would not achieve what we may term procedural adequacy. That is, in addition to stating this general rule, an AI theory would have to give the processes that lead to this rule being true: What is there about the way a person processes language that this rule characterizes? Is the rule basic to the structure of the human mental processor? Is it an artifact of other, more fundamental rules that hold not just for processing language, but for other cognitive mechanisms as well? Or is the rule just coincidentally true for all grammars, and not an inherent attribute of the mechanism it characterizes? Thus while universal principles may constitute answers to transformationalists, they merely pose questions for AI researchers, who would find it necessary to account for the rules procedurally. Of course, this does not imply that a good methodology for AI research is to try to account procedurally for the rules linguists derive. One might just as well ask the linguists to wait for us to describe all mental processes, and then derive their universals from our descriptions.

There is also a methodological problem with the criterion of explanatory adequacy. The linguists feel that the goals of determining individual transformational grammars and of finding linguistic universals should be pursued
simultaneously, while AI researchers generally concede that the problem of how people learn to use their language cannot be profitably approached until we have a serious theory of what it is people end up learning to do. In order to make a theory about how a child learns, we need a theory of what it knows initially, and of what it has eventually learned. This latter theory is of course the theory of language use AI is trying to develop. So from our viewpoint, the problem of learning, while interesting and important, is not something that the rest of our theories are based on. If anything, in AI just the reverse is true.

Thus the goals of AI and of linguistic theory are not the same. A theory that is explanatory to one group need not be explanatory to the other. AI theories should not be judged by their ability to shed light on linguistic universals any more than transformational theory should be judged by how well it describes language processes. Each can claim that the goals of the other are implicit in their own work, but this hardly contributes to our understanding of language.

Dresher and Hornstein's critique of AI methodology is based on their identification of AI's goals with those of the linguists. Their claim is that AI people are interested only in building computer programs that process language, and that such programs cannot distinguish between competing theories. This is true because "to create a functioning language understanding system, one need only some grammar, some semantic and pragmatic components, some lexicon and so on" (p. 330). That is, such a system would not address the issue of linguistic universals.

One might infer from the above statement that there exist dozens of competing theories, each of which could easily be used to build a general natural language processing program. Unfortunately, no one has been able to build an all-purpose natural language processor, nor do linguistics, AI or psychology possess a theory that would enable us to do so. What we do have is a number of competing theories, each of which accounts for some aspects of language use better than the others. Given that our problem is not one of choosing from among a number of adequate theories, but of defining the limits and failings of existing theories, some sort of experimental methodology is needed. In this respect, AI methodology - that is, building a program as a test of a theory - has been extremely useful in pointing out deficiencies of theories, and in uncovering problems that were not well understood beforehand. For example, the problem of controlling the proliferation of inferences that occurred in programs like MARGIE (Schank et al., 1973) was not even recognized as a problem until we built a program that attempted to make inferences.
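The proliferation problem can be stated concretely: if each inferred proposition licenses further inferences, unbounded forward chaining grows combinatorially, so a working program must bound or prioritize the chaining. The sketch below merely illustrates the phenomenon; the rules and the depth cutoff are our own invention for this example, not MARGIE's actual mechanism.

```python
# Minimal illustration of inference proliferation: each derived proposition
# triggers further rules, so unbounded forward chaining explodes. The rules
# and the depth bound are illustrative; this is not MARGIE's mechanism.

RULES = {
    "John ate": ["John was hungry", "John had food"],
    "John was hungry": ["John had not eaten recently"],
    "John had food": ["John obtained food", "John could afford food"],
    "John obtained food": ["John went somewhere with food"],
}

def infer(fact, depth, max_depth):
    if depth > max_depth:
        return set()
    inferred = set()
    for consequence in RULES.get(fact, []):
        inferred.add(consequence)
        inferred |= infer(consequence, depth + 1, max_depth)
    return inferred

# Deeper chaining bounds yield strictly larger inference sets.
for bound in (1, 2, 3):
    print(bound, sorted(infer("John ate", 1, bound)))
```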
It hardly seems necessary to make the point that some method of checking a theory's relevance to reality is needed. However, the authors' critique of AI methodology is applicable to any method of distinguishing between competing theories of language use. For example, the psychology literature is full of descriptions of experiments that test different theories of language use by trying to determine which theory predicts people's behavior better. Dresher and Hornstein's criteria would discredit the results of these experiments, just as they do the results of AI programs, because to perform the experiments one needs only "some grammar, some semantic and pragmatic components, some lexicon and so on". That is, we would be performing experiments without knowing if they were based on the correct theory of language. But it is just the fact that we don't know what the correct theory is that makes experimentation so useful in the first place.

Next, there is the specific criticism of Winograd, Minsky, and our own work. The work on Augmented Transition Networks is also criticized. However, this work is of a somewhat different nature than the rest, and will not be discussed here.

In the case of Winograd, most of the points made are correct but inappropriate. For example, they attack Winograd's use of Halliday's systemic grammar, the nature of his semantic markers, and so on, all of these being problems tangential to Winograd's main purposes. The important part of Winograd's theory involved the interactions among the components of his system. The authors object to this not on the basis of its validity, but by asking why the components act the way they do. However, as we have discussed at length above, a theory of why something happens means a different thing to a linguist than it does to a cognitive scientist, and from the viewpoint of the cognitive scientist, it is necessary to know what happens before we can know why it happens.

The criticism of Minsky's frame theory is equally off base, primarily because the theory was not intended as a theory of language, but as a unifying theory of AI research. As such, it is necessarily vaguer than if it were a specific theory of language, but it is hardly vacuous. The authors not only find Minsky's frame theory vacuous, but reject the one substantive claim that they attribute to it, "that cognition is mostly a matter of retrieving structures from memory and that learning is therefore mostly a matter of storing these structures" (p. 362). Their refutation is based on the consideration that

If by 'structures' Minsky means representations of particular things or events... then at least for the case of language, he is surely incorrect. For we have seen that language, being an open-ended system, cannot be learned as a list of patterns, much less as a list of sentences, but must be learned as a system of rules. (p. 362)
It is hard to imagine how someone who has read the Frames paper can think that frames are merely specific memories. The cognitive structures that frames represent can include practically any level of abstraction from experience. For example, Minsky talks about frames for specific rooms, which can be used to recognize those rooms, and general room frames that can be used to recognize a room even if the viewer has not seen that particular room before. Furthermore, frames may have attached to them all sorts of information about what parts of them are important and what other frames to look at if the frame being tested fails, and so on. At any rate, simple memories they are not.

It is interesting to note that in criticizing Minsky's understanding of linguistics, the authors state that Minsky misconstrues the aims of syntactic research because of "his emphasis on language as a device for conveying meaning (an emphasis which is inherent in the task of communicating with machines)" (p. 335). On p. 333, the authors maintain that "the ultimate aim of research into language is to explain how language is organized to convey meaning". It is hard to see how emphasis upon the ultimate aim of language research can lead to a misconception.

We come next to the criticism of our own natural language research. What is most surprising about these criticisms is that the authors apparently failed to do their homework. For example, on p. 363, the authors claim that we are "not trying to present an actual computer program that can understand some fragment of natural language". On the contrary, we have always tried to test our theories by using them to build computer programs. These include programs that can understand entire stories in English, paraphrase them, summarize them, answer questions about them, and translate them into Russian, Dutch, Spanish and Mandarin Chinese (for example, see Schank et al., 1973, and Schank et al., 1975). Some of these stories were taken directly from the newspaper, and constitute the first instances of computer understanding and summarization of actual newspaper articles. While the goals of our research are not limited to the production of computer programs, we find them to be an invaluable means of testing our theories, and also hope that our theories will have practical applications.

Elsewhere, the authors object to our work because we fail to "give some indication of the type of entity that a concept can be", nor do we "indicate some way of showing which CD-diagram goes with which sentence". This is again inconsistent with the facts. Certainly we have not enumerated all conceptualizations that the human mind is capable of entertaining. However, we have provided systematic representations for large classes of them. For example, a major area of our research has been the domain of actions that occur in everyday situations, and a basic result is that only a few 'action
mitives’ are needed to effect these representations. It is difficult to believe that someone who has read anything we have written could have missed this basic thesis. Dresher and Homstein’s ignorance of our work is further demonstrated by their attack on our theory of natural language understanding, which is expectation based. The authors claim that Schank “never gives more than the vaguest indications of what these predictions are” (p. 372). One need only glance at Schank (1975) (not in the references of Dresher and Hornstein’s paper) to find scores of examples of predictions, along with detailed descriptions of exactly how they are to be used. Some of these predictions are presented both in English, and in the format in which they existed for the computer program that actually used them to understand. In the cases where issue is actually taken with our views, the authors appear to base their critique on some a priori knowledge of what is essentially an empirical issue. For example, we argue that there is no intrinsic need for an understander to perform syntactic analysis of a sentence beyond the point where this is useful for comprehending the sentence. Thus in understanding a sentence like Time flies like an arrow. for which linguists can find at least four different syntactic parses, we would claim that it was unlikely people perform all these parses because one semantic interpretation is so overriding (most people report difficulty finding all the syntactic alternatives). Our objection is not that people don’t use syntax for understanding, but that there is no reason to assume people perform syntactic analysis independently of finding a semantic interpretation of a sentence. That is, we have no reason to assume that people find all the syntactic parses of the above sentence, and then eliminate most of them on semantic grounds. It is our claim that the psychological importance of syntax is an empirical question, and that linguistic theorists have attributed an unjustified significance to it. The authors deny that linguistics has gone overboard on syntax, but the linguistics’ literature is full of statements like the following: The syntactic component is fundamental in the sense that the other two components both operate on its output. That is, the syntactic component is the generative source in the linguistic description. This component generates the abstract formal structures underlying actual sentences... . In such a tripartite theory of linguistic descriptions, certain psychological claims are made about the speaker’s capacity to communicate fluently. The fundamental claim is that the fluent speaker’s ability to use and understand speech involves a basic mechanism that
enables him to construct the formal syntactic structures underlying the sentences which these utterances represent, and two subsidiary mechanisms: one that associates a set of meanings with each formal syntactic structure and another that maps such structures into phonetic representations, which are, in turn, the input to his speech apparatus. [Katz and Postal (1964), pp. 1-2, emphasis added]

Dresher and Hornstein go on to claim that our position is tenable only if the syntactic processing necessary to distinguish English from gibberish has already been done. The point we have been trying to make, and that linguists cannot seem to understand, is that people don't go around trying to distinguish English from gibberish, because the sentences people speak are not randomly selected from English and gibberish. The problem of distinguishing English from gibberish arises when one wishes to characterize the grammatically correct sentences of a language (a transformational goal), not when one wishes to model a human language user (an AI goal). This is not a claim that people don't use syntactic knowledge as part of their understanding mechanism, nor that people don't object to gibberish when they hear it. What people object to when they hear gibberish is the fact that they can't understand it, not that they find it ungrammatical.

Dresher and Hornstein have misconstrued this to mean that we feel syntax plays no role in understanding. As an example, they claim our position leads to the conclusion that the following sentences should have the same meaning (p. 365):

The tall man hit a round ball.
Tall man the hit small round ball a.

As it turns out, the placement of articles is an important syntactic item in our expectation-based theory of language comprehension, since an article signals the introduction of a noun phrase, an expectation that the latter sentence violates. However, while the authors view the placement of articles to be a minor syntactic fact, for us it is a major one; the actual syntactic knowledge with which we have found it necessary to equip a language understander is of about this level of complexity. Our objection is directed primarily at those elements of syntax that go beyond the surface structure of a sentence, such as the hypothetical deep structures and transformational rules of the interpretivists. While such entities may be useful in characterizing grammatically correct sentences, it by no means follows that people use them during processing. What syntax people do use, and how they use it, are issues that must be addressed by a procedurally adequate theory of language use, not by an explanatory theory of language competence.
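The article rule just mentioned can be rendered as an expectation at roughly the level of complexity described above: an article sets up the expectation of a coming noun, possibly preceded by adjectives. The word lists and checking logic below are our illustrative sketch, not the actual understander.

```python
# Sketch of article-driven expectations: an article makes the understander
# expect a noun phrase to follow. Word lists and logic are illustrative only.

ARTICLES = {"the", "a", "an"}
ADJECTIVES = {"tall", "small", "round"}
NOUNS = {"man", "ball"}

def article_expectations_met(sentence):
    words = sentence.lower().rstrip(".").split()
    for i, word in enumerate(words):
        if word in ARTICLES:
            j = i + 1
            while j < len(words) and words[j] in ADJECTIVES:
                j += 1  # adjectives may intervene before the noun
            if j >= len(words) or words[j] not in NOUNS:
                return False  # expectation violated: no noun follows
    return True

print(article_expectations_met("The tall man hit a round ball."))       # True
print(article_expectations_met("Tall man the hit small round ball a."))  # False
```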
The authors also deny that our conceptual representations are useful for understanding. They claim instead that the information these structures provide is just as readily obtainable from the syntactic structure. Thus they claim that in the sentence

John ate the ice cream with a spoon

"Looking only at the syntactic structure..., we can say that... with a spoon is some sort of instrumental associated with ate" (p. 368). If this is so, then consider the sentences

John ate the ice cream with a cherry.
John ate the ice cream with a friend.

By the same rule of syntax, we can say that cherries and friends must be some sort of instrumentals associated with ate. In fact, Schank makes this very point about the limitations of syntax in an example quoted on p. 364 of Dresher and Hornstein's paper.

In quoting this example, the authors state that "For reasons that remain obscure, Schank insists on representing the action represented by eat as INGEST", and later on, that these conceptual primitives have been selected without explanation (p. 372). Our representation may be controversial, but the reasons for it are hardly obscure to anyone who has read the numerous articles that have been written regarding this matter. The representation is meant to capture a conceptual generalization that underlies a large number of different actions, such as eating, drinking, breathing, inhaling, exhaling, and so on. The justification for the representation is not at all a "vague intuition" as the authors claim, but its demonstrable ability to be used as a basis for inference. The inferences that can be made from the representation are not left as an exercise for the reader, as Dresher and Hornstein might lead us to believe, but are described in detail in Schank (1975) and Schank and Rieger (1974). Yet the whole notion of inference, which is absolutely crucial for understanding the motivation behind most of our research, is not even given lip service by the authors.

A large part of the remainder of the critique addresses the concern that "there is no principled way to expand any CD-diagram into a more complex CD-diagram. Each step requires new information to be brought in" (p. 369). Interestingly enough, this is precisely the point that we are trying to make. Not that there are no rules for manipulating CDs, but that such rules as there are depend upon the task at hand. Thus in different contexts, different rules of manipulation apply, and different pieces of information must be brought in. All these points are inherent in any theory of processing.
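The generalization captured by INGEST can be exhibited as a data structure: eating, drinking, and breathing share one conceptual skeleton, so an inference rule stated once over the primitive applies to all of them. The encoding below is our illustrative sketch, loosely following the published CD notation rather than reproducing it.

```python
# Conceptual-dependency-style representations for three surface verbs that
# share the primitive INGEST. The dictionary encoding is our illustration,
# loosely following Schank's published notation.

def ingest(actor, obj, instrument=None):
    return {"primitive": "INGEST", "actor": actor,
            "object": obj, "instrument": instrument}

ate      = ingest("John", "ice cream", instrument="spoon")
drank    = ingest("John", "water",     instrument="glass")
breathed = ingest("John", "air")

# One inference rule stated over the primitive covers all three verbs:
def infer_object_inside_actor(cd):
    if cd["primitive"] == "INGEST":
        return f"{cd['object']} is inside {cd['actor']}"

for event in (ate, drank, breathed):
    print(infer_object_inside_actor(event))
```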
In fact, it was to address this problem that the notion of a script was developed (see Schank and Abelson, 1975). A script is an organization for knowledge about familiar, stereotyped situations. A person trying to understand a story whose content conformed to the structure of a script would use the script to make the inferences necessary to connect together the sentences of a text. For example, in the story

John went to a restaurant. He ordered a hamburger. When the check came, he paid it and left a large tip.
most people make the inference that John ate the hamburger, although this is not explicitly mentioned in the story. In order to make this inference, the understander had to have knowledge about what goes on in a restaurant, i.e., the restaurant script. Thus the application of scripts is one way in which CD-diagrams may be 'expanded' during processing. Not only have we used this notion of a script in building a computer program capable of understanding stories, but the idea has attracted a fair amount of attention from psychologists who have attempted to test the validity of scripts empirically. Yet Dresher and Hornstein ignore the role scripts play in our theory throughout their discussion.

The authors characterize our work as nothing more than a restatement of known problems. We claim that a more careful reading of the literature will dispel this view. The computer programs based on our theories could not understand stories and generate reasonable natural language responses to questions had we simply restated problems to them, or supplied them with "vague intuitions" about language. The basic error the authors make in their critique is to treat CD as if it were a transformational theory, and to ask how it treats individual sentences completely out of context, independent of the task to be performed. However, as we have stated repeatedly, we are not interested in developing an abstract theory that characterizes individual sentences, but a process theory that describes the function of a human language user in realistic task environments.

The authors state in summary "that there is no reason to believe that... AI research into language could lead to explanatory theories of language" (p. 377). In this, they are totally correct. We probably never will develop a theory that is explanatory in the transformationalist sense, any more than they will develop a theory that is procedurally adequate, in our sense. However, we will continue to construct such theories, and build programs that test them. We also hope that psychologists maintain the interest they have shown in procedural theories, and continue to subject them to the scrutiny of their empirical studies.
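Returning to the restaurant story above, the script mechanism admits a minimal sketch: a script is an ordered list of expected events, and the events a story leaves unstated are inferred by matching the story against the list. The event inventory and matching scheme below are our own illustration, not the SAM program.

```python
# Minimal script application: events mentioned in a story are matched against
# an ordered restaurant script, and the unmentioned ones are inferred.
# The event list and matching scheme are illustrative; this is not SAM.

RESTAURANT_SCRIPT = [
    "enter restaurant", "order food", "eat food",
    "receive check", "pay check", "leave tip", "exit restaurant",
]

# Events explicitly stated in the story about John:
story_events = {"enter restaurant", "order food", "receive check",
                "pay check", "leave tip", "exit restaurant"}

inferred = [event for event in RESTAURANT_SCRIPT if event not in story_events]
print("Inferred:", inferred)  # -> ['eat food'], i.e., John ate the hamburger
```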
The fact of the matter is that there is more than one way to study the problem of language, and that building a language user is one such way. If the interpretive semanticists do not wish to study cognitive processes, they need not, but that does not diminish the validity of such research. Science’s understanding of the human mind is not likely to be increased by limiting the few techniques we have available for its study.
References

Chomsky, N. (1965) Aspects of the theory of syntax. Cambridge, Mass., M.I.T. Press.
Katz, J., and Postal, P. (1964) An integrated theory of linguistic descriptions. Cambridge, Mass., M.I.T. Press.
Schank, R. C. (1975) Conceptual information processing. Amsterdam, North Holland.
Schank, R. C., and Abelson, R. P. (1975) Scripts, plans and knowledge. Proceedings of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, USSR.
Schank, R. C., Goldman, N., Rieger, C., and Riesbeck, C. (1973) MARGIE: Memory, analysis, response generation and inference in English. In Proceedings of the Third International Joint Conference on Artificial Intelligence.
Schank, R. C., and Rieger, C. (1974) Inference and the computer understanding of natural language. Artif. Intell., 5, 373-412.
Schank, R. C., and Yale A. I. Project (1975) SAM - A story understander. Yale University, Department of Computer Science Research Report 43.
Cognition, 5 (1977) 147-149
© Elsevier Sequoia S.A., Lausanne - Printed in the Netherlands

Discussion

Reply to Schank and Wilensky

B. ELAN DRESHER
University of Massachusetts, Amherst

NORBERT HORNSTEIN
Harvard University
Among the various issues discussed by Wilensky in his response to our article, two major points can be distinguished. The first major point is that our critique of AI research is informed by a narrow parochial view of what research into language ought to consist of. Wilensky argues that our exclusive concern with linguistic competence leads us to downplay the importance of any research into performance. His second major point is that our specific criticisms of the work we discuss are either inappropriate or based on an inaccurate representation of that work.

Consider the first point. Our critique of AI research into language is not that AI considers problems which are different from those investigated by a particular group of linguists, nor even that the AI view of language is different from certain views current in linguistics. Rather, we argue that the work we discuss is unscientific in that it does not lead to the elaboration of general principles of any sort, be they of competence or performance or whatever. The aim of our article is to demonstrate that this claim is true in the case of the research we discuss and to suggest that the reason for this might be that the real - as opposed to the stated - goals of work in AI are technological rather than scientific.

We nowhere claim that work on questions of performance is unimportant or uninteresting; in fact, we explicitly claim the contrary. Our point is that work on these problems, if it is to be of scientific interest, would have to proceed in ways methodologically similar to work done in other domains of scientific research, through the elaboration of general principles and not through the writing of programs which could carry out particular tasks in limited domains. The problem with the AI research we discuss is that it never arrives at the elaboration of general principles, nor does it suggest how such principles might be elaborated in the future.

Against this, Wilensky claims that AI theories must meet a condition of 'procedural adequacy', which appears to be, if anything, an even more
stringent condition than what we have been calling explanatory adequacy. The fact remains, however, that none of the work we discuss comes anywhere near meeting either of these conditions. For the questions which Wilensky groups under the heading of procedural adequacy can be answered only by developing just the sort of general principles which we show to be lacking in the work we discuss. Substituting procedural adequacy for explanatory adequacy thus in no way affects our argument.

As for theory-testing, our claim is that the existence of a program which is successful in some limited domain does not provide a test of explanatory adequacy (or of procedural adequacy, for those who prefer that term), and this is true whether there exist many competing theories or no theory at all. How the writing of a computer program such as Winograd's can test the adequacy of a theory - beyond the level of arithmetic mistakes, which we assume is not what is at stake here - has never been made clear, and Wilensky's restatement of the claim that it does provides no further clarification.

Consider next Wilensky's objections to the way we characterize the specific research we discuss. He writes that our critique of Winograd is "correct but inappropriate". Why? Because Winograd's program is an attempt to describe how components of a system interact, and this is of interest because "it is necessary to know what happens before we can know why it happens". But our point is precisely that after Winograd is done we know no more about what happens when humans use language than we knew before he started. What we learn from Winograd's program is one way to write a program that can answer questions about blocks. But as this program cannot be extended in any principled way beyond this rather idiosyncratic case, it tells us nothing about how the components of a human language system interact.

Concerning Minsky's frame theory, our point is that if a frame can be any arbitrary structure - i.e., if there are no principles which govern what can and what cannot be a frame - then it follows from frame theory that cognition must be mostly a matter of storing and retrieving arbitrary items from memory. If, on the other hand, frames are not "simple memories", then frame theory is not a theory at all but is just a way of talking, for neither Minsky nor Wilensky tells us what they are supposed to be, or what principles are involved in their structure and construction.

In his discussion of our critique of the work on conceptual dependencies, Wilensky repeats positions and claims, involving, for example, the nature of the interaction between syntax and semantics, which we discuss at some length in our article. As he does not advance any arguments that go beyond the ones dealt with in the article, we will not comment on them here. With
respect to the main point, Wilensky admits that CD-diagrams cannot be expanded in any principled way and that the various expansions of such diagrams depend on context and vary according to the task at hand. This being so, a theory which depends on the generation of such diagrams must contain general principles which govern the selection of diagrams by contexts and tasks. Lacking such principles, the work on conceptual dependencies cannot be said to provide a theory of human language understanding, nor does it suggest how such a theory might someday be developed. The inclusion in our discussion of the additional articles and examples mentioned by Wilensky would not substantially have altered this part of our argument or any of our other main points concerning the work on conceptual dependencies.

Besides the two major points discussed above, Wilensky raises other issues, such as the relation between competence and performance and what it means for a person to 'know' a grammar. These issues have been discussed at great length elsewhere, so we will not attempt to go into them here.

To end, let us repeat that there is no reason why computer-based research into language and cognition cannot make important contributions to the development of theories of language and cognition. But to do so it will have to be aimed at the elaboration of general principles, something which the main trend of research in the field until now has failed to do.
Cognition, 5 (1977) 151-179
© Elsevier Sequoia S.A., Lausanne - Printed in the Netherlands

Discussion

On some contested suppositions of generative linguistics about the scientific study of language: A response to Dresher and Hornstein's "On some supposed contributions of artificial intelligence to the scientific study of language"*

TERRY WINOGRAD
Stanford University

*Preparation of this paper was made possible by a grant from the National Science Foundation. I would like to thank Danny Bobrow, Annette Herskovits, Ron Kaplan, David Levy, Andee Rubin, Brian Smith and Henry Thompson for insightful comments on an earlier draft.
1. Introduction

A recent issue of Cognition (December, 1976) contains a paper by Elan Dresher and Norbert Hornstein entitled "On some supposed contributions of artificial intelligence to the scientific study of language". In that paper they discuss the work of a number of researchers, concentrating on papers by Marvin Minsky, Roger Schank, Eric Wanner, Ron Kaplan, and myself. As might be predicted from their title, they conclude that:

There exists no reason to believe that the type of AI research into language discussed here could lead to explanatory theories of language. This is because first, workers in AI have misconstrued what the goals of an explanatory theory of language should be, and second because there is no reason to believe the development of programs which could understand language in some domain could contribute to the development of an explanatory theory of language. ...Not only has work in AI not yet made any contribution to a scientific theory of language, there is no reason to believe that the type of research that we have been discussing will ever lead to such theories, for it is aimed in a different direction altogether. (p. 377) [emphasis as quoted]

The editors of Cognition invited those whose work was criticized to respond in print. This paper is an attempt to explore some of the basic issues which were raised.

On first reading the paper, I had several reactions: Dresher and Hornstein express a number of specific criticisms of current artificial intelligence research. I find myself in agreement with many of the
The editors of Cognition invited those whose work was criticized to respond in print. This paper is an attempt to explore some of the basic issues which were raised. On first reading the paper, I had several reactions: Dresher and Hornstein express a number of specific criticisms of current artificial intelligence research. I find myself in agreement with many of the *Preparation of this paper was made possible by a grant from the National Science Foundation. I would like to thank Danny Bobrow, Annette Herskovits, Ron Kaplan, David Levy, Andee Rubin, Brian Smith and Henry Thompson for insightful comments on an earlier draft.
comments which deal with technical details, including some concerning details of my own previous work. However, they make a number of other technical comments with which I do not agree. These lead me to believe that they have not had any experience with the concepts and problems of computing, and this has led to a variety of misinterpretations of the work they criticize.

They adopt unquestioningly and dogmatically a paradigm for the study of language which has been developed and articulately expounded by Noam Chomsky. The real point of their paper is that AI researchers are not working within this paradigm. They argue their point in a style which is an unintentional caricature of how an established scientific paradigm argues against a prospective competitor. Either they are not familiar with the work of philosophers of science such as Thomas Kuhn*, who view science as a succession of paradigms, or they disagree with it so profoundly that they do not even consider the possibility that their methodological assumptions are social conventions rather than eternal truths.

They conclude with an impassioned plea for a recognition of the complexities of human language. I wholeheartedly agree with their point, but it is not a conclusion based on the rest of the paper. It does not address the same issues, or even the same researchers.

There is not sufficient space to explore all of these issues, and it seems most profitable to concentrate on the deeper significance of the paper. I will first build up a framework in which to view the paradigm differences between work in artificial intelligence and the authors' stated views of what a "scientific theory of language" can encompass. The more specific reactions listed above will be discussed within that context. I will argue that the currently dominant school of Chomskian** linguistics is following an extremely narrow and isolated byway of exploration. The limitations result not from the structure of language, but from a commitment to a specific, arbitrary set of meta-linguistic beliefs.

*It has become overly fashionable for anyone whose work is not generally accepted in a scientific field to claim that this is because he or she is engaging in a "scientific revolution" and that all objections to the work are due to a defense by the old established paradigm. However, even at the risk of guilt by association, I feel that Kuhn's observations apply so well to linguistics (even more so than to the hard sciences for which he originally made his case) that it is of value to point them out, and at least raise legitimate questions about the set of values and methodological assumptions which are taken for granted in current work.

**I am aware that the views expressed by Dresher and Hornstein are not identical to those of Chomsky. In many ways, they abstract and emphasize methodological issues which Chomsky is very careful to hedge in his own writings. However, I feel that their conception is quite close to that of many other linguists, psychologists, and philosophers who have studied Chomsky's writings. It has been noted only half facetiously that Freud would not have been a Freudian. In the same sense, there exists an influential Chomskian dogma, whether or not Chomsky himself would agree to its style, or to all of the conclusions which have been drawn from it.

2. On the "scientific" study of language
The strongest first impression of Dresher and Hornstein's paper is that it is a prescriptive formulation of how language should be studied. They place their major focus on statements like [emphasis added]:

In this paper, we will show that, contrary to these claims, current work in AI does not in any way address the central questions that any scientific inquiry into language ought to address. (p. 322)

We have just seen that for a theory of language to be of scientific interest, it must address itself to... (p. 329)

If Winograd's question is to be of linguistic, or more generally, of scientific interest, an answer to it must address itself to the principles of UG. (p. 333)

The attainment of Schank's professed goal of creating "a theory of human natural language understanding"... is impossible if it is not carried on in the context of a study of the principles of UG... (p. 337)

...the requirements of a scientific theory of language can only be met by... (p. 355)

It is clear that they are not concerned with debating specific aspects of the analysis proposed in the papers they criticize. They are not arguing that a specific theory or set of theories is wrong, but that the entire enterprise is misguided in its very foundations. They return again and again to the criticism that the approach is not "scientific" and does not provide "explanatory" theories. As Kuhn (1962) and others have pointed out, arguments about what is "scientific" and what is "explanatory" are characteristic of debates between alternative paradigms.

But paradigms differ in more than the substance, for they are directed not only to nature but also back upon the science that produced them. They are the source of the methods, problem-field and standards of solution accepted by any mature scientific community at any given time. As a result, the reception of a new paradigm often necessitates a redefinition of the corresponding science. Some old problems may be relegated to another science or declared entirely "unscientific", others that were previously non-existent or trivial may, with a new paradigm, become the very archetypes of significant scientific achievement. And as the problems change, so, often, does the standard that distinguishes a real scientific solution from a mere metaphysical speculation, word game, or mathematical play. - Kuhn (1962), p. 103
In this light, Dresher and Hornstein's attack can be viewed as a clear statement of the ways in which the work in artificial intelligence does not operate under the set of accepted standards of the "normal science" of language as currently practiced by the followers of Chomsky. I would agree with almost all of Dresher and Hornstein's pronouncements on how work in artificial intelligence strays from "the scientific study of language" if they were translated according to the following rules:

1. The prefix "mis-", and related words such as "wrong", are replaced by the word "different".
2. The word "scientific" is replaced by "Chomskian".

This translation would lead to statements such as:

...workers in AI have misconstrued [differently construed] what the goals of an explanatory theory of language should be. (p. 377)

...current AI research into language is headed in a wrong [different] direction, and it is this research that is unlikely to contribute to a scientific [Chomskian] theory of language. (p. 322)
It would be both premature and self-inflating to proclaim that a scientific revolution is under way in which the Chomskian paradigm will be overthrown by the new "computational paradigm" which includes the work currently being done in artificial intelligence. The issues are far from settled, and only a relative handful of people are working within the new paradigm. We may or may not succeed at redefining the science of language, and cannot reasonably claim to have already done so. However, it is increasingly clear that we can identify a coherent and absorbing body of problems and techniques which have the potential to become the central focus for a science of language.

I do not believe that it is possible to provide logically compelling arguments that one or another paradigm is right*, and it is inevitable that most of the people currently working within the Chomskian paradigm will continue within it. This paper is an attempt to provoke the thinking of those who are not committed to either paradigm, and who therefore can act as observers of the rules within which each side plays the game.

*"When paradigms enter, as they must, into a debate about paradigm choice, their role is necessarily circular. Each group uses its own paradigm to argue in that paradigm's defense. ...the status of the circular argument is only that of persuasion. It cannot be made logically or even probabilistically compelling for those who refuse to step into the circle. The premises and values shared by the two parties to a debate over paradigms are not sufficiently extensive for that. As in political revolutions, so in paradigm choice - there is no standard higher than the assent of the relevant community." - Kuhn, 1962, p. 94
The following sections are an attempt to describe and contrast the two approaches. In doing this I will quote extensively from the Dresher and Hornstein paper, and from Chomsky's work (primarily Aspects of the Theory of Syntax (1965) and Reflections on Language (1975)) as well, since he is the most articulate and respected exponent of the current linguistic orthodoxy. In this comparison, I cannot pretend to be a disinterested observer, but I have tried to represent both sets of assumptions accurately, without portraying either of them as logically necessary or objectively verifiable. The evaluation of which is "better" must inevitably be relative to the beliefs and values of the reader.
3. The Chomskian paradigm and the notion of "universal grammar"

Dresher and Hornstein's arguments center around the role that "universal grammar" (UG) must play in the study of language. Their definition of that term is rather vague:

Note that we are using "universal grammar" in a rather special sense. We do not mean to imply that all languages have the same grammar; nor does the term necessarily cover all those features that all languages might happen to have in common. Rather, we are referring to that set of principles according to which all grammars of human languages must be constructed. (p. 323)

At first glance, this makes sense - it labels as "universal" those principles which are necessary for all grammars. However, the term "universal grammar" is indeed being used in a "rather special sense" which hides implicit assumptions about a number of crucial issues. The assumptions are better discernible in Chomsky's more carefully crafted definition:

Let us define "universal grammar" (UG) as the system of principles, conditions and rules that are elements or properties of all human languages not merely by accident but by necessity - of course, I mean biological, not logical necessity. (Chomsky, 1975, p. 29)

Taken out of context, this definition appears to justify statements such as "a theory of human natural language understanding is impossible if it is not carried on in the context of a study of the principles of UG". By including the entire "system of principles, conditions, and rules that are elements or properties of all human languages" it is hard to imagine how any generalization about languages would not be a part of UG. In its original context, however, this definition is the first step in a neat piece of intellectual legerdemain which gives the illusion that the detailed methodology of Chomskian linguistics must follow logically from any attempt to understand universal principles of language. The major steps are:
Step 1: Equating "grammar" with "principles, conditions, and rules"
In any science, it is necessary to define terms with precise meaning. In doing so, there is no stricture against using words whose informal meaning does not correspond to the definition. The fact that the Holy Roman Empire was neither holy, nor Roman, nor an empire does not prevent the combination from being a useful label. "Universal Grammar" as Chomsky defines it is not "grammar", according to the common usage of that word. The reader who agrees with the vague but reasonable notion that any theory of language must deal with the "principles, conditions, and rules that are elements or properties of all human languages" discovers that in the argumentation that follows, it is taken for granted that he or she has agreed that some kind of "grammar" is the central focus of language study. By then using "grammar" in its more usual senses within the same arguments, Chomsky is able to let a number of methodological assumptions slip by unnoticed, as we will see below.
Step 2: Isolating "grammar" from the study of linguistic processes

The next step is to remove from the purview of "universal grammar" all study of the processes and mechanisms which underlie language use. The distinction between "competence" and "performance" is introduced through quite sensible statements about the need to look at language through idealized abstractions rather than trying to deal with irrelevant details of language behavior:
Linguistic theory is concerned primarily with an ideal speaker-listener, ... who knows the language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors ... in applying his knowledge of the language in actual performance. (Chomsky, 1965, p. 3)

We thus make a fundamental distinction between competence (the speaker-hearer's knowledge of his language) and performance (the actual use of language in concrete situations). ... In actual fact, it [performance] obviously could not directly reflect competence. A record of natural speech will show numerous false starts, deviations from rules, changes of plan in mid-course, and so on. (Chomsky, 1965, p. 4)
The desire for simplification through idealization is quite reasonable, akin to the physicist's desire to study the mechanics of ideal frictionless objects before dealing with the details of a pebble rolling down a riverbed. In this formulation, "performance" covers the details of how the language user behaves in a particular instance, while "competence" deals with those more
universal properties which apply to all instances. But much of the Chomskian paradigm is based on a shift of the scope of these terms, in which all aspects of language having to do with process of any kind get relegated to the status of "performance", with the corresponding assumption that they are not of interest for the theory of UG. Dresher and Hornstein say, for example:

The scope of a theory of grammar is basically limited to these kinds of concerns, i.e., to the tacit knowledge that a speaker has of the structure of his language - his linguistic competence. However, a theory of grammar does not exhaust the subject matter of research into language. In particular, a study of competence abstracts away from the whole question of linguistic performance, which deals with problems of how language is processed in real time, why speakers say what they say, how language is used in various social groups, how it is used in communication, etc. (p. 328) [emphasis as quoted]
Universal grammar, which was initially defined as "the system of principles, conditions, and rules that are elements or properties of all human languages", is implicitly redefined as excluding the study of language comprehension and production along with all aspects of language as a means of communication. In the Chomskian paradigm for the scientific study of language, there is an assumption that valid generalizations can be made about the set of sentences judged grammatical by a native speaker, but that it is not possible to form scientific theories of the mechanisms by which people actually use language.

The study of the development of cognitive structures ("acceptance of rules", in the first sense) poses problems to be solved, but not, it seems, impenetrable mysteries. The study of the capacity to use these structures and the exercise of this capacity, however, still seems to elude our understanding. (Chomsky, 1975, p. 77)
The implied belief that processing is an "impenetrable mystery" cannot be falsified with examples. It is a paradigm-defining assumption about the range of phenomena it is considered acceptable to study. Every science must make such assumptions, in order to provide a limited enough perspective in which to focus scientific effort. But the choice is a matter of faith, not logic. The work which falls in what I have called the "computational paradigm" has as its main focus the study of the capacity to use cognitive structures. In fact, the rejection of Chomsky's assumption is the central unifying force in the body of work criticized by Dresher and Hornstein, and can be taken as a useful test of whether someone is working within a "computational" paradigm.
Step 3: Reifying the “grammar” of the language user
The next step in shifting the meaning of "grammar" is to use it in referring to individual objects, instead of as an abstraction of "the system of principles, ...".

The theory of language is simply that part of human psychology that is concerned with one particular "mental organ", human language. Stimulated by appropriate and continuing experience, the language faculty creates a grammar that generates sentences with formal and semantic properties. (Chomsky, 1975, p. 36) [emphasis added]
A grammar is something which is created by the language faculty of an individual language user. Again, this sentence is a careful blend of technical terms with suggestive commonsense meanings. One of the major confusions about "generative grammar" throughout its history has been due to the dissonance between the apparent meaning of the verb "generate" and the technical meaning it has been assigned. In its straightforward interpretation, the quoted statement above implies that:

In the mind of each person who learns a language there is a mental structure called a "grammar" which was constructed by the "language faculty".

This grammar is used in generating sentences of the language.
This interpretation is a clearly understandable, if inaccurate, concept of grammar. It corresponds to people's common sense notions of how a body of rules is learned and applied. There is a job to be done (generating sentences) and a set of rules (a grammar) which tell you how to go about doing it. The rules can be codified, transmitted, and have the kind of existence which justifies talking about "the set of rules" as if it were a thing. However, "generate" does not mean "produce", as Chomsky has found it necessary to point out again and again.

To avoid what has been a continuing misunderstanding, it is perhaps worthwhile to reiterate that a generative grammar is not a model for a speaker or a hearer. It attempts to characterize in the most neutral possible terms the knowledge of the language that provides the basis for actual use of a language by a speaker-hearer. When we speak of a grammar as generating a sentence with a certain structural description, we mean simply that the grammar assigns this structural description to the sentence. When we say that a sentence has a certain derivation with respect to a particular generative grammar, we say nothing about how the speaker or hearer might proceed in some practical or efficient way to construct such a derivation. (Chomsky, 1965, p. 9)
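Chomsky's technical sense of "generate" is easy to make concrete. The sketch below (my own illustration; the toy grammar and vocabulary are invented, not drawn from any of the papers under discussion) enumerates the structural descriptions that a small context-free grammar assigns to strings. Nothing in it models how a speaker or hearer proceeds; it simply pairs sentences with derivations, which is all the technical term claims.

```python
# A toy context-free grammar, illustrating the technical sense in which a
# grammar "generates" a language: it assigns structural descriptions to
# strings, and says nothing about how a speaker or hearer would produce
# or parse them. Grammar and lexicon are invented for illustration.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["saw"], ["chased"]],
}

def derivations(symbol):
    """Enumerate every phrase-structure tree rooted at `symbol`."""
    if symbol not in GRAMMAR:                  # terminal symbol
        yield symbol
        return
    for expansion in GRAMMAR[symbol]:
        subtrees = [[]]                        # cross product of sub-derivations
        for child in expansion:
            subtrees = [t + [s] for t in subtrees for s in derivations(child)]
        for t in subtrees:
            yield (symbol, t)

def leaves(tree):
    """Read the terminal string off a tree."""
    if isinstance(tree, str):
        return [tree]
    _, children = tree
    return [word for child in children for word in leaves(child)]

for tree in derivations("S"):                  # eight "sentences", each paired
    print(" ".join(leaves(tree)))              # with its structural description
```

The grammar "generates" the sentence "the dog chased the cat" only in the sense that one of these derivations yields it; whether a language user stores or runs anything like this procedure is exactly the question the formal notion sets aside.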
As with “grammar”, “generate” is only a word, and he who uses it is free to make arbitrary definitions. There is a clear mathematical relationship
between a formal grammar and the language it "generates". Chomsky's exposition of this concept was a major impetus in creating a whole field of mathematics dealing with formal languages. It puts grammar on the level of a mathematical abstraction, dealing not with the use of language, but with the non-psychological notion of "derivation". The problem with this redefinition is that it pulls the rug out from under the naive notion of a grammar as a set of rules for "doing something". In the naive model, the set of rules "exists" in the mind of the person who uses them, just as a set of rules for driving "exists" in a law book. Although Chomsky insists that the grammar is not a set of rules used by a speaker in generating utterances, this independent existence of the grammar is allowed to stand unquestioned. As a result, it is assumed that "the grammar of the language user" is a legitimate object of study. The linguist is seen not as "inventing a set of formal rules whose application leads to the set of sentences of the language" but as "discovering the grammar of an idealized speaker of English". This leads to the error of believing that the form of the grammar reflects facts about the properties of the language user, rather than properties of the linguistic system used in writing the grammar. This would be a reasonable claim for a set of rules which attempted to reflect the actual processes of language use, but is misplaced in the abstract notion of "generative" used by Chomsky.

As a parallel, it is clear that we can write systems of differential equations, using Newtonian physics, which describe the motions of the planets. These equations can be viewed as a kind of "grammar" which formally "generates" the orbits we observe. However, it would be an obvious mistake to say that the planet possesses a grammar which corresponds to our equations, or that in refining our mathematical formalism we are somehow studying universal properties of planets.

Step 4: Identifying "grammar" with a formal syntactic system
Up to this point, “grammar” is being used in a quite general sense which includes the set of rules and regularities which apply to a language. Even leaving out all concern with the actual processes of language use, as was done in step 2, we might still expect it to deal with a variety of issues having to do with the structure of language as it relates to meaning. However, both in linguistic tradition and in the mathematics of formal languages the word “grammar” is used much more precisely. A grammar is a formal system of rules for characterizing a set of possible arrangements of symbols. The two concepts of grammar are quite distinct, and it is a significant leap to believe
that theories of grammar (in the limited formal sense) form a major part of the theory of Universal Grammar (in the sense defined above).

My own, quite tentative belief is that there is an autonomous system of formal grammar, determined in principle by the language faculty and its component UG. This formal grammar generates abstract structures that are associated with "logical forms" by further principles of grammar. (Chomsky, 1975, p. 43)

Chomsky quite explicitly hedges in making the leap from "grammar" to "formal grammar". But despite his protestations about feeling tentative, a whole school of linguistics has been based on the "autonomy of syntax" thesis. The substance of Chomskian linguistics is the study of syntax - of grammar in its most restricted sense. The body of work, of articles, books, and lectures, deals first and foremost with the detailed rules concerning the arrangement of words into "grammatical" sentences. My objection is not to the idea that someone would want to study syntax, or even (although I would argue with their scientific taste) that they want to study it as though it had a formal autonomy. The problem is that through the inconsistent use of the word "grammar", this rather specialized concern and methodology has been elevated to the position of being the only "scientific" study of language.

Step 5: Equating "explanation" with "simplicity of mechanism"
The final step in narrowing the scope of linguistic science comes in establishing the sorts of explanation which will be valued in the study of formal grammars. In his early work, Chomsky developed several basic mathematical results concerning the correspondence between the “power” of a formal rule-based mechanism, and the classes of “languages” (sets of strings of symbols) which it could be used to describe. Although there has been a good deal of disagreement about the relevance of this theory to the study of human language, the impressive mathematical results based on restricted formal languages have left their psychological mark on the field. Chomskian linguistics has been dominated by a style of research in which the major emphasis is on finding the “simplest” set of formal mechanisms which can generate the grammatical sentences of a natural language. There is no agreed upon notion of simplicity, and only a vaguely formulated notion of the “evaluation metric” which can be applied to grammars. Much of the debate within the paradigm consists of variation after variation on the formal mechanism, with each change justified by arguments that it allows the language to be generated by a simpler or more regular set of rules and meta-rules. Often, the notion of looking for the simplest mechanism is confused with the notion of finding restrictions on the possible
classes of grammars. It is assumed that constraints on the possible grammars which could generate human languages must correspond to facts about the mechanisms which people bring to bear on learning a language. It is assumed that such constraints "explain" rather than simply describe the properties of language. As with the definitions above, Chomsky's statement is much more careful than the conclusions which have been drawn by most Chomskians. He talks about the "explanatory adequacy" of a theory as the degree to which it accounts for the universal properties of human languages, as reflected in their learnability. He says:

We are very far from being able to present a system of formal and substantive linguistic universals that will be sufficiently rich and detailed to account for the facts of language learning. To advance linguistic theory in the direction of explanatory adequacy, we can attempt to refine the evaluation measure for grammars or to tighten the formal constraints on grammars so that it becomes more difficult to find highly valued hypotheses compatible with primary linguistic data... the latter, in general, being the more promising. (Chomsky, 1965, p. 46)
The discovery of formal constraints on classes of grammars is indeed one possible approach to finding explanations for the properties of human language. The fact that so far it has been largely unsuccessful is not sufficient proof that it is not promising. But in incorporating Chomsky's modest methodological suggestion into the theory as a whole, it has often become the only acceptable kind of explanation:

Minsky presents a totally unconstrained system capable of doing anything at all. Within such a scheme explanation is totally impossible. (p. 357)

It is a commonplace of research into language that unconstrained transformational power enables one to do anything. If one can do anything, explanation vanishes. (p. 357)
There is no simple answer to the question "What is explanation?" Indeed, there are whole bodies of philosophy dealing with this problem. The computational paradigm, as described below, has a very different approach to explanation, which is not based on the notions of formal generative power. As with the other issues we have discussed, there can be no measure of what is "explanatory" without appeal to the assumptions of the paradigm.

Step 6: Justifying the methodology by appeal to problems of "learning"
Having reduced the scope of “the scientific study of language” to the study of constraints on classes of grammars (where a grammar is “a system of rules that in some explicit and well defined way assigns structural descriptions to
sentences" (Chomsky, 1965, p. 8)), there is a need to provide arguments as to why this limited study should be of interest. This need has provoked a line of argument which can be summarized as:

Our basic goal is to understand the nature of human cognition.

All people learn their native languages without formal training or difficulty.
Formal grammars describing the syntax of these different languages share certain regularities.

Some of these regularities can be captured by putting formal constraints on the form of grammars which generate the grammatical sentences of languages.

The fact that these are properties of all languages must mean that they reflect universal properties of the human capacity to learn language.

Therefore, by studying the properties of classes of formal grammars we can determine those facts about languages which make them "learnable", and therefore reflect universal facts about human cognition.

It is important to recognize that much of Chomsky's motivation in pursuing this line of argument was his opposition to the behaviorist school of psychology, and his belief that the structure of language provided a powerful argument for the existence of innate specialized cognitive capacities.

There is nothing essentially mysterious about the concept of an abstract cognitive structure, created by an innate faculty of the mind, represented in some still-unknown way in the brain, and entering into a system of capacities and dispositions to act and interpret. On the contrary, a formulation along these lines, embodying the conceptual competence-performance distinction, seems a prerequisite for a serious investigation of behavior. Human action can be understood only on the assumption that first-order capacities and families of dispositions to behave involve the use of cognitive structures that express systems of (unconscious) knowledge, belief, expectation, evaluation, judgment, and the like. (Chomsky, 1975, pp. 23-24)
In arguing against strict empiricism, he found it necessary to demonstrate that there were formal methodologies which could be used to reveal the nature of mental constructs. He is joined in this view by almost everyone who works in artificial intelligence. In fact, much of the work criticized in the Dresher and Hornstein paper is an attempt to better understand the "cognitive structures that express systems of (unconscious) knowledge, belief, expectation, evaluation, judgment and the like". The problem is that in much of the Chomskian literature, the problem of how syntax is learned has been taken not as a demonstration of the feasibility of developing a mentalistic science, but as a definition of the study of language.
This

The central problem of a theory of language is to explain how people learn their native language... The question - How does someone learn a language? - reduces to a new question - How does someone construct a grammar? (p. 323)

is a reductio ad absurdum of Chomsky's argument. Indeed, language learning is an important problem, but it is hardly "the central problem", and it certainly does not "reduce" to the problem of constructing a grammar. If "How does someone learn a language?" were the central problem, then the entire Chomskian methodology would be largely irrelevant, since it deals in only the most peripheral way with empirical questions of language acquisition. Again, my objection is not to the fact that some people are interested in studying how languages are learned, or that they believe they will get useful insights by looking at formal properties of grammars. It is to the blindness engendered by the insistence that this enterprise constitutes the whole of "the scientific study of language". The following section is an attempt to provide a new angle from which to view the nature and effect of their assumptions.
4. Language as biology - a metascientific metaphor
In explaining why current linguistics does not attempt to deal directly with the question "How is language organized to convey meaning?", Dresher and Hornstein draw an analogy to the study of biology:

...biologists rarely attempt to tackle head-on the problem, "What is life?", but generally break it up into smaller and more modest problems, such as "What is the structure of the cell". As a means of getting to the intractable - "How is language organized to convey meaning" - current linguistic theories ask, "What are the principles of UG?" (p. 333)
I believe that this comparison can usefully be extended as a way of clarifying the meta-scientific issues raised by the Chomskian theorists. There is a more than superficial correspondence between the "study of living things" and the "study of language", and our experience with biology can serve as one model for what a "scientific" study of language might be. I will show the similarities through parodies of statements which have been made about the science of language, reformulating them as statements about the science of biology. At first glance, some of them may seem overstated, or merely clever. However, the exercise is being done with very serious intent. As mentioned above, it is not possible to debate the assumptions of competing paradigms in traditional formal deductive terms, since there is not a sufficient set of shared premises. What is needed are tools which allow us
to extend the domain of our thinking, and metaphor is one of the most accessible of these tools. In some sense, this metaphor is the main "argument" of my paper.

Principle 1: The centrality of universal anatomy

First let us look at Chomsky's definition of universal grammar, reformulated as a "definition" of "universal anatomy". As mentioned above with respect to "grammar", this definition can be made independently of any normal usage of the word "anatomy":

Let us define "universal anatomy" (UA) as the system of principles, conditions, and rules that are elements or properties of all living things not merely by accident but by necessity - of course, I mean physical, not logical necessity.
This definition cannot be shown wrong, but it seems misleading in two ways. First, the use of the word "anatomy" strongly biases the question of which "elements or properties" are to be considered. Second, the emphasis on "all living things" seems to imply no interest in principles, conditions, or rules which are applicable to only some, but not all*. If taken seriously, this would exclude almost the entire study of biology, limiting its domain to those properties shared by bacteria, sea urchins, and people.

There are possible motivations for this kind of strong reductionism. There are indeed general principles of cellular biology, and these form a "basis" for all of the higher properties of living things. However, there is a tremendous difference between forming a basis and forming an explanation. DNA research is one of the most exciting and productive areas of biology today, but there is more to life than DNA. There are whole fields of science (anatomy, physiology, embryology, ecology) which deal with elements or properties of living things at a level which cannot be reduced to a discussion of the genetic mechanics**. It is hard to imagine what biology would have been like over the past hundred years if it had been dominated by a dogma that only the study of "universal anatomy" was appropriately "scientific".

*Any linguist reading this definition will also note the ambiguity inherent in the use of the quantifier "all". A phrase such as "the principles that are elements of all human languages" can mean those principles which are applicable to every language, or all those principles which are applicable to any language. On the assumption that the ambiguity was not a conscious attempt to confound, Chomsky's later statements make it clear that it must be interpreted in the former way: only those principles which apply to every language are included in UG.

**Haraway (1976) describes the rise of an "organismic" paradigm for the study of biology, and the ways in which it rejects the reductionistic approach of ardent DNA researchers such as Watson and Crick. There are a number of fascinating parallels between the biological controversies she describes, and the current debates in linguistics.
Principle 2: Concentrating on ontogeny

Paraphrasing Dresher and Hornstein:
The central problem of a theory of living things is to explain how an organism grows from a single cell to its full form and function... The question - How does an organism develop? - reduces to a new question - How does the form of an organism get constructed? ... the relevant principles cannot be specific to any one organism, but must be equally applicable to the construction of the form of all organisms. It is in this sense that a theory of living things will involve the study of universal anatomy (UA).
In this form it becomes apparent how a true and important observation (about the importance of using the process of development as a key to understanding) is being twisted into a strange methodological axiom. Morphogenesis is one of the most important open problems in biology, and has been a source of important questions and observations. But it is not "the central problem", and it does not "reduce" to a simpler problem involving how the forms develop. The developing biochemical processes within the organism play a tremendous role in creating the evolving sequence of forms, and in many ways can be viewed as more primary*.

*For a discussion along parallel lines in linguistics, see Halliday (1975), Learning How to Mean. He discusses the ways in which the development of communicative functions serves as a primary element in the development of syntactic competence.

Principle 3: There is an abstract formalization of structure
Even if we limit our interests to the study of anatomy, there is still an open question as to what kinds of theories can explain it. It is in the notion of "explanatory theory" that Chomskian linguistics seems to have strayed the farthest from other areas of science. Formal constraints are viewed as explanatory, while considerations of process are considered extraneous. There would be a clear analog in biology (paraphrasing Chomsky):

My own, quite tentative belief is that there is an autonomous system of formal anatomy, determined in principle by the nature of living things and its component UA. This formal anatomy generates abstract structures that are associated with "physiological forms" by further principles of anatomy.

In fact, such notions of formal anatomy could be applied to studies which have actually been done in biology. Just as linguists can point to phenomena such as "structure-dependence" and the "coordinate structure constraint", biologists have noted generalizations such as the fact that organisms with spiral shapes are in the form of equi-angular (logarithmic) spirals, or that (as stated in Bateson's Rule):
When an asymmetrical lateral appendage (e.g. a right hand) is reduplicated, the resulting reduplicated limb will be bilaterally symmetrical, consisting of two parts each a mirror image of the other and so placed that a plane of symmetry could be imagined between them. (Described in G. Bateson, 1972, p. 380).
Such generalizations seem to apply to a wide variety of different living things, and over a broad range of cases. As such they must reflect principles of "universal anatomy". One can imagine the development of "generative anatomy" in which mathematical rules dealing with shapes are applied to "generate" possible forms for animals. It is even possible that some general characteristics of the mathematical formalism could correspond to universal properties of biological form. For example, there are limited kinds of symmetry found in living organisms, and it should be possible to set up the derivation of forms in such a way that other symmetries would not be generated*.

Generative linguistics is based (however tentatively) on the belief that generalizations which hold over human languages can be best explained by building formal theories of "competence" which do not attempt to deal with the processes of language use or language acquisition, but instead seek an abstract "neutral characterization" of the constraints on possible languages. There is no valid argument that this approach is wrong, or that an abstract "generative anatomy" would be wrong. It can only be argued that it appears inappropriate, given the range of things which we expect it to explain. Biologists would, however, have grounds for objection if it were decreed that only theories of this sort are to be called "explanatory". In fact, the use of this word seems perverse. It seems that even partially sketched theories which deal with the actual biological phenomena and processes "explain" far more than an extensive and successfully fit mathematical abstraction of the resulting forms.

*There is no study of "generative anatomy", but observations of generalizations like those above have served as the basis for looking into the interactions between process and structure. Work such as D'Arcy Thompson's On Growth and Form (first published in 1917) looks for explanations of these regularities in terms of the way animals grow and live, and the effects of the physical processes involved. It is interesting to note that within biology, D'Arcy Thompson was criticized for being overly mathematical, and not resting his work on an "explanation" of the phenomena. Compared to current generative linguistics, however, his work is not at all abstract, with its extensive attention to physical processes and analogies with non-biological physical systems.
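To fix the idea, here is what one rule of such a "generative anatomy" might look like: a minimal sketch (my own illustration; the parameter values are invented, not biological data) that "generates" the family of equi-angular spiral forms and nothing outside that family.

```python
# A sketch of a "generative anatomy" rule: the equi-angular (logarithmic)
# spiral r = a * exp(b * theta). Varying a and b derives the family of
# spiral forms seen in shells and horns; forms outside the family (an
# Archimedean spiral, say) are simply not derivable from this rule.
# Parameter values are illustrative, not measured from any organism.
import math

def equiangular_spiral(a, b, turns=3, steps_per_turn=90):
    """Yield (x, y) points along r = a * exp(b * theta)."""
    for i in range(turns * steps_per_turn + 1):
        theta = 2 * math.pi * i / steps_per_turn
        r = a * math.exp(b * theta)
        yield (r * math.cos(theta), r * math.sin(theta))

shell_outline = list(equiangular_spiral(a=1.0, b=0.17))
print(shell_outline[:3])
```

Such a rule formally constrains the class of possible forms, just as a formal grammar constrains the class of possible sentences; what it conspicuously does not do is say anything about the growth processes that bring a form into being.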
A recapitulation of the metaphor

If biology had followed the lines of current linguistics, the resulting dogma would be:
The central problem of biology is how organisms grow to be as they are.

This growth process is too difficult to study directly, but we can get insight into it by studying those properties which hold for the structure of all organisms (universal anatomy).

The only scientific theories of biology are those which constrain the class of possible forms for organisms.
These theories are best stated as rigorous structures of rules which generate (in a formalizable geometric sense) possible forms.

There is no simple falsehood in this view, but in the context of what we know about biology, it appears myopic and farfetched. Only one biologist in a hundred conducts work which fits this framework at all, and there are whole libraries of work which would not be considered "scientific" if it were taken seriously. I believe that the situation in linguistics is less obvious, but not all that different.
5. The computational paradigm
In the light of the preceding sections, it should be clear that I do not view my work or that of the others discussed by Dresher and Hornstein as a better way of finding answers to the questions posed by Chomskian linguistics. The difference is one of paradigms, not methods. It would be misleading to imply that there is a well-defined, coherent paradigm which unites the work which they criticize. There is no single spokesperson who fills the role that Chomsky has in linguistics, and no catechism to be found in the writings. Those readers interested in fleshing out the sketchy picture provided here will have to glean it from research monographs in the area*. In the following paragraphs I can claim only to provide my own interpretation. It is certain that the other researchers criticized in the Dresher and Hornstein paper would not agree with me totally, and in fact it is likely that they would voice substantial objection to many points.

*My own views are expanded in Winograd (1976), and will be developed in a forthcoming book. Schank's current views (which have evolved substantially since the work cited) are best presented in Schank et al. (1975) and Schank and Abelson (in press). Kaplan has discussed his recent work extensively in Kaplan (1977). Other important works in the area are by Charniak and Wilks (1976) and Norman and Rumelhart (1975), and in collections of papers edited by Reddy (1975), Bobrow and Collins (1975), and Schank and Nash-Webber (1975). The Journal of the Association for Computational Linguistics has published work in this area over the past few years. The journal Cognitive Science began publication in January 1977, and is dominated by adherents to the paradigm described here. A number of linguists, including Chafe, Fillmore, G. Lakoff and Morgan have rejected many of the Chomskian assumptions, and are looking at language in a style which is quite compatible with the computational paradigm. For more discussion of the connections, see Winograd (1976).
A. The basic paradigm
The computational paradigm for the study of language is based on a set of assumptions about the nature of language and the methods by which it can be understood. Informally stated, those on which there is broad agreement include:

The essential properties of language reflect the cognitive structure of the human language user, including properties of memory structure, processing strategies and limitations.

The primary focus of study is on the processes which underlie the production and understanding of utterances in a linguistic and pragmatic context. The structure of the observable linguistic forms is important, but serves primarily as a clue to the structure of the processes and of the cognitive structures of the language user.

Context is of primary importance, and is best formulated in terms of the cognitive structures of speaker and hearer, rather than in terms of the linguistic text or facts about the situation in which an utterance is produced.
It is possible to study scientifically the processes involved in cognition, and in particular those of language use. Some parts of these processes are specialized for language, while other parts may be common to other cognitive processes.

B. The centrality of process
The most important unifying feature of the computational paradigm is the belief that the processes of language use should form the focus of study. We share with Chomsky the belief that it is possible to scientifically study mental objects (in our case, the processes; in his, the grammar) which are not directly observable through textual or experimental observations. But we explicitly reject the Chomskian view that processes are inaccessible to scientific study and that formal properties of grammars are the only basis for linguistic science. The major object of study is the cognitive processes of the language user. This shapes the research in several ways:

The use of the computer as a metaphor

The name "computational" is not applied to this paradigm because computers are used in carrying out the research. One could imagine the concepts being developed without any direct use of computers, and a large percentage of the current applications of computers to the study of language do not fall within this paradigm at all. What is central is the metaphor provided by viewing human cognitive capacity as a kind of "physical symbol system"*,
and drawing parallels between it and those physical symbol systems we are learning to construct out of electronic components. The parallels are not at the level of the physical components, but at the level of the abstract organization of processes and symbol structures. Like any metaphor, the computer metaphor has its limitations**, and provides only a direction of thought, rather than a concrete body of theory. In some sense, the entire paradigm can be described as the search for those insights which can be developed from this metaphor.

Attention to properties of whole systems
Work within the Chomskian paradigm has generally been based on isolating one specific component of language, such as syntax or formal semantic features. In answer to Schank's criticisms, Dresher and Hornstein correctly point out that this is a methodological, not a theoretical stance. No linguist has held the "absurd position" that there is no interaction between the components. However, they then proceed to say "...it is not obvious a priori which phenomena of language are to be assigned to the syntactic component and which to the semantic, or some third, component" (p. 18). There is a strong basic belief that the best methodology for the study of language is to reduce the language facility to a set of largely independent "components", and assign different phenomena to each of them. This is in direct contrast to a system-centered approach which sees the phenomena as emerging from the interactions within a system of components. Much of the work in the computational paradigm has taken this more systemic viewpoint, emphasizing the mechanisms of interaction between components and concentrating on "process structures" - those aspects of logical and temporal organization which cut across component boundaries. In some cases this has led to investigations into the ways in which the processes of language use are related to a larger range of cognitive processes, such as those involved in planning and visual scene analysis.

Viewing learning in a secondary role
Although questions of language learning are relevant, they appear in a different perspective. The fundamental question is "What mental structures and processes make it possible for a person to use a language?" One key part of using a language is learning it, and no full theory of language can ignore issues of learning. But the place is secondary rather than primary. It may be
impossible to totally understand physiology without knowing embryology, but there is a good deal which can be said about the functioning of fully formed structures independently of their origins.

*See Newell and Simon (1976) for a definition of this term and its implications for the study of computation and cognition.

**Weizenbaum (1976) argues at length that this metaphor has disastrous consequences for humanity if taken as a view of the "whole person". I am in full agreement with his basic point, but I find that many of his specific arguments about linguistics are based on misunderstandings akin to those of Dresher and Hornstein.

C. The importance of representation
Dresher and Hornstein correctly observe the importance of representation to the computational paradigm:
There are several revealing respects in which Schank’s work resembles that of Minsky and Winograd. Foremost among these is an emphasis on representation over explanation. (p. 376)
The fact that they consider representation and explanation to be competitive rather than complementary reflects one of the fundamental gaps in their understanding of computation. They begin with an intuition drawn from traditional mathematics, which is one of the major threads in the fabric of the Chomskian paradigm - the view that logical equivalence is of primary interest in forming theories. From this standpoint, two mechanisms can be considered formally different only if they lead to different sets of possible results, independent of the computational processes by which they arrive at them. This approach has been of great use in developing the theory of formal languages, but is very misleading when dealing with actual computation processes. There is a fundamental theorem of computer science which can be loosely paraphrased as:

If there are no limitations on the amount of memory or processing time, then any machine with a certain minimal set of mechanisms can perform exactly the same set of computations as any other machine, no matter how complex, which includes that same minimal set.
But this theorem is of interest only when we are dealing with the abstract case in which "there are no limitations on the amount of memory or processing time". As soon as we try to apply computation theory to real systems (whether natural or constructed) we must deal with the fact that every such system is limited in both time and memory. Two systems which are "equivalent" in the formal sense can have entirely different properties if they operate with resource limitations. Furthermore, these differences can be related in systematic and scientific ways to the "representations" used in the different systems*.

*Like the word "understanding" discussed below, the word "representation" carries with it some dangers. The set of structures within a computational system does not need to "represent" any reality which exists outside of it. Maturana (1970) has pointed out the problems in taking the notion of "representation" as anything but a metaphor in describing a cognitive system.
As an example, we can look at two different representations of arithmetic. Imagine two people called "the calculator" and "the logician". The calculator knows the usual simple facts and rules about addition, multiplication and so on. The logician knows a formal axiomatization of arithmetic, and a set of procedures for making deductions in a formal logical system. If we ask the question "Is (A + B) + C always the same as A + (B + C)?", the logician will immediately answer, while the calculator will only be able to decide after great thought (if at all) that it is a consequence of the rules for carrying out addition. On the other hand, if we ask "Is 52 times 84 equal to 4368?" the calculator will answer immediately, while the logician will spend hours going through a proof with thousands of steps. If we only care whether these two people would ever disagree in their answers, the difference in representation is irrelevant, but if we are interested in how they can use arithmetic, the difference in representation is crucial. The effects of representation are just as concrete and formalizable as the logical equivalence, but in a different domain - the domain of process. This is of course an oversimplified example, but it points out the importance of looking at differences in the accessibility of information and the inference procedures which operate on it.

The intellectual substance of artificial intelligence (and of much of computer science) lies in the study of the properties of different representations, and of different process structures which arise from operations using these representations*. It is through an understanding of the deep properties of representations that we hope to find useful "explanations" of cognitive processes such as language use. The nature of the representations underlying human language use is an area of open and active debate and research. Much of the work quoted by Dresher and Hornstein deals with this issue. There are some researchers who emphasize the "procedural" aspects - the structure of the computations - and others who are more concerned with the "declarative" aspects - the nature of the representations stored in memory. There are some who believe that the representations for syntax and meaning are quite different, and others who assume that they are essentially similar. However, there is broad agreement that it is an issue of central importance to understand the properties of these representations, and to develop a better understanding of how they take part in computational processes.

*Knuth (1968, 1969, 1973) provides a compendium of the representations commonly used in conventional programming. The papers in Bobrow and Collins (1975) debate many detailed issues about representations used in artificial intelligence research. Bobrow and Winograd (1977) develop some of the issues of accessibility and inference in a representation language.
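The calculator/logician contrast can be made concrete in a few lines of code. The sketch below (my own illustration; the encodings are toy choices, not anything proposed in the papers discussed) gives the same multiplication question to two logically equivalent representations: machine integers, and Peano numerals where a product must be derived step by step from the defining equations.

```python
# Two representations of the same arithmetic, logically equivalent but
# with very different process profiles. Encodings are toy illustrations.

# The calculator: numbers as machine integers; a concrete product is one step.
def calculator_product(x, y):
    return x * y                           # "Is 52 * 84 = 4368?" -> immediate

# The logician: numbers as Peano terms built from "0" by successor S.
# General identities like associativity follow directly from these defining
# equations; a concrete product takes thousands of derivation steps.
def peano(n):
    return ("S", peano(n - 1)) if n else "0"

def add(m, n, steps=0):
    if m == "0":
        return n, steps
    term, k = add(m[1], n, steps + 1)      # S(m) + n = S(m + n)
    return ("S", term), k

def mul(m, n, steps=0):
    if m == "0":
        return "0", steps
    product, k = mul(m[1], n, steps + 1)   # S(m) * n = n + (m * n)
    return add(n, product, k)

print(calculator_product(52, 84))          # 4368, immediately
_, steps = mul(peano(52), peano(84))
print("logician's derivation steps:", steps)   # several thousand
```

The two representations never disagree about an answer; they differ, systematically and measurably, in which answers are cheap to reach - which is precisely the domain of process that logical equivalence ignores.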
Minsky's (1975) frame paper has to be understood in this light. Dresher and Hornstein point out (again correctly, from my standpoint) that his "frame theory" is not a theory in the usual sense of the word. Their expressed confusion about the relationship between rules, patterns and processes in Minsky's statements is a fair reaction to the lack of specificity in his formulation. However, the paper was of importance because it influenced researchers to look at a class of representations with different computational properties from those which many of them had been studying. It did not postulate a theory, but laid out a direction of exploration*. It is inappropriate to conclude that studies of representation must "presuppose the types of explanatory theories which it is the aim of scientific research to discover" (p. 63). Quite the reverse: any explanatory theories will have to deal with the observed facts about representation and computation.

D. The relevance of programming to theory
One of the areas of greatest confusion in understanding the significance of work done in artificial intelligence lies in the relationship between computer programs and computational theories. Many people (including many AI researchers) have the impression that somehow "the program is the theory". This leads to endless argumentation in which the critic says "That theory can't be right because this detail isn't sufficiently justified, that detail doesn't correspond to the facts of human language understanding, etc." while the defender says "I don't see any competing theories which even try to account, however badly, for the things this program attempts". What is clearly at stake is the nature of appropriate "theory", and on this issue, as with the ones discussed above, there is wide variation within the research community. The views expressed here are my own, but I believe they would be acceptable to a large fraction of those who work within a computational paradigm.

First, a program is not a theory, even if it is totally correct as a model. If I have a complete blueprint for a complex mechanical device, it is not a "theory" of how that device works. But it would be foolish not to see a blueprint as a valuable part of an "explanation" of that device. Similarly, a program which completely duplicated the processes of human language use would still not be a theory. But any program which is built can be viewed as a hypothesized partial blueprint and can be a step towards understanding.

*The importance of imprecisely specified concepts is a feature of the development of all sciences. In discussing the importance of the concepts of "resonance" and "field" in the work of the biologist Paul Weiss, Haraway (1976, p. 153) notes: "The term resonance did not imply a specific mechanism any more than the term field implied that its basis was understood. Rather, the principle, first described in 1923, suggested the nature of the relationship so as to stimulate research founded on fruitful analogies."
There is a good deal to be learned in devising such hypotheses. As Dresher and Hornstein point out:
Thus, Winograd believes that 'the best way to experiment with complex models of language is to write a computer program which can actually understand language within some domain'. (pp. 333-334)
Apart from questioning the wisdom of using the phrase "actually understand"*, I still believe this to be true. It is based on the belief that the important properties of language will be explained through the way different aspects of language are embodied in processes and the ways these processes interact in language use. Through studying the structure and behavior of computer programs which carry out analogous processes, we will develop a better understanding of this interaction. Much of the work in AI is based on a methodological assumption that it is most profitable at this stage of the science to develop a body of alternative blueprints - to explore the possibilities before focusing on closely honed explanation. This has the same status as the Chomskian assumption that syntax should be thoroughly studied before turning to problems of meaning. It can only be validated by demonstrating the results eventually achieved by the work of those who believe it. It is one of the major areas in which Dresher and Hornstein find most AI research unacceptable.

The analogy with biology is once again applicable. There is an important level of analysis at which a living organism is seen as a complex system of biochemical interactions. The usefulness of this approach depends on understanding biochemistry in its own right - knowing what kinds of processes can take place, what substances result, what conditions are necessary. The biochemist who experiments with the properties of synthesized substances is operating in a style which is close to that of the AI researcher who experiments with the properties of synthesized programs. There is no guarantee that the substances which are created or the processes which happen in the test tube correspond to the actual substances and mechanisms in a living

*The authors echo the concerns of Dreyfus (1972) and Weizenbaum (1976) about the careless use of the word "understand".

In AI, language-understanding systems are systems which can carry out certain limited tasks involving language in some way: e.g. answer questions about baseball or engage in limited dialogue about a particular small world of blocks. Why these systems are graced with the epithet "language-understanding" rather than, say, "language receiving and responding" has never been adequately explained. (p. 331)

Aside from its patronizing tone, this remark does point to an important issue. Our use of the word "understand" in human interactions implies a kind of empathetic process which is outside the realm not only of artificial intelligence, but of linguistics as a whole. Using "understand" to characterize a situation of instrumental communication is in a way impoverishing its meaning. Perhaps "comprehend" would be a better term for those aspects of understanding which linguists attempt to study.
organism, but the understanding which is gained through experimentation is invaluable in building models and performing experiments on living systems themselves.

In this context, the criteria on choosing what is to go into a computer program are quite different than they would be if the program were to be taken naively as a theory. Dresher and Hornstein criticize the AI approach:

We cite this example [a mechanism for conjunction] as being characteristic of Winograd's overall approach, which is to arbitrarily stipulate what are in reality matters that can only be decided by empirical research, and which can only be explained on the basis of theoretical work. (p. 350) [emphasis in original]

It would be equally valid to criticize a biochemist for "arbitrarily stipulating" the mixture of chemicals in an experiment because the properties of those chemicals can only be decided by empirical research and explained on the basis of theory*. Dresher and Hornstein argue that the desire to build working computer systems is antithetical to the development of linguistic theory: "If one approaches the task with a 'practical desire', the question of universal principles need hardly ever arise ... on the contrary, it leads one away from a consideration of these issues." (p. 16). It is indeed valid for them to question the relative priorities of motivation in carrying out research. A researcher for a pharmaceutical company can spend years trying to synthesize an effective but not previously patented variant on a known drug without ever adding to our understanding of biochemistry. A person can write a "usable language understanding system" for limited purposes without dealing with any of the scientifically important issues. But this is a specific choice, not an inevitable consequence of combining practical and theoretical goals. Many important insights into human biochemistry have come out of research whose goals included practical pharmacology - the desire to synthesize a "usable drug". An AI researcher can choose to ask questions about universal principles and to use the practical goals as a framework providing rough boundaries for the phenomena to be studied.
*It is perhaps too early to compare the state of artificial intelligence to that of modern biochemistry. In some ways, it is more akin to that of medieval alchemy. We are at the stage of pouring together different combinations of substances and seeing what happens, not yet having developed satisfactory theories. This analogy was proposed by Dreyfus (1965) as a condemnation of artificial intelligence, but its aptness need not imply his negative evaluation. Some work can be criticized on the grounds of being enslaved to (and making too many claims about) the goal of creating gold (intelligence) from base materials (computers). But nevertheless, it was the practical experience and curiosity of the alchemists which provided the wealth of data from which a scientific theory of chemistry could be developed.
E. The relevance of theory to programming
The fact that Dresher and Hornstein accept the claim that effective programs can be written without dealing with "universal principles" is another clue to their lack of experience with programs and computation. They state that:

Thus, one could start with fairly primitive components (a small number of syntactic patterns, a small lexicon, etc.) which could be improved indefinitely (by adding more syntactic patterns, more lexical items) according to practical constraints such as time, money and computer space. (p. 330)
They cannot be faulted for being trapped by this fallacy, since it has infected computer science in various guises throughout its history. However, it has proved time and time again to be wrong. Computer programs which try to deal with complex problem areas simply bog down if they are not built with a structure whose complexity and form mirror the properties of the domain in which they work. The failures of the early "learning" programs based on simple perceptrons, and the limited success of the "theorem provers" built over the past ten years, are testimony to the importance of this principle.
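A sketch of why the "add more entries" picture fails is easy to give. Below is a hypothetical "understander" of the kind Dresher and Hornstein describe (my own illustration; the patterns and vocabulary are invented): a flat list of surface patterns plus a lexicon, "improved" by appending entries. Each new pattern buys one more sentence shape, but the flat structure cannot mirror a recursive regularity of the domain, such as center-embedding.

```python
# A flat pattern-list "language understander": the architecture Dresher and
# Hornstein imagine being improved indefinitely by adding entries.
# All patterns and example sentences are invented for illustration.
import re

PATTERNS = [
    re.compile(r"^the (\w+) (\w+) the (\w+)$"),   # "the dog chased the cat"
    re.compile(r"^the (\w+) (\w+)$"),             # "the dog slept"
    # ...the promised "improvement": keep appending one pattern per shape
]

def understand(sentence):
    for pattern in PATTERNS:
        match = pattern.match(sentence)
        if match:
            return match.groups()
    return None

print(understand("the dog chased the cat"))                 # ('dog', 'chased', 'cat')
print(understand("the dog the cat saw chased the mouse"))   # None: embedding defeats the list
```

No finite stock of such templates covers the relative-clause embeddings and their deeper relatives, because the regularity is recursive; a program that handles them must have a structure (a stack, a recursive procedure, a network of interacting components) that mirrors that property of the domain - which is just the principle stated above.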
F. The organicist/reductionist debate
The conclusion of Dresher and Hornstein's paper is a valid (if somewhat histrionic) statement of one side of a debate within artificial intelligence. If taken literally, the image they present is false. There are few if any in the "AI community" who believe that:

...the fundamental theoretical problems concerning the organization of human cognitive abilities have been solved, and all that remains is to develop improved techniques for the storage and manipulation of vast quantities of information... the principles have been discovered, the information is easily accessible, the techniques are almost perfected... (p. 396)
However, if read in the bombastic spirit with which it was written, it paints a caricature of a widely held view that success in artificial intelligence will come from finding a few underlying basic principles (analogous to the laws of physics) and simply applying them to complex situations (boundary conditions). Herbert Simon has been a strong exponent of this view:
A man viewed as a behaving system is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself... I myself believe that the hypothesis holds even for the whole man... generalizations about human thinking... are emerging from the experimental evidence. They are simple things, just as our hypotheses led us to expect. Moreover, though the picture will continue to be enlarged and clarified, we should not expect it to become essentially more complex... (Simon, 1969, pp. 24-25 and 52-63)
There are two different threads of reductionism which have been applied to artificial intelligence and the study of language. The simple form, expressed by Simon, views the entire range of observed behavior as being produced by a simple mechanism. A more sophisticated form underlies Chomskian linguistics and some work in AI, including that of Kaplan and Wanner which Hornstein and Dresher criticize. This form is based on assuming that there is a division of the complexities into well-defined components, or faculties, and that these can be studied independently without putting a major emphasis on their interactions. I personally am at the opposite end of the spectrum, one which might be labelled "organicist", in opposition to this "reductionist" position*. Organicism is the view that "The organism in its totality is as essential to an explanation of its elements as its elements are to an explanation of the organism" (Haraway, 1976, p. 34). It emphasizes the interactions and complexities of the whole, instead of reducing explanation to the finding of simple rules from which all of the properties can be derived. I agree with the position stated by Dresher and Hornstein:

The fundamental problems of a scientific theory of language have not been solved, and if research into language has shown anything it has demonstrated that - as is the case with explanation in other domains - an explanation of the human language faculty will involve the elaboration of unanticipated theories of tremendous complexity. (p. 396)
My statements in previous sections about the importance of viewing cognitive capacities as systems of complex interactions, and about the need for the complexity of programs to reflect the complexities of their domain, are statements of one school of thought within the procedural paradigm, while Simon's statements (and some of those made by Minsky and Schank) reflect another. Work in artificial intelligence will provide new grounds for continuing an age-old and vital epistemological debate, but the arguments will not be along the lines drawn by Hornstein and Dresher. They make the mistake (as does Weizenbaum (1976) in his much more extensive treatment of the same issue) of trying to equate the reductionist views expressed by Simon with the computational paradigm as a whole. Things are, fortunately, more complex than that.
*Haraway (1976) discusses the corresponding debate in biology at length. It is interesting to note that among all of the metaphors which she describes as having guided thinking within the different paradigms, the computer is the only one which has been used extensively by both the reductionists and the organicists.
G. The role of syntax
Dresher and Hornstein quote a number of passages in which different researchers express disagreement with the degree to which current linguistic science emphasizes syntax. They state (correctly under the translation rules described in section 2):

Like Winograd, his [Minsky's] emphasis on language as a device for conveying meaning (an emphasis which is inherent in the task of communicating with machines) leads him to misconstrue [differently construe] the aims of syntactic research (p. 335)
Indeed, a basic view of language as a means of communication leads one to "differently construe" the aims of syntactic research*. The emphasis is on "explaining" syntactic phenomena in terms of the processes which go on in understanding and production. However, there is a wide spectrum of attitudes towards the role which syntax should play. The work of Wanner and Kaplan deals entirely with syntax, while the statements from Schank express his view that syntax is of very little value in understanding language. The authors point out (correctly from my point of view) the weaknesses in Schank's arguments about the irrelevance of syntax, but fall into an equally fallacious view of its centrality. They say:
If... syntax plays no major role in conveying meaning, we would expect that the sentence Tall man the hit small round ball a should convey about the same meaning as the sentence The tall man hit a small round ball. But of course the first of the sentences conveys no meaning at all. (p. 365)

It is not clear what they mean by "conveys no meaning at all", but they must be using the words "convey" and "meaning" in a rather special sense. A young child, a novice second language learner, and a telegrapher all manage to convey a good deal of meaning, while rarely producing a sequence of words which an adult native speaker would be willing to class as "grammatical". Even a list of words can convey meaning: a person who runs up to us on the road and gasps out "...skid... crash... ambulance..." has conveyed a good deal. The study of how this happens cannot be excised by fiat from the "scientific study of language". Syntax is important, but it is only one of many levels of structure which are vital to conveying meaning. This is not the appropriate place to lay out the debates on details of how syntax and other aspects of language can be related within a framework that emphasizes the processes. What is important is to recognize that for the great majority of people working within a computational paradigm, the question is considered empirical, to be resolved by experimenting with possible models for those processes.

*It is not clear why Dresher and Hornstein seem to feel that an emphasis on the communicative function of language is somehow more inherent in the task of communicating with machines than it is in the act of communicating with other people.
6. Conclusions

Much of this paper has been built around analogies between linguistics and biology. The most exciting aspect of these analogies is that the parallel current debates in the two areas reflect a change in world view which is one of the major intellectual events of our century. This change includes most of the natural and social sciences in its scope, and could have profound effects on our entire view of human existence and society.
It is possible to maintain that branches of physics, mathematics, linguistics, psychology, and anthropology have all experienced revolutionary and related changes in dominant philosophical perspective. The primary element of the revolution seems to have been an effort to deal with systems and their transformations in time; that is, to take both structure and history seriously without reducing wholes to least common denominators. Organization and process become the key concerns rather than last ditch incantations. (Haraway, 1976, p. 17)
In a way, Chomsky took linguistics the first step along this path. He introduced the notion of "transformation" as a fundamental keystone of the structure of language. But in the twists and turns of the theory described in Section 3, he distorted this insight. Process became a piece of the formal mechanism, rather than the focus of study. The computational paradigm grew out of a desire to look directly at the cognitive processes of people using language. Dresher and Hornstein are right to be critical of many of the details of its analysis, and to ask how much has really been accomplished. The current work is suggestive and enticing, but not authoritative or logically compelling. There is no clear set of "problems" which have been solved in a way which would prove its correctness, or disprove the statements they make in defense of the Chomskian paradigm.

But paradigm debates are not really about relative problem-solving ability, though for good reasons they are usually couched in those terms. Instead, the issue is which paradigm should in the future guide research on problems many of which neither competitor can yet claim to resolve completely. A decision between alternate ways of practicing science is called for, and in the circumstances that decision must be based less on past achievement than on future promise. The man who embraces a new paradigm at an early stage must often do so in defiance of the evidence provided by problem-solving. He must, that is,
have faith that the new paradigm will succeed with the many large problems that confront it, knowing only that the older paradigm has failed with a few. A decision of that kind can only be made on faith. (Kuhn, 1962, pp. 157-158)
References

Bateson, G. (1972) Steps to an Ecology of Mind. New York, Ballantine.
Bobrow, D. G. and Collins, A. (1975) Representation and Understanding. New York, Academic Press.
Bobrow, D. G. and Winograd, T. (1977) An overview of KRL: A knowledge representation language. Cogn. Sci. (1:1).
Charniak, E. and Wilks, Y. (1976) Computational Semantics. Amsterdam, North Holland.
Chomsky, N. (1965) Aspects of the Theory of Syntax. Cambridge, M.I.T. Press.
Chomsky, N. (1975) Reflections on Language. New York, Pantheon.
Dresher, E. and Hornstein, N. (1976) On some supposed contributions of artificial intelligence to the scientific study of language. Cogn. (4:4), 321-398.
Dreyfus, H. (1965) Alchemy and Artificial Intelligence. Santa Monica, RAND Corporation.
Dreyfus, H. (1972) What Computers Can't Do. New York, Harper and Row.
Halliday, M. A. K. (1975) Learning How to Mean - Explorations in the Development of Language. London, Edward Arnold.
Haraway, D. (1976) Crystals, Fabrics and Fields. New Haven, Yale University Press.
Kaplan, R. (1977) Models of comprehension based on augmented transition networks. In Proceedings of M.I.T.-Bell Telephone Convocation on Communication (May 1976). Cambridge.
Knuth, D. E. (1968, 1969, 1973) The Art of Computer Programming, Vols. 1, 2, 3. Reading, Addison Wesley.
Kuhn, T. (1970) The Structure of Scientific Revolutions (2nd edn.). Chicago, University of Chicago Press.
Maturana, H. (1970) Biology of Cognition. Urbana, Biological Computer Laboratory, Univ. of Illinois, Rept. No. 90.
Maturana, H. (1975) The biological basis of cognition, to be filled in.
Minsky, M. (1975) A framework for representing knowledge. In P. Winston (ed.), The Psychology of Computer Vision. New York, McGraw-Hill.
Newell, A. and Simon, H. (1976) Computer science as an empirical inquiry: symbols and search. Commun. ACM (19:3), 113-126.
Norman, D., Rumelhart, D. and the LNR Research Group (1975) Explorations in Cognition. San Francisco, Freeman.
Reddy, D. R. (1975) Speech Recognition. New York, Academic Press.
Schank, R. (1975) Conceptual Information Processing. Amsterdam, North Holland.
Schank, R. and Abelson, R. (in press) Scripts, Plans, Goals, and Understanding. Hillsdale, N.J., Erlbaum.
Schank, R. and Nash-Webber, B. (eds.) (1975) Theoretical Issues in Natural Language Processing. Cambridge, Bolt Beranek and Newman.
Simon, H. A. (1969) The Sciences of the Artificial. Cambridge, M.I.T. Press.
Thompson, D'Arcy W. (1969) On Growth and Form. Cambridge, Cambridge University Press. Abridged edition edited by J. T. Bonner; original first edition published in 1917.
Weizenbaum, J. (1976) Computer Power and Human Reason. San Francisco, Freeman.
Winograd, T. (1976) Towards a procedural understanding of semantics. Revue Intern. Phil., 30 (117-118), 260-303.
Winograd, T. (in preparation) Language as a Cognitive Process. Reading, Addison Wesley.
Cognition, 5 (1977) 181-183 © Elsevier Sequoia S.A., Lausanne - Printed in the Netherlands

Discussion

In defense of Roger Brown against himself

PETER SCHÖNBACH
Ruhr-Universität Bochum
In his memorial tribute to Eric Lenneberg (Brown, 1976) Roger Brown presents an admirable review of research on the problem of codability and recognition of colors started by their own "Study in Language and Cognition" (Brown and Lenneberg, 1954). Surprisingly, Brown concludes that a Whorfian interpretation of his and Lenneberg's results, as well as similar subsequent data, is no longer tenable. Gallantly he concedes to his former doctoral student Eleanor Rosch Heider (1972) that "just two of the four experiments in her article are needed to undo 'A Study in Language and Cognition'!" (Brown, 1976, p. 149). With that much chivalry I cannot agree. I have no difficulty with Brown's main points derived from Heider's experiments and the study by Berlin and Kay (1969): Focal color areas seem to be nearly invariant across many language communities, if not universal. At least on the average, focal colors are more easily codable than nonfocal colors. Yet, focal colors were recognized far more often than filler colors by both speakers of English and speakers of Dani, who have only two color terms, mili and mola. Thus, "the differential 'Codability' which had been invoked to explain the original Recognition results could not be invoked to explain the differential results for the Dani; for the Dani all colors were the same with respect to 'Codability'." (Brown, 1976, p. 151). At this point one should remember that Heider used a comparatively simple recognition task. Only one chip was exposed at a time for 5 seconds, to be identified after a 30 second interval in an array of 160 colors. However, I still go along with Brown when he says: "Focal colors are human universals and linguistic codes are all designed to fit them... Focal colors are more memorable, easier to recognize, than any other colors, whether the subjects speak a language having a name for the focal colors or not." (1976, p. 151). But why was he moved by these conclusions to borrow, with respect to his and Lenneberg's original study, W. S. Gilbert's verdict: "Reduced - to a special case of a completely misconceived lemma" (p. 151, italics mine)? What surprises me is that Brown does not consider in this context his own finding, replicated by Lantz and Stefflre (1964), that the correlation between recognition and codability scores increases as the importance of storage in the recognition task increases. If four colors instead of one have to
be remembered for the recognition task, then perhaps the linguistic codes available to a particular group of speakers do enter more forcefully as partial determinants of cognitive processes. I foresee the counterargument that with a more demanding storage and recognition task the immediate advantage of focal colors over boundary colors with respect to memorability and recognition probably becomes even more pronounced; an even greater proportion of focal over boundary colors is identified correctly, and consequently the correlation between codability and recognition also increases. Such an alternative proposition, however, has not yet been tested, and this is precisely my point. How would the Dani have performed in comparison to their English speaking controls during the recognition task with four colors instead of one to remember? Any difference between the two gradients of performance between the one-color and the four-color task would at least be compatible with a weak version of Whorf's hypothesis. A bold prediction would be that with focal colors to be remembered the Dani gradient would be steeper (comparatively more errors of recognition with four colors) than the English gradient, whereas with nonfocal colors to be remembered the Dani gradient just could be flatter than the English one. The latter hunch takes its lead from the finding by Lantz and Stefflre (1964, p. 479) that errors of recognition tend towards the more easily codable typical colors. Unfortunately, I don't have any Dani at hand to test this prediction. Towards the end of his paper (p. 152) Brown quotes Heider's conclusion: "In short, far from being a domain well suited to the study of the effects of language on thought, the color space would seem to be a prime example of the influence of underlying perceptual-cognitive factors on the formation and reference of linguistic categories" (1972, p. 20). One can elaborate on this. In the choice of their domain of discourse Brown and Lenneberg were partly guided by the principle of simplicity that was made explicit shortly afterwards by Lenneberg and Roberts (1956) in their methodological treatise. In the meantime many social psychologists have become wary of applying this principle to their research, especially if it is research on language. Nowadays it seems clear that Brown and Lenneberg chose a task on such a simple level that it did not provide any important role for language factors to operate as mediators of cognitive processes. It was this conclusion (Schönbach, 1970, p. 38 f.) which made me select attitude formation as a more complex task (with its own problems, to be sure). Between, say, 1950 and 1955, however, it was eminently defensible to start on a simple level with fairly good control over the variables. Besides, the insights gained from the research sequence engendered by "A Study in Language and Cognition" are very impressive indeed, as Brown justly points out (1976, p. 151) after
acknowledging defeat at Eleanor Rosch Heider’s hands. But furthermore, there may not even be any defeat. Comparing the correlations between codability and recognition in the one-color and the four-color conditions one may argue for the possibility that Heider’s and Brown’s conclusion is in need of a modification: Despite the fact that the color space is a domain ill suited for the study of the effects of language on thought, even in this domain some linguistic determinism can be observed with a comparatively minor increase in the complexity of the cognitive task.
References

Berlin, B., and Kay, P. (1969) Basic Color Terms: Their Universality and Evolution. Berkeley, University of California Press.
Brown, R. W. (1976) In memorial tribute to Eric Lenneberg. Cog., 4, 125-153.
Brown, R. W., and Lenneberg, E. H. (1954) A study in language and cognition. J. abnorm. soc. Psychol., 49, 454-462.
Heider, E. R. (1972) Universals in color naming and memory. J. exp. Psychol., 93, 10-20.
Lantz, D., and Stefflre, V. (1964) Language and cognition revisited. J. abnorm. soc. Psychol., 69, 472-481.
Lenneberg, E. H., and Roberts, J. M. (1956) The language of experience: A study in methodology. Intern. J. amer. Ling. (Memoir No. 13).
Schönbach, P. (1970) Sprache und Attitüden: Über den Einfluss der Bezeichnungen Fremdarbeiter und Gastarbeiter auf Einstellungen gegenüber ausländischen Arbeitern. Bern, Huber.
Cognition, 5 (1977) 185-187 © Elsevier Sequoia S.A., Lausanne - Printed in the Netherlands

Discussion
In reply to Peter Schönbach*
ROGER BROWN
Harvard University

*This short paper is, in fact, an excerpt from a personal letter written by Roger Brown to Peter Schönbach, in response to his In Defense of Roger Brown Against Himself (Cog. 5, 00 (1977)).
The possibility that linguistic codability becomes more important in the memory task as the importance of storage increases was one of my own first reactions to the Dani data in Heider's 1972 JEP paper. As you say, the importance of codability increases with the importance of storage in the original Brown-Lenneberg data and also for the Lantz-Stefflre data. Furthermore, as you also say, Heider's discrimination task was closely similar to the simplest task in the other two studies. As a matter of fact, my reaction was a little stronger than yours because I remembered something you do not mention: the correlations between codability and recognition in the simplest condition of the Brown-Lenneberg study (condition A, 1 chip for 7 seconds) were not significant even at the 0.05 level. Only when 4 colors were used (conditions B, C, and D) was codability significantly correlated with recognition; these conditions, utilizing 4 colors, involved no significant differences among themselves, but the codability-recognition correlations for B, C, and D were all about 1½ times as large as for A. So I first thought that Heider had not made a fair test in using only one chip at a time with the Dani. Then I remembered that Lantz-Stefflre had obtained a significant correlation (+0.51; p < 0.01) using their "communication accuracy" measure in even the simplest condition, so perhaps the test was fair after all. However, the whole idea of a fair test seems inappropriate in view of Heider's statement that the Dani had only two color words and that they "chanted" these in the naming task at a constant rate, and so it would seem that no sort of differential language index could have been obtained: not naming agreement, not length of name, not latency of naming response, and not communication accuracy. For her Experiment III, Heider (1972) obtained no linguistic index. Her hypothesis and conclusion was simply that: "... focal colors can be remembered more accurately than non-focal even by speakers of a language that
lacks basic hue terms and in which, therefore, focal colors do not represent the best examples of basic color names". (Heider, 1972, p. 15). While this conclusion seemed to me to be justified by her Dani data, and seems so still, I did mention to Dr. Heider, when I first learned of the results, that I thought it would be a good idea to try four colors to test the storage hypothesis. I cannot now remember whether she found the Dani unwilling to attempt that task or whether she simply had no later opportunity to try it or exactly what happened. Your note has motivated me to read again quite carefully Heider's statement of her Method, and I find the following: "... only Dani whose color-name usage was restricted to the two basic terms 'mili' (roughly 'dark') and 'mola' (roughly 'light') were used". (Heider, 1972, p. 16). I put this together with my memory that some Dani occasionally used a couple of other color terms, and I think many demonstrated that they could construct color names, by reference to familiar objects and other means that all peoples seem to have, if there were a premium on differentiation. "A premium on differentiation" is precisely what "communication accuracy" involves and is the reason why it predicts recognition better than naming agreement does. So now I am not quite sure that a test could not have been made with the Dani of the relation between recognition and "communication accuracy". This latter would be the appropriate linguistic index to use with the simplest (one-chip-at-a-time) procedure for studying recognition, because only communication accuracy has been shown to be significantly related to recognition, for speakers of languages-other-than-Dani, in the simplest condition. However, "communication accuracy" is a measure that is difficult to interpret within a Whorfian frame, and even if it were related to recognition among the Dani, it would be difficult to construct an argument that the linguistic variable was the cause of the ability to recognize focal colors. Especially difficult in view of all the other evidence that the focal colors approach universality regardless of the basic color lexicon. Nevertheless, as I see it, the possibility that the linguistic factor becomes more important as the role of storage in the memory task increases is, as you say, still open. Furthermore, one can see how it might be true even for Dani. For those Ss who used only mili and mola, the memory-stretching technique of rehearsal could be of little use. But if there are Dani who, in communication accuracy conditions, create differentiating names, then name rehearsal would help more for four colors than for one color. Providing, I suppose, that the constructed names were neither too long nor too original to be easily rehearsed. I do not think it is necessary to go into all this, and there may be considerations I have missed, because your note makes the point that the storage
hypothesis is still open and that is what matters. But it has been useful to me to be stimulated by your note to look once again, and even more closely, at what others did, and what I thought. As to my feeling "defeated" by Eleanor Heider and assuming a W. S. Gilbert tone, that is a joke. I honestly do not feel defeated - and not just because Eleanor Heider was my student. I do not mean to sound pious, but I am not concerned with "prevailing". What I am concerned about, and the reason why there is no sting for me, as there would not have been for Eric Lenneberg, in the later history of color naming, is the quite surprisingly impressive set of seemingly true generalizations that have been added to psychology in part because of reactions to "A Study in Language and Cognition". I think it helps a science to keep track of its genuine advances if one publicly says so when he judges that he was mistaken in some earlier opinion. One last point on which I should not like to be misunderstood. The paper "Reference" is strictly limited to the fate of a roughly Whorfian hypothesis in the domain of color naming and recognition. Personally, I believe, as you do, that in other cognitive domains Whorfian hypotheses may prove to be more nearly correct. And, above all, I never forget that Whorf had relatively little to say, in his work as a whole, about lexical differences. For the most part, he was concerned with semantic differences expressed in the grammars of languages, rather than in their lexicons. On this, his own chosen home ground, I still find his ideas powerfully stimulating and would bet that many are true. But they have never been tested. Not because they are untestable, but because no psychologist has yet been sufficiently ingenious.