2 Reply to Arbib and Gunderson

In December, 1972, Michael Arbib and Keith Gunderson presented papers to an American Philosophical Association symposium on my earlier book, Content and Consciousness, to which this essay was a reply.1 While one might read it as a defense of the theory in my first book, I would rather have it considered an introduction to the offspring theory. In spite of a few references to Arbib's and Gunderson's papers and my book, this essay is designed to be comprehensible on its own, though I would not at all wish to discourage readers from exploring its antecedents. In the first section the ground rules for ascribing mental predicates to things are developed beyond the account given in Chapter 1. There I claimed that since intentional stance predictions can be made in ignorance of a thing's design - solely on an assumption of the design's excellence - verifying such predictions does not help to confirm any particular psychological theory about the actual design of the thing. This implies that what two things have in common when both are correctly attributed some mental feature need be no independently describable design feature, a result that threatens several familiar and compelling ideas about mental events and states. The second section threatens another familiar and compelling idea, viz., that we mean one special thing when we talk of consciousness, rather than a variety of different and improperly united things.
I

Suppose two artificial intelligence teams set out to build face-recognizers. We will be able to judge the contraptions they come up with, for we know in advance what a face-recognizer ought to be able to do. Our expectations of face-recognizers do not spring from induction over the observed behavior of large numbers of actual face-recognizers, but from a relatively a priori source: what might be called our intuitive
epistemic logic, more particularly, "the logic of our concept" of recognition. The logic of the concept of recognition dictates an open-ended and shifting class of appropriate further tasks, abilities, reactions and distinctions that ideally would manifest themselves in any face-recognizer under various conditions. Not only will we want a face-recognizer to answer questions correctly about the faces before it, but also to "use" its recognition capacities in a variety of other ways, depending on what else it does, what other tasks it performs, what other goals it has. These conditions and criteria are characterized intentionally; they are a part of what I call the theory of intentional systems, the theory of entities that are not just face-recognizers, but theorem-provers, grocery-choosers, danger-avoiders, music appreciators. Since the Ideal Face-Recognizer, like a Platonic Form, can only be approximated by any hardware (or brainware) copy, and since the marks of successful approximation are characterized intentionally, the face-recognizers designed by the two teams may differ radically in material or design. At the physical level one might be electronic, the other hydraulic. Or one might rely on a digital computer, the other on an analogue computer. Or, at a higher level of design, one might use a system that analyzed exhibited faces via key features with indexed verbal labels - "balding", "snub-nosed", "lantern-jawed" - and then compared label-scores against master lists of label scores for previously encountered faces, while the other might use a system that reduced all face presentations to a standard size and orientation, and checked them quasi-optically against stored "templates" or "stencils". The contraptions could differ this much in design and material while being equally good - and quite good - approximations of the ideal face-recognizer. This much is implicit in the fact that the concept of recognition, unlike the concepts of, say, protein or solubility, is an intentional concept, not a physical or mechanistic concept.

But obviously there must be some similarity between the two face-recognizers, because they are, after all, both face-recognizers. For one thing, if they are roughly equally good approximations of the ideal, the intentional characterizations of their behaviors will have a good deal in common. They will often both be said to believe the same propositions about the faces presented to them, for instance. But what implications about further similarity can be drawn from the fact that their intentional characterizations are similar? Could they be similar only in their intentional characterizations?
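(A minimal sketch, in the terms of a present-day programming language, of how little two such designs need share; every name and detail below is invented for the illustration and corresponds to no actual system discussed here.)

    # Illustrative only: two toy "face-recognizers" that share no elements of
    # design or data structure, yet both answer "have I seen this face before?"

    def recognize_by_features(face, known_faces):
        """Design 1: score key features with verbal labels and compare the
        scores against master lists for previously encountered faces."""
        labels = {"balding": face.get("balding", 0),
                  "snub_nosed": face.get("snub_nosed", 0),
                  "lantern_jawed": face.get("lantern_jawed", 0)}
        for name, stored in known_faces.items():
            score = sum(1 for k, v in labels.items() if stored.get(k) == v)
            if score >= 2:          # crude threshold on matching labels
                return name
        return None

    def recognize_by_template(image, templates, threshold=0.9):
        """Design 2: normalize the presented face and check it quasi-optically
        against stored templates (here, flat lists of brightness values)."""
        normalized = [p / 255 for p in image]   # stand-in for size/orientation normalization
        for name, template in templates.items():
            overlap = sum(1 for a, b in zip(normalized, template)
                          if abs(a - b) < 0.1) / len(template)
            if overlap >= threshold:
                return name
        return None

Either routine, embedded in a suitably versatile system, would earn the intentional characterization "recognizes faces", though nothing in the one corresponds element-for-element to anything in the other.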
Consider how we can criticize and judge the models from different points of view. From the biological point of view, one model may be applauded for utilizing elements bearing a closer resemblance in function or even chemistry to known elements in the brain. From the point of view of engineering, one model may be more efficient, fail-safe, economical and sturdy. From an "introspective" point of view, one model may appear to reflect better the actual organization of processes and routines we human beings may claim to engage in when confronted with a face. Finally, one model may simply recognize faces better than the other, and even better than human beings can. The relevance of these various grounds waxes and wanes with our purposes. If we are attempting to model "the neural bases" of recognition, sturdiness and engineering economy are beside the point - except to the extent (no doubt large) that the neural bases are sturdy and economical. If we are engaged in "artificial intelligence" research as contrasted with "computer simulation of cognitive processes",2 we will not care if our machine's ways are not those of the man in the street, and we will not mind at all if our machine has an inhuman capacity for recognizing faces. Now as "philosophers of mind", which criterion of success should we invoke? As guardians of the stock of common mentalistic concepts, we will not be concerned with rival biological theories, nor should we have any predilections about the soundness of "engineering" in our fellow face-recognizers. Nor, finally, should we grant the last word to introspective data, to the presumed phenomenology of face-recognition, for however uniform we might discover the phenomenological reports of human face-recognizers to be, we can easily imagine discovering that people report a wide variety of feelings, hunches, gestalts, strategies, intuitions while sorting out faces, and we would not want to say this variation cast any doubt on the claim of each of them to be a bona fide face-recognizer.

Since it seems we must grant that two face-recognizers, whether natural or artificial, may accomplish this task in different ways, this suggests that even when we ascribe the same belief to two systems (e.g., the belief that one has seen face n more than once before), there need be no elements of design, and a fortiori of material, in common between them. Let us see how this could work in more detail. The design of a face-recognizer would typically break down at the highest level into subsystems tagged with intentional labels: "the feature detector sends a report to the decision unit, which searches the memory for records of similar features, and if the result is positive, the system commands the printer to write 'I have seen this face before'" - or something like that. These intentionally labelled subsystems themselves have parts, or elements, or states, and some of these may well be intentionally labelled in turn: the decision unit goes into the
conviction-that-I've-seen-this-face-before state, if you like. Other states or parts may not suggest any intentional characterization - e.g., the open state of a particular switch may not be aptly associated with any particular belief, intention, perception, directive, or decision. When we are in a position to ascribe the single belief that p to a system, we must, in virtue of our open-ended expectations of the ideal believer-that-p, be in a position to ascribe to the system an indefinite number of further beliefs, desires, etc. While no doubt some of these ascriptions will line up well with salient features of the system's design, other ascriptions will not, even though the system's behavior is so regulated overall as to justify those ascriptions. There need not, and cannot, be a separately specifiable state of the mechanical elements for each of the myriad intentional ascriptions, and thus it will not in many cases be possible to isolate any feature of the system at any level of abstraction and say, "This and just this is the feature in the design of this system responsible for those aspects of its behavior in virtue of which we ascribe to it the belief that p." And so, from the fact that both system S and system T are well characterized as believing that p, it does not follow that they are both in some state uniquely characterizable in any other way than just as the state of believing that p. (Therefore, S and T's being in the same belief state need not amount to their being in the same logical state, if we interpret the latter notion as some Turing-machine state for some shared Turing-machine interpretation, for they need not share any relevant Turing-machine interpretation.)

This brings me to Arbib's first major criticism. I had said that in explaining the behavior of a dog, for instance, precision in the intentional story was not an important scientific goal, since from any particular intentional ascription, no precise or completely reliable inferences about other intentional ascriptions or subsequent behavior could be drawn in any case, since we cannot know or specify how close the actual dog comes to the ideal.
Arbib finds this "somewhat defeatist", and urges that "there is nothing which precludes description at the intentional level from expressing causal sequences providing our intentional language is extended to allow us to provide descriptions with the flexibility of a program, rather than a statement of general tendencies". Now we can see that what Arbib suggests is right. If we put intentional labels on parts of a computer program, or on states the computer will pass through in executing a program, we gain access to the considerable predictive power and precision of the program.*
When we put an intentional label on a program state, and want a prediction of what precisely will happen when the system is in that intentional state, we get our prediction by taking a close look not at the terms used in the label - we can label as casually as you like - but at the specification of the program so labelled. But if Arbib is right, I am not thereby wrong, for Arbib and I are thinking of rather different strategies. The sort of precision I was saying was impossible was a precision prior to labelling, a purely lexical refining which would permit the intentional calculus to operate more determinately in making its idealized predictions. Arbib, on the other hand, is talking about the access to predictive power and precision one gets when one sullies the ideal by using intentional ascriptions as more or less justifiable labels for program features that have precisely specified functional relations inter se.

* These predictions are not directly predictions of causal sequences, as he suggests, since what a system is programmed to do when in a certain state, and what its being in the associated physical state causes to happen, can diverge if there is malfunction, but if our hardware is excellent we can safely predict causal sequences from the program.
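(Another purely illustrative sketch, with invented state names and an invented transition table: the intentional label is a mere tag attached to a program state, and the prediction of what the system will do is read off the program specification, not off the wording of the label.)

    # Toy illustration: intentional labels are tags on program states;
    # predictions come from the program, not from the labels themselves.

    PROGRAM = {
        # state           (action,              next state)
        "COMPARING":      ("search_memory",     "MATCH_FOUND"),
        "MATCH_FOUND":    ("print_seen_before", "IDLE"),
        "IDLE":           ("wait_for_input",    "COMPARING"),
    }

    # We may label a state as casually as we like:
    INTENTIONAL_LABELS = {
        "MATCH_FOUND": "believes it has seen this face before",
    }

    def predict(state):
        """Predict what the system will do in a given (labelled) state by
        consulting the program specification, not the label."""
        action, next_state = PROGRAM[state]
        label = INTENTIONAL_LABELS.get(state, "(no intentional label)")
        return "In the state labelled '%s' the system will %s and then enter %s." % (
            label, action, next_state)

    print(predict("MATCH_FOUND"))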
One might want to object: the word "label" suggests that Arbib gets his predictive power and precision out of intentional description by mere arbitrary fiat. If one assigns the intentional label "the belief-that-p state" to a logical state of a computer, C, and then predicts from C's program what it will do in that state, one is predicting what it will do when it believes that p only in virtue of that assignment, obviously. Assignments of intentional labels, however, are not arbitrary: it can become apt so to label a state when one has designed a program of power and versatility. Similarly, one's right to call a subsystem in his system the memory, or the nose-shape-detector, or the jawline analyzer hinges on the success of the subsystem's design rather than any other feature of it. The inescapably idealizing or normative cast to intentional discourse about an artificial system can be made honest by excellence of design, and by nothing else.

This idealizing of intentional discourse gives play to my tactic of ontological neutrality, which Gunderson finds so dubious. I wish to maintain physicalism - a motive that Gunderson finds congenial - but think identity theory is to be shunned. Here is one reason why. Our imagined face-recognizers were presumably purely physical entities,
and we ascribed psychological predicates to them (albeit a very
restricted set of psychological predicates, as we shall see). If we then restrict ourselves for the moment to the "mental features" putatively referred to in these ascriptions, I think we should be able to see that identity theory with regard to them is simply without appeal. The usual seductions of identification are two, I think: ontological economy, or access to generalization (since this cloud is identical with a collection of water droplets, that cloud is apt to be as well). The latter motive has been all but abandoned by identity theorists in response to Putnam's objections (and others), and in this instance it is clearly unfounded; there is no reason to suppose that the physical state one identified with a particular belief in one system would have a physical twin
in the other system with the same intentional characterization.
So if we are to have identity, it will have to be something like Davidson's "anomalous monism".3 But what ontic house-cleaning would be accomplished by identifying each and every intentionally characterized "state" or "event" in a system with some particular physical state or event of its parts? In the first place there is no telling how many different intentional states to ascribe to the system; there will be indefinitely many candidates. Is the state of believing that 100 < 101 distinct from the state of believing that 100 < 102, and if so, should we then expect to find distinct physical states of the system to ally with each? For some ascriptions of belief there will be, as we have seen, an isolable state of the program well suited to the label, but for each group of belief-states thus anchored to saliencies in our system, our intuitive epistemic logic will tell us that anyone who believed p, q, r, . . . would have to believe s, t, u, v, . . . as well, and while the behavior of the system would harmonize well with the further ascription to it of belief in s, t, u, v, . . . (this being the sort of test that establishes a thing as an intentional system), we would find nothing in particular to point to in the system as the state of belief in s, or t or u or v. . . .
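(A toy illustration of the point, with all details invented: a system that computes comparisons on demand is rightly credited with indefinitely many arithmetical beliefs, though there is no separately stored state answering to any one of them.)

    # Illustrative sketch: indefinitely many belief ascriptions, one mechanism.
    # Nothing in the system is a stored record that "100 < 101" or "100 < 102";
    # the single comparison routine underwrites all such ascriptions at once.

    def believes_less_than(m, n):
        """Answer whether the system 'believes that m < n' by computing it."""
        return m < n

    # Each ascription is correct, yet none picks out a distinct, isolable state:
    assert believes_less_than(100, 101)
    assert believes_less_than(100, 102)
    assert all(believes_less_than(100, 100 + k) for k in range(1, 1000))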
This should not worry us, for the intentional story we tell about an entity is not a history of actual events, processes, states, objects, but a sort of abstraction.* The desire to identify each
and every part of it with some node or charge or region just because some parts can be so identified, is as misguided as trying to identify each line of longitude and latitude with a trail of molecules - changing,
of course, with every wave and eddy - just because we have seen a bronze plaque at Greenwich or a row of posts along the Equator.

* Cf. G. E. M. Anscombe, Intention (2nd ed., 1963), p. 80: "But if Aristotle's account [of the practical syllogism] were supposed to describe actual mental processes, it would in general be quite absurd. The interest of the account is that it describes an order which is there whenever actions are done with intentions." See also Quine, "On the Reasons for Indeterminacy of Translation", Journal of Philosophy, LXVII (March 26, 1970).

It is tempting to deny this, just because the intentional story we tell about each other is so apparently full of activity and objects: we are convicted of ignoring something in our memory, jumping to a conclusion, confusing two different ideas. Grammar can be misleading. In baseball, catching a fly ball is an exemplary physical event-type, tokens of which turn out on analysis to involve a fly ball (a physical object) which is caught (acted upon in a certain physical way). In crew, catching a crab is just as bruisingly physical an event-type, but there is no crab that is caught. Not only is it not the case that oarsmen catch real live (or dead) crabs with their oars; and not only is it not the case that for each token of catching a crab, a physically similar thing - each token's crab - is caught, it is not even the case that for each token there is a thing, its crab, however dissimilar from all other such crabs, that is caught. The parallel is not strong enough, however, for while there are no isolable crabs that are caught in crew races, there are isolable catchings-of-crabs, events that actually happen in the course of crew races, while in the case of many intentional ascriptions, there need be no such events at all. Suppose a programmer informs us that his face-recognizer "is designed to ignore blemishes" or "normally assumes that faces are symmetrical aside from hair styles". We should not suppose he is alluding to recurrent activities of blemish-ignoring, or assuming, that his machine engages in, but rather that he is alluding to aspects of his machine's design that determine its behavior along such lines as would be apt in one who ignored blemishes or assumed faces to be symmetrical. The pursuit of identities, in such instances, seems not only superfluous but positively harmful, since it presumes that a story that is, at least in large part, a calculator's fiction is in fact a history of actual events, which if they are not physical will have to be non-physical.

At this point Gunderson, and Thomas Nagel,4 can be expected to comment that these observations of mine may solve the mind-body problem for certain machines - a dubious achievement if there ever was one - but have left untouched the traditional mind-body problem. To see what they are getting at, consider Gunderson's useful distinction between "program-receptive and program-resistant features of mentality".5 Some relatively colorless mental events, such as those involved in recognition and theorem-proving, can be well-simulated by computer programs, while others, such as pains and sensations, seem utterly unapproachable by the programmer's artifices. In this instance the distinction would seem to yield the observation that so far only some program-receptive features of mentality have been spirited away