BAYESIANISM WITHOUT THE BLACK BOX*

MARK KAPLAN†

Department of Philosophy
The University of Wisconsin-Milwaukee
Crucial to bayesian contributions to the philosophy of science has been a characteristic psychology, according to which investigators harbor degree of confidence assignments that (insofar as the agents are rational) obey the axioms of the probability calculus. The rub is that, if the evidence of introspection is to be trusted, this fruitful psychology is false: actual investigators harbor no such assignments. The orthodox bayesian response has been to argue that the evidence of introspection is not to be trusted here; it is to investigators' dispositions - not to their felt convictions - that the psychology is meant to be (and succeeds in being) faithful. I argue that this response, in both its orthodox and convex-set bayesian forms, should be rejected - as should the regulative ideals that make the response seem so attractive. I offer a different variant of bayesianism, designed to give the evidence of introspection its due and thus realize (as I claim the other forms of bayesianism cannot) the prescriptive mission of the bayesian project.
It has been a little over fifty years since the bayesian perspective on probability and decision received its first sophisticated articulation in the work of Frank Ramsey and Bruno de Finetti. The influence of that perspective has continued to spread, notably in the philosophy of science. In recent years, in particular, there has been a steady stream of articles and books seeking to convince philosophers of science that the bayesian perspective not only sheds light on the relation between theory and practical deliberation but also provides a perspective within which many of the traditional problems that have confounded our understanding of scientific inquiry (such as the paradoxes of confirmation, the paradoxes of the lottery and preface) can be overcome.1 Central to the bayesian perspective, and a critical actor in bayesian contributions to the philosophy of science, is the view that a scientific investigator is to be represented - not as believing or disbelieving sci-

*Received October 1986; revised June 1987.

†I would like to thank Daniel Hausman, Paul Horwich, Richard Jeffrey, Patrick Maher, Robert Schwartz, Julius Sensat, James Van Aken, and, especially, Joan Weiner for criticism of earlier drafts of this paper. I have also benefited from conversations with Isaac Levi, David Lewis, and Edward McClennen.

1Counting only books, entrants into the stream of bayesian-inspired treatments of scientific inquiry in the last twenty years include Eells (1982); Good (1965); Hesse (1974); Horwich (1982); Jeffrey (1965 and 1983a); Levi (1967 and 1980); Rosenkrantz (1977); and Skyrms (1984).

Philosophy of Science, 56 (1989) pp. 48-69.
Copyright © 1989 by the Philosophy of Science Association
entific hypotheses and theories-but rather as harboring a degree of confidence assignment to hypotheses and theories. This function assigns to each hypothesis and theory a real number in the closed interval [O, I] and (this is a central bayesian result) insofar as the investigator is rational, the function will obey the axioms of (at least) the finitely additive probability calculus. But it is this very psychology that gives many philosophers pause. They are simply not convinced that scientific investigators do harbor degree of confidence assignments. Indeed, when they reflect on their own doxastic attitudes, they find it difficult to believe that they themselves harbor degree of confidence assignments. The worry is that, whatever the bayesian perspective on scientific inquiry has to offer, it is available only at the price of telling a fictional story about the doxastic attitudes of rational investigators. The relief offered may be tempting but the price is too high. The orthodox bayesian response has been to argue that the foregoing worry arises out of a misunderstanding of the bayesian proposal. The bayesian concedes that an investigator will not typically find, upon introspection, that her confidence in the truth of hypotheses comes neatly indexed by degree. But, he maintains, it is not degrees of confidence as presented immediately to introspection-not degrees of felt convictionthat he means to be attributing to investigators. Confidence insofar as it is manifested in an investigator's behavior, not confidence as felt on introspection, is what concerns the bayesian. And it is only the former that, he thinks, comes in degrees. In attributing a degree of confidence assignment to an investigator X, the bayesian means to be attributing to X a set of dispositions to behave. The fact that X's degree of confidence assignment does not reveal itself to casual inspection or introspection is thus, according to the orthodox bayesian, neither alarming nor surprising. Our dispositions often fail to reveal themselves quite so easily. It surely can be true, without its being apparent to others via casual inspection or to X via introspection, that (for example) X has the disposition to thrive under a certain sort of pressure (for example, suppose that X has never been placed under that sort of pressure). The orthodox bayesian holds that there is no reason why it cannot likewise be true, without its being apparent to others via casual inspection or to X via introspection, that X has that set of dispositions to behave that constitutes harboring a certain degree of confidence assignment. The orthodox bayesian does not mean, of course, to say that X's degree of confidence assignment is doomed forever to remain beyond our (her) ken. He means only to say that we (including X) must be content to learn about the nature of her degree of confidence assignment the way we learn about any other of X's dispositions: by subjecting X to a test designed to
reveal the disposition. To see the sort of test the orthodox bayesian has in mind,2 suppose that we want to find out what degree of confidence X assigns to h. First, we find something that X regards as a prize and whose value to her is unaffected by the truth-value of h. Suppose that $100 is such a prize. Let an h-ticket be a ticket that entitles the bearer to receive $100 if h is true and $0 if h is false. And let a chance-ticket be a ticket that entitles the bearer to a chance equal to n of receiving $100 and a chance equal to (1 - n) of receiving $0 - where the value of n is written on the face of the ticket. We can now ask X to tell us what number a chance-ticket would have to bear on its face for her to be indifferent between having that chance-ticket and having an h-ticket - that is, for her to be disposed to treat the two tickets as interchangeable. The number she nominates, claims the orthodox bayesian, is the number that represents her degree of confidence assignment to h.

What the orthodox bayesian response says, in effect, is that inside each rational investigator is a black box containing her degree of confidence assignment. The investigator herself may be to some degree, or even entirely, unsure as to what exactly the shape of the assignment is. So may we. But we (she) can reveal the contents of her black box in at least a piecemeal fashion by using the sort of test just described.3

This response does not, in my opinion, carry conviction. It seems to me that neither the test proposed, nor any other test, provides the slightest evidence that any orthodox bayesian black box exists. It is not the obstacles to operationalizing the orthodox bayesian's test that are the trouble. It is certainly true that there are factors that can distort the results of such a test (for example, if X suspects that we are untrustworthy and that we can more easily fix the results of a chance-based lottery than we can fix the truth-value of h, X may nominate a number greater than the one she would nominate were she to treat the offer as bona fide). And some of these factors may be difficult to detect and/or control. But this is just to say that the test is fallible. On any reasonable view about how we decide what dispositions to ascribe to people, we cannot help but employ fallible behavioral tests. There is no reason why the bayesian's behavioral test to reveal X's degree of confidence assignment to h should be expected to exhibit a degree of infallibility that we do not demand of the behavioral tests we employ to reveal X's other dispositions.

2This particular version of an orthodox bayesian test (there is a host of others) is due to Howard Raiffa. See Raiffa (1968, chaps. 4 and 5).

3For a recent response along these lines, see Eells (1982, p. 41ff.). The black-box metaphor is due to I. J. Good (Good 1962).
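By way of illustration only, the orthodox bayesian's test can be rendered schematically as follows. The sketch is not part of the orthodox bayesian's apparatus, and the querying function compare is a hypothetical stand-in for whatever means we use to put the choice between the two tickets to X.

```python
# Illustrative sketch (not part of the orthodox bayesian's apparatus) of the
# Raiffa-style test described above. compare(n) is a hypothetical stand-in
# for putting the choice to X: it returns "h" if X prefers the h-ticket to a
# chance-ticket bearing n, "chance" if she prefers the chance-ticket, and
# "indifferent" if she treats the two tickets as interchangeable.

def elicit_degree_of_confidence(compare, tolerance=0.01):
    lo, hi = 0.0, 1.0
    while hi - lo > tolerance:
        n = (lo + hi) / 2
        verdict = compare(n)
        if verdict == "indifferent":
            return n
        elif verdict == "h":      # h-ticket preferred: X's confidence exceeds n
            lo = n
        else:                     # chance-ticket preferred: X's confidence falls below n
            hi = n
    return (lo + hi) / 2
```

Nothing in such a procedure, of course, settles whether the number it extracts reveals a disposition X already had or is manufactured only in the course of the interrogation - which is just the question pressed below.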
Nor is it the orthodox bayesian's pragmatic bent - his conviction that an investigator's doxastic attitudes are mirrored in (because they are, in part, responsible for) her preferences and decisions - that is the source of the trouble. On the contrary. It seems to me the singular virtue of the bayesian program that it recognizes a fundamental relation between rational conviction and rational preference. The sources of trouble lie elsewhere.

Suppose we do subject X to the test described above and suppose we have good reason to think that her response is not going to be affected by any distorting factors. And suppose that, submitting to our requirement that she nominate a number, X nominates 0.75 as the number a chance-ticket would have to bear on its face for her to be indifferent between that ticket and an h-ticket. The orthodox bayesian contends that we should infer that X has thereby revealed her indifference between a ticket to the h-lottery and a chance-ticket with 0.75 on its face; we should infer that she has thus revealed that she assigns to h a degree of confidence equal to 0.75. But even if we are willing to infer that X has displayed her indifference between the two tickets, why should we infer that this is an indifference that the test revealed? It is, after all, possible that prior to her subjection to the test there was no number that had to be written on the face of a chance-ticket in order for her to be indifferent between that ticket and an h-ticket. It is possible that it was only while deliberating about what number to nominate that she came to regard a chance-ticket with 0.75 on its face and an h-ticket as equally valuable (see Jeffrey 1984, pp. 29-30, where much the same point is made). To rule out that possibility, we would need some reason to think that there is something about X that, antecedent to the test, committed X to picking a ticket with 0.75 on its face - for example, that X had before, under very similar circumstances, engaged in deliberations about what number to nominate and had settled on 0.75. But for most hypotheses and investigators it is safe to say that we will have no reason to think any such thing. Thus, even if we grant that, with tests like the one above, we (X) can get X to display in a piecemeal way a fragment of a degree of confidence assignment, the orthodox bayesian's claim that this fragment existed prior to the testing - that there is something about X, something in her black box that, prior even to her being aware that she was going to be subjected to the tests, committed her to nominate the number - remains entirely gratuitous.

But even this is to grant the orthodox bayesian too much. For it is not only compatible with (and, indeed, in no way disconfirmed by) the test result that, prior to the test, there was no particular number that a chance-ticket had to have on its face in order for X to be indifferent between that ticket and an h-ticket; it is also compatible with the test result that there is still no such number. It is possible that X is undecided as to what number to choose. It may be that it is only because we required her to
nominate a number - only because we did not offer her any mode of response through which she could express her indecision - that she nominated the particular number she did. But if X is undecided - if she would have been no less happy having nominated a number somewhat greater or somewhat less than the one she actually nominated - then it is false that a chance-ticket has to have 0.75 on its face for her to be disposed to treat it as interchangeable with an h-ticket. With respect to the number a chance-ticket must bear on its face in order for her to view it as interchangeable with an h-ticket, X has no disposition - that is, with respect to h, she has no degree of confidence assignment.

Of course, were the possibility of X's being undecided a mere possibility, there would be no reason for the orthodox bayesian to pay it much notice. But it is not a mere possibility. It has long been known that subjects of bayesian attempts to elicit preferences often display discomfort at having to affect the sort of precision in judgment the testers demand.4 These signs of discomfort seem to have a message: the tester is looking for the subject to confess to a disposition - to a degree of confidence assignment - that she simply does not have. And if this is indeed their message, then the worry which the bayesian's black box psychology is supposed to allay turns out to be a real worry after all. For this worry is nothing more than an expression of the very sort of discomfort just mentioned - discomfort at the prospect of having to admit to harboring a degree of confidence assignment to a hypothesis when introspection offers no reason to suppose that one harbors any such thing. The truth about our doxastic attitudes is then just as it looks to be on the surface: we don't have degree of confidence assignments.

Some orthodox bayesians, however (see, for example, Good 1962), read a different message in these signs of discomfort. They suggest that an investigator's seeming reluctance to assign a precise degree of confidence to a hypothesis - far from being a sign that she harbors no such assignment - is nothing more than a sign that, suffering (as we all do) from less than perfect insight, she harbors a second-order uncertainty (perhaps itself reflected in a second-order degree of confidence assignment), as to the value her first-order assignment assigns to the hypothesis.

This interpretation of subjects' discomfort, however, looks hard to sustain. First, the interpretation simply does not fit the phenomenological facts in many instances. When we suffer indecision when asked to assign a probability to the hypothesis that there is a bus-drivers' strike in Verona today, it certainly seems that it is our ignorance about the labor situation

4See, for example, Luce and Raiffa (1957, pp. 305-306). I have heard it told that, when asked why he did not provide his subjects with some form of response through which they could express their indecision, one researcher replied, "Look, I have enough trouble as it is getting my subjects to choose probabilities!"
in Verona, not ignorance about the contents of our own minds or of our dispositions, that is to blame. Second, the credibility of the interpretation under discussion depends entirely upon the credibility of the claim that prior to the test an indecisive subject has a first-order degree of confidence assignment about whose contents she can be unsure. And, as I have already argued, this claim is entirely gratuitous.

But then the orthodox bayesian may complain that his interpretation looks hard to sustain only because, yet again, it has been misunderstood. He is not trying to offer a theory that is true to the phenomenological life of an agent, the orthodox bayesian may argue. Rather he is trying to offer a theory that will at the very least enable us to explain, for everything we want to call an instance of rational behavior on the part of a scientific investigator, why that investigator behaved as she did. The theory postulates that, in each such case, the behavior manifests the dispositions of an investigator who harbors a rational degree of confidence assignment. Given the pretensions of his theory, he may maintain, it is inappropriate to judge the legitimacy of his attributing a first-order degree of confidence assignment to an investigator and/or a second-order degree of confidence assignment to that investigator as I have judged it - according to whether there is evidence of that investigator's having consciously adopted a first-order assignment and/or evidence of her having consciously wondered about the shape of that assignment. The legitimacy of these attributions is rather to be judged by assessing the adequacy of the explanations of her behavior in which these attributions play a role.5

The trouble is that the orthodox bayesian's proposal to divorce his psychology from the evidence of conscious doings undermines all prospect of receiving a favorable assessment. He is suggesting that, despite the way things look upon the surface - despite the fact that many scientists fail to couch their arguments in probabilistic language, would disavow having consciously engaged in probabilistic deliberations, would disavow a degree of confidence assignment - these scientists' behavior is nonetheless best explained as the behavior of investigators who have degree of confidence assignments.

The explanation proposed would not be particularly worrisome if it were merely suggesting a novel characterization of the methodology of these scientists. There is no reason to suppose that an investigator has any privileged position in assessing the adequacy of a characterization of her own methodology. Others may be as well placed, and indeed better placed, to tell whether she is, for example, guilty of having begged a

5Mellor seems to be advocating such a view on pp. 156-158 of Mellor (1980). Mellor and Eells actually advocate a stronger view, namely that all human behavior is rational bayesian behavior.
question. What is worrisome is that the orthodox bayesian is suggesting that the best explanation of rational scientific behavior can be one on which the scientist fundamentally misperceives what doxastic attitudes she is harboring and what reasonings she is engaging in - one in which the best practitioners of that deliberate self-conscious activity we know as science are actually sleepwalking their way through what is, for them in reality, a forced march through a series of bayesian calculations.

Such an unappealing and implausible story of the surreptitious march of reason through dark regions of the scientific mind might nonetheless command our assent were the evidence in its favor sufficiently strong. But, of course, quite the contrary is the case. The whole point of the orthodox bayesian's gambit here is to use the alleged theoretical attraction of orthodox bayesian psychology to legitimize his attempt to insulate that psychology from the discomforting evidence of conscious doings.6 The attraction gone, the psychology has nothing left to speak in its favor.

And, given the way many orthodox bayesians mean to earn their keep, it is just as well. Orthodox bayesianism is at least as heavily invested in the enterprise of prescription as it is in the enterprise of description. Orthodox bayesians have for years taught bayesian methods of decision-making in departments of statistics and schools of business all over this country. Their aim has been a laudable one: to improve the decision-making, evidence-gathering and evidence-evaluating procedures of decision-makers in business, science and other endeavors. But once the satisfaction of orthodox bayesian canons becomes the sort of thing scientific investigators routinely accomplish without consciously attending to the task of trying to satisfy those canons and without the benefit of bayesian instruction, it becomes hard to see the point of teaching bayesian methods of decision-making to anyone. Why bother to teach people to do consciously what they already do by second nature?

In short, divorced from the evidence of conscious doings, orthodox

6Note that, once set free from the constraints imposed by the evidence of her conscious doings, there is little else to constrain our attribution of doxastic attitudes and valuations to an investigator. So long as we do not have to show evidence of her having consciously committed herself to doxastic attitudes and valuations in order to attribute to her those valuations and attitudes, then no matter what she has done (indeed, no matter whether we want to count her behavior as rational or not), there will always be some story available to us on which that behavior satisfies orthodox bayesian canons. Orthodox bayesians have not been loath to take advantage of this absence of constraint. See, for example, the sizeable literature on the Allais and Ellsberg paradoxes in which many orthodox bayesians attempt, through the imaginative attribution of doxastic attitudes and valuations, to vindicate the rationality of the behavior displayed by subjects who appear resolutely to violate orthodox bayesian canons. Also see the literature that tries to explain away the copious behavioral evidence (see Kahneman, Slovic and Tversky 1982) to the effect that subjects typically and systematically violate orthodox bayesian constraints on rational judgment.
bayesian psychology degenerates into a bizarre account of the workings of the scientific mind - an account that would undermine the justification for the (laudable) curricular innovations that have been most responsible for the growth in esteem of bayesian methods. And united with the evidence of conscious doings, orthodox bayesian psychology looks to be gratuitous at best. The verdict is clear: the naive objection with which we began has survived its sophisticated rebuttal. The price the orthodox bayesian demands for the illumination he has to offer us in the philosophy of science is indeed too high. He is asking us to grant that every investigator is imbued with a degree of confidence assignment. It is pure fiction.

But even if the orthodox bayesian were to admit that he is not entitled to the wholesale attribution of degree of confidence assignments to scientific investigators, the integrity of his representation of the scientific investigator might still be sustainable. He could argue that, even if descriptively inaccurate, his model of the scientific investigator provides an attractive regulative ideal - an attractive picture of how a rational scientific investigator ought ideally to comport herself. And, he could continue, it is with the articulation of just such a regulative ideal - the articulation of a standard by which the cogency of a piece of scientific argument is to be judged - that the philosophy of science is properly concerned. Thus, he could conclude, the worries just rehearsed in no way impeach the orthodox bayesian contribution to the philosophy of science.

Unfortunately for the orthodox bayesian, there are other worries that do. Consider the following two cases:

Case I: You have before you an opaque urn containing 100 balls of the same size and weight, 50 of which are black and 50 of which are white. The urn has been thoroughly shaken and a ball has been drawn and not yet examined.

Case II: The same as Case I except that you know nothing about the proportion of black to white balls in the urn.

According to the orthodox bayesian's regulative ideal, you ideally should, in both cases, assign a precise degree of confidence to the hypothesis (call it b) that the ball that has been drawn is black. In the first case it is easy to see why you should comply. Since the objective chance of the ball's being black is 0.5, you should assign b a degree of confidence equal to 0.5. You have a good reason to pick 0.5. But what should you assign to b in the second case? An orthodox bayesian would suggest that, in the absence of any evidence favoring one member of the pair, b and -b, over the other, you should assign each the same degree of confidence, that is, a degree of confidence equal to 0.5. Thus, to the orthodox bayes-
ian, the two cases warrant identical assignments.7 What a peculiar verdict. It is indeed true that, as in Case I, the evidence in Case II favors neither of the two hypotheses, b and -b, over the other. But that is because, in Case II, you have no evidence whatsoever about the proportion of black to white balls in the urn. If you have no reason in Case II to assign one of the two hypotheses a higher value than the other, it is only because you have no reason to assign either of the hypotheses any particular value at all. But then it would seem that the appropriate response to Case II would be to acknowledge as much and to refrain from assigning a value to either hypothesis. The orthodox bayesian's regulative ideal - in particular, the view that an investigator ought ideally to adopt a degree of confidence assignment - runs roughshod over the gross difference in the quality of the evidence present in Cases I and II. In requiring you to assign a precise degree of confidence to b in Case II, it cannot help but conflate the absence (in Case II) of any reason to think either one of the pair of hypotheses, b and -b, is more likely true than the other with the presence (in Case I) of an excellent reason for thinking they are equally likely.8

7For example, Harold Jeffreys writes: If there is no reason to believe one hypothesis rather than another, the probabilities are equal . . . to say that the probabilities are equal is a precise way of saying that we have no good grounds for choosing between the alternatives. . . . The rule that we should take them equal is not a statement of any belief about the actual composition of the world, nor is it an inference from previous experience; it is merely the formal way of expressing ignorance. (Jeffreys 1961, pp. 33-34)

8An orthodox bayesian might respond that he is guilty of no such conflation. I have merely created an impression of conflation, he might argue, by focusing too narrowly on your degree of confidence assignment to b. Take a broader perspective, he might continue, and orthodox bayesianism enables you to see in the rest of your degree of confidence assignment the appropriate reflection of the difference in your epistemic situations in the two cases. For example, consider your degree of confidence assignment to b conditional on your subsequently drawing ten straight black balls (your assignment to (b/t)). Given your knowledge of the contents of the urn in Case I, the value you assign to (b/t) will presumably be equal to the value you assign to b. You know that there are 50 black balls in the urn and it is this (and what you know about the method of drawing balls) that gives you reason to assign b a value of 0.5. Drawing ten black balls in a row gives you no reason to revise that assignment. But in Case II, matters are different. Initially ignorant of the contents of the urn, you should treat the evidence of subsequent trials as important new evidence concerning its contents and, hence, concerning the likelihood of b: your assignment to (b/t) should be greater than your assignment to b (see Jeffrey 1983a, pp. 195-197). Notice, however, that this response only directs attention away from - but does not answer - the charge I have made against orthodox bayesianism.
Even if it be granted that the orthodox bayesian has the resources to allow the rest of your degree of confidence assignment to respond to the difference in the evidentiary circumstances - even if it be granted that, if you assign b a value equal to 0.5 in both cases, then you have good reason to assign (b/t) a greater value than b in Case II and the same value in Case I - he still lacks the resources to allow the doxastic attitudes you adopt toward b to respond in kind: you still have no good reason to assign b a value equal to 0.5 in the two cases - because
The moral would seem to be that if we want to give evidence its due - if we want to maintain that a rational investigator ought ideally to adopt only those doxastic attitudes she has good reason to adopt - we had better conclude that, insofar as he holds that a rational investigator ought ideally to adopt a degree of confidence assignment, the orthodox bayesian has gotten hold of the wrong regulative ideal.9

But if it is neither the case that scientific investigators actually adopt, nor the case that they ideally should adopt, degree of confidence assignments, how can a bayesian representation of the doxastic attitudes of scientific investigators hope to shed any light at all on rational inquiry? Is there any insight to be rescued from the wreck of bayesian orthodoxy?

One proposal is that an investigator X be represented as harboring, not one unique degree of confidence assignment, but rather a convex set of degree of confidence assignments. Only when all the assignments in the set agree on what value is to be assigned to h will X be said to assign h a unique degree of confidence. In all other cases there will be an interval such that for each real-valued number in that interval there will be a degree of confidence assignment in the set of assignments characterizing X's doxastic attitudes that gives h that number. That is to say, in all other cases X can be said to be of more than one mind when it comes to assessing the likelihood of h.10

The proposal has undeniable attractions. On the descriptive side, it allows us to admit as true what the orthodox bayesian would deny: namely, that investigators often, and perhaps much more often than not, do not harbor precise degree of confidence assignments to hypotheses. And, on the normative side, it allows us to begin to fashion a regulative ideal that will enable a rational investigator to give evidence its due.
you still have no good reason in Case II to assign b any particular value at all. (My argument is, as a consequence, not directed simply against the propriety of Jeffreys' advice (see footnote 7) concerning Case II - that you should assign b a degree of confidence equal to 0.5. Suppose you assign b some other value, n, in Case II. Modify Case I so that the urn contains 100n black balls and 100 - 100n white balls. The same worry arises as before. This is because my argument is directed against the propriety of assigning b any particular degree of confidence.)

9This critique of the orthodox bayesian regulative ideal is not new. See, for example, Levi (1974 and 1980, pp. 85-91). Indeed this line of criticism has formed a central part of the attack by non-bayesian statisticians on the propriety of bayesians' ubiquitous reliance on prior probabilities in statistical inference.

10Such a proposal has been put forward by Isaac Levi. See Levi (1974 and 1980, chaps. 4 and 9). For my purposes, it is convenient to lump convex-set bayesianism together with the view that investigators are to be represented as harboring interval-valued confidence assignments. The latter view has been advanced in Koopman (1940); Good (1962); Smith (1961); Kyburg (1961); Williams (1976); Wolfenson and Fine (1982). The extent to which the two views diverge is discussed in Levi (1980, pp. 197-204).
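The contrast described in footnote 8 can be given a worked illustration. The modeling choices in the sketch below are not the text's: the ten subsequent draws are treated as draws with replacement from the same urn, and in Case II ignorance of the urn's composition is represented by a uniform prior over the number of black balls.

```python
# Worked illustration (under the stated assumptions) of footnote 8's point:
# how confidence in b ("the drawn ball is black") responds to t, the evidence
# of ten subsequent black draws, in Case I and in Case II.

def case_one(prior_black=0.5):
    # Case I: the composition is known (50 black, 50 white), so the ten draws
    # carry no news about it; the assignment to (b/t) equals the assignment to b.
    return prior_black

def case_two(n_balls=100, draws=10):
    # Case II: composition unknown; assume a uniform prior over the number of
    # black balls k = 0, ..., n_balls and subsequent draws with replacement.
    numerator = 0.0   # sum over k of P(b|k) * P(t|k) * P(k)
    evidence = 0.0    # sum over k of P(t|k) * P(k)
    for k in range(n_balls + 1):
        p_black = k / n_balls
        likelihood = p_black ** draws
        evidence += likelihood / (n_balls + 1)
        numerator += p_black * likelihood / (n_balls + 1)
    return numerator / evidence   # the assignment to (b/t)

print(case_one())   # 0.5
print(case_two())   # roughly 0.92
```

On these assumptions the assignment to (b/t) stays at 0.5 in Case I and rises well above it in Case II - which is footnote 8's point, and which, as the footnote goes on to say, does nothing to supply a good reason for the initial assignment of 0.5 to b in Case II.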
We can say that while, in Case I, you should assign b a degree of confidence equal to 0.5, you should not do so in Case II. Rather, for every value in the interval [0, 1], there should be in your set of degree of confidence assignments an assignment that gives b that value. Should your evidence improve, should you learn something about the proportion of black to white balls in the urn (for example, that there are at least 10, and no more than 90, black balls in the urn), you should narrow the interval accordingly (adopt the interval [.1, .9]). Finally, the proposal still manages to keep faith with its orthodox predecessor: it requires that each degree of confidence assignment in your set of assignments be an orthodox bayesian degree of confidence assignment (that each of your many minds be an orthodox bayesian mind) and that, as a regulative ideal, each of these assignments obey the axioms of the probability calculus.

But a familiar sort of trouble looms. In representing investigators as harboring convex sets of degree of confidence assignments, the proposal is committed to saying that, for each investigator X and each hypothesis h, there is an interval with a unique upper bound and a unique lower bound that represents the range of values assigned to h by X's set of assignments. And I suspect that few who find it difficult to endorse the orthodox bayesian's psychology will find this doctrine any more credible. For if introspection is any guide, then, in all but the tidiest statistical cases, there simply is no precise range of values that accurately represents an investigator's indeterminate doxastic attitude toward a hypothesis: an investigator would be as hard put to come up with unique upper and lower bounds for the range the current proposal ascribes to her as she would be to come up with the precise degree of confidence assignment ascribed to her by the orthodox bayesian.

There is always, of course, the option of positing a new black box - this one containing an investigator's convex set of degree of confidence assignments. The convex-set bayesian hasn't as convenient a behavioral test at his disposal as his orthodox cousin, but he can still hold that, while not obvious to casual inspection, the upper and lower bounds of the range of values an investigator's set of degree of confidence assignments gives to a hypothesis h can be revealed by some sort of interrogation.11
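The representation at issue can be rendered schematically. The sketch below is illustrative only, and assumes that, for a single hypothesis, the convex set of assignments is summarized by a lower and an upper probability that narrow as evidence improves.

```python
# Schematic rendering (illustrative only) of the interval summary of a convex
# set of degree of confidence assignments for a single hypothesis.

class ConfidenceInterval:
    def __init__(self, lower=0.0, upper=1.0):
        assert 0.0 <= lower <= upper <= 1.0
        self.lower, self.upper = lower, upper

    def narrow(self, lower, upper):
        # Tighten the bounds when evidence rules out some of the assignments.
        self.lower = max(self.lower, lower)
        self.upper = min(self.upper, upper)
        assert self.lower <= self.upper, "evidence conflicts with current bounds"
        return self

# Case II: total ignorance about the urn's composition.
b = ConfidenceInterval(0.0, 1.0)
# Learning that there are at least 10, and no more than 90, black balls:
b.narrow(0.10, 0.90)
print(b.lower, b.upper)   # 0.1 0.9
```

Whether an investigator's attitude really determines any such pair of endpoints - and whether the evidence ever gives her reason to nominate one pair rather than another - is just what the argument that follows denies.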
The trouble is that the only sort of interrogation that will do the job is an interrogation that will not stop until it has succeeded in eliciting an upper and lower bound. And while it is surely plausible to suppose that, given no other option, an investigator will indeed confess to unique upper and lower bounds, it is equally plausible to suppose that, given no other option, she will confess to a unique real-valued assignment. We have already seen that there is no reason to suppose (and indeed good reason to deny) that the latter sort of confession will generally constitute a revelation of a prior doxastic commitment, or even a display of a current doxastic commitment, on the part of the investigator. There is no more reason to suppose that the former sort of confession will be any more informative.

Moreover, the convex-set bayesian's regulative ideal is, like his psychology, but a marginal improvement over its orthodox bayesian predecessor. Consider the following case.

Case III: You have consulted three doctors about whether a medical diagnosis g is correct. Two doctors say that g is very likely true. The third holds that it is probably false.

According to the regulative ideal championed by the convex-set bayesian, you are free to remain uncommitted as to what degree of confidence to assign g. But, ideally, you are obligated to nominate an interval to specify the exact range of values that the degree of confidence assignments in your convex set assign to g. Unfortunately, unlike Case II, Case III offers you no obvious candidates for either the upper or lower bound on that interval. Given the evidence available, you may well feel that 0 is to be ruled out as too low and 1 as too high. You may even feel that 0.1 is too low and 0.9 is too high. But what reason could you possibly have to nominate one particular number, say 0.77, instead of a number arbitrarily close to it, say 0.78, as the upper bound on the interval? And if you have no reason to pick one particular number rather than the other it would seem that you are best off refraining from picking any particular number.

Just as the orthodox bayesian's regulative ideal ignored an important difference between the evidentiary situations in Cases I and II, the convex-set bayesian's regulative ideal ignores an important difference between the evidentiary situations in Cases II and III. In Case II, your information about the composition of the urn and the system under which the ball was drawn gives you good reason to pick the interval [0, 1] (just as learning that there are at least 10, and no more than 90, black balls

11See, for example, Levi (1980, pp. 210-214). For Levi, it is an investigator's "commitments" that constitute the contents of her black box. By an agent's "commitments", Levi means (I extrapolate from his discussion of an investigator's commitment to a standard for serious possibility) the set of doxastic and valuational attitudes that the investigator would harbor "were [s]he ideally situated (i.e., endowed with perfect computational facility, memory, and emotional or social health) and were [s]he also rational" (p. 10). He continues, "I do not suppose, however, that agents (persons or institutions) are ideally situated and rational. I do take them to be real agents with commitments of various kinds . . . ; and I urge them to be rational in the sense that they live up to their commitments insofar as they are able" (pp. 10-11). What makes Levi's a black box psychology is, of course, the fact that he is prepared to attribute to investigators commitments of which they are unaware. Levi is indeed at pains to make it quite clear that he has no complaint with the orthodox bayesian's wholesale ascription of black boxes to investigators - only with the orthodox bayesian's account of what reason requires such a black box to contain (p. 187).
in the urn, would give you good reason to pick the interval [.1, .9]). In Case III, your information gives you no grounds on which to nominate a determinate interval. The moral is familiar: if we want to give evidence its due - if we want to maintain that a rational investigator ought ideally to adopt only those doxastic attitudes (nominate only those intervals) that she has good reason to adopt (good reason to nominate) - we must conclude that the convex-set bayesian, like his orthodox relative, is advocating a mistaken regulative ideal.

Thus the two forms of bayesianism so far examined suffer from rather similar defects. Both subscribe to psychologies that seem implausible on their face and, in their black-box reformulations, seem to pay more allegiance to wishful thinking than to the evidence. And both forms of bayesianism offer regulative ideals that would pass off false precision as a virtue. Is it really true that, in the end, the bayesian contribution to the philosophy of science comes to nothing more than a sustained exercise in fatuous psychology and false precision?

I want now to argue that it is not. I want to offer a sketch of a proposal that provides both a credible psychology and an attractive regulative ideal - a regulative ideal that keeps faith with the orthodox bayesian's probabilistic constraint on rational degree of confidence assignments without succumbing to the siren song of false precision. I call the proposal "bayesianism without the black box".

The psychology of bayesianism without the black box shrinks from the ambition (and resulting excesses) of its predecessors. The undeniable fact is that actual investigators are not prepared to confess to the kind of doxastic attitudes necessary for these investigators to count as deliberate, conscious bayesians. Rather than seek somehow to characterize investigators as - unbeknownst to themselves (and, in the case of the consciously antibayesian, despite themselves) - bayesians nonetheless, bayesianism without the black box acknowledges the facts as they appear on the surface to be the facts simpliciter. It sees as its challenge, not the artful redescription of the psychology of consciously non-bayesian agents, but rather the elaboration and defense of a regulative ideal that will tell such agents how to become (and why they should want to become) conscious, deliberate bayesians.

The psychology of bayesianism without the black box must therefore, of necessity, be a modest one. It is in the nature of its prescriptive mission that bayesianism without the black box must take investigators as they come and try to convince these investigators to embrace bayesianism. Thus the present proposal assumes only that investigators harbor at least some confidence-rankings - that is, for each investigator there are at least some hypotheses that she is prepared to rank according to how confident she is in their truth. True to the bayesian creed, a confidence-ranking is
understood to mirror a preference-ranking: for an investigator to rank g above h is (among other things) for her to prefer to have something she values ride on the truth of g rather than have the same item of value ride on the truth of h.12 But, true to its own prescriptive mission and unlike the bayesianisms rehearsed earlier, bayesianism without the black box has no interest in attributing to investigators preferences or doxastic attitudes of which the investigators are unaware. Indeed there is no room in this-or any-prescriptive bayesianism for positing preferences and degrees of confidence of which agents are unaware. An investigator certainly may have dispositions to behave (even dispositions to choose) of which she is unaware. But such dispositions, lying as they do beyond the investigator's ken and (hence) control, are of little interest to a theory that means to tell the investigator how to deliberate. All such a theory can ask of the investigator is that she impose coherence upon what falls under her purview: her conscious commitments to choose. Such a theory may, of course, demand that the investigator adopt commitments in cases in which she has hitherto not done so. It may offer her instruction on how to check that the commitments sit well with her-that they are ones on which she will be disposed to act. What it cannot do is demand of an investigator that she succeed in consciously adopting commitments that will sit well with her. It cannot require her, on pain of irrationality, to impose order upon dispositions whose existence may well have eluded her best efforts at discovery. Thus, once a bayesian recognizes his project as a prescriptive one-once he recognizes that, having failed to convince us that we are all really unwitting bayesians, he must convince us how and why we should become bayesians-he has little choice but to reserve the epithet "prefer''Likewise an investigator ranks h evenly with g just in case she is indifferent between having something she values ride on the truth of h and having the same item of value ride on the truth of g. She fails to rank h with respect to g just in case she fails to rank in preferability the prospect of having something she values ride on the truth of h and having the same item ride on the truth of g . Note that the distinction between X's being indifferent between two options and being undecided about how to rank the two options is not just an intuitive distinction. It is a pragmatic distinction as well. According to bayesianism without the black box (see the regulative ideal described below), indifference will be a transitive relation for a rational investigator and the "undecided-how-to-rank" relation will not. That is to say, i f X behaves as she ideally should and she is indifferent between option A and option B and between option B and option C , she will be indifferent between A and C. But suppose that, with good reason, X ranks h as more likely than g and X has absolutely no evidence bearing on the truth value off. X should then remain undecided as to how to rank f and h and undecided as to how to rank f and g. And, since confidence rankings mirror preferences, that is to say, if she behaves as she ideally should, X will remain undecided between ( A ) having something she values ride on the truth of h and (B) having that item ride on the truth o f f ; X will remain undecided between B and (C) having the same item ride on the truth of g; yet X will remain decided in preferring A to C . 
I am indebted to Patrick Maher for pressing me to say something on this matter.
ence", as it appears in his theory, for conscious commitments to choose. So, for bayesianism without the black box, preference-rankings and confidence-rankings of which an investigator is unaware are preference-rankings and confidence-rankings that the investigator does not have.

By way of further keeping to its resolve to take investigators as they come, the present proposal does not assume that any investigator is prepared to rank all hypotheses with one another - that any investigator harbors one overall confidence-ranking of all hypotheses. An investigator may, for example, rank both f and g above h yet fail to rank f and g relative to one another. This may be because she simply hasn't given the matter any thought - she has never considered the question of how to rank f and g - or because, having given the matter thought, she still remains undecided. It is, of course, compatible with the present proposal (and, indeed, advisable given the appropriate evidence) that an investigator nominate a determinate degree of confidence or a determinate interval of values with respect to some hypotheses.

What then of the regulative ideal? From the perspective of bayesianism without the black box, the regulative ideal put forward by the orthodox bayesian imposes obligations upon a rational investigator only when she confronts a hypothesis with respect to which she has evidence of the highest quality - only when, for some rational number in [0, 1], her evidence gives her good reason to assign that particular number as her degree of confidence in the hypothesis. Bayesianism without the black box recognizes that an investigator most often confronts hypotheses with respect to which she hasn't evidence of that quality - and indeed that, almost as often, she confronts hypotheses with respect to which she hasn't even the quality of evidence that would warrant her nominating the precise interval that the convex-set bayesian's regulative ideal would require. Yet the requirements of the orthodox bayesian ideal are not a matter of indifference to bayesianism without the black box.

Where C is a set of confidence-rankings, let us say that M is an orthodox bayesian model of C just in case M is a degree of confidence assignment that both satisfies the axioms of the probability calculus and preserves the set of rankings dictated by C. The regulative ideal put forth by bayesianism without the black box will be satisfied by an investigator's set of confidence-rankings C only if:

(i) C has an orthodox bayesian model;

(ii) for any pair of hypotheses, h and g, if every orthodox bayesian model of C assigns to h the same degree of confidence it assigns to g, then C ranks h evenly with g; and

(iii) for any pair of hypotheses, h and g, if every orthodox bayesian model of C assigns to h at least as great a degree of confidence
as it assigns to g and some model assigns h a greater degree of confidence than it assigns to g, then C ranks h above g.

The first clause, while accommodating the requirement that an investigator be satisfied with the adoption of a mere set of confidence-rankings when the quality of the evidence available to her warrants nothing more precise, demands that it be at least conceivable that this set of confidence-rankings survive the acquisition of evidence of the highest quality, that is, evidence that would warrant a degree of confidence assignment. Thus the requirement that the investigator's set of confidence-rankings have an orthodox bayesian model.13

The first clause, then, concerns itself with sins of commission. It puts forth a standard for criticizing a set of confidence-rankings for what that set contains. In contrast, the next two clauses concern themselves with sins of omission - with putting forward a standard for criticizing a set of confidence-rankings for what the set leaves out. Suppose, for example, that X's set of confidence-rankings C does not rank h and g, but every orthodox bayesian model of C assigns h the same degree of confidence as g. In such a case, any evidence of the highest quality that preserves the rankings stipulated by C will require h to be ranked evenly with g. That is to say, in such a case, it is a condition of C's surviving the acquisition of highest quality evidence that h be ranked evenly with g. It is only right, then, that our regulative ideal should express (as clause (ii) does) the verdict that, in such a case, C is to be criticized for failing to rank h evenly with g. Likewise, clause (iii) maintains that, if C does not rank h and g but every orthodox bayesian model of C assigns h a greater degree of confidence than it assigns g, then C is to be criticized for failing to rank h above g.

13Clause (i) is a terminological variant of a constraint I advance in Kaplan (1983, p. 568). Much the same constraint is advanced by Brian Ellis in Ellis (1979, pp. 15-16) although he seems at the same time to be sympathetic both to orthodox bayesian and convex-set bayesian regulative ideals. Ellis' constraint is endorsed by Brian Skyrms (Skyrms 1984, pp. 28-29), although for reasons rather different from those offered here. Also endorsing much the same constraint (but, again, for different reasons than those offered here) is Richard Jeffrey (Jeffrey 1983b, pp. 139-141). Both Jeffrey and Skyrms hold, as I do, that the satisfaction of some terminological variant of (i) is required of the rational agent - in the sense that an agent cannot, on pain of irrationality, maintain a confidence-ranking in the face of a demonstration that her ranking violates (i). Where we differ is in our attitude towards the orthodox bayesian regulative ideal. Skyrms offers no argument against seeking to satisfy that ideal and, hence, appears to consider its satisfaction to be, at the very least, permissible. Jeffrey holds the satisfaction of the orthodox bayesian ideal to be not only permissible but desirable insofar as time and energy allow. In contrast, I have argued that, the quality of our evidence being what it is, to view the satisfaction of the orthodox bayesian ideal as even permissible is to sanction false precision.
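Clauses (i)-(iii) admit of a schematic rendering. The sketch below is illustrative only: it presupposes that the candidate orthodox bayesian models have already been enumerated (here, the four models of the three-urn example discussed in footnote 14 below, with their values read off the description of the urns given there).

```python
# Illustrative sketch of clauses (i)-(iii). Each candidate model assigns a
# probability to each hypothesis; C is given as lists of strict and even
# rankings. Enumerating the candidate models is assumed done elsewhere; the
# four used here come from the three-urn example of footnote 14 below.

def is_model(m, strict, even):
    # A model of C must preserve every ranking C already contains (clause (i)
    # demands that at least one such model exist).
    return all(m[h] > m[g] for h, g in strict) and \
           all(m[h] == m[g] for h, g in even)

def mandated_rankings(models, hypotheses):
    # Clauses (ii) and (iii): rankings on which every model of C agrees.
    required = []
    for h in hypotheses:
        for g in hypotheses:
            if h == g:
                continue
            if all(m[h] >= m[g] for m in models):
                if any(m[h] > m[g] for m in models):
                    required.append((h, "above", g))        # clause (iii)
                elif h < g:                                 # report each even pair once
                    required.append((h, "evenly with", g))  # clause (ii)
    return required

candidates = [
    {"b1": 0.25, "b2": 0.25, "b3": 0.25},
    {"b1": 0.25, "b2": 0.25, "b3": 0.50},
    {"b1": 0.25, "b2": 0.50, "b3": 0.50},
    {"b1": 0.25, "b2": 0.50, "b3": 0.75},
]

# If C so far ranks nothing, every candidate survives as a model of C:
models = [m for m in candidates if is_model(m, strict=[], even=[])]
print(mandated_rankings(models, ["b1", "b2", "b3"]))
# [('b2', 'above', 'b1'), ('b3', 'above', 'b1'), ('b3', 'above', 'b2')]
```

The output agrees with the verdict reported in footnote 14: more confidence is to be invested in b3 than in b2, and more in b2 than in b1, without settling on any one of the four models.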
There is, of course, one more sin of omission that clause (iii) means to condemn. Suppose C does not rank h and g and, as in the previous cases, every orthodox bayesian model of C assigns h a degree of confidence at least as great as the one it assigns g. But suppose that, unlike the previous cases, this is a case in which some of the models assign h the same degree of confidence as g and some assign h a greater degree of confidence than g. It is thus a condition of C's surviving the acquisition of evidence of the highest quality that h be ranked either with g or above g. At best this would seem to indicate an equivocal verdict. It would seem that, by parity of reasoning, the verdict of bayesianism without the black box should be that X is thus obligated to rank h with g or above g. But notice that to say that her adoption of C commits X to rank h with g or above g is to say that her adoption of C gives X good reason to regard h as being as likely as g and no reason not to regard h as more likely than g (although, perhaps, also no reason to assign h any particular degree of confidence or even a determinate interval). And if X has good reason to regard h as being as likely as g and no reason not to regard h as being more likely than g, X surely has good reason to prefer a bet on h to an identical bet on g. But it is central to the bayesian credo (and, as noted earlier, to the credo of bayesianism without the black box) that a rational agent will prefer a bet on one hypothesis to an identical bet on a second hypothesis if and only if she invests more confidence in the first (see, for example, definition D4 in Savage 1972, p. 31). Thus, the verdict of clause (iii) is unequivocal: if C fails to rank h with respect to g, yet some orthodox bayesian models of C assign h a greater degree of confidence than they assign g, some assign h the same degree of confidence they assign g and none assigns g a greater degree of confidence than it assigns h, then C is to be criticized for failing to rank h above g.14

14In this respect, clause (iii) is stronger than the constraint I offer in Kaplan (1983, p. 568). It is, however, important to be clear on the exact nature of its strength (as David Lewis made clear to me by questioning its propriety via the following example). Suppose there are three urns. Each contains 100 balls and each ball is either black or white. Urn 1 contains 25 black balls. Urn 2 contains either 25 or 50 black balls. Urn 3 contains either as many black balls as Urn 2 or 25 more. Each urn has been thoroughly shaken and a ball has been drawn but has not yet been examined. Let bn be the hypothesis that the ball drawn from Urn n is black. Your confidence-ranking over b1, b2 and b3 will have four orthodox bayesian models:

Models    b1      b2      b3
I         0.25    0.25    0.25
II        0.25    0.25    0.50
III       0.25    0.50    0.50
IV        0.25    0.50    0.75

Thus, by clause (iii), you should invest more confidence in b3 than in b2 and more confidence in b2 than in b1. But this does not mean that you should settle on the degree of confidence assignment characterized by Model IV - a move that would certainly be unwarranted in the circumstances. On the contrary. The prescription issued by (iii) presup-
The regulative ideal just described necessarily does more than just impose constraints on confidence-rankings. It is, after all, a consequence of the psychology of bayesianism without the black box that, for any hypotheses h and g , X's confidence-ranking of (failure to rank) h and g will mirror her preference-ranking of (failure to rank) the option of having something of value ride on the truth of h and the option of having the same item ride on the truth of g. Thus, in imposing constraints on the confidence-rankings an investigator ought ideally to adopt, the regulative ideal advanced above constitutes a fragment of theory of rational decision. It is not difficult to see in that fragment the shape the whole must take. Characteristically, an orthodox bayesian theory of rational decision consists in a set of constraints whose satisfaction by an agent requires that she have a single preference-ranking over all conceivable options; that she harbor a degree of confidence assignment to every hypothesis; that she harbor a utility assignment to sure outcomes that is a measure of their preferability for her; that these two assignments jointly determine a utility assignment to all options that is a measure of the preferability of these options for her. Where P is a set of preference-rankings, let us say that M is an orthodox bayesian model of P just in case M is a utility assignment that preserves the rankings specified by P. It is a consequence of any orthodox bayesian theory of rational decision (and the mirroring of confidence-rankings by preference-rankings) that to say (as the regulative ideal advanced above does) a rational investigator's set of confidence-rankings C ought ideally to have orthodox bayesian models and bear the relation described in clauses (ii) and (iii) to those models, is just to say that the set of preference-rankings C* that C mirrors ought ideally to have orthodox bayesian models to which it (C*) bears the same relation. So construed, the regulative ideal constrains only a proper subset of an investigator's set of preference-rankings. It places constraints only on the subset of her set of preference-rankings that directly mirrors her confidence-rankings-that ranks, with respect to pairs of hypotheses, the preferability of having an item of value ride on the truth of one hypothesis rather than another. Hence the claim that the regulative ideal offers but a fragment of a theory of rational decision. To fill out the regulative ideal, bayesianism without the black box simply broadens the constraint it imposes on this proper subset of an investigator's preference-rankings and
imposes the constraint on the entire set.15 A set of preference-rankings P will then satisfy the regulative ideal of bayesianism without the black box only if:

(i) P has an orthodox bayesian model;
(ii) for any pair of options, A and B, if every orthodox bayesian model of P assigns to A the same utility as it assigns to B, then P ranks A evenly with B;
(iii) for any pair of options, A and B, if every orthodox bayesian model of P assigns to A at least as great a utility as it assigns to B and some model assigns A a greater utility than it assigns to B, then P ranks A above B.

Thus, without assuming anything more extravagant than that every investigator (knowingly) harbors at least some confidence-rankings, bayesianism without the black box is able to construct a regulative ideal that places substantive constraints upon the doxastic attitudes and preferences of rational investigators. Insofar as this regulative ideal says, in effect, that an investigator's set of confidence-rankings ought ideally to be representable as a set of probability-rankings, it is a regulative ideal that will accommodate virtually any insights into the enterprise of rational inquiry that the orthodox bayesian's regulative ideal has to offer. (Of course, these are no longer to be viewed as insights into the enterprise of inquiry as it is generally conducted-that is, by those who are not consciously bayesian. The insights into the conduct of scientific inquiry bayesianism without the black box affords-the rational reconstructions of scientific practice implausibly construed by its predecessors as explanations (at some psychologically deep level) of why scientists behave as they do-are to be viewed rather as inducements to become a conscious bayesian: they are insights available, and only available, to those who consciously embrace bayesianism.) Yet it is a regulative ideal that, unlike its orthodox and convex-set bayesian predecessors, will also accommodate an epistemology that recognizes in a thoroughgoing fashion the virtue of doxastic indecision in the face of evidence of poor quality.

The upshot is that the central insights of bayesianism can be wrested free from its characteristic addiction to black-box psychology and to the advocacy of the false precision that makes it seem so flattering to suppose we are accurately described by such a psychology. The result is, of course, not as powerful a bayesianism as either the orthodox or convex-set bayesians offer. The bayesianism that is left offers no general psychology of actual scientific inquiry.

15. In Kaplan (1983, pp. 557-559) I offer independent reasons why a rational agent's set of confidence-rankings ought to satisfy this sort of constraint. See too Skyrms (1984, pp. 28-29).
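How clauses (ii) and (iii) operate on a set of preference-rankings can be put in the same computational form as the example in footnote 14. The sketch below is only illustrative: the options A, B and C, the utility figures, and the helper required_relation are invented for the purpose; given some orthodox bayesian models of P, it reports which ranking, if any, the regulative ideal mandates for each pair of options.

    def required_relation(models, a, b):
        """Return the ranking of options a and b mandated by clauses (ii) and (iii),
        or None if the models disagree and the ideal leaves the comparison open."""
        ua = [m[a] for m in models]
        ub = [m[b] for m in models]
        if all(x == y for x, y in zip(ua, ub)):
            return f"rank {a} evenly with {b}"   # clause (ii)
        if all(x >= y for x, y in zip(ua, ub)) and any(x > y for x, y in zip(ua, ub)):
            return f"rank {a} above {b}"         # clause (iii)
        if all(y >= x for x, y in zip(ua, ub)) and any(y > x for x, y in zip(ua, ub)):
            return f"rank {b} above {a}"         # clause (iii)
        return None

    # Two hypothetical orthodox bayesian models of P over options A, B and C.
    models = [{"A": 10, "B": 10, "C": 5},
              {"A": 12, "B": 10, "C": 11}]
    print(required_relation(models, "A", "B"))   # rank A above B
    print(required_relation(models, "A", "C"))   # rank A above C
    print(required_relation(models, "B", "C"))   # None: the ideal imposes no ranking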
The mission of this bayesianism is prescription rather than description. Moreover, when it comes to prescription, the bayesianism that is left countenances (and, indeed, demands) more doxastic indecision than do the other bayesianisms. And it is to be remembered that, for every hypothesis with respect to which an investigator manifests doxastic indecision, there is, according to this bayesianism, an infinite class of decision problems which no longer admit of solution.16

The abandonment of description for prescription has, of course, its positive side. As noted earlier, the importance of bayesianism as a prescriptive doctrine varies inversely with its accuracy as a descriptive doctrine. There is no need to teach the virtues to the already virtuous. And if we are not already virtuous-if we are beings who have use for guidance in the conduct of inquiry and deliberation-there is surely an important sense in which it is more valuable to have a theory that is able to offer us such guidance than to have a theory that merely redescribes what we already know how to do well enough. In contrast, there looks to be nothing redeeming about a prescriptive doctrine that deems an infinite class of decision problems to be insoluble. A prescriptive decision theory earns its way by offering us guidance. To the degree that bayesianism without the black box fails to offer the guidance we ask of it, we are bound to find it wanting. But, of course, if simple guidance were really all we were after, we would be much easier customers to satisfy than we are in fact.

16. To say to an agent that her decision problem does not admit of a solution is not, of course, to advise paralysis. It is rather to say that there is no reason to regard one option as best and that, among the options for which no superior option has been found, there is nothing left but to pick one. Such advice is by no means foreign to bayesianism. It is precisely the advice the orthodox bayesian will offer to an agent who is indifferent between two options to which no other option is superior. Where bayesianism without the black box departs from its orthodox bayesian predecessor is thus not in its being prepared to issue such advice, but rather in its being prepared to issue such advice in yet another circumstance, that is, when, ignoring the call to false precision issued by the orthodox bayesian, an agent rightly refrains from adopting a precise degree of confidence assignment and when, without her adoption of a precise degree of confidence assignment, expected utility considerations will not suffice to render any option most preferable. Its distaste for the false precision of orthodox bayesianism is not, however, alone responsible for the present proposal's disposition to view decision problems as insoluble in such a circumstance. For example, Isaac Levi (1980) offers very much the same epistemological critique of the orthodox bayesian regulative ideal. Nonetheless, by regarding expected utility considerations as just the first among a number of lexicographically ordered considerations an agent may rationally bring to bear upon decision problems, Levi is able to offer a decision theory that provides solutions to problems of the sort just described. What distinguishes the present proposal from Levi's (and what renders the former less generous with advice) is the fact that bayesianism without the black box shares with orthodox bayesianism a commitment to the view that expected utility considerations, reflecting as they do the contributions that opinion and valuation make to rational preferability, exhaust the considerations relevant to rational decision-making.
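The circumstance footnote 16 describes admits of a toy illustration. In the sketch below the hypothesis, the payoffs and the pair of degree of confidence assignments are all invented for the purpose; the point is only that, when an agent remains in suspense between several assignments, different members of the set may favor different options, so that expected utility considerations alone render no option most preferable.

    def expected_utility(p, payoffs):
        """Expected utility of an option whose payoff depends on whether h is true."""
        payoff_if_h, payoff_if_not_h = payoffs
        return p * payoff_if_h + (1 - p) * payoff_if_not_h

    # Suspense between two degree of confidence assignments to the hypothesis h.
    models = [0.2, 0.6]

    # Two options: a gamble that pays only if h is true, and a safe alternative.
    options = {"gamble on h": (100, 0), "play it safe": (40, 40)}

    for p in models:
        best = max(options, key=lambda name: expected_utility(p, options[name]))
        print(f"if P(h) = {p}, the option maximizing expected utility is: {best}")

    # Each model favors a different option, so expected utility considerations
    # alone single out no best option; on the present proposal the problem then
    # admits of no solution, whereas Levi's lexicographically ordered further
    # considerations might still select one.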
Advice is always easy to come by. The fact is, however, that what we really want is the guidance of reason-we want to find reasons for adopting solutions to our decision problems. Were we only better situated epistemically-were we always privy to evidence of sufficiently high quality to warrant our assigning to hypotheses precise degrees of confidence-orthodox bayesianism might deliver the reasoned guidance we seek in the quantity we seek it.17 But, as I have argued, we are not so situated. The regulative ideal of bayesianism without the black box is motivated by the conviction that we should recognize our epistemic situation for what it is and settle for the guidance of reason in whatever quantity our epistemic situation makes available. We can, perhaps, take some solace in the fact that even Descartes, who had as rosy a view of our epistemic situation as any philosopher, recognized that reason cannot hope to outdo its competition in the number of opinions it delivers-only in their propriety.

17. I say "might" to acknowledge that, even if we were so situated epistemically, the orthodox bayesian's regulative ideal might still be open to the charge of requiring preference (indifference) where none is warranted. Isaac Levi has argued (1980) that investigators often harbor distinct values which they are unable to weigh relative to one another in the way orthodox bayesians demand-and that it is unreasonable to demand that an investigator who harbors such values integrate them even when she can find no (to her) satisfactory grounds for integrating them in one way rather than another. If Levi is right (and I think he is), then the refusal to adopt either preference or indifference between two or more options can be warranted even in the presence of evidence of the highest quality. Note that the theory of rational decision advanced above accommodates this sort of refusal.

REFERENCES

Dempster, A. P. (1967), "Upper and Lower Probabilities Induced by a Multivalued Mapping", Annals of Mathematical Statistics 38: 325-339.
Eells, E. (1982), Rational Decision and Causality. Cambridge: Cambridge University Press.
Ellis, B. (1979), Rational Belief Systems. Oxford: Basil Blackwell.
Good, I. J. (1962), "Subjective Probability as the Measure of a Non-measurable Set", in E. Nagel, P. Suppes and A. Tarski (eds.), Logic, Methodology and the Philosophy of Science. Stanford: Stanford University Press, pp. 319-329.
. (1965), The Estimation of Probabilities. Cambridge, Mass.: MIT Press.
Hesse, M. (1974), The Structure of Scientific Inference. Berkeley: University of California Press.
Horwich, P. (1982), Probability and Evidence. Cambridge: Cambridge University Press.
Jeffrey, R. C. (1965), The Logic of Decision. New York: McGraw-Hill.
. (1983a), The Logic of Decision, 2nd ed. Chicago: University of Chicago Press.
. (1983b), "Bayesianism with a Human Face", in J. Earman (ed.), Testing Scientific Theories. Minneapolis: University of Minnesota Press, pp. 133-156.
. (1984), "An Assessment of the Subjectivistic Approach to Probability", Epistemologia 7: 9-21.
Jeffreys, H. (1961), Theory of Probability, 3rd ed. Oxford: Oxford University Press.
Kahneman, D., Slovic, P., and Tversky, A. (eds.) (1982), Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kaplan, M. (1983), "Decision Theory as Philosophy", Philosophy of Science 50: 549-577.
Koopman, B. O. (1940), "The Bases of Probability", Bulletin of the American Mathematical Society 46: 763-774.
Kyburg, H. E. (1961), Probability and the Logic of Rational Belief. Middletown, Conn.: Wesleyan University Press.
Levi, I. (1967), Gambling with Truth. New York: Knopf.
. (1974), "On Indeterminate Probabilities", The Journal of Philosophy 71: 391-418.
. (1980), The Enterprise of Knowledge. Cambridge, Mass.: MIT Press.
Luce, R. D., and Raiffa, H. (1957), Games and Decisions. New York: Wiley.
Mellor, D. H. (1980), "Consciousness and Degrees of Belief", in D. H. Mellor (ed.), Prospects for Pragmatism. Cambridge: Cambridge University Press, pp. 139-173.
Raiffa, H. (1968), Decision Analysis. Reading, Mass.: Addison-Wesley.
Rosenkrantz, R. (1977), Inference, Method and Decision. Dordrecht: Reidel.
Savage, L. J. (1972), The Foundations of Statistics, 2nd ed. New York: Dover.
Skyrms, B. (1984), Pragmatics and Empiricism. New Haven: Yale University Press.
Smith, C. A. B. (1961), "Consistency in Statistical Inference and Decision", Journal of the Royal Statistical Society, ser. B, 23: 1-37.
Williams, P. M. (1976), "Indeterminate Probabilities", in M. Przelecki, M. Szaniawski and R. Wojcicki (eds.), Formal Methods in the Methodology of Empirical Sciences. Dordrecht: Reidel, pp. 229-246.
Wolfenson, M., and Fine, T. (1982), "Bayes-like Decision Making with Upper and Lower Probabilities", The Journal of the American Statistical Association 77: 80-88.