THE JOURNAL OF PHILOSOPHY VOLUME LXXI, NO. 19, NOVEMBER 7, 1974
FOUNDATIONALISM, COHERENTISM, AND THE IDEA OF COGNITIVE SYSTEMATIZATION*
I. THE SYSTEMATIC IDEAL OF SCIENTIFIC KNOWLEDGE
THE idea of system is central to the theory of knowledge in general and the epistemology of science in particular. A system is not just a constellation of interrelated elements, but one of elements assembled together in a functional unity, a purposively ordered network of interrelationships. The idea of cognitive systematization-the systematic structuring of bodies of information-is of particular importance. From antiquity to Hegel and beyond, philosophers have insisted that our knowledge should be developed in a systematic way, that it should be articulated within the unifying framework of one all-embracing structure that brings the interrelatedness of its elements to view and leaves nothing wholly isolated and unconnected. In particular, the concept of scientific systematization points toward the ideal of a perfect science in which all scientifically relevant facts about the world are allocated a place with due regard to their cognitive connections. It is universally admitted that such synoptic and comprehensive systematization is not a descriptive aspect of scientific knowledge as it stands now (or at some other historical juncture), but it represents an idealization toward which science can and should progress along an evolutionary route. In the case of cognitive systematization, the inherent teleology of order revolves about the factor of intelligibility, and the linkages of interrelationships represent principles of clarification within a particular explanatory order.
* To be presented in an APA symposium on Foundationalism and Coherence, December 28, 1974; Mark Pastin will comment: see this JOURNAL, this issue, pp. 709/10. Work on this paper was supported by a grant in support of research from the National Science Foundation (No. GS-37883).
A cognitive system provides illumination; its systematic interconnections render the facts at issue amenable to reason by organizing them within a framework of ordering principles that bring their interrelationships to light. Such systematization facilitates understanding because the system provides the structure of interrelatedness through which the cognitive role of its elements is brought to light: systematicity provides the channels through which explanatory power can flow.

Throughout the history of Western philosophy, it has been insisted that men do not genuinely know something unless they know it systematically. Plato's position in the Theaetetus that a known fact must have a logos (rationale), Aristotle's insistence in Posterior Analytics that strict (scientific) knowledge of a fact about the world calls for its accounting in causal terms, Spinoza's celebration of what he designates as the second and third kinds of knowledge (in Book II of the Ethics and elsewhere), all point to the common, fundamental idea that what is genuinely known is known in terms of its systematic footing within the setting of a rationale-providing framework.

One reason for such a canonization of systematicity lies deep in the very nature of scientific understanding. Scientific explanation in general proceeds along subsumptive lines, particular occurrences in nature being explained with reference to covering generalizations. But the adequacy of such an explanation hinges upon the status of the covering generalization: is it a "mere empirical regularity," or is it a thesis whose standing within our scientific system is more firmly secured? This latter question leads straightaway to the pivotal issue of how firmly the thesis is embedded within its wider systematic setting in the branch of science at issue. Systematization here affords a criterion of the appositeness of the generalizations deployed in scientific explanation.

Systematicity is in the first instance a category of understanding, akin in this regard to generality, simplicity, or elegance. Its primary concern is with the nature of what is known: just as one selfsame range of things can be characterized simply or complexly, so it can be characterized systematically or unsystematically. Systematicity is a feature not so much of the objects of knowledge as of the structure or the modus operandi of our knowledge of them. Systematicity relates in the first instance not to what we know, the items of information we have, but rather to how we know it: it is a matter not so much of the disparate bits and pieces that constitute "our knowledge," as of the way in which they are organized. Systematicity as an epistemic concept is not inherently much laden with ontological overtones.
The real need not be rational to admit of rational study. Knowledge need not share the features of its objects: to speak of a sober study of inebriation or a dispassionate analysis of passions is not a contradiction in terms. One cannot make the transcategorical inference from the fact that our knowledge of some range of phenomena is systematic to the conclusion that these phenomena must themselves be systematic. The psychopathology of irrational thought and action can be a rational discipline. An orderly knowledge of chaos is possible-indeed, the aim of the theory of probability is to reduce chance and haphazard to discipline and to afford an orderly study of random phenomena.

The grandiose and heroic vision that all of man's knowledge of the physical universe forms part of one single all-embracing cognitive system is of ancient and respectable lineage. This ideal is indeed ancient, for it was adumbrated by Parmenides, elaborated by Plato, developed in loving detail and with immense labor by Aristotle, and espoused by a whole host of thinkers from the Church Fathers to Leibniz and Hegel and beyond. However, though relatively little dissent from the thesis of the ideal systematicity of knowledge is to be met with throughout philosophical history, there is a substantial diversity of opinion as to what sort of system is to provide the model or paradigm for proper cognitive systematization. The mainstream of the Western tradition in the theory of knowledge casts mathematics-and, in particular, geometry-in this paradigmatic role.1 But almost from the very first there was a succession of doubters sniping from the sidelines and advocating discordant views as to the proper systematic structure for the organization of scientific knowledge regarding how things work in the world. A small but constantly renewed succession of writers has steadfastly insisted that the geometric model is not of sufficiently general applicability and that we must seek elsewhere for our paradigm of scientific systematization. Let us begin with a closer look at the two major contestants in this dispute.

II. THE EUCLIDEAN MODEL OF KNOWLEDGE
The model of knowledge canonized by Aristotle in Posterior Analytics sees Euclidean geometry as the most fitting pattern for the organization of anything deserving the name of a science (to put it anachronistically, since Euclid himself postdates Aristotle). Such a conception of knowledge in terms of the geometric paradigm sees the organization of knowledge in the following terms. Certain theses are to be basic or foundational: like the axioms of geometry, they are to be used for the justification of other theses without themselves needing or receiving any intrasystematic justification. Apart from these fundamental postulates, however, every other thesis of the system is to receive justification of a rather definite sort. For every nonbasic thesis is to receive its justification along an essentially linear route of derivation or inference from the basic (axiomatic, unjustified) theses. There is a step-by-step recursive process, first of establishing certain theses by immediate derivation from the basic theses, and then of establishing further theses by sequential derivation from already established theses. In the setting of the Euclidean model every (nonbasic) established thesis is connected to certain basic theses by a linear chain of sequential inferences. On such an approach to cognitive systematization, the whole body of knowledge obtains the layered make-up reminiscent of geological stratification: a bedrock of basic theses surmounted by layer after layer of derived theses, some closer and some further removed from the bedrock, depending on the length of the (smallest) chain of derivation that links this thesis to the basic ones. In this way, the system receives what may be characterized as its foundationalist aspect: it is analogous to a brick wall, with a solid foundation supporting layer after successive layer. A prominent role must inevitably be allocated here to the idea of "relative fundamentality" in the systematic order-and hence also in the explanatory order of things the systematization reflects.2

It is almost impossible to exaggerate the influence this Euclidean model of cognitive systematization has exerted throughout the intellectual history of the West. From Greek antiquity through the eighteenth century it provided an ideal for the organization of information whose influence was operative in every field of learning. From the time of Pappus and Archimedes and Ptolemy to that of Newton's Principia and beyond, the axiomatic process was regarded as the appropriate way of organizing scientific information.

1 I am aware of Aristotle's famous disclaimer that one must not expect to obtain in every branch of knowledge the precision of reasoning one meets with in mathematics. But this does not undo the fact that in specifying how scientific knowledge should properly be articulated (in Posterior Analytics) Aristotle takes geometry as his model. I thus take Aristotle's remark to pertain to the exactness or precision of our knowledge and not to its organization.

2 Think here of Aristotle's extensive concern with "priority" in the order of justification, and his requirement that in adequate explanations the premises must be "better known than and prior to" the conclusion.
And this pattern was followed in philosophy, in science, and even in ethics, as the more geometrico approach of Spinoza vividly illustrates. For over two millennia, the Euclidean model has provided the standard ideal for the organization of knowledge.

III. AN ALTERNATIVE TO THE EUCLIDEAN MODEL: THE NETWORK MODEL
The major alternatives to the Euclidean model which have been proposed and supported most prominently have certain general features in common. I shall focus on these shared features here, to portray what might count as a common denominator of these nongeometric models, and shall refer to this common-denominator theory as the network model. This approach to cognitive systematization also has an ancient and respectable lineage.3 In roughest outline, this network model sees a cognitive system as a family of interrelated theses not necessarily of hierarchical arrangement, but rather linked among one another by an interlacing network of connections. These interconnections are inferential in nature, albeit not necessarily deductive (since the providing of "good evidential reasons" rather than "logically conclusive grounds" is ultimately involved).

A network system does without one advantageous feature that characterizes Euclidean systems par excellence. Since everything in a deductive system hinges upon the axioms, these will be the only elements that need independent support or verification. Once they are secured, all else is supported by them. The upshot is a substantial economy of operation: since everything pivots about the axioms, the bulk of our epistemological attention can be confined to them. A network system, of course, lacks axioms and so lacks this convenient feature of having one delimited set of theses to carry the burden of the whole system on their shoulders.

Although a network system gives up Euclideanism at the global level, it may still exhibit a locally Euclidean aspect. Some of its theses may rest on others, and even do so in a rigorously deductive sense. But the resulting network of deductive interrelations will in its over-all aspect have not a hierarchical structure as in Figure 1 but a cyclic structure as in Figure 2. Thus even when its linkages operate along wholly deductive lines, this model would still depart drastically from the geometric paradigm.

3 It was launched in antiquity by the Academic Skeptics (especially Arcesilaus and Carneades), and latterly espoused by Leibniz (especially in the little essay "On the Method for Distinguishing Real from Imaginary Phenomena")-in both instances in the specific context of specifically perceptual knowledge. Subsequently it was generalized and transmuted by Kant and then developed substantially by the English neo-Hegelians (especially F. H. Bradley, B. Bosanquet, and H. H. Joachim).
Figure 1 (a hierarchical structure)
Figure 2 (a cyclic structure)
For from the network standpoint, the Euclidean model has the decisive flaw of inflating what is at most a local feature of deductive connection among particular subgroups of included theses into a global feature that endows the whole system with a deductive structure.

Some particularly vivid illustrations of the network approach to organizing information come from the social sciences, for example, the problem of textual interpretation and exegesis. Here there is no limit of principle to the width of the context of examination and the depth of analysis of detail. The whole process is iterative and cyclical; one is constantly looking back to old points from new perspectives, using a process of feedback to bring new elucidations to bear on preceding analyses. What determines correctness here is the matter of over-all fit, through which every element of the whole interlocks with some others. Nothing need be more fundamental than anything else: there are no absolutely fixed pivot-points about which all else revolves. One has achieved adequacy when everything stands in mutual coordination with everything else.

The most critical points of difference that separate this network model of cognitive systematization from its Euclidean counterpart are as follows:

(1) The network model dispenses altogether with the need for a category of basic (self-evident or self-validating) protocol theses capable of playing the role of axiomatic supports for the entire structure.

(2) In the network model the process of justification need not proceed along a linear path. Its mode of justification is in general nonlinear, and can even proceed by way of (sufficiently large) cycles and circles.

(3) The structure of the arrangement of theses within the framework of the network model need not be geological: no stratification of theses into levels of greater or lesser fundamentality is called for. (Of course, nothing blocks the prospect of differentiation; the point is that this is simply not called for by the modus operandi of the model.)

(4) The network model accordingly abandons the conception of priority or fundamentality in its arrangement of theses. It replaces such fundamentality by a conception of enmeshment-in terms of the multiplicity of linkages and the patterns of interconnectedness with other parts of the net.
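Put in rough schematic terms (a minimal sketch only: the thesis names, the derivation relation, and the crude linkage count all being invented here for illustration), the contrast is between justification as derivation from axioms and standing as enmeshment in a web of connections:

```python
# Hypothetical miniature of the two structures; names and relations are invented.

# Euclidean (foundationalist) picture: justification flows from unjustified
# axioms down linear derivation chains.
axioms = {"A1", "A2"}
derived_from = {           # thesis -> the premises from which it is derived
    "T1": {"A1"},
    "T2": {"A1", "A2"},
    "T3": {"T1", "T2"},
}

def euclidean_justified(thesis):
    """A thesis is justified iff it is an axiom or every premise in its
    derivation chain is itself justified (ultimately, an axiom)."""
    if thesis in axioms:
        return True        # axioms need no intrasystematic justification
    premises = derived_from.get(thesis)
    return premises is not None and all(euclidean_justified(p) for p in premises)

# Network (coherentist) picture: no axioms and no fixed direction; the
# connections may form cycles, and a thesis's standing is a matter of
# enmeshment, i.e. how many linkages tie it to the rest of the net.
links = {("N1", "N2"), ("N2", "N3"), ("N3", "N1"), ("N3", "N4")}

def enmeshment(thesis):
    return sum(1 for edge in links if thesis in edge)

print(euclidean_justified("T3"))                             # True: chains back to A1, A2
print({t: enmeshment(t) for t in ("N1", "N2", "N3", "N4")})  # N3 is the most enmeshed
```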
On the network-model approach to the organization of information, there is no attempt to erect the whole structure on a foundation of basic elements, and no necessity to move along some unidirectional path-from the basic to the derivative, the simple to the complex, or the like. One may think here of the contrast between the essentially linear order of an expository book, especially a textbook, and the inherently network-style ordering of an entire library or an encyclopedia. Again, the contrast between a taxonomic science (like zoology or geology) and a deductive science (like classical celestial mechanics) can also help to bring out the difference between the two styles of cognitive organization.

IV. KNOWLEDGE AS TRUE, JUSTIFIED BELIEF

To a really surprising extent, modern epistemologists have, notwithstanding massive departures from classical modes of thought, continued remarkably faithful in their attachment to the central themes of the Euclidean model of cognitive systematization. They often begin with an adherence (at however schematic a level) to the ancient conception of knowledge as true, justified belief,4 and join to this view the doctrine that the deductive approach to systematization affords the appropriate means of justification needed to implement this principle. They thus arrive at an essentially Euclidean view of the structure of knowledge. Even when they depart from Euclideanism in giving up the idea that the only available means of linking conclusions inferentially to premises is by means of steps that are specifically deductive in character, modern epistemologists still continue for the most part to accept at face value the argument of Aristotle's Posterior Analytics to the following effect:

Some hold that, owing to the necessity of knowing the primary premisses, there is no scientific knowledge. Others think that there is, but that all truths are demonstrable. Neither doctrine is either true or . . . necessary. . . . The first school, assuming that there is no way of knowing other than by demonstration, maintain that an infinite regress is involved. . . . The other party agrees with them as regards knowing, holding that it is only possible by demonstration, but they see no difficulty in holding that all truths are demonstrated, on the ground that demonstration may be circular and reciprocal. Our own doctrine is that not all knowledge is demonstrative: on the contrary, knowledge of the immediate premisses is independent of demonstration.5

4 The theory is to be met with in Plato's Theaetetus (200D ff.), where the element added to truth and belief is the existence of a rationale or account (logos). Since this factor is essentially discursive, the theory is there criticized as conflicting with a foundationalism that admits of basic elements.

5 Posterior Analytics, I, 3; 72b5-24 (Oxford translation).
The road thus indicated by Aristotle is followed by epistemologists whose name is by now legion, who feel constrained to have recourse to cognitive ultimates to serve as the basic, axiomatic premises of all knowledge. Accordingly, they commit themselves to a category of basic beliefs which, though themselves unjustified-or perhaps rather self-justifying in nature-can serve as justifying basis for all the other, nonaxiomatic beliefs. Thus axiom-like foundations still play a central role for these epistemologists, even when they are no longer used to provide a rigidly deductive basis. Committed to a quest for ultimate bedrock "givens" to provide a foundational basis on which the rest of the cognitive structure can be erected, these theorists are thus generally characterized as foundationalists. The foundationalist approach to justification leads to a theory of cognitive systematization whose general structure is effectively identical with that of the Euclidean approach. Certain theses are "self-evident," and these are available as basis for an inferentially derivative justification of other beliefs which can then serve to justify still others in their turn.

The inherent difficulty of applying this geometric approach to the systematization of factual knowledge relates to the question of the sorts of theses that are to serve as basic (in the role of ultimate axioms). On the one hand, they must be very secure (certain, "self-evident," or self-evidencing) to be qualified to dispense with all need for further verification. But on the other hand, they will have to be enormously content-rich, since they must carry on their back the whole structure of knowledge. These two qualifications for the axiomatic role (content and security) clearly stand in mutual conflict with each other. This tension makes for a weak point, an Achilles heel that critics of the Euclidean model of knowledge have always exploited.

Thus consider-for the sake of illustration-the sorts of theses needed for common-life claims about the realm of everyday experience. On the one hand, if I claim "I see a book" (say) or "I see an apple," my claims go far beyond their evidential warrant in sense perception. A real apple must, for example, have a certain sort of appearance from the (yet uninspected) other side, and a certain sort of (yet uninspected) subcutaneous make-up. And if anything goes wrong in these respects my claim that it was an apple I saw (rather than, say, a clever sort of apple substitute, or something done with mirrors and flashes of colored light) must be retracted. The claim to see an apple, in short, is not sufficiently secure. Its content
extends beyond the evidence in hand in such a way that the claim becomes vulnerable and defeasible in the face of further evidence. If, on the other hand, one "goes for safety"-and alters the claim to "It seems to me that I see an apple" or "I take myself to be seeing an apple"-this resultant claim in the language of appearance is effectively immune from defect and so secure enough. But such assertions purchase this security at the price of content. For no amount of claims in the language of appearance-however extensively they may reach in terms of how things "appear" to me and what I "take myself" to be seeing, smelling, etc.-can ever issue in any theoretically guaranteeable result regarding what is actually the case in the world. While they themselves are safe enough, appearance-theses will fall short on the side of objective content. This dilemma of security-vs.-content represents the Achilles heel of the foundationalist theory of factual knowledge.

In the face of such difficulties in the foundationalist program, it deserves emphatic stress that the foundationalist approach to justification does not represent the only way of implementing the justificatory process inherent in the approach to knowledge as true, justified belief. It is important to recognize that the network model of cognitive systematization affords a perfectly workable alternative approach (an alternative first mooted in classical antiquity by the Academic Skeptics). This alternative approach might be characterized as the coherentist theory of epistemic justification. Since I have dealt with this issue in considerable detail elsewhere, the present discussion will sketch the working of this coherentist conception of cognitive systematization only in very general terms.

The procedure of a coherence analysis calls for two types of epistemic inputs:

i. "Data": theses that can serve as truth-candidates in the context of the inquiry. These are not certified truths (or even probable truths) but theses that are in a position to make some claims upon us for acceptance. They are prima facie truths in the sense that we would incline to grant them acceptance-as-true if (and this is a very big IF) there were no countervailing considerations upon the scene. (The classical examples of "data" in this sense are those of perception and memory.)

ii. Plausibility ratings: comparative evaluations of our initial assessment (in the context at issue) of the relative acceptability of the "data." This is a matter of their relative acceptability at first view and, so to speak, in the first analysis, prior to their systematic evaluation, and thus without any prejudgments as to how they will fare in the final analysis.
Given inputs of this sort, the coherence analysis sets out to sift through the truth-candidates (data) with a view to adjudicating any conflicts that may arise. That family among the truth-candidates which are best attuned to one another is to count-on this criterion-as best qualified for acceptance as presumably true. The acceptability-determining mechanism at issue proceeds on the principle of maximizing our recognition of the claims implicit in the data, in line with the guidance of their plausibility ratings, implementing the idea of compatibility screening on the basis of "best-fit" considerations. Mutual coherence becomes the arbiter of acceptability. The coherentist approach is, accordingly, quite prepared to dispense with any requirement for self-evident protocols to serve as the foundations of the cognitive system. The justification of a system-included thesis will not proceed by derivations from the axioms, but comes to obtain through the pattern of its interrelationships with the rest.

In general terms, the coherence criterion of truth operates as follows. One begins with a set S = {P1, P2, P3, . . .} of suitably "given" propositions: these are not given as true, but merely as plausible or potential truths, i.e., as truth-candidates-and in general as competing (i.e., mutually inconsistent) ones. The task to which the coherence theory addresses itself is that of bringing order into S by separating the sheep from the goats, distinguishing what suitably qualifies for acceptance as true from what does not. A truth-candidate comes to make good its claims to recognition as a truth through its consistency with as much as possible from among the rest of such data.

The interaction of observation and theory provides an illustration. Take grammar. Here one moves from the phenomena of usage by way of a best-fit principle (an "inference to the best explanation," as it were) to the framework of laws, and then moves back again to the phenomena by way of subsumption. Something may well get lost en route in this process of mutual attunement-e.g., some of the observed phenomena may simply be dismissed (say as "slips of the tongue"). Again, fitting curves to observation points in physics also illustrates this sort of feedback process of discriminating the true and the false on best-fit considerations. The coherence theory thus implements F. H. Bradley's dictum that system (i.e., systematicity) provides a test-criterion most appropriately fitted to serve as arbiter of truth. The coherence approach can be thought of as representing, in effect, the systems-analysis approach to the criteriology of truth.
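The screening procedure just sketched can be given a rough schematic rendering (a minimal sketch only: the propositions, plausibility ratings, and inconsistency relations are invented for illustration, and a simple sum-of-plausibilities rule stands in for the looser "best-fit" considerations at issue): one accepts that maximal mutually consistent subfamily of the data which is, on balance, the most plausible.

```python
from itertools import combinations

# Invented truth-candidates ("data") with invented plausibility ratings.
data = {
    "P1: the witness saw a red car":       0.8,
    "P2: the witness saw a green car":     0.5,
    "P3: the car ran the red light":       0.7,
    "P4: the light was green for the car": 0.4,
}

# Pairs of candidates that cannot be accepted together (mutual inconsistency).
inconsistent = {
    frozenset({"P1: the witness saw a red car", "P2: the witness saw a green car"}),
    frozenset({"P3: the car ran the red light", "P4: the light was green for the car"}),
}

def coheres(subset):
    """A subfamily coheres if it contains no mutually inconsistent pair."""
    return not any(frozenset(pair) in inconsistent for pair in combinations(subset, 2))

def maximal_consistent_subsets(candidates):
    """Every coherent subfamily that cannot be enlarged without conflict."""
    found = []
    for size in range(len(candidates), 0, -1):
        for combo in combinations(candidates, size):
            if coheres(combo) and not any(set(combo) < s for s in found):
                found.append(set(combo))
    return found

def best_fit(ratings):
    """Adjudicate conflicts: accept the maximal coherent subfamily whose
    members are, on balance, the most plausible."""
    return max(maximal_consistent_subsets(list(ratings)),
               key=lambda s: sum(ratings[p] for p in s))

for thesis in sorted(best_fit(data)):
    print("accept as presumably true:", thesis)
```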
The coherentist criterion thus assumes an entirely inward orientation: it does not seek to compare the truth-candidates directly with "the facts" obtaining outside the epistemic context; rather, having gathered in as much information (and this will include also misinformation) about the facts as possible, it seeks to sift the true from the false within this body. The situation arising here resembles the solving of a jigsaw puzzle with superfluous pieces that cannot possibly be fitted into the orderly picture in whose construction the "correct solution" lies. On this approach, the validation of an item of knowledge-the rationalization of its inclusion alongside others within "the body of our knowledge"-proceeds by way of exhibiting its interrelationships with the rest. They must all be linked together in a connected, mutually supportive way (rather than having the form of an inferential structure built up upon a footing of rock-bottom axioms). We operate, in effect, with the equation: "justified" = "systematized."

The coherentist can thus continue to adopt the thesis that knowledge is "true, justified belief," construing this as tantamount to claiming that the known is that whose acceptance-as-true is warranted in terms of an appropriate sort of systematization. However, since the systematization at issue is now of the network type, the impact of the thesis is drastically altered. For we now espouse a variant view of justification, one which radically reorients the thesis from the direction of the foundationalists' quest for an ultimate basis for knowledge as a quasi-axiomatic structure. Now "justified" comes to mean not "derived from basic (or axiomatic) knowledge," but rather "appropriately interconnected with the rest of what is known."

It is interesting to reconsider from this standpoint the currently much controverted thesis that knowledge is tantamount to true, justified belief. The controversy has focused upon E. Gettier's well-known counterexamples, which show that certain claims that one would not wish to countenance as knowledge at all can yet represent both true and justified beliefs when the elements of truth and justification reach them by sufficiently different routes. (For example, the man who believes P-or-Q, when all his justification relates solely to Q, which is false, whereas the truth of the disjunction inheres in that of P alone, for whose belief no justification is at hand.) One way of reading the literature of this dispute is as showing that knowledge can be extracted inferentially only from knowledge, and not from something that is epistemically less than knowledge (such as justified belief).
From the network theorists' point of view, there is no harm in drawing the conclusion that ex nihilo nihil operates as an epistemic principle with respect to knowledge. Such a result is unpalatable only for those who propose to use the formula at issue (viz., that knowledge is true, justified belief) as a reductive definition of knowledge rather than merely as a descriptive remark about its systematic interrelatedness.

These considerations have a special force when the epistemic "justification" at issue takes the form of a deductive demonstration. The coherentist continues to recognize the force of the Aristotelian argument that, when one knows the conclusion of a demonstration on the basis of this demonstration, then one must also know the premises, and that there is no demonstrative route to the extraction of knowledge from something that is less than knowledge. But he construes the moral of the story in a different, non-Aristotelian way: not as showing-through the requirement of avoiding an infinite regress-the need for ultimate, epistemically basic premises that must be nondiscursively known, but rather as revealing demonstration to be a means for organizing realized knowledge by revealing its systematic interrelationships.6

This sort of self-contained justification of the constituents of a system of knowledge in terms of one another envisaged by the network model is clearly circular when viewed in its aggregate totality. But this circularity now lacks vicious, vitiating consequences. This is so for the crucial reason that such a system-internal mode of justification of some beliefs in terms of others is not the only mode of justification of a cognitive system: the system as a whole must-and can-receive external validation as well. There will be a system-external justification that supports the network as a whole, albeit in altogether different terms, viz., in terms of the pragmatic controls of its success in enabling us to canalize our actions and expectations.

6 One epistemologist who also clearly insists that all knowledge is discursive and that every cognition is "determined" by others is C. S. Peirce. (See his essay on "Questions Concerning Certain Faculties.") Unhappily, however, he is still caught up in the regressus of prior and posterior, and so sees knowledge as rooted not, to be sure, in the basic starting points of foundationalism, but in the manner of intervals that are open (in the mathematical sense) and dispense with starting points while yet retaining a rigid linear order of before and after. Moreover, Peirce goes amiss-or so, at any rate, one inclines to put it from the present perspective-in treating as a matter of cognitive psychology what is properly to be regarded as a matter of the rational systematization of knowledge, and so tends to treat from a temporal standpoint issues viewed more properly in the order of reasons than in the order of events.
On such a coherentist approach, knowledge can indeed be viewed as true, justified acceptance-although a mode of "justification" is now at issue that differs drastically from the inference (be it deductive or nondeductive) of one known item of knowledge from certain others. The crucial shift is that from internal to external justification. For the sort of justification that is now centrally at issue-once external justification is taken into view-is at bottom generic and methodological. Thus the "justification" of the acceptance of an item does not reside in the final analysis in its being duly inferred from other items internal to its systematic environment. It is a matter not of justifying some parts in terms of others, but of the pragmatic validation of the cognitive methodology in the course of whose employment the item at issue can be vouched for.7

The coherentist alternative proceeds along roughly this route to deploy the network theory of cognitive systematization in place of the foundationalist resort to its Euclidean rival. In doing so, it bifurcates the process of justification of our knowledge, envisaging on the one hand a system-internal justification of each maintained thesis in terms of others, and on the other hand a system-external justification of the procedures of cognitive inquiry by which the system as a whole receives its over-all support.

V. THE COHERENTIST INVERSION
To see more vividly some of the philosophical ramifications of this coherentist approach, let us glance back once more to the epistemological role of systematicity in its historical aspect. The point of departure was the Greek position (in Plato and Aristotle, and clearly operative still with rationalists as late as Spinoza) which, secure in a fundamental commitment to the systematicity of the real, takes cognitive systematicity (i.e., systematicity as we find it in the framework of "our knowledge") as merely and simply a test of adequacy for claims to knowledge, a measure of the extent to which man's purported understanding of the world can be regarded as measuring up to the mark. Systematicity on this basis functions as a regulative ideal for the organization of knowledge and (accordingly) as a testing criterion of organizational adequacy.

The coherentist theory of knowledge moves beyond this position. It views systematicity not merely as a regulative ideal for knowledge but as an epistemically constitutive principle, and extends what was a mere test of understanding into a test of the evidential acceptability of factual truth claims.

7 The line of thought at issue in this paragraph is set out more fully in my The Primacy of Practice (Oxford: Blackwell's, 1973).
It thus shifts what was initially an ideal relating to the organization of our "body of (purported) factual knowledge" into a criterion of membership in it, a measure of the rational claims to truth that can be made on behalf of the theses one may incline to accept. What is at issue is a Hegelian inversion, shifting from the relatively innocuous principle: If a thesis is a part of real knowledge, then it must cohere fully with the rest of what is known, to its more enterprising converse: If a thesis coheres fully with the rest of what is (presumably) known, then it is a part of real knowledge.

The extent to which this view of systematicity as an arbiter of truth is reasonable will clearly be crucially dependent upon what sort of "systematization" is to be at issue. Here we have tried to maintain ("suggest" would perhaps be a more modestly seemly word) that if systematization is construed along the lines of a suitably articulated network model, then a case for the more ambitious claim that it can actually afford a truth criterion becomes defensible, with "truth criterion" understood along the lines of a functional equivalent for an "inductive logic."
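Put schematically (with K(p) and Coh(p) introduced here merely as illustrative shorthand for "p is part of real knowledge" and "p coheres fully with the rest of what is presumably known"), the inversion is the move from the first of these principles to its converse:

```latex
% K(p): p is part of real knowledge;  Coh(p): p coheres with what is (presumably) known
\[
\text{coherence as a test of knowledge (innocuous):}\qquad K(p) \;\rightarrow\; \mathrm{Coh}(p)
\]
\[
\text{coherence as a criterion of knowledge (the inversion):}\qquad \mathrm{Coh}(p) \;\rightarrow\; K(p)
\]
```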
VI. CONCLUSION

This discussion has contrasted two implementing models for the idea that knowledge is a system: the geometric, or Euclidean, model and the network model. These models can serve to elucidate along very diverse routes the conception of scientific knowledge as a system. The two theories of cognitive rationalization that result-viz., foundationalism and coherentism-have been examined and contrasted. The discussion has certainly not established coherentism decisively at the expense of its rival. But it has, I trust, succeeded at least in showing that the coherentist approach is a live, available option capable of providing an alternative to foundationalism as an epistemological program, an alternative which can avoid the inherent shortcomings of the foundationalist approach, and can exhibit the surviving serviceability of the conception of knowledge as true, justified belief.

NICHOLAS RESCHER
University of Pittsburgh

8 For a fuller exposition of the coherence approach to cognitive systematization see my The Coherence Theory of Truth (Oxford: Clarendon Press, 1973).