Mind and Supermind

Mind and Supermind offers a new perspective on the nature of belief and the structure of the human mind. Keith Frankish argues that the folk-psychological term 'belief' refers to two distinct types of mental state, which have different properties and support different kinds of mental explanation. Building on this claim, he develops a picture of the human mind as a two-level structure, consisting of a basic mind and a supermind, and shows how the resulting account sheds light on a number of puzzling phenomena and helps to vindicate folk psychology. Topics discussed include the function of conscious thought, the cognitive role of natural language, the relation between partial and flat-out belief, the possibility of active belief formation, and the nature of akrasia, self-deception, and first-person authority. This book will be valuable for philosophers, psychologists, and cognitive scientists.

Keith Frankish is Lecturer in Philosophy at the Department of Philosophy, The Open University. He has published in Analysis and Philosophical Psychology and contributed to Language and Thought: Interdisciplinary Themes, ed. P. Carruthers and J. Boucher (Cambridge, 1998).
Cambridge Studies in Philosophy

General editors: E. J. Lowe and Walter Sinnott-Armstrong

Advisory editors: Jonathan Dancy (University of Reading), John Haldane (University of St Andrews), Gilbert Harman (Princeton University), Frank Jackson (Australian National University), William G. Lycan (University of North Carolina, Chapel Hill), Sydney Shoemaker (Cornell University), Judith J. Thomson (Massachusetts Institute of Technology)

Recent titles:
Joshua Hoffman & Gary S. Rosenkrantz, Substance among other categories
Paul Helm, Belief policies
Noah Lemos, Intrinsic value
Lynne Rudder Baker, Explaining attitudes
Henry S. Richardson, Practical reasoning about final ends
Robert A. Wilson, Cartesian psychology and physical minds
Barry Maund, Colours
Michael Devitt, Coming to our senses
Sydney Shoemaker, The first-person perspective and other essays
Michael Stocker, Valuing emotions
Arda Denkel, Object and property
E. J. Lowe, Subjects of experience
Norton Nelkin, Consciousness and the origins of thought
Pierre Jacob, What minds can do
André Gallois, The world without, the mind within
D. M. Armstrong, A world of states of affairs
David Cockburn, Other times
Mark Lance & John O'Leary-Hawthorne, The grammar of meaning
Annette Barnes, Seeing through self-deception
David Lewis, Papers in metaphysics and epistemology
Michael Bratman, Faces of intention
David Lewis, Papers in ethics and social philosophy
Mark Rowlands, The body in mind: understanding cognitive processes
Logi Gunnarsson, Making moral sense: beyond Habermas and Gauthier
Bennett W. Helm, Emotional reason: deliberation, motivation, and the nature of value
Richard Joyce, The myth of morality
Ishtiyaque Haji, Deontic morality and control
Andrew Newman, The correspondence theory of truth
Jane Heal, Mind, reason, and imagination
Peter Railton, Facts, values and norms
Christopher S. Hill, Thought and world
Wayne Davis, Meaning, expression and thought
Andrew Melnyk, A physicalist manifesto
Jonathan L. Kvanvig, The value of knowledge and the pursuit of understanding
William Robinson, Understanding phenomenal consciousness
Michael Smith, Ethics and the a priori
D. M. Armstrong, Truth and truthmakers
Joshua Gert, Brute rationality: normativity and human action
Mind and Supermind

Keith Frankish
The Open University
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521812030

© Keith Frankish 2004

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2004

ISBN-13 978-0-511-23154-4 eBook (NetLibrary)
ISBN-10 0-511-23154-7 eBook (NetLibrary)
ISBN-13 978-0-521-81203-0 hardback
ISBN-10 0-521-81203-8 hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
For my parents, Arthur and Eileen Frankish, in gratitude for their never-failing support, encouragement, and love
Contents

List of figures
Preface

1 Introduction
  1 The core claim
  2 An overview of the book
  3 Methodological remarks

2 Divisions in folk psychology
  1 Belief
  2 Reasoning
  3 Mind
  Conclusion and prospect

3 Challenges and precedents
  1 Challenges
  2 Bayesians on flat-out belief
  3 Opinion and the Joycean machine
  4 Acceptance
  Conclusion and prospect

4 The premising machine
  1 Premising policies
  2 Premising and the role of language
  3 The premising machine
  Conclusion and prospect

5 Superbelief and the supermind
  1 Superbelief and superdesire
  2 Challenges met
  3 The supermind
  Conclusion and prospect

6 Propositional modularity
  1 The eliminativist threat
  2 The case for propositional modularity
  3 Propositional modularity vindicated
  Conclusion and prospect

7 Conceptual modularity
  1 The case for conceptual modularity
  2 Conceptual modularity vindicated
  3 The future of folk psychology
  Conclusion and prospect

8 Further applications
  1 Akrasia
  2 Self-deception
  3 First-person authority
  4 Scientific psychology
  Conclusion

Conclusion

References
Author index
Subject index

Figures

1 The two strands of folk psychology
2 Precedents for strand 2 belief
3 The structure of the human mind
4 Varieties of acceptance_p
Preface

It is an old adage that two minds are better than one, and the same may go for theories of mind. Anyone with even a passing acquaintance with modern philosophy of mind knows that philosophers differ widely in their view of the nature of mental states. One of the sharpest differences is that between the views of Daniel Dennett and Jerry Fodor. According to Dennett (or a slightly caricatured version of him), there is nothing more to having a belief or desire than being disposed to behave in the right way. Mentalistic discourse is a shallow, but very useful, way of characterizing and predicting people's behaviour. According to Fodor, on the other hand (to caricature slightly again), beliefs and desires are discrete, linguistically structured representational states, and everyday mentalistic discourse incorporates a theory of the internal processes that generate behaviour. These views seem, on the face of it, straightforwardly incompatible, and it is widely assumed that endorsing one means rejecting the other. I am going to argue that this is not so. When we look carefully, we find some striking divisions in the way we use mentalistic terms and in the kinds of mental explanation we give. In a rush to establish the scientific credentials of folk psychology, philosophers have tended to gloss over these divisions, imposing a unified framework on the folk concepts and practices. This has, I think, been a mistake. If we take the divisions seriously and trace out their implications, we are led to a picture of the human mind as a two-level structure, in which the two levels are differently constituted and have different functions. And when we do this, we see that the views of Dennett and Fodor are not so opposed after all. We have, in a sense, two minds, and need a two-strand theory of mind.

This book began life as a doctoral thesis at the University of Sheffield, and it owes a huge debt to the person who supervised that thesis, Peter Carruthers. Over many years Peter has given very generously of his time, and I thank him for his inspiration, support, and consistently excellent advice. Without him, this book would probably not have been written;
it would certainly have been much poorer. Thanks are also due to Chris Hookway, who acted as my secondary supervisor, and to George Botterill and Peter Smith, who examined the thesis and supplied helpful feedback. George also supervised my work on the initial proposal from which the thesis grew, and I thank him for his encouragement and advice in those early days. More recently, Maria Kasmirli and an anonymous referee have supplied comments on the typescript, for which I am grateful. During the course of writing I have also benefited from discussions and correspondence with many friends and colleagues, among them Alex Barber, Jill Boucher, Gavin Boyce, Andy Clark, Tom Dickins, Pascal Engel, André Gallois, Nigel Gibbions, David Harrison, Stephen Laurence, Patrick Maher, Betty-Ann Muir, Gloria Origgi, David Owens, David Papineau, Carolyn Price, and Dan Sperber. I thank them all.

The influence of Daniel Dennett's writings will be evident throughout this book. Dennett has himself indicated the need for a two-strand theory of mind, and his original essay on the topic has been an important inspiration for the ideas developed here (Dennett 1978a, ch. 16).

Parts of this book make use of material from two earlier publications of mine, though with substantial revision and rewriting. Chapters 3 and 5 draw on my 'A matter of opinion' from Philosophical Psychology, 11 (1998), pp. 423–42, with thanks to the editor, William Bechtel, and to the publishers, Taylor and Francis (http://www.tandf.co.uk/journals). Chapter 4 draws upon my 'Natural language and virtual belief', in Peter Carruthers and Jill Boucher (eds.), Language and Thought: Interdisciplinary Themes (Cambridge: Cambridge University Press, 1998), pp. 248–69, with thanks to the editors and to Cambridge University Press.

Earlier versions of some of the chapters of this book were used as teaching texts for my third-year course 'Mind, action, and freedom', which I taught in the Department of Philosophy at the University of Sheffield during the Spring semester of 1997. I am grateful to all my students in that class for their comments and questions – mentioning in particular Clare Heyward and Intan M. Mohamad. I should also like to express my gratitude to Hilary Gaskin, Mary Leighton, Pauline Marsh, and Lucille Murby for their help in preparing this book for the press. Finally, very special thanks are due to my parents, to whom this book is dedicated, and to Maria, whose contribution has been the most important of all.
1 Introduction

In this opening chapter I shall introduce my core claim, provide an overview of the chapters to follow, and make some remarks about the aims and scope of the project.

1 The core claim

The concept of belief is a multi-faceted one. A belief ascription may pick out an episodic thought or a long-held opinion, a considered conviction or an unthinking assumption, a deliberate judgement or a perceptual impression. In the first person, it may express a tentative suggestion or an item of profound faith, a speculative hypothesis or a confident assertion, a routine recollection or a revelatory self-insight. This diversity is not in itself a problem; many everyday concepts have a similar richness of structure. The concept of belief is special, however. For many philosophers and psychologists believe that it can be co-opted to play a very precise role. They believe that our everyday practices of psychological description, explanation, and prediction – practices often referred to as folk psychology – are underpinned by a primitive but essentially sound theory of human cognition, whose concepts and principles will be central to a developed science of the mind. That is to say, they believe that the concept of belief, together with those of other folk-psychological states, can be integrated into science and applied to the business of serious scientific taxonomy. I shall refer to this view as integrationism.[1]

[1] The term 'folk psychology' is often used to refer to the putative theory underpinning our everyday practices of psychological explanation and prediction, as well as to the practices themselves. To avoid confusion, I shall use the term only in its broader sense, to refer to the practices.

In a weak form, at least, integrationism is an attractive position: there is a strong case for thinking that the broad explanatory framework of folk psychology is as sound as those of other special sciences (Fodor 1987,
ch. 1; Botterill and Carruthers 1999, ch. 2). Of course, folk psychology cannot be integrated into science just as it stands. At the very least, it will be necessary to identify its theoretical core – to analyse its central concepts and to articulate its fundamental principles and assumptions. It may also be necessary to refine this core and make it more precise, and perhaps even to revise it in some ways. And, of course, we shall need to confirm that the resulting theory is sound and, more specifically, compatible with what we know about human biology and neurology. A huge amount of work has been devoted to these tasks, yet no consensus has emerged. There are deep and seemingly intractable disputes about the nature of belief – its metaphysics, semantics, causal role, and relation to language. And there are continuing worries about the compatibility of the folk theory with our best neuroscientific theories. In the light of these problems, some writers have concluded that we should abandon integrationism and eliminate folk-psychological concepts from science, while others argue that only attenuated versions of the folk concepts can be retained (for the former view, see Churchland 1979, 1981; Stich 1983; for the latter, Clark 1993b; Dennett 1987; Horgan and Graham 1990).

In this book I shall be outlining an alternative integrationist strategy which promises to resolve some of the theoretical disputes just mentioned and to establish the compatibility of folk psychology and neuroscience, while at the same time preserving a robust common-sense conception of the mind. The strategy involves questioning an assumption common to most existing integrationist projects. This is that belief is a unitary psychological kind – that whenever we ascribe a belief to a person, creature, or system, we ascribe essentially the same kind of state.[2]

[2] Here, as throughout, I focus primarily on belief. I assume, however, that parallel claims can be made for desire, and perhaps for other mental states, too, and shall occasionally indicate how these would go.

Of course, no one denies that belief has varied aspects and manifestations – it is widely accepted that beliefs can be both occurrent and standing-state, explicit and tacit, conscious and non-conscious, and so on. But it is generally assumed that these are different aspects or variants of the same underlying state. So occurrent beliefs can be thought of as activations of standing-state beliefs, tacit beliefs as dispositions to form explicit beliefs, conscious beliefs as beliefs that are the object of higher-order beliefs, and so on. This assumption – the unity of belief assumption, as I shall call it – shapes the direction of most integrationist projects. Typically, theorists begin by articulating a core notion of belief, and then go on to show how different varieties of belief can be
defined in terms of it.

The unity of belief assumption is often coupled with a parallel assumption about reasoning. Theorists tend to assume that this, too, has a uniform character, and to advocate single-strand theories of reasoning, which identify thought processes with a single, generic kind of activity – computational operations in a mental language, say, or associative processes of some kind. This assumption – I shall call it the unity of processing assumption – is not quite so pervasive as the parallel one about belief, and has occasionally come under challenge from psychologists. It is common in the philosophical literature, however, and shapes many of the debates there.

There have, it is true, been dissenting voices, suggesting that the apparent uniformity of folk-psychological discourse masks important psychological distinctions. Some writers distinguish passive belief formation from active judgement. Philosophers of science, too, commonly mark a distinction between partial and flat-out belief (sometimes called 'acceptance'). And Daniel Dennett has argued that we must distinguish non-verbal beliefs from a class of language-involving cognitive states which he calls opinions. The distinction between the two states is, he claims, a very important one:

    My hunch is that a proper cognitive psychology is going to have to make a sharp distinction between beliefs and opinions, that the psychology of opinions is really going to be rather different from the psychology of beliefs, and that the sorts of architecture that will do very well by, say, nonlinguistic perceptual beliefs (you might say animal beliefs) is going to have to be supplemented rather substantially in order to handle opinions. (1991b, p. 26)
Indeed, Dennett suggests that a failure to distinguish these states lies at the root of many philosophical misconceptions about belief (see the references to 'opinion' in Dennett 1987; see also his 1991d, p. 143, and 1994, p. 241). It would not be too surprising if something like this were true. After all, everyday users of folk psychology are interested primarily in behavioural prediction and explanation, not precise psychological taxonomy. If two psychological states or processes were similar enough to be lumped together for everyday purposes, then we should not expect folk psychology to make a sharp distinction between them – though it might register their distinctness in indirect ways. The states and processes in question might nonetheless differ significantly, and it might be important for a developed psychology to distinguish them – even if we continued to conflate them for everyday purposes. That is to say, folk-psychological concepts may
turn out to be what Block calls mongrel concepts (Block 1995), and it may be necessary to distinguish different versions of them if they are to be integrated into science. The introduction of new distinctions like this is common in integrationist projects. Consider, for example, how psychological theory has adopted the common-sense concept of memory, while at the same time distinguishing various kinds of it – long-term, short-term, episodic, procedural, semantic – each with different functions and properties. I believe that something similar will happen with the folk concepts of belief and reasoning. These, I shall argue, conflate two different types of mental state and two different kinds of mental processing, which form two distinct levels of cognition. That is, I shall be arguing that the search for a single theoretical core to folk psychology is misguided: folk psychology has – or a rational reconstruction of it will have – two distinct theoretical cores. (I shall say more in a moment about the relative roles of analysis and rational reconstruction in this project.) In short, we need a two-strand theory of mind. Only by developing such a theory, I believe, can we resolve some deep disputes about the mind and provide a sound basis for integrating folk psychology into science.

2 An overview of the book

To date, there have been few sustained attempts to develop two-strand theories of belief. Few theorists have sought to link up the various distinctions that have been proposed or to explore their implications for issues in philosophy of mind. Dennett is one of the few exceptions here, drawing on a number of sources in a richly suggestive essay (Dennett 1978a, ch. 16). However, he does not work out his ideas in a systematic way and tends to treat opinion as something of a cognitive side-show, which is not directly implicated in reasoning and the guidance of action. And while some psychologists have advanced 'dual-process' theories of reasoning, there have been few attempts to integrate these theories with two-strand theories of belief or to consider their philosophical consequences. This book aims to remedy these omissions.

Chapter 2 begins by highlighting some divisions in the folk notion of belief – divisions relating to consciousness, activation level, degree, method of formation, and relation to language. These divisions, I argue, are real and run deep and link together in a natural way to yield a tentative two-strand theory of belief – the first strand non-conscious, partial, passive, and non-verbal, the
second conscious, flat-out, active, and often language-involving. I then move on to look at similar divisions in our view of reasoning. Again, I argue that these run deep and indicate the need for a two-strand theory – the strands corresponding closely to the two strands of belief. The final section of the chapter looks at some further divisions in folk psychology, concerning the ontological status of belief and the function of psychological explanation. I identify two broad interpretations of folk psychology, which I call austere and rich, and which correspond roughly to the views of philosophical behaviourists and functionalists respectively. On the austere interpretation, folk psychology is a shallow theory, which picks out behavioural dispositions and offers explanations that are causal only in a weak sense. On the rich interpretation, it is a deep theory, which aims to identify functional sub-states of the cognitive system and to offer causal explanations of a more robust kind. I suggest that these two interpretations each have a firm basis in the folk outlook and that a reconstructed folk psychology needs to admit both. The two interpretations, I argue, indicate the need for two theories, corresponding to the two strands of mentality identified earlier: an austere theory for the non-conscious strand, and a rich theory for the conscious one.

It is one thing to identify two strands of mentality, of course, another to construct a substantive two-strand theory of mind. A developed theory will need to explain how the two strands are related to each other, what role they play in reasoning and action, and how they combine to form a single intentional agent. Chapters 3, 4, and 5 are devoted to this task. Chapter 3 begins by setting out some challenges to the proposed two-strand theory. Prominent among these is what I call the 'Bayesian challenge' – the challenge of reconciling our common-sense belief in the existence and efficacy of flat-out belief with a Bayesian view of rational decision-making. I then review some precedents for a two-strand theory of mind, seeking hints as to how to develop the theory and respond to the challenges. I focus in particular on possible models for the conscious, flat-out, language-involving strand of belief, and on suggestions as to how this strand might be related to the other, non-conscious strand. Although none of the models examined fits the bill exactly, I identify several promising ideas, including the behavioural view of flat-out belief developed by some Bayesians, Dennett's picture of the conscious mind as a virtual machine, and Cohen's account of acceptance as a premising policy. I conclude the chapter by suggesting how elements of these views can be combined to give a picture of the conscious mind as a premising machine, formed by the adoption
and execution of premising policies, and driven by non-conscious, partial beliefs and desires.

Chapter 4 is devoted to filling in the picture of the premising machine sketched in the previous chapter. I discuss the nature and scope of premising policies and distinguish several varieties of them, including a goal-oriented form. I then look at what is involved in executing these policies and what role natural language plays in the process. Finally, I consider how premising policies are related to other mental states and how they influence action. Crucially, I argue that an agent's premising policies are realized in their non-conscious, partial beliefs and desires – and thus that the premising machine constitutes a distinct level of mentality which supervenes on the one below it.[3] To emphasize the point, I call premising policies supermental states, and the level of mentality they constitute the supermind. By analogy, I call the non-conscious attitudes in which the supermind is realized the basic mind. Because the supermind is realized in the basic mind, I argue, supermental explanations of action are not in competition with those pitched at the basic level. Rather, each corresponds to a different level of organization within the agent.

[3] Throughout this book I use 'they' as a gender-neutral third-person singular pronoun. This usage has a long history in English prose and is, I think, the least inelegant way of avoiding an impression of gender bias.

Chapter 5 shows how we can use the framework developed in the previous chapter to flesh out the two-strand theory outlined in chapter 2. I begin by arguing that conscious, flat-out beliefs can be identified with a particular subclass of premising policies, and thus that they, too, are supermental states. The upshot of this is that our two-strand theory of mind becomes a two-level one, with conscious, flat-out states realized in non-conscious, partial ones. (In line with the terminology adopted earlier, I call the former superbeliefs and the latter basic beliefs.) I then go on to highlight the attractions of this view, and to show how it can resolve the challenges posed in chapter 3. The chapter concludes with some remarks on the function of the supermind. I argue that supermental capacities carry with them considerable cognitive benefits. The supermind is a slow but highly flexible system, which can kick in whenever faster but less flexible basic processes fail to yield a solution. Moreover, because supermental processes are under personal control, we can reflect on them, refine them, and supplement them. The flexibility, adaptability, and improvability of human cognition flow directly from the supermind.
Having developed the core theory and shown how it can vindicate some important aspects of folk psychology, I then move on to consider some further folk commitments and to show how these, too, can be vindicated by the theory. There is a case for thinking that folk psychology makes some substantial assumptions about the functional architecture of the cognitive system. In particular, it has been argued that there is a folk commitment to the theses of propositional modularity and conceptual modularity – accounts of how propositional attitudes and their component concepts are stored and processed (Ramsey et al. 1990; Davies 1991). The idea that there is a folk commitment to these theses is worrying, since they in turn seem to involve claims about the architecture of the brain, and therefore to run a risk of empirical falsification. If the folk are committed to them, then their conception of the mind may be seriously mistaken and ripe for revision or even elimination. In chapters 6 and 7 I show how the two-level theory developed in earlier chapters can vindicate folk psychology's architectural commitments. Chapter 6 deals with propositional modularity and chapter 7 with conceptual modularity. In each case I begin by arguing that there is indeed a folk commitment to the thesis, building on arguments in the literature. I then show that this commitment can be vindicated at the supermental level, without involving claims about the structure of the brain. I show that the supermind exhibits both propositional modularity and conceptual modularity, and thus that the folk assumptions are correct. I argue, however, that there is no inference from this to claims about the structure of the brain. The supermind is implemented, not in the hardware of the brain, but in basic-level intentional states and actions. And the basic mind need not itself exhibit propositional or conceptual modularity in order to support a supermind that does. The upshot of this is that the folk architectural commitments are compatible with any account of the underlying neural architecture. Given this, the threat to folk psychology – in this guise at least – vanishes.

The final chapter outlines some further applications of the proposed theory – starting with a discussion of akrasia and self-deception. These conditions can seem puzzling, and it is sometimes suggested that they reveal the presence of conflicting subagents within the human psyche (Davidson 1982; Pears 1984). Here a two-level theory offers a different and, I think, more attractive perspective. The conditions can be thought of as involving a conflict, not between subagents, but between levels of
mentality – the attitudes at one level in tension with those at the other. In each case, I sketch the two-level account and show that it offers an economical way of resolving the associated puzzles. The chapter then moves on to look at first-person authority. Again, this can appear puzzling. How can we be authoritative about our mental states, given that there are independent behavioural criteria for their possession (Heal 1994)? And, again, the present theory offers a fresh perspective. The thought is that first-person authority proper extends only to supermental states, and is primarily a matter of control. Superbeliefs can be actively formed and processed, and in self-ascribing these states we are not simply reporting that we meet the criteria for their possession, but committing or recommitting ourselves to meeting them. The authority attaching to such an ascription, then, is that of a sincere commitment, rather than that of a reliable report. In addition to helping to clarify philosophical debates about belief, the theory developed here may have application to issues in developmental and clinical psychology, and the final chapter closes with some brief remarks on this. In particular, I suggest that the theory may be able to shed light on the nature of autism.

3 Methodological remarks

It may be useful at this stage to add some remarks about the scope and status of the theory I shall be developing. First, note that in distinguishing different kinds of belief I shall focus on differences in structure, function, and constitution, rather than content. This is not because I think that the two do not differ in their representational properties. We may need a different theory of content for each kind of belief, and there may also be differences in the range and determinacy of the contents associated with each. But it makes sense to consider broad structural and constitutive questions first. The distinction between the two kinds of belief can be drawn most clearly in this way, and once it is in place, questions about content may become more tractable. The same semantic questions will arise for each type of belief, and we shall be in a better position to address them when we know what kind of attitude is involved in each case. I shall say something about concept possession at the two levels in chapter 7, but there will be no space for extended discussion of issues of content. In this respect the present book can be thought of as preparing the ground for future work.
Secondly, in proposing a two-level theory of mind, I do not mean to claim that there are only two levels of cognition. My aim is to reconstruct folk psychology – to show how we can bring our various beliefs about the mind into a coherent theory – not to provide a complete framework for psychological theorizing. In particular, I do not mean to deny the existence of a level of sub-personal psychology underlying the folk-psychological levels (we might call it a 'sub-mind'), though I do claim that the folk are uncommitted as to the existence and nature of such a level. I shall say more about this later.

Thirdly, I want to add a note about the account of the basic mind I shall be defending. As I mentioned, I shall advocate what I call an austere view of this level, which treats mental states simply as behavioural dispositions. Folk psychology's architectural commitments, I shall argue, relate to the supermind, not the basic mind. (Again, this does not mean denying the existence of sub-personal psychological structures underlying the dispositions that constitute the basic mind – denying, for example, that there is a sub-personal language of thought or a range of domain-specific cognitive modules. The claim is merely that folk psychology does not postulate such structures.) This is, I think, the correct view to take. As I shall argue in chapter 2, there is a strand of folk psychology which has no architectural commitments, and one of the virtues of a two-level theory is that it can reconcile the existence of this strand with that of another which does have such commitments. Moreover, it is tactically the right position for me to adopt. For one of my aims is to show how a richly structured supermind can be realized in basic mental states and processes. And in adopting an austere view of the basic mind, I present myself with the hardest case here: if I can show that a richly structured supermind can be realized in an austere basic mind, then it should not be more difficult to show that it can be realized in a richer one. However, nothing in my account of the supermind relies on an austere view of the basic mind, and it is possible to endorse the former while rejecting the latter. So if you balk at austerity, then feel free to substitute whatever view of the basic mind you prefer. In doing so, you will deprive the account of some of its conciliatory power, but the overall shape of the two-level theory and the description of the nature and function of the supermind will remain unaffected by the change.

Finally, let me say something about the status of the two-level theory I shall develop and the means by which it will be derived. The first part of this is easily done. The theory is an empirical one – a model of how the human mind might be organized – and its evaluation will require empirical
investigation. The theory will not be derived by empirical investigation, however, and I shall not be drawing on experimental data in support of it – though I shall aim to say nothing that is incompatible with it.[4] The exclusion of such data reflects the scope of the present inquiry. If folk psychology is to be integrated into science, then two things must be done. It will be necessary, first, to identify and regularize its theoretical core, and secondly, to establish that the resulting theory is true. The primary aim of this work is to accomplish the first task – to articulate a theory which best systematizes our common-sense intuitions about what the mind is like. My theorizing will thus be constrained by a belief in the fundamental soundness of the descriptive and explanatory practices of folk psychology, and the data upon which I shall draw will be the product, not of experiment, but of reflection on those practices.

[4] This is a self-denying ordinance, since there is much experimental evidence that could be cited in support of the thesis. There is a particularly interesting consilience between the ideas that will be developed here and the dual-process theories of reasoning developed by Jonathan Evans and David Over, among others (see Evans and Over 1996). I shall say a little about these theories in chapter 8.

This is not to say, however, that the work will be merely one of conceptual analysis. I do not claim that a two-level framework is revealed simply by careful examination of folk-psychological practice. I do not think that the practice is sufficiently well-defined for that. Nor do I claim that a two-level framework is tacitly assumed by users of folk psychology. I suspect that, for the most part, they simply conflate the states and processes that I aim to distinguish. (The generalizations which underpin everyday psychological discourse are, I think, sufficiently loose to hold true of both levels of cognition.) So while the present work will begin with conceptual analysis, highlighting a number of divisions within folk psychology, it will not stop there, but will go on to engage in rational reconstruction – to seek to identify the theoretical framework which best regiments our folk usage. Thus the aim will be, not to elucidate a pre-existing theoretical framework, but to supply one where it is lacking. And the resulting theory will be the product, not only of an analysis of folk discourse, but also of abductive inference from it.

This is not all, however. The theory will also have important implications for the second of the two parts of the integrationist project – the task of showing that the core folk theory is true. For if the arguments in the later chapters are sound, then this task will take on a very different aspect. According to those chapters, folk psychology involves no commitments as to the internal architecture of the brain: the supermind is constituted by
various personal attitudes and activities, grounded in a basic mind which itself consists of a set of multi-track behavioural dispositions. If this is right, then vindicating folk psychology will be a matter for behavioural observation and interpretation. It will involve showing that people can be interpreted as intentional agents, and that, so interpreted, they turn out to exhibit the attitudes and actions that are constitutive of supermentality. And the existing evidence in favour of these claims is, as we shall see, very strong.

To sum up, then, I shall be claiming that a two-level theory of the sort I propose offers the best way of giving folk psychology an explicitly theoretical structure and the best hope of vindicating the explanatory practices it supports. The folk may not have a two-level theory of mind, but it's time they got one.
2 Divisions in folk psychology

This chapter sets out to challenge the unity assumptions and to establish the framework for a two-strand theory of mind. It highlights a number of divisions in folk psychology – distinctions we draw between types of belief and reasoning, and tensions in our thinking about the mind and mental explanation. These divisions do not compel us to abandon the unity assumptions; they can be explained away or dismissed as superficial. But, as we shall see, a two-strand theory can account for them in a particularly attractive way. The chapter is divided into three parts. The first looks at divisions in our view of belief, the second at related divisions in our view of reasoning, and the third at some deeper tensions in our view of the mind. Since the purpose of the chapter is to gather data, it will necessarily have a somewhat disjointed character, but connections will emerge as we go on, and by the end we shall have the outline of a tentative two-strand theory of mind.

1 Belief

This part of the chapter reviews some distinctions we draw between different types of belief. There are some features common to all the types surveyed. They all have propositional content (or at any rate, token beliefs do; it is often argued that propositional content can be attributed to beliefs only as they are entertained in particular contexts); they all have mind-to-world direction of fit (they represent their contents as obtaining, rather than as to be made to obtain); and they all guide inference and action in a way that reflects their content and direction of fit – prompting actions and inferences that are rational in the light of them. These common features provide some grounds for the view that they are all variants of a single core state (what I called 'the unity of belief assumption'). When we look more closely, however, the differences appear as marked
as the similarities, and a serious challenge emerges to the unity of belief assumption.[1]

[1] For another approach to the ambiguity of belief, involving a fourfold classification, see Horst 1995.

1.1 Conscious versus non-conscious

The first distinction I want to mention is that between conscious and non-conscious beliefs. By conscious beliefs I mean ones that we are apt to entertain and act upon consciously. Non-conscious beliefs, on the other hand, are ones that influence our behaviour in an automatic, unreflective way, without being consciously entertained. For example, my behaviour when driving is guided by non-conscious beliefs about the rules of the road. It is worth stressing that to say that a belief is non-conscious, in this sense, is not to say that its possessor is not consciously aware of its existence. For example, noticing the way I place my feet as I walk down the street, I may consciously conclude that I have a non-conscious belief that it is dangerous to tread on the cracks in the pavement. That is to say, I can be conscious of a belief without the belief itself being conscious – that is, without its being apt to be consciously entertained and acted upon. (Consciously thinking that I believe that it is dangerous to tread on the cracks is different from consciously thinking that it is dangerous to tread on the cracks.)

Widespread acceptance of the existence of non-conscious mental states may owe something to the influence of Freudian psychoanalytic theory (though the ease with which elements of Freudian theory were absorbed into popular culture suggests that the seeds of the notion have been long present in folk psychology). However, the conception of the non-conscious mind invoked here is far less theoretically loaded than Freud's. The Freudian Unconscious is a collection of repressed memories and desires, often of a traumatic or sexual nature, which manifest themselves in pathological behaviours of various kinds. The non-conscious mind, as pictured here, is much more mundane. It consists for the most part of everyday beliefs and desires, such as my beliefs about the rules of the road, which are formed in the normal way and which help to shape normal behaviour. There is no implication that the states involved have been repressed, or that the subject is unwilling to acknowledge their existence and
influence. (This is not to deny that some of our non-conscious beliefs and desires may conflict with our conscious ones, or that some may reflect hidden fears and anxieties. But these cases will be the exception rather than the rule.) The existence of non-conscious mental states of this anodyne kind is now widely acknowledged, and I shall assume that folk psychology embraces it.[2]

[2] It is true that there is some philosophical resistance to the notion of non-conscious mentality (see, for example, Searle 1992), but I do not think that the folk share these worries.

Now, in itself, the conscious/non-conscious distinction does not pose a significant challenge to the unity of belief assumption. Conscious and non-conscious beliefs might both belong to the same basic psychological kind, differing only in their possession, or lack, of some consciousness-conferring property – possibly a relational one (see, for example, Armstrong 1968, 1984; Carruthers 1996b; Rosenthal 1986, 1993). Indeed, token beliefs often seem to switch between conscious and non-conscious forms – sometimes being consciously entertained, sometimes influencing our behaviour non-consciously (again, my beliefs about the rules of the road are examples).

I shall be arguing that this is a mistake, and that conscious and non-conscious beliefs are of fundamentally different types. As we shall see, the conscious/non-conscious distinction aligns with a number of others, suggesting that conscious and non-conscious beliefs are differently constituted and belong to different systems. On this view, then, it is a mistake to think that a token belief can switch between conscious and non-conscious forms, sometimes operating at a conscious level and sometimes at a non-conscious one. In such cases, I shall argue, one has in fact two distinct token beliefs, of different types but similar content, which operate in different ways and at different levels. (Note, however, that I shall not be offering a theory of consciousness itself. I am interested in how conscious beliefs are constituted and how they function, not in what makes them conscious, and everything I say will be broadly compatible with all the various theories on the latter point.)

1.2 Occurrent versus standing-state

It is common to accept that beliefs can exist in two forms – both as dormant states of one's cognitive system and also as active mental events. At any moment, we all possess a huge number of beliefs which are not
currently active in our minds. I believe – among many other things – that cider is made from apples, that my surname ends with an 'h', and that Tallahassee is the state capital of Florida. These are not things I think about much, but I believe them, and my belief would manifest itself in appropriate circumstances (if I were questioned, say). Beliefs like these are sometimes referred to as standing-state beliefs, and are contrasted with occurrent ones – that is, with episodes in which a belief is brought actively to mind. (The term 'occurrent belief' is not an everyday one, of course; indeed, there is no distinctive popular name for these episodes – 'thought' is perhaps the nearest, but it is not unambiguous.) We are often aware of experiencing such episodes. For example, just now I was reflecting that the weather is unseasonably warm and thinking that it would be wise to turn the thermostat down. Episodes like this can occur spontaneously without any effort on our part – thoughts just strike us or pop into our heads – but we can also set out to induce them deliberately, as when we rack our brains for the answer to a question. Occurrent thoughts – at least when conscious – invariably occur serially, and they often form coherent sequences, linked by bonds of association or justification. Much of our conscious mental life consists of such trains of thought. This is the so-called 'stream of consciousness' which some novelists have tried to reproduce.[3]

Most writers on belief have recognized the existence of both occurrent and standing-state beliefs, though not all have given them equal weight in their theorizing. Early modern philosophers tended to concentrate on occurrent belief and to neglect the standing-state variety. Twentieth-century behaviourists, by contrast, switched the focus to standing-state belief and largely ignored the occurrent form. In part, this reflected a difference of theoretical interest – in one case in the phenomenology of belief, in the other in its role in the explanation of action – but an adequate account of belief should accommodate both aspects.[4] The dominant contemporary view – the representational theory of mind – sees the two varieties as different aspects of the same state, differing in their level of activity. The theory identifies standing-state beliefs with stored representations, and occurrent beliefs with activations of these representations,
For discussion of the nature of occurrent thought and an argument for its distinctness from other kinds of mental state, see Swinburne 1985. Henry Price characterizes the traditional view as the Occurrence Analysis and the behaviourist one as the Dispositional Analysis, and argues that elements of both are needed (Price 1969). For an account of belief which combines both phenomenological and actionbased criteria, see Braithwaite 1932–3.
15
preparatory to their employment in reasoning and decision-making (see, for example, Fodor 1987). On this view, then, occurrent belief again occupies centre stage, with standing-state beliefs remaining idle until activated in occurrent form.

This view has an intuitive appeal. It does often seem as if occurrent activation is required for a belief to play a role in reasoning and decision-making. I am driving to work as usual. Suddenly, it occurs to me that roadworks are due to start today, and I decide to take a different route. I do so, moreover – or so it seems to me – precisely because the thought about the roadworks occurred to me. Had it not done so, I would not have changed course (other things being equal, that is). The belief's emerging in occurrent thought was necessary for it to play a role in my decision-making. Or think about absent-mindedness. The light bulb blows and I go off in search of a new one. When I return to the room, my hand again goes to the light switch. Why? It seems that I had forgotten that the bulb was blown. Yet surely my memory is not that bad. I had not ceased to believe that the bulb was blown and would immediately have avowed that belief if questioned. The belief, it seems, was stored in my memory, but somehow failed to influence my behaviour. The natural way of explaining this would be to say that it failed to occur to me – that is to say, that it failed to influence how I behaved because it failed to become occurrent at an appropriate moment.[5]
For further defence of this view, see Goldman 1970, ch. 4. This indicates, Malcolm goes on, that having thoughts is not the paradigmatic form of mental activity. For we find it very natural to speak of animals thinking and to explain their behaviour by reference to what they think. Malcolm concludes that there is no single paradigm or prototype of thinking and suggests that it was by treating having thoughts as the paradigm form that Descartes was led to deny that animals can think.
16
Divisions in folk psychology commitment to this view. That is to say, occurrent belief, as we commonly conceive of it, may be unique to the conscious mind, and a behaviourist perspective may be more appropriate for the non-conscious mind. I shall say more about this in the next part of this chapter.7 With the distinctions between standing-state and occurrent belief in place, I can now restate more clearly the proposal I am making. I am suggesting that there are two types of standing-state belief: those of the first type receive activation as conscious occurrent thoughts and influence action at a conscious level, and those of the second do not receive conscious occurrent activation and influence action non-consciously. Henceforth, when I talk of conscious and non-conscious beliefs it is these two types of standing-state belief I shall mean. (Of course, there is a sense in which all standing-state beliefs are non-conscious, in virtue of the fact that they are not currently present to consciousness, but I am using the term to denote availability to conscious thought rather than actual presence.) Note that a standing-state belief counts as conscious only if it is apt to be activated as a conscious occurrent thought with the same content, and that we can entertain conscious occurrent thoughts about our non-conscious standingstate beliefs without thereby rendering those beliefs conscious. So, for example, I can entertain the conscious occurrent thought that I have a nonconscious standing-state belief that it is dangerous to tread on the cracks in the pavement without thereby coming to have a conscious standing-state belief that it is dangerous to tread on the cracks in the pavement. 1.3 Flat-out versus partial We often speak of belief as a binary, or flat-out, state, which is either categorically present or categorically absent. So, for example, I believe that my car is green but do not believe that it has an automatic gearbox. A binary view of belief is also implicit in much of our reasoning, which frequently operates upon unqualified propositional attitudes. We reason, for example, that we want a beer, that if we go to the fridge we can get a beer, and thus that a trip to the fridge is called for. This is not the whole picture, however. For we also speak of having degrees of confidence, or ‘partial beliefs’, which are continuously variable. So, for example, I am very confident (though not certain) that tap water is safe to drink, somewhat 7
It is worth stressing that not all occurrent thoughts are activations of previously formed beliefs; some involve the formation of new ones, while others are idle speculations or fantasies. I shall say more about these other forms of occurrent thought later.
17
Mind and Supermind less confident that my car is in good working order, and still less confident that it will not rain today. Degrees of confidence are sometimes referred to as subjective probability assignments, and it can be argued that rational decision-making should be sensitive to them, in the way prescribed by Bayesian decision theory. (I shall say more about this shortly.) The easiest way of ascertaining what degree of confidence a person has in various propositions is to offer them bets on their truth; if they are rational and the payoffs are linear, then their betting behaviour will vary in accordance with their degrees of confidence. So here we have another division – between a qualitative, flat-out form of belief and a quantitative, partial form. This is not, of course, to say that the two forms are fundamentally distinct. We might maintain that one of them is the core form of belief and the other a derivative or subspecies of it. There are two options here, depending on whether we take partial or flat-out belief to be the core state. Neither is particularly attractive, however. The former option – taking partial belief as core – has been widely canvassed, and I shall consider it in detail in the next chapter. Here I want to deal with the latter – the view that flat-out belief is the core state and partial belief the derivative. What are the possibilities here? One option, advocated by Gilbert Harman, is to suppose that when we talk of degree of belief we are referring to how strongly held our flat-out beliefs are, where this is a matter of how hard it would be for us to give them up (Harman 1986, ch. 3). So, one either believes a proposition or does not believe it; but if one does, then one does so with a certain degree of strength or attachment. (Harman stresses that this need not involve making an explicit assessment of how important the belief is to us; our degree of attachment to it may be implicit in the way we reason – the more attached to it we are, the more powerful the reasons needed to get us to abandon it.) Now, I think that it is quite right to say that flat-out beliefs can be held with different degrees of attachment, but it is implausible to identify degrees of confidence with these degrees of attachment. For we can have a degree of confidence in a proposition without having a flat-out belief in it at all. I am fairly confident that it will not rain tomorrow, but I do not believe flat-out that it will not. Indeed, the claim that degrees of confidence require flat-out belief leads to absurdity. For according to Bayesian principles, rational agents will entertain some degree of confidence in every proposition of whose falsity they are not completely certain – including pairs that are contradictory. Yet it is absurd to say that a rational agent will have a 18
So here we have another division – between a qualitative, flat-out form of belief and a quantitative, partial form. This is not, of course, to say that the two forms are fundamentally distinct. We might maintain that one of them is the core form of belief and the other a derivative or subspecies of it. There are two options here, depending on whether we take partial or flat-out belief to be the core state. Neither is particularly attractive, however. The former option – taking partial belief as core – has been widely canvassed, and I shall consider it in detail in the next chapter. Here I want to deal with the latter – the view that flat-out belief is the core state and partial belief the derivative. What are the possibilities here?

One option, advocated by Gilbert Harman, is to suppose that when we talk of degree of belief we are referring to how strongly held our flat-out beliefs are, where this is a matter of how hard it would be for us to give them up (Harman 1986, ch. 3). So, one either believes a proposition or does not believe it; but if one does, then one does so with a certain degree of strength or attachment. (Harman stresses that this need not involve making an explicit assessment of how important the belief is to us; our degree of attachment to it may be implicit in the way we reason – the more attached to it we are, the more powerful the reasons needed to get us to abandon it.) Now, I think that it is quite right to say that flat-out beliefs can be held with different degrees of attachment, but it is implausible to identify degrees of confidence with these degrees of attachment. For we can have a degree of confidence in a proposition without having a flat-out belief in it at all. I am fairly confident that it will not rain tomorrow, but I do not believe flat-out that it will not. Indeed, the claim that degrees of confidence require flat-out belief leads to absurdity. For according to Bayesian principles, rational agents will entertain some degree of confidence in every proposition of whose falsity they are not completely certain – including pairs that are contradictory. Yet it is absurd to say that a rational agent will have a flat-out belief in every contingent proposition. This option is unattractive, then.

A second option is to hold that when we talk of degrees of confidence what we are really referring to is flat-out beliefs in objective probabilities – where to say that an event has a certain objective probability is to say something about the frequency with which events of that type happen. So, for example, when we say that someone is 50 per cent confident that a coin toss will come up heads, what we mean is that they believe flat-out that the objective probability of its coming up heads is 0.5, where this in turn means that they believe flat-out that, if tossed repeatedly, the coin would come up heads 50 per cent of the time. This view is unattractive, however, for two reasons. First, it makes the ability to entertain beliefs about objective probabilities and frequencies a prerequisite for having partial beliefs. And this is implausible. We want to say that people unfamiliar with those notions can nonetheless have degrees of confidence. (This is not to say that we never form flat-out beliefs about objective probabilities, just that our degrees of confidence cannot be identified with them.) Secondly, on the proposed view it would follow that we cannot have degrees of confidence in single events, since single events do not have objective probabilities, understood as frequencies. It is meaningless to talk of the frequency of a single event. Yet we do have degrees of confidence in single events – for example, I am fairly confident that my friend will call this evening to return the book I lent her. So degrees of confidence cannot be construed as flat-out beliefs in objective probabilities.

A final option is to identify degrees of confidence with flat-out beliefs about one’s own behavioural dispositions – say, about how willing one would be to bet on various outcomes. So, for example, we might say that to be 50 per cent confident that one’s car is in good working order is to have the flat-out belief that one would be willing to bet on the car’s being in good working order at odds of evens or better. This proposal is also unattractive, however. For, as with the previous one, it overestimates the intellectual requirements for having partial beliefs. One can have a degree of confidence in something without having beliefs about one’s betting behaviour, or, indeed, without understanding how betting works. A similar objection will hold, I think, for any other analysis of degrees of confidence in terms of flat-out beliefs about one’s own behavioural dispositions.

On a first pass, then, the division between flat-out and partial belief stands up well. Of course, this still leaves us with a question about the nature
of partial belief. What exactly is it to have a certain degree of confidence in something? I shall return to this question later in the chapter.

1.4 Active versus passive

There is a long tradition in philosophy of maintaining that beliefs can be actively formed – that we have the power to decide what attitude to take towards a proposition, through an act of deliberate judgement or assent. The idea is that we can consider a proposition, reflect upon the evidence for and against it, and then decide whether or not to accept it as an object of belief. In the past, many philosophers took it for granted that we have this power, and many would have identified what I have been calling occurrent beliefs with episodes in which a thought is entertained prior to assent or rejection.8 Yet many contemporary philosophers deny that we have a power of active judgement, and insist that belief formation is always passive. They concede, of course, that we can indirectly influence what we believe – for example, by practising autosuggestion or by selectively focusing on favourable evidence – but deny that we can form beliefs directly, by one-off acts of judgement.9

8. The most famous defence of the freedom of assent is in book 4 of Descartes’s Meditations (Descartes 1984, vol. I). See also his Principles of Philosophy 1, 39 (Descartes 1984, vol. II).

9. These writers would agree with Hume’s assertion that belief ‘depends not on the will, but must arise from certain determinate causes and principles, of which we are not masters’ (Hume 1739/1888, p. 624), though few would endorse his reason for it – that belief is a feeling or sentiment (but see Cohen 1992).

Both points of view have some plausibility. It is undoubtedly true that much belief formation is passive. One has only to think of those beliefs that derive from perception and memory. In such cases, belief is forced upon us without any effort or choice on our part. I assume that non-conscious beliefs, too, are passively formed. Sometimes, however, we seem to take a more active role in belief formation. We frequently talk of deciding, judging, or making up our minds that something is true – and we speak of these episodes as datable, one-off actions which directly produce belief. Talk of making up our minds is particularly suggestive here. Of course, sometimes, when we speak of a person having made up their mind, what we mean is that they have made a decision to do something – that they have formed an intention, not a belief. But we also speak of making up
our minds about matters of fact – about the truth of a theory, say, or the safety of a course of action. For example, suppose that there is controversy about the safety of eating beef: a deadly disease is rife among cattle, and some scientists claim that it can be transmitted to humans through beef products, though others insist that adequate safety measures are in place. Is it safe to eat beef?10 The evidence is inconclusive, but we need to take a view. So, having reflected on the evidence and the risks, we make up our minds on the matter. We also speak of making up our minds about what we want, and often urge people – children especially – to do it.

10. Safety is, of course, a relative matter. In asking whether it is safe to eat beef, I mean safe by whatever standard one adopts for matters of this kind. With the presence of BSE (or ‘mad cow disease’) in the British beef herd, and the apparent emergence of a human form of the disease, this question was one that faced many British consumers in the mid 1990s.

These phenomena have received little attention from contemporary philosophers of mind. Yet, as Annette Baier emphasizes in one of the few discussions of the subject, they have some distinctive features (Baier 1979). (Baier focuses on the process of changing one’s mind, but her observations apply equally to that of making it up, which she treats as the original activity of which a change of mind is the revision.) A change of mind, Baier argues, is a special kind of cognitive process, distinct from the routine acquisition or updating of information. If I think that the ice will support me, and discover from bitter experience that it will not, then I can be said to have learned better, but not to have changed my mind. Changes of mind, Baier goes on, are not forced upon us, by either external or internal influences, but are the product of free reflective judgement. They follow upon a re-evaluation of our options, perhaps in the light of new or previously neglected information, and they involve a considered judgement about the appropriateness of some action or attitude. The moral of Baier’s analysis is that change of mind is a genuinely personal activity – it is something one does, not something that happens to one, and it requires attitudes and skills of some sophistication.

Of course, to say that making up, or changing, one’s mind involves active reflection is not to say that it involves active judgement. Perhaps all we actively initiate is the reflection; we just have to wait and see whether it subsequently produces a new belief. That is a possible view of the matter – but not, I think, an attractive one. The point of making up, or changing, one’s mind about a topic is precisely to settle, or resettle, what one thinks about it. And one does not achieve this simply by pursuing reflection in
the hope that it will generate a robust conviction. That is a good way never to arrive at a settled view of anything. To make up one’s mind, one needs to be able, not only to initiate and sustain reflection, but also to foreclose on it – to cease pondering and adopt a definite view of the matter. We must, as Henry Price puts it, ‘come off the fence on one side . . . prefer or plump for one of the alternatives, accept it or commit ourselves to it, and reject the others’ (Price 1969, p. 206). This process, Price suggests, is like the resolution of a conflict (‘making up’ a previously ‘divided’ mind) and is analogous to the decision-making that terminates practical reflection (p. 206).11

11. Price makes these remarks in the course of outlining a view (‘the Occurrence Analysis’, as he calls it) which he does not himself fully share; but he gives every impression of endorsing them.

On this view, then, changing or making up one’s mind involves two elements: a reflective re-evaluation of one’s options, and a kind of free, creative judgement. It involves looking and then leaping. This suggests, then, that we need to recognize the existence of two broad kinds of belief formation and revision: one passive and unreflective, the other involving personal reflection and active assent.

1.5 Language-involving versus not language-involving

It is sometimes claimed that we can think in natural language – that natural language can act, not only as a medium for the expression of thoughts, but also as a vehicle of thought itself (Bickerton 1990, 1995; Carruthers 1996b; Dennett 1991a; Harman 1973). This view has a strong intuitive appeal. We often seem to form or recall a belief in the act of verbally articulating it to ourselves. Think, for example, of looking out of the window, seeing the louring clouds, and saying to oneself, ‘It’s going to rain.’ Here, it seems, one does not form the belief and then articulate it (why would one do that?). Rather, one forms the belief in the act of articulating it; the linguistic action is partially constitutive of the thought. Many people claim that much of their conscious thinking takes place in this way, in the form of a self-directed sub-vocalized monologue. There are also theoretical reasons for believing that some kinds of thought involve natural language, and a powerful argument can be run for the view that conscious propositional thinking is language-based (Carruthers 1996b, 1998). But, of course, it is implausible to suppose that all thought involves natural language. It is hard to deny that animals and pre-linguistic infants can think, and there is no pre-theoretical reason to suppose that our non-conscious thoughts involve
natural language. So here, again, there is a division between two types of thought.

(It is worth stressing that the distinction just made concerns medium of representation. To say that a thought is not language-involving is to say that it does not employ natural language as a medium of representation. Such a thought may, however, be dependent on language in another way. There are beliefs which we would never form without language, simply because the concepts involved are ones that can only be acquired through language. Beliefs about black holes, for example, are language-dependent in this way, since without language we would never acquire the concept of a black hole. I shall not be concerned here with this sort of language dependency, and shall assume that it is independent of the previous sort – that a thought may be language-dependent in this sense without being language-involving and vice versa.)

1.6 Some links

We have looked at some divisions in our view of belief. They are a varied bunch, relating to consciousness, activation level, degree, mode of formation, and medium of representation. Already some links are emerging, though. Conscious beliefs are activated in episodes of occurrent thought, and such episodes often seem to involve acts of judgement or assent. Assent, in turn, introduces a qualitative attitude: one either assents to a proposition or does not; there is no halfway house (Price 1969, p. 207). (Of course, one might assent to an estimate of probability, but then it is the content of the attitude that is qualified, not the attitude itself.) And our conscious thoughts and judgements are often framed in natural language (though they may not always be so; I shall say more later about the extent of language involvement in conscious thought). Non-conscious beliefs, on the other hand, are different. They are not actively formed, and there is no pre-theoretical reason to think that they require occurrent activation or involve natural language. It is plausible, too, to locate our partial beliefs at the non-conscious level; our degrees of confidence are not matters of immediate conscious awareness and reveal themselves most clearly in our behaviour. So we can tentatively distinguish two types of belief: one which is conscious, subject to occurrent activation, flat-out, capable of being actively formed, often language-involving, and, consequently, unique to humans and other language users; and another which is non-conscious, possibly not subject to occurrent activation, partial, passively
formed, probably non-verbal, and common to both humans and animals. We have, then, the beginnings of a two-strand view of belief.

The reader may ask why we should regard these two kinds of belief as distinct psychological states, rather than as varieties of a single generic type. After all, I call them both beliefs, so I must be assuming the coherence of some broader notion of belief which encompasses both. My response is to distinguish levels of description. At a very broad level of description, it may be appropriate to stress the similarities between the two kinds of belief and to group them together for explanatory and predictive purposes. That is what folk psychology does, and for its purposes such a classification is generally quite adequate. But at a finer level of description, it is important to distinguish them and to recognize that neither can be regarded as a subspecies of the other. Moreover, it is at this level of description that many philosophical debates about belief are pitched. Do beliefs require occurrent activation? Are they graded or ungraded? Can they be actively formed? Do they involve language? These are questions we need to resolve if we are to integrate the folk concept into a serious science of the mind, but they cannot be resolved so long as we stick with the inclusive folk classification, since they have no uniform answer. To repeat a point made earlier: the pressure to distinguish the two kinds of belief comes from the demands of integrationism.

Finally, a word about desire. Although I have focused on belief, very similar considerations apply to desire. It, too, appears to have two forms – one conscious, apt to be occurrently activated, active, flat-out, and frequently language-involving; the other with the opposite properties. I shall assume, then, that each form of belief is associated with a complementary form of desire, giving us a two-strand view of both states.

2 Reasoning

I shall now move on to challenge the unity of processing assumption – the assumption that we have a single, unified, reasoning system. Again I shall review a number of common-sense distinctions and suggest that, collectively, they motivate a two-strand approach. The claim that we should reject the unity of processing assumption is not without precedent. A number of psychologists have proposed dual-process theories of reasoning, distinguishing a slow, serial, rule-based, conscious system and a fast, parallel, associative, non-conscious one (see Evans and Over 1996; Wason
and Evans 1975; Sloman 1996; Stanovich 1999).12 These theories are motivated by experimental data on reasoning and rationality, but, as we shall see, a similar picture is latent within folk-psychological discourse itself.

12. Paul Smolensky sketches a similar picture from the perspective of cognitive science, distinguishing a non-conscious intuitive processor, which operates on connectionist principles, and a conscious rule interpreter, which executes rules formulated in natural language (Smolensky 1988).

2.1 Conscious versus non-conscious

I have already introduced the idea that some of our reasoning is non-conscious. Everyday experience, I think, strongly supports this view. We can perform many complex activities without conscious thought. Think of driving, for example. Controlling a car, following a route, anticipating the behaviour of other road-users – these are difficult tasks requiring considerable intelligence. Yet we often perform them without any conscious thought at all. Or consider chess. Skilled chess players can evaluate hugely complex positions very quickly, with little conscious thought. Typically, they will consciously assess only a few of the most promising strategies, ignoring the many thousands of others available to them. Again, it seems, much rapid and complex non-conscious processing must be involved in weeding out the unpromising options and selecting a few of the best for conscious evaluation.

The existence of non-conscious reasoning presents an immediate challenge to the unity of processing assumption. For conscious and non-conscious reasoning can proceed independently, on different topics – strongly suggesting that the two are supported by distinct systems. (Think of driving again: one can drive to work, non-consciously making all the required calculations, while one’s conscious mind is wholly occupied with other matters.) This conclusion is reinforced by reflection on the role of conscious thought in the guidance of action. Suppose that there were just a single reasoning system, whose processes were sometimes conscious and sometimes not. Then a belief would come to influence action by being taken as an input to this system, and it would not much matter whether this happened consciously or non-consciously. That is to say, consciousness would be an optional extra, which made no direct difference to a belief’s causal role (though it might, of course, make an indirect difference,
by making the subject aware of its existence). And this is counter-intuitive. Introspection suggests that the conscious status of an occurrent thought is as important to its causal role as its activity level. Recall my thought about the roadworks, which led me to deviate from my normal route to work. It was – or so it seemed to me – precisely because I consciously recalled that the roadworks were due to start that I changed course. If I had not done so, I would not have acted upon that information – at any rate, not in that way, at that time. Consciousness was not an optional extra, but crucial to the belief’s efficacy. This suggests that conscious activation involves a distinct pathway to behavioural influence, independent of the non-conscious one.

It seems likely, then, that the conscious and non-conscious reasoning systems are distinct. It is compatible with this, however, that both systems are similarly constituted. Processing could be of a unitary kind, even if it proceeds in more than one strand, and in this weaker sense the unity of processing assumption could be sustained. In what follows I am going to question whether even this weaker sort of unity holds. (Henceforth, references to the unity of processing assumption are to this weaker claim.) As with belief, the division between conscious and non-conscious reasoning aligns with various others, suggesting that the two systems are differently constituted and that the states directly available to the one are not directly available to the other.

2.2 Explicit versus non-explicit

Conscious reasoning typically involves entertaining sequences of explicit propositional thoughts – occurrent beliefs and desires – which form chains of sound or probable inference. I shall refer to reasoning of this kind as explicit reasoning and shall contrast it with non-explicit reasoning. Note that ‘explicit’ here does not imply ‘conscious’: it is possible that non-conscious reasoning involves explicit propositional thoughts, too (though I shall be questioning whether it does in fact do so). ‘Non-explicit’ is a blanket term for any mental processing which does not involve explicit propositional thoughts.

Now, conscious reasoning is explicit, but what of the non-conscious variety? Those who subscribe to the unity of processing assumption will say that it too is explicit – that non-conscious thought processes involve sequences of non-conscious occurrent beliefs and desires. It is not obvious, however, that there is any folk commitment to this view. Indeed,
it seems rather counter-intuitive. For one thing, it is doubtful that there would be time for all the necessary non-conscious occurrent thoughts to occur. Think, for example, of all the calculations needed in order to guide a car safely from one’s home to one’s workplace. One needs to calculate how to steer in order to avoid obstacles and follow the desired route; how fast to travel, given the traffic laws, road conditions, and time available; when to overtake; when to change gear; when and how sharply to brake; and so on. This is a massive computational burden, and it is hard to see how it could be discharged if every relevant belief and desire had to be activated in occurrent form. At any rate, it does not seem to be part of the folk view that such activation must occur. (As Andy Clark points out, skilled drivers do not expect their actions to be the product of sequences of discrete inferential steps; Clark 1993b, p. 230.) Or think about the expert chess player. It would require huge sequences of explicit propositional operations to arrive at judgements of the sort routinely made by skilled chess players – sequences which only the most powerful computers can execute. It seems more likely that the processes involved are non-explicit. Or, finally, consider inference. A friend points to a parked car which has its headlights on and says, ‘The owner of that car will be annoyed with himself.’ I immediately understand the remark; yet its comprehension depends on a host of beliefs about the nature and function of cars and the beliefs and habits of car drivers. And, again, it is doubtful that there would be time for all of these to be individually accessed and activated.

It may be objected that I am adopting too strict an interpretation of the claim that non-conscious reasoning is explicit. Not all the beliefs and desires involved in an episode of explicit reasoning need be activated as occurrent thoughts. After all, even conscious reasoning often depends on suppressed premises and background assumptions. We think, ‘Looks like rain – better take the umbrella.’ We do not add ‘Rain is unpleasant’, ‘Umbrellas protect against rain’ – though, of course, we believe these things and would not arrive at the conclusion if we did not. And, it might be suggested, the same may go for non-conscious reasoning – many of the premises involved may be implicit. This is a fair point, but it does not do much to support the claim that non-conscious reasoning is explicit. Non-conscious reasoning often involves making extremely complex assessments, drawing on many different factors (again, think of driving or chess-playing), and it seems unlikely that such assessments could be plausibly reconstructed as sequences of occurrent thoughts, even if some
Mind and Supermind of the premises were suppressed. Besides, the fact that conscious reasoning depends on suppressed premises and background assumptions hardly tends to support the view that all reasoning is explicit – rather the opposite. If beliefs can influence our reasoning without being activated as occurrent thoughts, then why are any of them activated in that way? Why is not all reasoning non-explicit?13 So we have a tension. The folk are committed to the view that conscious reasoning is explicit, but are not committed to the view that non-conscious reasoning is. So they are not committed to the unity of processing assumption, and should be at least receptive to the idea that there are two strands or levels of reasoning – one conscious and explicit, the other non-conscious and perhaps involving non-explicit processes. This in turn tends to confirm that we have two kinds of belief: conscious beliefs, which require activation as occurrent thoughts, and non-conscious ones, which may not. Let me pause to add a qualification to the picture that is emerging. I have suggested that conscious reasoning forms a distinct strand of cognition, and that the beliefs which figure in it require activation in occurrent form. This claim needs qualifying, however. For, as I noted above, conscious reasoning often depends on suppressed premises and background assumptions which are not explicitly activated. (By background assumptions I mean beliefs which influence the outcome of our reasoning without figuring as premises in it. For example, the assumption that it is safe to go out at night may influence my practical reasoning about what to do tonight, affecting which options are considered and which conclusions embraced.14 ) I shall say that beliefs which form suppressed premises and background assumptions in a reasoning episode are implicitly active in it, and I want to make two points about them. First, we should not think of these beliefs as belonging to a different strand of mind from their explicit counterparts. The distinction between the two does not correspond to that between the 13
14
Arthur Walker has defended the view that reasoning involves occurrent thoughts against the rival view that it consists in transitions between standing-state beliefs (Walker 1985). Only on the former view, Walker argues, can we make sense of the idea that a person may re-infer an already accepted conclusion from new evidence. While I find this persuasive as an argument for the view that some of our reasoning must be explicit, I do not think that it supports the view that all of it must be. (Walker does not consider non-conscious reasoning and seems to view inference as a conscious phenomenon with a distinctive phenomenal aspect; p. 207.) Michael Bratman emphasizes the necessity of making background assumptions like these in one’s practical reasoning (Bratman 1987, 1992).
two strands of reasoning I have tentatively identified. Rather it is intrinsic to the conscious explicit strand. A belief counts as implicitly active only relative to an episode of explicit reasoning – it is one that is involved in the episode without being explicitly involved in it. If non-conscious reasoning is non-explicit, then we cannot make an explicit/implicit distinction for the beliefs involved in it – in a sense they are all implicit. (Another reason for regarding suppressed premises and background assumptions as belonging to the conscious level is that, like conscious beliefs in general, they are flat-out ones. They are things we take for granted in our reasoning, and taking for granted is an all-or-nothing attitude. Even probabilistic reasoning, when explicit, requires flat-out background assumptions; see Lance 1995.) Secondly, although it is not true to say that all the beliefs involved in conscious reasoning require occurrent activation, it remains true, I think, that all the non-obvious ones do. Suppressed premises and background assumptions are things we take for granted in our reasoning – they define the normal background to it. And any belief which does not form part of the normal background – which is, let us say, epistemically salient – will typically require occurrent activation.

2.3 Classical versus probabilistic

It is plausible to think that rational decision-making should be sensitive to considerations of probability. Suppose that I am trying to decide what to do on my afternoon off: play squash, go for a walk in the country, do the shopping, take the car in for servicing, and so on. Which is most desirable? In each case the desirability of the action will vary depending on what background conditions obtain. If my knee injury is not completely healed, then it would be better to avoid squash; if it rains this afternoon, then a walk in the country would not be fun; and so on. So my decision should reflect how probable I think it is that each of these conditions obtains and how desirable I find the various outcomes contingent upon them. (These values are known as my subjective probabilities and desirabilities – subjective since they reflect my idea of what is probable and desirable, rather than some independent measure of those things.) These intuitions can be developed into a full-blown probabilistic decision theory – Bayesian decision theory. In brief, the procedure involves taking each candidate action in turn and calculating its weighted desirability relative to each possible background condition. This is given by multiplying the desirability of the outcome the action would have if the background condition obtained
by the probability that the condition does obtain. Summing these values for each background condition gives the overall estimated desirability of the action. The optimum action is the one with the highest estimated desirability.15

15. For detailed explication and defence of Bayesian decision theory, see Jeffrey 1983; Kaplan 1996; Savage 1972. It is common to refer to desirabilities as ‘utilities’ and estimated desirability as ‘expected utility’. The terminology used here (which is borrowed from Jeffrey 1983) seems to me more natural.
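Stated schematically – the notation here is mine, not the author’s – where C1, . . ., Cn are the possible background conditions, p the agent’s subjective probability function, and d their desirability function, the estimated desirability of an action A is

\[ \mathrm{ed}(A) \;=\; \sum_{i=1}^{n} p(C_i)\, d(A \wedge C_i), \]

and the procedure recommends the action that maximizes ed(A). A minimal computational sketch of the afternoon-off example, with all numbers invented purely for illustration:

```python
# Toy sketch of the Bayesian procedure described above.
# All probabilities and desirabilities are invented for illustration.

def estimated_desirability(outcomes):
    """Sum of p(condition) * desirability(action-in-that-condition)."""
    return sum(p * d for p, d in outcomes)

options = {
    # (subjective probability of condition, desirability of outcome)
    "play squash":  [(0.7, 10.0),    # knee fully healed
                     (0.3, -5.0)],   # knee still injured
    "country walk": [(0.6,  8.0),    # no rain this afternoon
                     (0.4, -2.0)],   # rain
}

best = max(options, key=lambda a: estimated_desirability(options[a]))
print(best)  # 'play squash': estimated desirability 5.5, against 4.0 for the walk
```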
There is also a well-developed probabilistic logic of inductive inference – Bayesian confirmation theory – which tells us how to adjust our confidence assignments in the light of new evidence.
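The core of that theory is the rule of conditionalization; I state it here in its standard textbook form for orientation (the formulation is not drawn from the text). On learning evidence E, one’s new confidence in a hypothesis H should be one’s old confidence in H conditional on E, computed via Bayes’ theorem:

\[ p_{\mathrm{new}}(H) \;=\; p_{\mathrm{old}}(H \mid E) \;=\; \frac{p_{\mathrm{old}}(E \mid H)\; p_{\mathrm{old}}(H)}{p_{\mathrm{old}}(E)}. \]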
Now, these theories are intended primarily as normative, not descriptive, ones. Nevertheless, much of our reasoning does yield to description in Bayesian terms. In particular, our decision-making can often be interpreted as the upshot of probabilistic reasoning, sensitive to the sort of factors mentioned. Indeed, it can be shown that any agent whose preferences satisfy certain intuitively reasonable conditions can be interpreted as assigning degrees of probability and desirability to relevant sets of propositions and outcomes and as maximizing estimated desirability relative to those assignments. (Demonstrations of this are called representation theorems; see Ramsey 1926; Savage 1972; and, for surveys, Eells 1982; Fishburn 1981.) Animal behaviour, too, can often be interpreted in this way (Jeffrey 1985).

It may be objected that to say that we can be represented as engaging in Bayesian reasoning is not to say that we do engage in it: representation theorems do not show that Bayesian decision theory characterizes internal psychological reality (Goldman 1986, p. 327). Indeed, there is reason to think that the human brain does not perform calculations of probability and desirability, but relies instead on ‘fast and frugal heuristics’ which generate responses that are quite good enough for most everyday purposes (see, for example, Gigerenzer et al. 1999). I am going to set this objection aside for a moment. How serious it is depends on what the function of a theory of mind is, and I shall return to it later in this chapter, when we have looked at that question.

Grant for the moment that at least some of our decision-making can legitimately be characterized in Bayesian terms. An odd consequence follows. For our conscious reasoning very rarely takes a Bayesian form. We generally prefer to reason from unqualified premises to unqualified conclusions, employing classical inference schemata, such as the practical syllogism. And we find it very hard to identify our own assignments of
probability and desirability, as revealed in our choices. So we have a puzzle. There are two incompatible ways of explaining a subject’s decisions and inferences: by reference to their conscious classical reasoning and their professed beliefs and desires, and by reference to non-conscious Bayesian calculations involving assignments of probability and desirability of which the subject is unaware.16

16. As Hempel notes, this gives a ‘peculiar twist’ to the idea of rational action: ‘though . . . subjects make their choices in clearly structured decision situations, with full opportunity for antecedent deliberation and even calculation, they act rationally (in a precisely defined quantitative sense) relative to subjective probabilities and utilities which they do not know, and which, therefore, they cannot take into account in their deliberations’ (Hempel 1965, p. 483, quoted in de Sousa 1971, p. 57).

How should we respond to this puzzle? One option – proposed by Ronald de Sousa – is to think of the two kinds of explanation as characterizing different levels of mental processing. There is a level of non-conscious non-verbal deliberation, de Sousa suggests, which is common to humans and animals and whose workings can be characterized in Bayesian terms, and there is a level of conscious verbalized reasoning which is found only in humans and which operates according to classical principles (de Sousa 1971, pp. 57–8). Here, then, is another motive for questioning the unity of processing assumption.

2.4 Active versus passive

I suggested earlier that beliefs can be actively formed, and I now want to suggest that they can be actively processed too. Some reasoning processes, I suggest, are intentional actions, initiated and controlled at a personal level. This is most obvious in cases where we employ some explicit inferential procedure – constructing a syllogism, say, or writing out a long division. In such cases the overt actions involved can be thought of as constituting a larger inferential action – making a calculation or deriving a conclusion – which is under fully personal control. We can do similar things in our heads, articulating an argument in inner speech or visualizing the steps of a mathematical calculation. This is not all, however. Even when no explicit procedures are employed, it can still be appropriate to think of an inference as intentional. I am pondering the car’s innards, trying to identify a particular component. ‘That is the cylinder head and that is the air inlet’, I mutter to myself, ‘but what is this?’ – staring at the component in question and furrowing my brow. Suddenly the answer comes to me:
‘It must be the fuel lead.’ Here I do not employ any explicit procedure to arrive at the answer and I have no conscious awareness of how I get there. But still, I suggest, getting there involves personal activity on my part, though it is hard to say exactly what this activity is. (We would simply say that I was thinking, or trying to work out, what the component was. I shall offer a more illuminating characterization in chapter 4.)

In claiming that some reasoning episodes are intentional, I am, of course, supposing that they have belief/desire explanations, and the reader may ask what the motivating beliefs and desires for a reasoning episode might be. The answer, I think, is simple: the desire to find a solution to some problem and the belief that doing this – writing out a syllogism, running through an argument in inner speech, or just ‘thinking’ – is a way to get one. These beliefs and desires will, I assume, usually be non-conscious.

Of course, it would be wrong to suppose that all, or even most, of our reasoning involves personal activity, even of the inchoate kind just described. Sometimes a solution pops into our heads unbidden long after we have consciously given up trying to find it. And, of course, we have no direct control over our non-conscious reasoning. (It is true that we tend to use active verbs for non-conscious inferences. We say such things as ‘I braked because I realized that the other car was going to pull out’, even though the action was not preceded by any conscious inferential activity. But these locutions should not be taken literally – compare the way we speak of digesting our food. In such cases, I suggest, we are inferring the non-conscious reasoning that led to the action, and assimilating it to the pattern of our conscious active deliberation.) So here is another division – between reasoning that is intentional and reasoning that is not.

2.5 Language-driven versus not language-driven

The final distinction I want to mention concerns the role of natural language in reasoning. I claimed earlier that natural language can serve as a medium for the representation of thoughts, but it is plausible to think that it can serve as a medium of inference, too. Some reasoning processes, it seems, constitutively involve the manipulation of natural-language sentences – written, vocalized, or, most often, articulated in inner speech. As I mentioned in the previous section, we can use language to perform explicit inferential operations, as in the construction of syllogisms. There are also other, less formal, examples of language-based reasoning. We often reason
things out in the course of conversation with a friend or in interior monologue with ourselves. In such cases, it seems, we are not simply recapitulating reasoning that has already been conducted in some inner medium, but conducting the reasoning in the course of articulating it: the linguistic activities implement the reasoning process, carrying it forward and shaping its direction. (Since language use is an intentional activity, this observation lends support to the earlier suggestion that some reasoning is active.) It seems, then, that some of our reasoning is language-driven. It is implausible, however, to suppose that all of it is. Animals, I take it, are capable of reasoning, as are people with severe aphasia (Varley 1998). Nor is there any obvious reason to think that non-conscious reasoning is language-driven. It certainly does not involve inner speech, and while it might involve sub-personal linguistic processes, there is no pre-theoretical reason to think that it does. Either way, it is unlikely that language is extensively involved in our non-conscious reasoning, much of which is directed to the control of behaviour that is well within the scope of non-linguistic creatures. So we have a final division – between reasoning that is language-driven and reasoning that is not.

2.6 More links

Again, the tensions we have noted link up naturally into binary opposites: non-conscious reasoning may be non-explicit, can be characterized in Bayesian terms, is not intentional, and is rarely or never language-driven. Conscious reasoning, on the other hand, is explicit, is usually classical, can come under intentional control, and is often language-driven. Moreover, the two kinds of reasoning align nicely with the two strands of belief identified earlier: the former with the non-conscious, partial, passive, non-verbal strand, the latter with the conscious, flat-out, active, language-involving strand. Thus we can supplement our two-strand view of belief with a two-process view of reasoning, giving us a tentative two-strand theory of mind. For convenience, I shall refer to these two strands as strand 1 and strand 2 respectively.

3 Mind

In this part of the chapter I turn to two further divisions in the folk view of the mind, not so obvious at an everyday level, but soon apparent on philosophical reflection. The first concerns the ontological status of
mental states, the second, the nature of mental explanation. Again, there is pressure to close these divisions by adopting a unitary approach, and again I shall suggest that a two-strand approach is preferable.

3.1 Ontology

Belief possession is typically associated with the possession of various behavioural dispositions. If you believe that something is true, then you will typically be disposed to act in ways that would be rational on the assumption of its truth. (What ways these are will, of course, depend on the circumstances and your background beliefs and desires; the dispositions associated with a belief are, in Ryle’s phrase, multi-track.) Now, some theorists take belief ascriptions to refer simply to these multi-track behavioural dispositions: to ascribe a belief to a person is, they claim, first and foremost to say something about what the person is disposed to do. I shall refer to views of this kind as dispositionalist. (A more familiar term is behaviourist, but the word has unfortunate associations.) Other theorists, by contrast, take belief ascriptions to refer to the causal bases of these dispositions – to the states which give rise to the behaviour associated with the belief. I shall refer to these as categorical-state theorists. The most popular and attractive categorical-state theories – the various brands of functionalism – identify these states with functionally defined states of the brain.17

17. Among the dispositionalists, I count Davidson, Dennett, and Ryle; among the categorical-state theorists Armstrong, Fodor, and Smart. Note that throughout I assume a broadly realist view of dispositions. The case for this view is very strong (see Armstrong 1968, ch. 6; Mellor 1974; Mumford 1998, ch. 3).

On the face of it, dispositionalist and categorical-state theories appear strongly opposed. They treat beliefs as very different kinds of thing – one as powers or tendencies of the organism as a whole, the other as states of its central nervous system. This opposition may be specious, however. For a strong case can be made for thinking of dispositional states and properties as functional ones. According to this view, to ascribe a disposition to an object is to ascribe to it a state or property with a certain causal role – a state or property which, in the right circumstances, causes the events which manifest the disposition. So in ascribing fragility to a glass we are ascribing to it a state which in the right circumstances causes shattering. If we couple this view with the idea that functional states are token-identical with the states that realize them (not type-identical, of course, since functions are multiply realizable), then we can regard token dispositions as identical
with their token categorical bases. On this view, then, the gap between dispositionalist and categorical-state views of belief narrows. Both sides can agree that beliefs are to be typed by their functional role, and both can identify token beliefs with token brain states.18

18. Functionalist theories of dispositions are defended in Mumford 1998; Prior 1985; and Prior, Pargetter, and Jackson 1982. The account in the text draws heavily upon Mumford. Note that on a functionalist view the dispositional/categorical distinction is naturally understood as a relative one. A functional system may be realized in more basic, lower-level, functional systems, which are themselves realized in still more basic functional systems – and so on, all the way down (so-called homuncular functionalism or homunctionalism; see Dennett 1975; Lycan 1990). On this view, then, one and the same state or property may count as categorical relative to a higher level of organization, and as dispositional relative to a lower-level one (Mumford 1998, ch. 9). Whether dispositional properties must ultimately bottom out in genuinely categorical, non-dispositional ones is a matter of some controversy. For the view that they need not, see Blackburn 1990; Mellor 1974; Mumford 1998, ch. 10; and Popper 1957.

For all this, there is an important distinction lurking here, albeit not one best captured by the dispositional/categorical distinction. For though dispositionalists can be regarded as endorsing a form of functionalism, their functionalism is typically very different from that of categorical-state theorists. Dispositionalists think of beliefs as what I shall call thickly carved functional states – that is, as states of the whole cognitive system, defined primarily by their relations to inputs and outputs (perceptual stimuli and intentional actions). Thus, on a dispositionalist view, to possess a belief is to be in a state which produces a certain pattern of behavioural responses to perceptual stimuli, the nature of the responses varying with one’s background beliefs and desires, themselves similarly characterized. However, the view involves no assumptions about the nature of this state or about the processes which produce the responses. It is thus compatible with dispositionalism that the internal basis of one belief may overlap in complex ways with those of others – indeed, that it may be nothing less than the whole cognitive system – and that reasoning may involve global patterns of neural activity. (Note that since dispositionalists must refer to the agent’s background beliefs and desires in characterizing the role of a given belief, it follows that their definitions of mental-state terms will be holistically intertwined. This is not a problem, however: holistic intertwining of this kind is evident in many conceptual schemes; see Carruthers 1986, pp. 104–7.)

Categorical-state theorists, on the other hand, tend to think of beliefs as finely carved functional states. They regard them as functional sub-states of the cognitive system, defined not only by their relations to behavioural
outputs and perceptual inputs, but also by their internal relations to each other and to other mental states. On this view, action is the product of explicit reasoning involving occurrent beliefs and desires, and mental states are defined in part by their role in this reasoning. So to say that a person has a belief is to say that they have an internal state which can be activated in occurrent form and which then interacts in characteristic ways with other occurrent beliefs and desires, leading ultimately to action. Belief ascriptions thus carry implications about the structure of the cognitive system and the processes that generate action. (It is true that dispositionalists will also need to refer to other mental states in characterizing the role of a given belief, but they will view them quite differently – as background conditions, rather than as discrete causally interacting entities.)

As I said, the contrast here is not best captured by talk of dispositions and categorical states; it is better to think of it as a contrast between different varieties of functionalism. To characterize it, I shall speak of austere and rich versions of functionalism – or, more simply, of austere and rich views of the mind.19

19. This terminology is an adaptation of that used in Horgan and Graham 1990. Note that ‘austere’ and ‘rich’ are broad terms, which cover a variety of more specific positions. Indeed, richness can be regarded as a matter of degree: the more complex one’s view of the internal functional structure of the mind, the richer it is. In chapters 6 and 7 we shall look at arguments for the view that folk psychology is committed to an even richer view than that described in the text.

I want to stress that I shall treat both of these positions as broadly realist ones. It is true that austere theorists are sometimes described as anti-realists about the mind – as holding that folk psychology is merely an interpretative device, rather than an empirical theory with ontological commitments (see, for example, Botterill and Carruthers 1999, ch. 2). Some austere theorists have encouraged this by describing their position as instrumentalist. (Dennett adopted this tag in some of his earlier writings, though he has since abandoned it and now stresses his realist credentials; see his 1991c.) However, I think that these descriptions are unhelpful – at least, given the way I have characterized the austere position. The disagreement between austere and rich theorists is not over the reality of mental states, but over their nature. The former regard them as multi-track behavioural dispositions, the latter as functional sub-states of the cognitive system (that is, as thickly carved functional states and finely carved ones, respectively). Of course, there is a sense in which austere theorists are anti-realists about beliefs; they deny the existence of beliefs as conceived of by rich theorists (or at least they do if they also subscribe to the unity of belief assumption). But to assume that this makes them anti-realists tout court is to assume
that the rich conception of belief is the only viable one. If we do not make that assumption, then we can accept that both groups are realists in their own way.

Austere and rich views are, on the face of it, straightforwardly incompatible, and they yield different accounts of who and what counts as a genuine believer. When used in an austere sense, ‘belief’ will have a wider extension than when used in a rich sense. Some systems which qualify as believers in the former sense will lack the right internal architecture to qualify in the latter. The two views also have different consequences for the epistemology of mind. On an austere view, all the factors relevant to determining an individual’s belief state are overt. Beliefs are multi-track behavioural dispositions, and if a person’s behaviour can be reliably and comprehensively interpreted as a rational expression of a particular belief, then they count as possessing it (Davidson 1975; Dennett 1981b). (If two or more equally good interpretations are possible, then we may have to say that the agent’s mental state is simply indeterminate. This is not an unacceptable consequence, however (see Dennett 1991c), and in practice the scope for divergent interpretation will be limited, especially when long and complex behavioural sequences are considered.) On a rich view, by contrast, it is not the case that all the evidence for a person’s belief state is overt. Beliefs are internal states which play a certain role in the processes leading to behaviour, and behavioural interpretation provides at best good but defeasible evidence for their existence.

Austere and rich views also have different implications for the possibility of irrationality. On an austere view, attributions of mental states carry a presumption of rationality. We regard a person as possessing a certain belief only if their behaviour can be interpreted as a rational expression of it. Likewise, we regard a behavioural episode as an intentional action only if it can be interpreted as a rational manifestation of the agent’s mental states. Behaviour that cannot be interpreted in this way will have to be written off as non-intentional – as behavioural ‘noise’. (There may be room for some flexibility here, given the vagueness in the notion of rationality, but stark or systematic irrationality is ruled out; see Dennett 1982.) On a rich view, by contrast, the link between mental states and actions is less tight, and there is scope for blatant irrationality. Beliefs and desires influence action by way of explicit reasoning processes, and these processes may occasionally go astray or be distorted by emotional or other factors. When this happens the mental states involved will lead to actions which they do not justify and which are thus genuinely irrational.
The conflict between austere and rich views of the mind is well established in the philosophical literature – Dennett being the leading advocate of the former and Fodor of the latter (see, for example, Dennett 1987; Fodor 1987). However, the roots of the conflict are clearly detectable in folk psychology itself. We have already seen that the folk are at least partially committed to a rich view – that they hold that some beliefs require occurrent activation and influence action by way of explicit reasoning. (Further evidence for a folk commitment to rich functionalism will be reviewed in chapter 6.) Yet, as we have also seen, this commitment is not unqualified. Think again of the beliefs involved in non-conscious reasoning, such as those that guide routine driving behaviour or the actions of an expert chess player. The folk, I suggested, are not committed to the view that such beliefs require, or typically receive, occurrent activation. In these cases the folk appear happy to take a much more austere view.

The tension is also evident in the way we use folk psychology. On the one hand, we ascribe folk-psychological states very liberally – not only to other humans, but also to animals, artefacts, and even plants (Dennett 1981b). This is often an extremely effective way of understanding and predicting their behaviour, and it is hard to deny its appropriateness. Yet, at the same time, we often feel inclined to say that animals and artefacts do not really possess the ascribed states – at least, not in the same way that we ourselves do. This strongly suggests that folk psychology has a dual function.

So here we have another tension in folk psychology. It is not particularly troubling in everyday situations, but it is not something a developed science of the mind could tolerate. It would be disastrous to permit a systematic ambiguity in one of its central concepts. So some revision or regularization of folk usage seems to be in order. Now, once again, the traditional approach here is to seek a unitary solution: to advance a global defence of either austere or rich functionalism. But another option would be to align the two views with the two strands of mentality identified earlier – to see the rich view as characterizing conscious thought, and the austere one its non-conscious counterpart. I shall return to this suggestion in a moment, after discussing a related matter.

3.2 Explanation

Questions about the nature of mental states are bound up with questions about the function of mental explanation. It is a commonplace that there are two ways in which citing an agent’s mental states can explain their
behaviour. It can do so either by rendering it intelligible – by showing how it is rational in the light of the agent’s beliefs and desires – or by identifying its cause – by picking out some state or event which was causally responsible for it. It is uncontroversial that mental explanations have the former function. And it is plausible to regard them as having the latter, too. In trying to understand the causes of a road accident, for example, we might find it natural to refer to the beliefs of the drivers involved. And philosophical objections to a causal reading are now widely agreed to have been flawed (Audi 1973, 1985; Davidson 1963; Dretske 1989; Goldman 1970). Despite this, however, there remains scope for dispute about the kind of causal explanation that folk psychology offers – and the dispute corresponds closely to that between the austere and rich versions of functionalism identified in the last section.

I begin by distinguishing two kinds of causal explanation. I shall assume that causation is primarily a relation between events (understanding ‘event’ in the everyday sense as an episode involving change of some kind), and that the most basic kind of causal explanation is one that identifies a causal event. So, for example, we might explain a car’s skidding by saying that it was caused by the driver’s braking suddenly. I shall speak of such causal events as dynamic causes. But standing states can also be cited in causal explanations. For example, we might cite a car’s having worn tyres in explanation of its skidding when the driver braked suddenly. This state (unlike many other states of the car) was causally relevant to the skidding. How to analyse claims of causal relevance is not completely clear, but I take it that they are closely bound up with counterfactual claims: if the car had not had worn tyres, then it would not (other things being equal) have skidded on braking. The presence of the state was a necessary condition for the causal event to produce its effect.20 I shall refer to causally relevant states as sustaining causes of events.21
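The counterfactual gloss can be given a compact formal rendering – the symbolization is mine, borrowing the standard counterfactual conditional, and is offered only as a sketch: a standing state C is causally relevant to an effect E just in case, other things being equal,

\[ \neg C \;\Box\!\!\rightarrow\; \neg E, \]

that is, had C not obtained, E would not have occurred. For the example: had the tyres not been worn, the sudden braking would not have produced the skid.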
20 For a counterfactual analysis of causal relevance, see LePore and Loewer 1987 and 1989; for an alternative approach, see Braun 1995.
21 I borrow the terminology of dynamic and sustaining causes from Audi (Audi 1993). Nothing hinges on its use, however. If you balk at speaking of standing states as causes, then think of them simply as causally relevant states.
Return now to mental explanation. Beliefs are causes of action, but are they sustaining causes or dynamic ones? The answer depends on whether our view of the mind is austere or rich. The austere functionalist will say that beliefs are only sustaining causes. On an austere view, to have a belief is to have a disposition to produce certain behavioural outputs in response to certain stimuli – this disposition being construed as a functional state. Now, we can regard these states as causal; each token disposition will be identical with some token realizing state, which causes the events that manifest the disposition (see Dennett 1981a, pp. 49–50). (The resulting explanations will, it is true, be fairly uninformative. To say that I stopped at the red light because I believed that a red light is an instruction to stop would be to say that I stopped because I was in a state which typically causes stopping at red lights. But not all causal explanations are informative, and if causation is an extensional relation, then it will be possible to redescribe the causal relata in a more informative way.) However, persisting states like this can be only sustaining causes, not dynamic ones. It is true that (if event causation is assumed to be primary) a sustaining cause will not manifest itself without some triggering event which serves as a dynamic cause of the ensuing effect, but austere functionalists cannot hold that belief–desire explanations refer to such events. On their view, beliefs and desires are just not the right sort of things to be triggering events. This is not to say that folk explanations of action will never pick out dynamic causes; those which refer to the impacts of external stimuli may do so. But belief–desire explanations will not. For example, suppose that I see a dog and run away, and that I do so because I believe that dogs are dangerous and want to avoid danger. Here, on an austere view, my seeing the dog will count as a dynamic cause of my action, but my beliefs and desires about dogs and danger only as sustaining causes of it. Of course, various internal events will occur in the mediating process between the perception and the ensuing action, and these might also be identified as dynamic causes of the latter; but austere functionalists cannot maintain that folk-psychological explanations serve to pick them out. On their view, the folk vocabulary simply does not carve things finely enough.22

Rich functionalists, by contrast, are not so restricted. For they hold that the folk-psychological vocabulary picks out internal events as well as persisting states – occurrent beliefs and desires as well as standing-state ones. And these internal events can serve as dynamic causes of action.
22 Thus Davidson, defending a broadly austere position, explains that the events which cause actions are those which initiate the intentional states that rationalize them: 'In many cases it is not difficult at all to find events very closely associated with the primary reason [for an action]. States and dispositions are not events, but the onslaught of a state or disposition is. A desire to hurt your feelings may spring up at the moment you anger me; I may start wanting to eat a melon just when I see one; and beliefs may begin at the moment we notice, perceive, learn, or remember something.' (Davidson 1980, p. 12)
Indeed, the contrast with the austere position is even more marked. For on a rich view, not only can belief–desire explanations advert to dynamic causes; they typically will do so. This is a corollary of how rich functionalists conceive of beliefs and desires. On their view, to have a standing-state belief with content p is to be in a persisting state such that, given appropriate stimuli, either external or internal, one will entertain an occurrent belief with content p – an event which may then be the dynamic cause of further occurrent thoughts or overt actions. That is to say, on their view, standing-state beliefs are not dispositions to act, but dispositions to have occurrent thoughts, which may then in turn cause actions. So standing-state beliefs will be sustaining causes of occurrent thoughts, but not (or not directly) of the overt actions which those thoughts cause. Similarly with desires. So, if the aim of belief–desire explanation is to identify the causes of actions, rather than the causes of those causes, then successful explanations of this kind will refer to occurrent beliefs and desires, and therefore to dynamic causes.

The picture here is complicated slightly by the fact that a dynamic cause may be effective only in the presence of certain background conditions, which will count as sustaining causes of the resulting effect. And in the case of occurrent beliefs and desires these background conditions may include the presence of various beliefs and desires that are not occurrently activated (the suppressed premises and background assumptions mentioned earlier). So, for example, a background condition for my desire for bread to cause me to set off for the shop might be that I have the belief that it is safe to go out, and this belief could thus be cited as a sustaining cause of my action. So it is not true to say that on a rich view of the mind, belief–desire explanations will always advert to dynamic rather than sustaining causes. Still, they typically will. As I noted earlier, suppressed premises and background assumptions are things we take for granted – things we treat as part of the normal background to an episode of reasoning. And it will rarely be informative to cite such beliefs in explanation of an action. (Though there will be exceptions; suppose, for example, that you were explaining my shopping trip to someone from a war-torn country, where every journey out was fraught with danger.)

So austere and rich views of the mind each support a different sort of psychological explanation. It follows that if, as I have claimed, the folk shift between the two views, then we should expect to find them shifting between the two kinds of psychological explanation, too – sometimes aiming to pick out dynamic causes, sometimes content to identify sustaining ones.
And vice versa: if we do find this duality in folk explanatory practice, then this will tend to confirm that the folk alternate between austere and rich views of the mind. And this is indeed what we find. Some intentional explanations clearly aim to identify dynamic causes. Someone watching me drive to work and noting my sudden deviation from my normal route might seek to know what event had precipitated the change. And this inquiry might naturally be answered by saying that it had suddenly occurred to me that roadworks were due to start – an explanation adverting to a conscious occurrent belief which triggered the action. Other cases are different, however. Suppose that a novice watching a championship chess match wants to know why the players make the moves they do and why they ignore others which seem on the face of it more attractive. An expert replies by providing them with information about the players' mental states – their strategic aims, both short-term and long-term, their beliefs about the rules of the game, their knowledge of the standard openings and strategies, and so on. Here it is less plausible to think that the aim of the explanation is to identify dynamic causes. The expert is not claiming that the players occurrently entertained the beliefs and desires referred to – certainly not as conscious occurrent thoughts, or even, I suggest, as non-conscious ones. At any rate, the explanation would not be vitiated if it could be shown that the players had not entertained such thoughts. For the questioner is not much interested in the sequence of causal events that led to the players' actions – which will in all probability have been exceedingly complex. Rather, their concern is to understand the rationality of their actions – to see why their moves were wise ones for them to make. Of course, it matters that the explanation given be true; the questioner does not want just any old rationale for the players' behaviour, but the actual one. It is sufficient for that, however, that the explanation picks out sustaining causes of the players' actions – that the players possessed the beliefs and desires cited and would not have made the moves they did if they had not.23

3.3 From theory to theories

It appears, then, that belief–desire explanation has a dual function – sometimes picking out dynamic causes, sometimes sustaining ones. This in turn confirms the earlier suggestion that folk mental concepts have both austere and rich versions.
23 For further evidence that folk-psychological explanation does not require a strong causal reading, see Anscombe 1957, sect. 11.
The upshot is that we can separate out two different folk theories – one austere, the other rich – each supporting a different conception of the mind and mental explanation. Now, this would not in itself compel us to adopt a two-strand theory of mind. We might see the two theories as describing different aspects of a single cognitive system. We could regard one as a competence theory, which characterizes the powers of the system at a shallow level, and the other as a performance theory, designed to identify the causal mechanisms supporting those powers.24 The two theories would differ in scope and falsification conditions, and their concepts would have different extensions, but each might be refined and regularized to meet the standards appropriate for theories of its type.

From our present perspective, however, there is another – and, I think, more attractive – option. For the distinction between the two versions of folk psychology corresponds at least roughly to that between the two types of belief identified earlier. The evidence for a rich view stems mainly from conscious thought, that for an austere one from its non-conscious counterpart. So another way to regularize folk practice would be to think of each theory as characterizing one of the two strands of belief we identified earlier – that is, to adopt an austere view of strand 1 belief and a rich view of strand 2 belief. I propose, then, that a revised folk psychology should incorporate two sub-theories: an austere theory of the strand 1 mind, which picks out thickly carved functional states and sustaining causes of action, and a rich theory of the strand 2 mind, which picks out finely carved functional states and dynamic causes. Of course, this involves rejecting the idea that the sub-theories, austere and rich, are related as competence and performance theories – that the latter characterizes the causal mechanisms supporting the states and processes described by the former. It would be absurd to suggest that non-conscious states and processes are supported by conscious ones! Rather, we must think of the two theories as characterizing distinct strands of mentality.

Let me stress that this two-strand framework is offered as a regularization of folk psychology, not as an exhaustive psychological taxonomy. It may be that there are other strands of cognition, unfamiliar to the folk, and that we shall need new branches of intentional psychology to describe them.
24 The terminology of competence and performance derives originally from Chomsky, who employs it to mark a distinction in linguistic theory. I use the terms here in the looser sense adopted by Daniel Dennett (Dennett 1981a). Dennett illustrates the use of the terms by reference to the physicist's distinction between kinematics and dynamics – the former providing an idealized, abstract level of description, and the latter a deeper, genuinely causal one.
Indeed, there will almost certainly be a level of sub-personal cognitive psychology underlying the strand 1 mind. (Here the terminology of competence and performance really is appropriate. Theories of sub-personal psychology will aim to characterize the causal underpinnings of the dispositions picked out by the austere folk theory.) Moreover, the sub-personal system might turn out to share some properties with the strand 2 mind. It might consist of discrete representational states which can be occurrently activated and which enter into explicit inferential processes. It might even exploit the representational resources of the language system. Nothing I have said here rules that out. My claim is merely that folk psychology does not involve any commitments as to the nature of this level (except negative ones – that it is not conscious, not under active control, and so on). The folk are not concerned with the character of sub-personal cognition, and it is precisely their nonchalance about it that makes an austere view of the non-conscious mind so attractive. (This is not to say that people never speculate about sub-personal processes – just that the core practices of folk-psychological explanation and prediction do not depend on assumptions about them.)25

Does this mean that we should regard the folk theory of the strand 1 mind merely as a placeholder for a scientific theory of sub-personal cognition? In a sense, yes. If we want to understand the workings of the non-conscious mind, then we shall have to move beyond the austere folk perspective. But for many purposes that perspective may be perfectly adequate. For the high-level sciences of human behaviour, such as sociology and economics, the details of sub-personal cognition will often be largely irrelevant. And for everyday use, too, it is likely that theories of sub-personal cognition will be unwieldy and redundant, though elements of them may be incorporated into folk use, especially if they shed light on common pathological conditions. (This happened with Freudian psychoanalytic theory, and I suspect that it will happen with modern evolutionary psychology.)

Finally, note that the proposal just made puts the earlier discussion of strand 1 belief in a slightly different light. I suggested that there is no pre-theoretical reason to suppose that strand 1 beliefs receive occurrent activation or involve natural language, though I did not rule out the possibility that they might.
25 For discussion of the relation between folk psychology and sub-personal cognitive psychology, see Dennett 1981a.
However, if these states are simply behavioural dispositions, then we can indeed rule this out. Behavioural dispositions manifest themselves directly in action, not in episodes of occurrent thought. Nor does it make sense to think of dispositional states as employing a linguistic medium – such a claim can be made only for occurrent thoughts or stored representations. It remains possible, of course, that some of the sub-personal states and processes underlying the strand 1 mind are explicit and language-involving.

3.4 A Bayesian mind?

With our two-strand theory fleshed out, I want now to return briefly to a couple of issues postponed from earlier, both relating to the strand 1 mind and the role of Bayesian idioms in characterizing it. The first issue concerns the nature of partial beliefs. I promised to say a little more about what these are, and, having argued for an austere view of the strand 1 mind, I am now in a position to do this. Partial beliefs are Bayesian subjective probabilities, and these states in turn are multi-track behavioural dispositions, understood as thickly carved functional states (functions from inputs to outputs). To have a certain set of subjective probabilities is to be disposed to make the choices that a rational Bayesian agent with those probabilities would make, given one's subjective desirabilities. Since one can have such a disposition without having made explicit judgements of probability, it follows that no special conceptual apparatus is needed in order to possess partial beliefs. Likewise, partial desires are subjective desirabilities, understood in the same way.
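It may help to set out the Bayesian standard of choice explicitly. What follows is a conventional textbook formulation, not something the folk theory itself supplies: let p be the agent's subjective probability function, d their subjective desirability function, and O_1, ..., O_n the possible outcomes of a candidate action A. The estimated desirability of A is then

\[
ED(A) \;=\; \sum_{i=1}^{n} p(O_i \mid A)\, d(O_i),
\]

and to be interpretable as a rational Bayesian agent is to be disposed to choose, from among the available actions, one for which ED is maximal. On the austere view defended here, this equation characterizes the agent's choice dispositions from the outside; it carries no commitment to any internal process of computing the sum.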
The second issue concerns strand 1 reasoning. I claimed that this could be characterized as Bayesian, but noted an objection to this claim. Just because we can be represented as Bayesian reasoners, the objection ran, it does not follow that we are Bayesian reasoners – that Bayesian theory describes real psychological processes. Given an austere view of the strand 1 mind, however, this objection loses its force. For on this view a theory of inference serves as a framework for behavioural interpretation rather than as a model of internal processes. We credit people with the mental states which provide the best overall interpretation of the choices they make, on the assumption that these choices are related to their mental states in the way prescribed by the theory, under its normative aspect. There is, however, no assumption that their choices are the product of actual calculations of the sort specified in the theory: the theory characterizes the agent's behavioural dispositions, but not the processes which produce their behaviour. From this perspective, then, a theory of inference has only a shallow descriptive role, and being representable as a Bayesian reasoner is sufficient for being one.

The reader may ask why, given this general approach, a Bayesian interpretation should be privileged. There will always be alternative ways of interpreting an agent's choices, which represent them as having different attitudes and as adhering to different norms (for a demonstration of this, see Zynda 2000). So what grounds are there for regarding the Bayesian representation as giving a privileged description of reality? Indeed, why not interpret the agent as having flat-out beliefs and desires and adhering to the norms of classical practical reasoning? In short, how do we choose which normative theory to use as the basis for interpretation? The right response here is to say that the choice of interpretative framework is a pragmatic matter. If rival frameworks differ in their behavioural predictions, then we can choose between them on the basis of their success. If they are predictively equivalent, then we can treat them as alternative ways of characterizing the same underlying dispositions, and choose between them on grounds of simplicity and conformity to our pre-theoretical intuitions.26 I strongly suspect that these criteria will dictate a probabilistic framework of some kind, rather than a classical one. It seems likely that a classical framework would be severely limited in its descriptive and predictive power unless the attributed beliefs and desires were assigned degrees of strength. And from an austere perspective, it is unclear how a qualified classical framework of this kind would differ from a probabilistic one.27

It may be objected here that there is evidence that human reasoning cannot be interpreted as Bayesian. There is a large experimental literature in the 'heuristics and biases' tradition showing that in certain test situations people regularly respond in ways that violate Bayesian principles (see, for example, Nisbett and Ross 1980; Kahneman et al. 1982; Piattelli-Palmarini 1994). Moreover, the responses of the test subjects seem to be the product of intuition rather than conscious reasoning – suggesting that Bayesianism is an inappropriate interpretative framework for the non-conscious strand of mentality. (The results also suggest that the underlying sub-personal processes are not Bayesian. But that claim is not in itself troublesome, for the reasons given above.)
26 Note that to say that the choice of interpretative frameworks is a pragmatic matter is not to say that psychological descriptions are observer-relative. As Dennett points out, different interpretations can each highlight different, but equally real, patterns in an agent's behaviour (that is, distinct, though compatible, behavioural dispositions) (Dennett 1991c). Dennett is thinking of different interpretations within a particular normative framework, but the point applies equally to ones that rely on different frameworks.
Note that to say that the choice of interpretative frameworks is a pragmatic matter is not to say that psychological descriptions are observer-relative. As Dennett points out, different interpretations can each highlight different, but equally real, patterns in an agent’s behaviour (that is, distinct, though compatible, behavioural dispositions) (Dennett 1991c). Dennett is thinking of different interpretations within a particular normative framework, but the point applies equally to ones that rely on different frameworks. Davidson also argues that behavioural interpretation requires a probabilistic framework; Davidson 1975.
There are two broad ways of responding to this. The first is to stick with a Bayesian approach and write off any recalcitrant behaviour as 'noise'. What is required for interpretability is a broad adherence to norms, not an exceptionless one. After all, there will always be some recalcitrant data, whatever interpretative framework we employ. (Moreover, it may be that the heuristics-and-biases results are less embarrassing than they seem at first sight. There is evidence that the subjects' errors are due to the artificial format of the test problems, and that when the same problems are posed in a more familiar form, the responses are much more in line with Bayesian principles: see Gigerenzer 1991; Cosmides and Tooby 1996.) The second response is to employ an interpretative framework which is tailored to reflect the capacities and functions of our sub-personal cognitive mechanisms and which is thus a more accurate predictor of actual human responses (see Stich 1982). This framework would still need to be a probabilistic one, for the reasons given above, but it might differ in significant ways from standard Bayesian theory. I suspect that a strong case can be made for this second option, but for simplicity's sake I am going to adopt the first and take Bayesianism as the default position. However, nothing in what follows depends on a commitment to strict Bayesianism, and another normative framework could be substituted for it without substantially affecting the arguments.

There is another issue I want to address briefly. Since I am proposing a two-strand theory of mind as a regularization and development of folk-psychological practice, it is important that folk idioms can be retained in characterizing both of the strands. But on the proposed Bayesian framework, they appear ill-equipped to characterize the strand 1 mind. The problem concerns mental explanation. The folk practice is to single out a small number of beliefs and desires in explanation of an action. But if we interpret strand 1 actions as the product of Bayesian reasoning, then it is not clear that this practice can be sustained. For Bayesian reasoning is holistic – the product of all our partial beliefs and desires. What role is left for the folk practice of singling out individual ones? There is, I think, a perfectly good role for it. The crucial point to remember is that, on an austere view, mental explanation singles out sustaining causes of action, not dynamic ones, where claims about sustaining causation can be cashed out in counterfactual terms.
Thus, when we cite a partial belief or desire in explanation of an action, we are saying that the action was counterfactually dependent on its presence. And, given this, the folk practice makes good sense. For an action may be counterfactually sensitive to changes in some, but not all, of the agent's partial beliefs and desires. Take the case where I see a dog and run away. And suppose that I would not have run away if I had attached a much lower probability to the proposition that dogs are dangerous, but that I would still have run away if I had attached a much lower probability to the proposition that Tallahassee is the state capital of Florida. Then this fact alone justifies us in singling out my attitude to the former proposition rather than my attitude to the latter in explanation of my action – in saying that I ran away because I believed that dogs were dangerous. (Note that the claim here is not that my action would have been sensitive to any change in my attitude to that proposition – only that it would have been affected by substantial ones. If I had been just a bit less convinced that dogs were dangerous, I might still have decided to run.)

It may be objected that if strand 1 reasoning can be interpreted as Bayesian, then actions will be counterfactually sensitive in this way to changes in any of the agent's partial beliefs and desires. For Bayesian reasoning is holistic – its outcome determined by all of the agent's probabilities and desirabilities. This is too swift, however. We need to distinguish two different outcomes of the notional Bayesian process: the complete assignment of estimated desirabilities to the candidate actions, and the determination of which action has the highest estimated desirability – which comes top in the estimated desirability ranking. If the process is holistic, then the complete assignment of estimated desirabilities will indeed be sensitive to changes in any of the agent's probabilities and desirabilities: if any of them had been different, then the assignment would have differed in some way. But it is very unlikely that the determination of which action comes top will also be sensitive to any such change. In my dog encounter, for example, it seems likely that running away would still have come out as the most attractive action, no matter what probability I had assigned to the proposition that Tallahassee is the state capital of Florida. So an action's coming top, and therefore being executed, will be counterfactually sensitive to substantial changes in some, but not all, of the agent's probabilities and desirabilities, and we may legitimately single out these states in explanation of it.
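A toy calculation illustrates the point; the numbers are invented purely for illustration, and nothing hangs on them. Suppose the candidate actions are running away (R) and staying put (S); let p be my probability for the proposition that dogs are dangerous and q my probability for the proposition that Tallahassee is the state capital of Florida; and suppose I assign a desirability of −100 to being harmed, −1 to the effort of running, and 0 to the undisturbed status quo. If harm is in prospect only if I stay while dogs are dangerous, then

\[
ED(S) = p \cdot (-100) + (1-p) \cdot 0 = -100p, \qquad ED(R) = -1.
\]

Running comes top just in case −1 > −100p, that is, whenever p exceeds 0.01. The ranking is thus sensitive to substantial changes in p but wholly insensitive to q, which appears nowhere in the calculation – and that asymmetry is what licenses citing the one attitude rather than the other in explanation of the action.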
It is true that an action will be counterfactually sensitive in this way to many probabilities and desirabilities, few of which we would normally think of citing in explanation of it. For example, my decision to run away from the dog would be sensitive to substantial changes in the probability I attach to the proposition I can run. But this is by no means an unacceptable consequence; explanation invariably involves picking out a few of the more salient factors from among a host of others. I conclude that the adoption of a Bayesian view of the strand 1 mind is not incompatible with the retention of folk-psychological explanatory practices – though it does require them to be reconstrued. In the following chapters I shall from time to time follow the folk practice of singling out individual beliefs and desires in explanation of actions generated at the strand 1 level – always on the understanding that such idioms are to be construed in the way just described.

I shall address one final worry. In adopting an austere view of the strand 1 mind, I am committed to anti-realism about strand 1 mental processes. Strand 1 states may be real dispositions, but strand 1 processes are merely notional – an interpretative fiction, employed to characterize strand 1 states. And this may seem implausible. Surely, when we talk of non-conscious mental processes we mean to refer to real processes, not notional ones? (Indeed, a commitment to realism was implicit in my own earlier discussion of strand 1 reasoning – in the discussion of whether it was explicit or non-explicit, active or passive, and so on.) I concede the point: there is a commitment to realism here. However, it can be reconciled with the austere perspective advocated. When we refer to non-conscious mental processes, we are, I suggest, referring to the real sub-personal processes in virtue of which we are interpretable as Bayesian reasoners. We take no stand as to the nature of these processes, however, but simply quantify over them (or if we do characterize them, do so in negative terms – as not under active control, not involving inner speech, and so on). In this respect, the view remains perfectly austere. Henceforth, references to strand 1 reasoning or to non-conscious mental processes should be understood in this way.

Conclusion and prospect

I have provisionally identified two strands of belief, associated with two kinds of mental processing and two conceptions of mind and mental explanation. Their characteristics are summarized in figure 1. Let me emphasize that I am not offering this neat and tidy formulation simply as an analysis of folk-psychological practice, but as a theoretical regularization of it.
Belief
  Strand 1: Non-conscious; not apt to be activated in occurrent form; partial; passively formed; not language-involving; common to humans and animals.
  Strand 2: Conscious; apt to be activated in occurrent form; flat-out; can be actively formed; frequently language-involving; unique to humans and other language users.

Reasoning
  Strand 1: Non-conscious; interpretable as Bayesian; depends on sub-personal processes that are not under active control, may be non-explicit, and are probably not language-driven.
  Strand 2: Conscious; usually classical; can be actively controlled; explicit; frequently language-driven.

Mind
  Strand 1: Mental states are thickly carved functional states (austere functionalism); belief–desire explanations pick out sustaining causes.
  Strand 2: Mental states are finely carved functional states (rich functionalism); belief–desire explanations typically pick out dynamic causes.

Figure 1 The two strands of folk psychology
The roots of it are present in folk discourse – the divisions are there, as are the links between them. And this way of ordering things is, I think, the natural one. But I do not doubt that counter-examples could be produced – instances of folk locutions which suggest a different classification, conjoining properties and processes which are here labelled distinct. I am not offering this as the only classification possible, but as the best and most consistent one.

We have a framework, then, which nicely organizes some of our common-sense intuitions about the mind. But it raises as many questions as it answers. First, there are constitutive questions. I have suggested that strand 1 beliefs are realized in sub-personal states and processes, but what of the strand 2 kind? I have claimed that they are functional substates of the cognitive system, but how are they realized? It is widely held that beliefs are token-identical with brain states, but this is not essential to the folk outlook. (It is quite coherent to claim that beliefs are states of a non-physical soul.) And even if we assume (as I shall) that mental states are physically constituted, we do not have to regard them as realized directly in states of the brain. Secondly, there are questions about the relation between the two strands of mind. How are the two belief systems related to each other, and what are their respective roles in the guidance of action?
Thirdly, there are questions about the function of the two strands. Why do we need two strands of cognition? If complex tasks such as driving and chess-playing can be controlled non-consciously, what is the role of conscious reasoning? In short, we need to convert this outline framework into a coherent and plausible psychological theory. This will be the task of the next three chapters.
3 Challenges and precedents

The previous chapter set out some reasons for adopting a two-strand theory of mind and sketched the outlines of such a theory. This chapter goes on to look at some problems facing the proposed theory and at some precedents for it in the philosophical literature, including proposals by Ronald de Sousa, Daniel Dennett, and Jonathan Cohen. Throughout, I shall be searching for hints as to how to flesh out the theory – in particular as to how strand 2 beliefs are constituted and how they are related to strand 1 beliefs – and towards the end I shall pool these hints and suggest what shape a developed two-strand theory should take.

1 Challenges

At the end of the last chapter I outlined some general problems facing a two-strand theory. I now want to introduce some further problems, related to specific features of the proposed theory.

1.1 The Bayesian challenge

I have claimed that we possess a kind of belief which is 'flat-out' – which is unqualified and does not come in degrees. (It is worth stressing that it is the attitude which is unqualified in flat-out belief, not necessarily the content. It is possible for a flat-out belief to have a qualified content – for example, that there is a 75 per cent chance of rain tomorrow.) The claim that we possess flat-out beliefs is not a particularly radical one – indeed, many philosophers of mind write as if flat-out belief is the only kind we have. It faces a serious objection, however. For it is hard to see how it could be rational for flat-out belief to influence behaviour. Bayesian decision theory teaches us that the rational way to make decisions is to assign degrees of probability and desirability to the various possible outcomes of candidate actions and then to choose the one which offers the best trade-off of desirability and likely success.
Flat-out belief just does not come into the picture. As Patrick Maher puts it:

  a person would be rational to accept a bet on the proposition A at odds of 9:1 just in case he is at least 90% confident that A is true. Whether or not the person believes [i.e. believes flat-out] that A is true is quite irrelevant; all that matters to the rationality of the decision is the degree of confidence the person has in A. (Maher 1986, p. 377)
In such a case – or in any other which offered a similar trade-off of risks and rewards – it would be rational for one to perform the action if and only if one were at least 90 per cent confident that A was true. A degree of confidence of 90 per cent would be both necessary and sufficient to render the action rational. Whether one also believed flat-out that A was true would be irrelevant. Nor should forming a flat-out belief alter one's degrees of confidence. The fact that one has formed a flat-out belief in a proposition p does not make p more probable and should not make one more willing to act on it (barring certain self-fulfilling cases, that is, such as when one forms the belief that one will not get to sleep). It is true that deliberating about whether or not to believe a proposition may alter one's confidence in it. For the process may lead one to acquire new evidence for it or to re-evaluate old. But forming the belief itself should have no effect on one's confidence assignments. Embracing this conclusion, Maher declares flat-out belief irrelevant to rational action:

  the rational act for a person to choose in a given context does not at all depend on the beliefs [i.e. flat-out beliefs] that the person holds. (Maher 1986, p. 380)1
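The arithmetic behind Maher's betting example can be made explicit. On the reading his figures require, betting on A at odds of 9:1 means staking nine units to win one; that reading, and the units, are assumptions made here for illustration. Writing c for one's degree of confidence in A, the estimated desirability of accepting the bet is

\[
ED(\text{accept}) \;=\; c \cdot 1 + (1-c)\cdot(-9) \;=\; 10c - 9,
\]

which is non-negative just in case c ≥ 0.9. The calculation runs entirely on the degree of confidence c; at no point could a flat-out belief that A enter to alter the result.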
Those who accept the existence of flat-out belief thus face a challenge. They must explain how flat-out belief can have any psychological role, given its apparent irrelevance to rational action. I shall refer to this as the Bayesian challenge.2
1 Maher also considers other, indirect, ways in which flat-out belief might be relevant to rational action (for example, by defining the range of actions we regard as available to us), but argues that none of them is plausible (Maher 1986). Note that even if Maher were wrong about this, we would still have a counter-intuitive result. For flat-out belief would still be irrelevant to action in the way that it typically seems to be – namely, by justifying it.
2 I borrow the term 'Bayesian challenge' from Kaplan (Kaplan 1996). As Maher points out, the challenge is a wide-ranging one, which threatens, for example, to undermine the foundations of pragmatism. Pragmatists hold that beliefs can be justified by the practical consequences of holding them. But if the Bayesian is right, flat-out beliefs cannot be justified in this way, since they will have no practical consequences for a rational agent (Maher 1986, p. 364).
Of course, to say that flat-out belief is irrelevant to rational action is not to say that it is irrelevant to human action. Perhaps we all habitually lapse from the high standards of Bayesian rationality, ignoring the subtleties of confidence and acting on the basis of unqualified beliefs. Such a habit might be justified, given our cognitive limitations – in particular our lack of skill at conscious probabilistic reasoning. If accepting probable propositions flat-out helps to make our calculations more tractable, then the departure from strict rationality which it involves might be justified by the accompanying reduction in computational demands. This suggestion has some attractions; it is a common theme in naturalistic epistemology that apparently irrational inferential habits may be justified in the light of wider cognitive biases and limitations (see, for example, Cherniak 1986; Stein 1996; Stich 1990). The suggestion still leaves us with a problem, however. For I have claimed that we have partial beliefs as well as flat-out ones, and that our non-conscious reasoning can be interpreted as maximizing estimated desirability relative to these states (though not necessarily as a result of actual Bayesian calculations). We are, after all, sensitive to considerations of risk and desirability, even if we rarely make conscious judgements about them. But if this is so, why do we bother to engage in classical practical reasoning as well? Why not leave everything to the non-conscious processes?

The Bayesian challenge becomes still more potent if we adopt an austere view of partial belief, as advocated in chapter 2. For on this view, ascriptions of partial belief involve a presupposition of rationality: to have a certain degree of confidence in a proposition is to be disposed to behave in ways that would be rational given that degree of confidence in that proposition. And this means that the concepts of partial belief and intentional action are conceptually intertwined. We credit an agent with just those degrees of partial belief and desire that best rationalize their intentional actions. And we regard a bit of behaviour as an intentional action just in case we can represent it as the rational expression of the agent's partial beliefs and desires. Given a Bayesian view of practical rationality, it follows that habitual violation of Bayesian norms is conceptually impossible: the behaviour in question simply would not qualify as intentional. From this perspective, then, the Bayesian challenge has a special bite: flat-out belief is irrelevant to intentional action tout court. Indeed, on this view of the matter, it is hard to see what sense we could make of a person's acting upon a flat-out belief. For any action they performed would, as a matter of conceptual necessity, already have a complete intentional explanation in terms of their partial beliefs and desires.
How, then, does flat-out belief get into the picture? Note that a similar problem will arise for any two-strand theory of mind which treats one of the strands as austere – whether or not it also treats it as Bayesian. Any such theory will have to explain how the other strand can have any role in the explanation of action, given that all our actions will already have intentional explanations at the austere level, simply in virtue of how we attribute states at that level. By adopting an austere view of one strand we seem to condemn the other to inefficacy. (The need to address this problem seems to constrain Dennett's attempt to construct a two-strand theory of mind, to be discussed later in this chapter.) Thus, although I have posed the present challenge in Bayesian terms, a similar challenge would arise whatever normative theory we were to adopt as our interpretative framework for the austere strand of mind. Note, too, that if we hold that the austere strand is non-conscious, then this problem concerns the efficacy of conscious thought.

1.2 Williams's challenge

I have claimed that strand 2 beliefs can be formed actively, through one-off acts of judgement, assent, or making up of mind. (Since these terms all have slightly different connotations, I shall prefer the descriptive term active belief adoption.) I shall call this claim activism, and in this section I am going to introduce an objection to it. Let me begin by defining the version of activism I shall defend. There are three points to make. First, in claiming that we have the power of active belief adoption I mean that we are able to form beliefs in a direct, non-instrumental way, through one-off mental acts. The claim that we can induce ourselves to believe things by indirect means, such as subjecting ourselves to indoctrination, is not controversial. Secondly, I shall assume that the act of adopting the belief that p can be intentional under that very description.3 That is to say, my claim is that we can contemplate a proposition, weigh up the reasons for and against adopting a belief in it, and then decide whether or not to do so. Thirdly, I shall defend activism only for beliefs of the strand 2 variety, and shall treat it as a weak, permissive thesis. That is to say, my claim is that it is possible to form strand 2 beliefs actively, not that this is the only way to form them.
3 This distinguishes my version of activism from a weaker version defended by Philip Pettit (Pettit 1993, ch. 2).
Activism is not a popular doctrine among contemporary philosophers. It is common to think of mental states as formed by sub-personal processes over which we have no direct control. More importantly, there is a challenge to the very coherence of activism – a doctrine which some think can be written off a priori. The argument goes like this. Actions are things that are responsive to practical reasons. So if active belief adoption were possible, then we would be able to adopt beliefs for purely practical reasons and regardless of the evidence for their truth. In other words, we would be able to adopt beliefs 'at will': with suitable inducement, we would be able to adopt any belief we liked, even if we had no reason to think it true. And, as Bernard Williams has argued, this claim – voluntarism or volitionism, as it is usually known – is not merely false, but necessarily false. It is not just a contingent fact that we cannot adopt beliefs at will, Williams argues, as it is a contingent fact that we cannot blush at will; rather, it is in the nature of belief that it cannot be adopted at will (Williams 1970; see also O'Shaughnessy 1980, Vol. I, pp. 21–8; Pojman 1985, 1986; Scott-Kakures 1993).4 But if voluntarism is necessarily false and activism entails voluntarism, then activism too must be necessarily false. I shall refer to this as Williams's challenge.

1.3 Fodor's challenge

Another claim made in the previous chapter was that some thoughts are language-involving. I claimed that we can think by entertaining and manipulating representations of natural-language sentences in inner speech. I now want to define this claim more precisely and outline a problem for it. I begin with the definition.5 There are four points to make here. First, the claim is that some thoughts and thought processes constitutively involve representations of natural-language sentences: the representations are the medium in which the states and processes are realized. This claim, which has been labelled the cognitive conception of language, should be distinguished from the weaker claim that language can act as a cognitive tool, which enhances and extends our cognitive abilities (Clark 1998; Vygotsky 1934/1986).6
5
Briefly, Williams argues that attempts to form beliefs at will would be self-defeating, since we could not regard a mental state as a belief if we knew that it had been formed without regard to its truth. Several writers have questioned the soundness of Williams’s argument (see, for example, Bennett 1990; Winters 1979), but for present purposes I shall grant its conclusion. My strategy will not involve denying the falsity of voluntarism, but questioning whether activism entails its truth. The following draws in part on ideas outlined in Carruthers 1996b, ch. 2.
Challenges and precedents which enhances and extends our cognitive abilities (Clark 1998; Vygotsky 1934/1986).6 Secondly, I shall assume that the representations involved in inner speech are auditory or kinaesthetic images (that is, images of hearing the sentences uttered or of the vocal movements necessary to utter them). Thirdly, I shall take it that these imaged sentences come with semantic interpretations attached – just as heard or overtly vocalized ones do. So when I entertain the sentence ‘Visiting relatives can be boring’, I know straight off whether it refers to the tediousness of making family visits or to that of hosting them.7 It is sometimes objected that natural language could not serve as a medium of thought since its sentences are ambiguous in a way that thought is not (see, for example, Pinker 1994, ch. 3). This assumes, however, that the sentences involved are uninterpreted ones, and defenders of the cognitive conception need not take that view. Of course, if inner speech comes with semantic interpretations attached, then there must be some associated semantic processing going on at a non-conscious level, but this is not incompatible with the cognitive conception. The cognitive conception holds that some thought processes constitutively involve inner speech, and it is compatible with this that the processes in question also constitutively involve supporting episodes of non-conscious semantic processing. Finally, my version of the cognitive conception is limited in scope. To begin with, it is restricted to conscious (strand 2) thought. I shall assume that non-conscious thought does not involve natural language. The claim is further restricted, too. I do not claim that all conscious thought is language-involving; some, for example, seems to involve entertaining visual images of scenes and objects, rather than auditory images of sentences. I do not even claim that all conscious propositional thinking involves language – though I suspect that this may be the case (for more on this see the next chapter). My claim is merely that much of it does. The cognitive conception is not widely accepted. It is more common to think of language merely as an input–output system for thought, which itself takes place in an internal sub-personal medium (‘Mentalese’). I shall refer to this as the communicative conception of language.8 A number of 6
6 The phrase 'the cognitive conception of language' is coined in Carruthers 1996b. For more discussion of the cognitive conception and its relation to other views of the role of language in cognition, see the editors' introduction to Carruthers and Boucher 1998.
7 The example is taken from Pinker 1994, p. 209. For defence of the claim that inner speech is entertained as interpreted, see Swinburne 1985.
The phrase ‘the cognitive conception of language’ is coined in Carruthers 1996b. For more discussion of the cognitive conception and its relation to other views of the role of language in cognition, see the editors’ introduction to Carruthers and Boucher 1998. The example is taken from Pinker 1994, p. 209. For defence of the claim that inner speech is entertained as interpreted, see Swinburne 1985. Again, the term is Carruthers’s.
I shall not rehearse these here (for a survey and a persuasive set of replies, see Carruthers 1996b, ch. 2). Rather, I shall focus on one fundamental question for the cognitive conception: how could natural language be constitutively involved in thought? How could images of sentences assume the causal roles of thoughts? How could speaking to oneself constitute thinking in any substantial sense? The problem is particularly pressing if we accept, as many psychologists now do, that the language faculty is a modularized, peripheral system, which originally evolved for communicative purposes (Chomsky 1975, 1988; Fodor 1983; Pinker 1994). Modular systems like this have their own dedicated inputs, outputs, and internal processing structures, and are relatively encapsulated from the rest of cognition (Fodor 1983). How could such a system play a role in central cognition – that is, in flexible, intelligent, non-encapsulated, thought? I shall refer to this as Fodor's challenge – Fodor being a prominent defender both of the communicative conception and of the modularity of language.

1.4 From challenges to precedents

A developed version of our two-strand theory will have to provide responses to these challenges, and I shall return to them frequently in the course of this chapter and the following ones. As a first step in the development process, I want to look now at some precedents for a two-strand theory. A number of writers have claimed that there exist two distinct kinds of belief, or belief-like attitude, which are conflated in everyday discourse. The distinction is drawn in different ways by different writers, though a common theme is that one of the attitudes is graded, the other, flat-out. Some writers even claim that the flat-out attitude is voluntary, language-involving, and connected with conscious occurrent thought – just like strand 2 belief. In the rest of this chapter I shall survey some of these two-strand theories in order to see what lessons they hold for the present project. As we shall see, none of the accounts addresses exactly the same range of concerns as ours, and none of the proposed distinctions corresponds exactly to that between strand 1 and strand 2 belief. However, the review will throw up a number of important suggestions, and I shall propose that by picking and mixing elements from different accounts we can flesh out our two-strand theory and identify responses to the challenges facing it.
2 Bayesians on flat-out belief

A central problem for our two-strand theory, highlighted by the Bayesian challenge, is how to find room for both degrees of confidence and flat-out belief within a single mind. This problem is one that other writers have addressed. In particular, some Bayesians recognize the need to say something about flat-out belief, and there is a large Bayesian-inspired literature on the psychology and epistemology of flat-out belief. In this section I shall look at some of this work. I shall not attempt an exhaustive review of the literature – much of which is preoccupied with epistemological issues tangential to our main concerns – but merely highlight some of the ideas most relevant to our project. The review will also fulfil a commitment made in the previous chapter.

2.1 The need for flat-out belief

Some hardcore Bayesians deny the need for a theory of flat-out belief. The only doxastic concept we need, they insist, is that of degree of confidence (see, for example, Jeffrey 1970). Others, however, argue that we cannot dispense with the qualitative notion, pointing in particular to the role of flat-out belief in scientific inquiry. I shall say a little about this, as it will help to establish the background to the discussion, as well as confirming the existence and importance of flat-out belief. Note that many philosophers of science use the term 'acceptance' to refer to flat-out belief. This term is a slippery one, however, which is used in many different ways, sometimes in direct contrast to 'belief', and for the present it is less confusing to speak simply of flat-out belief.

According to the standard probabilistic logic of inductive inference – Bayesian confirmation theory – rational inquirers ascribe probabilities to their hypotheses, and update these probabilities as new evidence comes in, in a way that reflects their assessment both of the independent probability of the evidence and of its probability given the truth of the hypothesis.
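In symbols – this is the standard statement of Bayesian conditionalization, not any particular author's formulation – upon learning evidence E, the inquirer's new probability for a hypothesis H becomes

\[
p_{\text{new}}(H) \;=\; p(H \mid E) \;=\; \frac{p(E \mid H)\, p(H)}{p(E)},
\]

so that the update turns on precisely the two assessments just mentioned: the independent probability of the evidence, p(E), and its probability given the truth of the hypothesis, p(E | H).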
As a normative theory, this has many attractions, and Bayesian philosophers of science aim to show that good scientific practice adheres to it – that good scientists do indeed update their confidence assignments in accordance with Bayesian principles. As a descriptive theory, however, Bayesianism is seriously inadequate. For scientists do not just ascribe probabilities to their theories, laws, and generalizations; they also accept some of them outright, assert them categorically, and use them as premises in deriving further explanations and predictions. As Patrick Maher has emphasized, Bayesians cannot ignore this fact (Maher 1993, ch. 7). For scientists rarely record their probability assignments – indeed, most would be hard-pressed to say what they are. For the most part, all we have is their categorical judgements – statements of what they accept flat-out. Thus, if Bayesian theorists are to show that good scientific practice adheres to Bayesian norms, then they will need an understanding of what flat-out belief is and how it is related to confidence.

Other aspects of scientific practice, Maher argues, confirm the need for a theory of flat-out belief. He claims, for example, that without such a theory we cannot explain the dynamics of scientific progress. Science proceeds by qualitative leaps from one theory to another. Scientists rarely abandon a theory, even when disconfirming evidence presents itself, until an alternative theory of equal or greater explanatory power is available. That is to say, alternative theories act as catalysts for the rejection of old ones. From a Bayesian perspective, however, it is hard to see why this should be. After all, an alternative theory will rarely constitute evidence against an old one. Again, it seems, we need to think in terms of flat-out belief. We need to think of scientists as adopting a flat-out doxastic attitude towards their current theories and as maintaining this attitude until more attractive replacement theories present themselves.

Flat-out belief, then, is central to the psychology of scientific inquiry. Note that 'scientific inquiry' can be understood in a broad sense, to include, not just institutional science, but any form of rational inquiry which adheres to the scientific model – that is, which proceeds by the construction and testing of hypotheses. If we take this view, then flat-out belief will be a very important psychological state indeed.

2.2 The confidence view

There is a traditional answer to the question about the nature of flat-out belief, which is endorsed by many Bayesians. It is that flat-out belief is itself a state of confidence: to believe a proposition flat-out is to have a certain level of confidence in it. This view – I shall call it the confidence view – has had many adherents over the years, and I want to pause awhile to consider it.9 It is not, of course, a two-strand theory of belief, but a way of accommodating our two strands of belief talk within a single-strand theory.
9 Foley traces it back to Locke, while Maher argues that it is present in Hume (Foley 1992; Maher 1986). Modern adherents include Chisholm (1957) and Sellars (1964).
It is important to consider it nonetheless, since it forms the backdrop to other Bayesian approaches to flat-out belief. The discussion will also discharge a commitment made in the previous chapter. There I pointed out that the division between flat-out and partial belief – and, with it, one motive for postulating a two-strand theory of mind – would disappear if it could be shown that one of these states was a variety of the other. At the time I considered and rejected the view that partial beliefs were varieties of flat-out belief, but postponed consideration of the opposite view – that flat-out belief could be identified with a level of partial belief. This is, of course, the confidence view, and I now want to explain why it is unattractive.

An immediate problem for the confidence view is to say what the threshold for flat-out belief is. It must be either certainty or some level of confidence lower than certainty. Yet neither option is attractive. Suppose it is certainty (1 on the scale of 0–1 on which degrees of confidence are standardly measured). But then it follows that we have very few flat-out beliefs. To assign a probability of 1 to a proposition is to cease to contemplate the possibility that it is false, and, consequently, to ignore the undesirability of any outcome contingent upon its falsity.10 One consequence of this is that if one is certain that p, then one should be prepared, on pain of irrationality, to bet everything one has on its truth for no return at all. For one will simply discount the possibility of losing the bet. (This is the problem of the 'all-for-nothing bet'; see Kaplan 1996; Maher 1986.) In this sense there are very few things of which we are certain – no non-trivial propositions, at any rate. Yet we have flat-out beliefs in many non-trivial propositions. So flat-out belief is not the same thing as certainty.11
11
In assessing a course of action, the Bayesian calculates the desirabilities of the various possible outcomes it might have, weighting each by its likelihood. Now suppose that a particular action, A, would have a very bad outcome if condition C obtained. That is to say, suppose that the desirability of performing A in C – symbolized as des(A(C)) – is strongly negative. (Attractive outcomes have positive desirabilities, unattractive ones negative desirabilities, and neutral ones a desirability of zero.) Normally, this would count against performing A – even if one were fairly confident that C did not obtain. But now suppose that one is certain that C does not obtain – i.e. one assigns prob(C) a value of 0. Then the weighted desirability of performing A when C obtains – prob(C) × des(A(C)) – will also be zero, no matter how negative des(A(C)) is. That is, one should be indifferent between performing A and maintaining the status quo, and should be willing to perform it for no return at all. The possibility of the bad outcome will be completely ignored. Some writers do nonetheless hold that flat-out belief (or ‘acceptance’) requires certainty. See, for example, Harsanyi 1985; Levi 1980.
Perhaps, then, flat-out belief corresponds to a high level of confidence, albeit one that falls short of the maximum? For example, we might say that a person has a flat-out belief in a proposition if their confidence in it is greater than their confidence in its negation – that is, if it exceeds 0.5. There are problems with this view, too, however. For the norms of flat-out belief seem to be different from those of high confidence. The line of thought is this. It is plausible to think that rational flat-out belief is closed under conjunction – that a rational agent will believe the conjunction of any propositions they believe.12 If flat-out belief is high confidence, however, this will not always be so. Illustrations of this are provided by the paradoxes of the lottery and the preface (for the former, see Kyburg 1961, 1970; for the latter, Makinson 1965). Consider first a fair lottery with a large number of tickets and a guaranteed winner. Then a rational agent might have a threshold-passing degree of confidence in each of the propositions ‘Ticket 1 will not win’, ‘Ticket 2 will not win’ . . . and so on, while at the same time having a very low degree of confidence in the conjunction of these propositions – which, of course, amounts to the claim that no ticket will win. The preface case is similar. A historian has compiled a lengthy work containing many factual statements. They have a threshold-passing degree of confidence in each statement, taken individually, yet do not have a threshold-passing degree of confidence in the conjunction of these statements. For the conjunction is equivalent to the claim that none of the statements in the book is false, and – as the historian acknowledges in their preface – this is highly unlikely to be the case.13 These cases involve extremely long conjunctions, of course, but similar cases can be described with far shorter ones. For on Bayesian principles, it will frequently be rational to assign a lower probability to a conjunction than to any of its conjuncts individually.14
12 It might be better to say ‘closed under perceived conjunction’ – that is, to say that rational agents will believe any claim they recognize to be a conjunction of any propositions they believe. It is plausible to think that norms of rationality should be tailored to our needs and capacities as finite agents. For discussion, see Cherniak 1986; Stein 1996.
13 The moral sometimes drawn from the lottery paradox is that high confidence is not sufficient for flat-out belief, since in this case – given the unacceptability of the conjunction – we should not believe the conjuncts, despite their high probability (see Kaplan 1981b; Maher 1986). The moral sometimes drawn from the preface paradox is that high confidence is not necessary for flat-out belief, since the historian should believe the conjunction, despite its low probability (see Kaplan 1981b). By adjusting the size of the lottery and the length of the book, it is easy to show that there can be no level of confidence that is either necessary or sufficient for acceptance.
And given the right numbers, this might make the difference between meeting and failing to meet a threshold for belief – even when the number of conjuncts is small.

The upshot of these examples is that, if flat-out belief is a threshold-crossing degree of confidence, then a rational agent will either violate conjunctive closure or hold inconsistent beliefs (believing both the conjunction and its negation). Some defenders of the confidence view accept this – the more common option being to abandon or qualify the commitment to conjunctive closure (see, for example, Foley 1979, 1992, 1993; Hawthorne and Bovens 1999; Kyburg 1961, 1970). This approach has some plausibility. After all, reasonable people do in fact often hold sets of beliefs like those involved in the lottery and preface cases. Abandoning conjunctive closure has a price, however. For, as Kaplan emphasizes, it would undermine some very basic argumentative practices (Kaplan 1981a, 1981b). Reasoning often involves conjoining propositions – putting together things one believes and then deriving a conclusion from them. A person who rejected conjunctive closure would not regard such inferences as compelling. Nor, by the same token, would they be troubled by the discovery that their beliefs conjointly entailed a contradiction. To abandon conjunctive closure is, in Kaplan’s words, to ‘license rational persons to blithely ignore reductio arguments’ (Kaplan 1981b, p. 133). Besides (as Kaplan also emphasizes), what is at issue is not just the wisdom of adhering to conjunctive closure, but the very possibility of doing so. Even critics of conjunctive closure accept that we could adhere to it, if we chose to. But on the confidence view we do not have this option, since we automatically count as believing those propositions we regard as highly probable and as failing to believe those we regard as improbable, regardless of their logical relations to one another (Kaplan 1995).

There is another reason for rejecting the confidence view – this time relating to the guidance of behaviour. The common-sense conception of flat-out belief is, I take it, that of a state which makes some qualitative difference to one’s behavioural dispositions. Of course, just how the addition of any particular flat-out belief changes one’s dispositions will depend on what other mental states one has – what other beliefs, what desires and intentions, and so on.
14 The probability of the conjunction p & q is equal to the probability of p multiplied by the probability of q given p (q/p). Hence unless one assigns q/p a probability of 1, one should assign a lower probability to the conjunction than to p. But one will assign q/p a probability of 1 only if one is either certain that p entails q or certain that q already obtains and is independent of p – and in most cases one will be certain of neither.
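A worked illustration of footnote 14’s point, with invented numbers: let the threshold for belief be 0.85, and let prob(p) = 0.9 and prob(q/p) = 0.9. Each conjunct then passes the threshold, but prob(p & q) = prob(p) × prob(q/p) = 0.9 × 0.9 = 0.81, which falls below it – a conjunction of just two members suffices. The lottery exhibits the same structure at scale: in a fair 1,000-ticket lottery with a guaranteed winner, prob(‘Ticket i will not win’) = 0.999 for each ticket, while the conjunction of all 1,000 such propositions has probability 0.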
But whatever the background, the addition of a new flat-out belief ought to make some qualitative difference. This is true even if we restrict our attention to the context of theoretical inquiry. If a scientist forms the flat-out belief that a certain theory is true, then we should expect this to make a qualitative difference to what they are disposed to say and do in their professional capacity. But on the confidence view, the acquisition of a new flat-out belief will not necessarily involve any significant change at all. The move from a degree of confidence which just falls short of the threshold to one which just exceeds it may or may not make a significant difference to how one is disposed to act. It will all depend on one’s background probabilities and desirabilities. Given one background, a threshold-crossing change in one’s confidence assignment to a particular proposition may make a great difference; given another, it may make none. At any rate, it will not, as a rule, make a greater difference than any other change of similar extent anywhere else along the confidence scale.

We can make the same point in a slightly different way. I take it that the folk notion of flat-out belief is that of an explanatorily salient psychological state. That is, the folk view is that our flat-out beliefs can be cited in explanation of our actions – or of some of them, at least. But this will be the case only if there are robust, counterfactual-supporting generalizations linking flat-out belief with action. And on the confidence view there will be no such generalizations, since, as we have just seen, the acquisition of a new flat-out belief will often make no significant difference to how one is disposed to act. And, given this, it is hard to see why the state should hold any theoretical interest for us.15 It is worth stressing that the objection here is not that flat-out beliefs will be causally idle – they will, after all, possess just as much causal power as the states of confidence in which they consist. Rather, it is that they will not possess their causal powers in virtue of being states of flat-out belief: there will be no psychological laws defined over flat-out beliefs. The confidence view thus avoids the Bayesian challenge only at the cost of making flat-out belief qua flat-out belief explanatorily idle. I conclude, then, that the confidence view does not capture the common-sense conception of flat-out belief.
15 Stalnaker makes a very similar point (1984, p. 91).
2.3 The behavioural view

If we reject the confidence view, then the Bayesian challenge returns in full force. If flat-out belief is not a state of confidence, then what is it, and how can it have any influence on rational action? There is a general solution to this problem, of which there are different versions. This is to think of flat-out beliefs as specific behavioural dispositions arising from the agent’s partial beliefs and desires – high-level behavioural dispositions, we might call them. Thus, to say that a person has a flat-out belief with content p is to say that they have partial beliefs and desires such that they are disposed to act in a certain way – a way characterized by reference to the proposition p (examples will make this clear shortly). I shall refer to this as the behavioural view of flat-out belief. On this view, then, flat-out beliefs are not just a subclass of partial beliefs, as on the confidence view, but neither are they something over and above partial beliefs and desires. Rather, they are constituted by, or realized in, them. Talk of realization is, I think, appropriate here. A high-level behavioural disposition (say, the disposition to save money) exists in virtue of a set of underlying partial beliefs and desires which are, given a normal cognitive background, logically sufficient for it. (Most high-level dispositions will, of course, be multiply realizable, being supported by different partial beliefs and desires in different individuals or in the same individual at different times – think of the different reasons we may have for saving money.) Thus, on the behavioural view, flat-out beliefs are realized in sets of partial beliefs and desires – themselves, I have suggested, behavioural dispositions of a more basic kind. This view not only has an attractive economy, but also promises a neat solution to the Bayesian challenge. If flat-out beliefs are realized in partial beliefs and desires, then they will have an influence on action precisely equal to that of the partial states which realize them. There is thus no conflict between the efficacy of flat-out states and that of partial ones; rather, flat-out states are effective in virtue of the underlying partial ones.

It is important not to confuse the behavioural view just outlined with the view I called austere functionalism. Austere functionalism is a thesis about the nature of belief in general; it is the view that beliefs are multi-track behavioural dispositions, understood as thickly carved functional states. I suggested that we should take an austere view of partial belief. The behavioural view, on the other hand, is a thesis specifically about flat-out belief, and its relation to partial belief and desire. It is the view that flat-out beliefs are high-level behavioural dispositions, which arise from the agent’s partial beliefs and desires.
Another difference is that on some versions of the behavioural view, flat-out beliefs are dispositions of a highly specific nature (dispositions to perform actions of a particular type), whereas according to austere functionalism they are multi-track dispositions, which manifest themselves in quite different actions in different circumstances. It is also worth stressing that the behavioural view need not entail an austere view of flat-out belief. As we shall see in later chapters, it can support a rich view of it, which represents flat-out beliefs as capable of selective occurrent activation.

Given this general approach, what sort of high-level behavioural disposition might flat-out belief consist in? One option is to identify it with a disposition to act as if the proposition believed were true – that is, to prefer options that would be more attractive if it were true to ones that would be more attractive if it were false. Following Kaplan, I shall refer to this as the act view.16 This view is unattractive, however, as Kaplan points out (Kaplan 1996, pp. 104–6). For unless they are certain that p is true, a rational agent will be disposed to act as if p is true in some circumstances and disposed to act as if it is false in others – it will all depend on the options available. For example, consider two scenarios. In (1) you are offered a choice between the status quo and betting £1 on p for a return of £10; in (2) you are offered a choice between the status quo and betting £10 on p for a return of £1. And suppose that your confidence in p is just over 0.5. Then, if rational, you will be disposed to take the bet in scenario (1) – thereby acting as if p is true, but disposed to reject the bet in scenario (2) – thereby acting as if not-p is true. It follows that a defender of the act view must either insist that flat-out belief requires certainty or accept that flat-out belief can vary with context, and neither option is attractive. As we saw earlier, the claim that flat-out belief requires certainty is implausible. And flat-out belief, as commonly understood, is unqualified as to context as well as to attitude. If we believe a proposition, then we believe it in all contexts, not just when certain options are presented to us. (Assuming, that is, that we think of it under the same mode of presentation each time; it is possible to take different attitudes to the same proposition when it is presented to us in different linguistic guises. On the act view, however, a much more radical kind of context-dependency is possible; it is possible to believe a proposition in one deliberative context and disbelieve the same proposition, under the very same mode of presentation, in another.)
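The arithmetic behind the two scenarios is worth setting out (a worked illustration of my own, on one natural reading of the stakes, with confidence in p of 0.51). In scenario (1), the expected value of the bet is (0.51 × £10) − (0.49 × £1) = £5.10 − £0.49 = £4.61, which is positive, so the rational agent takes the bet and acts as if p. In scenario (2), it is (0.51 × £1) − (0.49 × £10) = £0.51 − £4.90 = −£4.39, which is negative, so the agent declines the bet and acts as if not-p. One and the same confidence assignment thus rationalizes acting as if p in one context and as if not-p in another – the context-dependence just described.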
16 As defenders of the act view, Kaplan cites Braithwaite (1932–3); Churchman (1956); Rudner (1953); and Teller (1980).
A more plausible option is to think of flat-out belief as a much more specific high-level disposition, linked to just one type of activity. In particular, it has been suggested that it consists in a disposition to certain kinds of linguistic activity. This view has its roots in an influential 1971 paper by Ronald de Sousa, and I shall begin by saying a little about this. De Sousa starts by characterizing a certain act which he calls assent.17 This is an act of sincere assertion, either overt or silent, directed upon a natural-language sentence. To assent to a sentence is to affirm an epistemic commitment to it – to bet on its truth, as it were. Assent is an intentional action, de Sousa says, determined by our subjective probabilities and desirabilities – specifically our epistemic ones. We have a lust for truth – a hankering for objects of unqualified epistemic virtue:

We would rather have a largish set of propositions labelled ‘true’ – even though their title to that label may not be impeccable – than a very small set labelled ‘absolutely certain’ and an immense, unwieldy set consisting of all other propositions with grades on them. (de Sousa 1971, p. 56)
To satisfy this epistemic lust, de Sousa explains, we are prepared to overlook the subtleties of confidence and make all-out bets on truth. If we are sufficiently confident of a proposition’s truth, then our epistemic lust will dispose us to assent to it, thereby showing that we include it in the stock of sentences we label ‘true’. De Sousa goes on to identify the disposition to assent with a state of flat-out belief – he calls it belief proper – which exists alongside our fluctuating subjective probabilities (de Sousa 1971, p. 64). Here, then, we have a version of the behavioural view which represents flat-out belief as a linguistic disposition grounded in our degrees of confidence and epistemic desirabilities. De Sousa does not develop the account in any detail, but Mark Kaplan and Patrick Maher have each produced a worked-out theory of flat-out belief along these lines and stressed its advantages over the confidence view. I shall focus on Kaplan’s version.18
17 There is in fact an ambiguity in de Sousa’s notion of assent, which I shall discuss later. The sense discussed in the text is the primary one.
18 Bas van Fraassen also links flat-out belief (‘acceptance’) in inquiry to assertion. Accepting a theory, he claims, involves a commitment to assert the theory ex cathedra (van Fraassen 1980).
Like de Sousa, Kaplan identifies flat-out belief with a disposition to sincere assertion – though he focuses on overt assertion in theoretical contexts rather than on candid covert assertion. He calls this the assertion view of belief, and his most recent formulation of it runs as follows:

You count as believing [i.e. believing flat-out] P just if, were your sole aim to assert the truth (as it pertains to P), and your only options were to assert that P, assert that ∼P or make neither assertion, you would prefer to assert that P. (Kaplan 1996, p. 109)
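Before looking at the details, it may help to see how such a preference could be grounded in one’s degrees of confidence. What follows is a toy expected-utility sketch of my own, not Kaplan’s formalism. Suppose that asserting a truth is worth +1, that asserting a falsehood costs k (a measure of one’s aversion to error), and that making no assertion is worth 0. Then one prefers asserting that P to silence just if prob(P) × 1 − (1 − prob(P)) × k > 0 – that is, just if prob(P) > k/(1 + k). The more error-averse one is, the higher the confidence required before assertion is preferred; weighting informative propositions more heavily would lower the bar for them.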
On this view, Kaplan points out, there is no simple relationship between flat-out belief and confidence. The truth can be thought of as a comprehensive error-free account of the world, and in deciding what to assert when one’s aim is to assert the truth one must strike a balance between the aims of attaining comprehensiveness and of avoiding error. The assertability of a proposition will thus be a function, not just of one’s confidence in its truth, but also of one’s estimate of its informativeness, together with the relative strength of one’s desires to shun error and attain comprehensiveness.19

This view avoids the pitfalls of the act view. Understood in this way, flat-out belief does not require certainty, nor is it context-dependent. Provided one has a constant disposition to assert that p on occasions when one wants to assert the truth, one counts as believing p flat-out, even if one does not always act as if it were true. The assertion view also has some clear advantages over the confidence view. First, it is not in tension with our common-sense commitment to conjunctive closure. The tension arose because confidence is not preserved over conjunction. But once we abandon the view that flat-out belief is a level of confidence, this no longer presents a problem. We can insist that we should be prepared to conjoin any pair of claims we believe, even if this means believing propositions to which we assign a relatively low probability – the increased risk of error in such a case being offset by the gain in comprehensiveness.20 Secondly, on the assertion view flat-out belief will make a qualitative difference to one’s behavioural dispositions.
19 Other factors, too, should play a role, Kaplan argues. In particular, we should aim to make the set of hypotheses we accept structurally sound – that is, we should ensure that each of its members remains credible, given the conjunction of the preceding ones. This claim is crucial to Kaplan’s solution of the lottery paradox: see Kaplan 1996, pp. 139–40.
20 This, in essence, is how Kaplan resolves the preface paradox. The conjunction of claims in the historian’s work constitutes a highly vulnerable, but also highly informative, historical theory. And although the historian thinks that this theory is unlikely to be error-free, they should continue to defend it in the interests of comprehensiveness.
This is definitional: flat-out belief is defined as a state which disposes one to certain acts of assertion. Flat-out belief will accordingly have some explanatory salience: there will be counterfactual-supporting generalizations linking flat-out belief with certain linguistic actions.

It may be objected that it is not facts about flat-out belief as such that are doing the explanatory work here. Our decisions about what to assert in the context of inquiry will be wholly determined by our partial beliefs and desires – by our assessments of the precise trade-off of probability and informativeness which candidate propositions afford. All the real causal work will be done at the level of partial belief and desire, and flat-out belief will be epiphenomenal. Kaplan does not address this objection, but there is a powerful response available to him. As I pointed out earlier, on a view of this kind, our flat-out beliefs are realized in our partial beliefs and desires: they consist in high-level dispositions which have their basis in our partial beliefs and desires, and causal generalizations defined over them hold in virtue of more fundamental ones defined over those states. There is thus no question as to whether the real causal work is being done by the flat-out states or the partial ones; rather, the latter implement the causal processes involving the former. On this view, then, flat-out belief will be no more causally idle than any other high-level state – chemical, biological, social, or whatever. (Note that defenders of the confidence view also hold that flat-out beliefs are realized in partial states – in threshold-passing degrees of confidence. However, they cannot maintain that psychological generalizations defined over flat-out beliefs hold in virtue of generalizations defined over their realizing states, since, on their view, there are no psychological generalizations defined over flat-out beliefs.)

This view of flat-out belief is, then, somewhat closer to the folk one than is the confidence view. And in treating flat-out belief as a disposition to assert, it at least partially vindicates the intuition that there is a constitutive link between this kind of belief and natural language. Nevertheless, the assertion view is not the folk one. For although it accords flat-out belief some psychological salience, it does not accord it the sort of salience we commonly take that state to have. For we think of flat-out belief as a state which enters into practical decision-making and which has an open-ended role in the guidance of action. Yet, as Kaplan himself emphasizes, on the assertion view flat-out belief manifests itself only in the context of theoretical inquiry and is linked to only one action – assertion.21
Another difference is that on the folk view, belief’s influence on action is desire-mediated – beliefs influence action only in combination with desires. Yet on the assertion view flat-out beliefs dispose us to act directly, without the involvement of desires. Indeed, the assertion view offers no account of what flat-out desires might be. In short, the assertion view characterizes an etiolated theoretical kind of belief, not the full-blooded action-guiding variety. I see no reason to deny that this kind of belief exists. We do sometimes give theoretical endorsement to a proposition without taking it to heart and acting upon it in everyday life. For example, we might accept, and sincerely assert, that certain foods are unhealthy, without this having much influence on what we eat. It is implausible, however, to think that all flat-out beliefs are like this – that we only form flat-out beliefs in the context of disinterested inquiry and that those we form in that context never influence how we behave outside it.

A final reason for rejecting the assertion view is that it appears to leave no scope for the active adoption of flat-out beliefs. On the assertion view, a decision to adopt a flat-out belief would be effective only if it brought about changes to the agent’s partial beliefs and desires such that they became rationally disposed to assert the target proposition in the context of inquiry. And it is hard to see how this could happen, or how it could be rational, even if possible. Why should a decision to adopt the belief that p make one more confident that p is true, or increase one’s estimate of p’s informativeness, or alter the relative strengths of one’s preferences for truth and informativeness in one’s account of the world? The assertion view, then, sits uncomfortably with the claim that flat-out beliefs can be actively adopted. This problem extends to other versions of the behavioural view: why should a decision to believe – presumably itself motivated by our partial beliefs and desires – bring about changes in our partial beliefs and desires? But if it does not, then, on the behavioural view, it will be ineffective in generating a new flat-out belief.22
21 Kaplan writes: ‘It is, I think, a mistake to suppose that we need recourse to talk about belief [i.e. flat-out belief] in order adequately to describe the doxastic input into rational decision making. Here Jeffrey is right. That task has been taken over, without residue, by our talk of confidence rankings.’ (Kaplan 1996, p. 107)
22 See also Maher 1993, pp. 149–52. Although both Kaplan and Maher propose decision-theoretic accounts of rational flat-out belief, neither claims that flat-out belief is directly subject to the will. Their theories are intended as normative ones, not as descriptions of actual psychological processes. Of course, it is compatible with this that we can indirectly influence what flat-out beliefs we form – for instance, by reflecting on the reasons we have for thinking something true. Indeed, a normative theory would be useless to us unless this were the case (see Maher 1993, p. 148).
2.4 Where next?

The views examined above represent the main lines of thinking about flat-out belief among Bayesian or Bayesian-leaning writers. Some promising suggestions have emerged. The behavioural view, exemplified by the act and assertion views, offers an attractive response to the Bayesian challenge. And in treating flat-out belief as a linguistic disposition, the assertion view goes some way towards explaining how this state could constitutively involve natural language (Fodor’s challenge). However, none of the views captures our common-sense conception of flat-out belief. The most plausible of them characterizes only an etiolated theoretical form of belief and appears incompatible with the possibility of active belief adoption. I think that it is possible to produce a more satisfactory account – a version of the behavioural view which accords flat-out belief a more substantial cognitive role and which does allow for active belief adoption. This account will take some time to articulate and defend, and I shall come at it indirectly, by looking at some further precedents for a two-strand theory, produced by writers who are not committed Bayesians.

3 Opinion and the Joycean machine

In chapter 1 I mentioned Daniel Dennett’s distinction between belief and opinion as one of the inspirations for a two-strand theory of belief (Dennett 1978a, ch. 16, 1991d). More recently, Dennett has also outlined a two-level theory of the human mind, which pictures the conscious mind as a language-driven ‘virtual machine’ (Dennett 1991a). In this section I shall briefly consider both of these accounts and assess their relevance to the present project.

3.1 Dennett’s opinions

Dennett is well known for defending an austere view of belief, similar to the view of strand 1 belief outlined in the previous chapter – the only important difference being that Dennett does not commit himself to the claim that belief is graded. Belief in this sense is a very basic mental state, common to humans, animals, and even mechanical systems.
However, Dennett has also argued for the existence of another type of belief-like state which is differently constituted and possessed only by language-using creatures. He calls this state opinion and frequently stresses its importance. Dennett’s account of opinion has much in common with Kaplan’s account of flat-out belief, and it, too, is inspired by de Sousa’s claims about assent and ‘betting on sentences’. Dennett develops these ideas in a rather different way, however. Whereas de Sousa thinks of assent as an act of candid assertion which manifests a pre-existing state of belief proper, Dennett identifies it with an active judgement which initiates a state of opinion.23 In developing this view he draws on Baier’s work on change of mind, discussed in the previous chapter. As we saw, Baier suggests that change of mind is a distinct and sophisticated form of cognitive change, which involves the exercise of reflection and judgement. Dennett endorses this claim, suggesting that changing or making up one’s mind is an active process, involving a distinct ‘motion of mind’. This is easiest to see, he claims, in practical cases – for example, in making up one’s mind to buy something. Suppose I want a boat of a certain kind. It does not follow that, if presented with a boat of this kind, I shall immediately set out to acquire it. Something more is required. I must make up my mind to have it – must move beyond the generic desire for a boat of the kind in question and make a deliberate commitment to acquire the presented one. A commitment like this, Dennett points out, is distinct from, and can outlast, the desire which prompted it:

Once I have opted . . . I may get cold feet about the whole deal. Desire may drain out of me. Having made up my mind, not to desire the boat but to buy it, I may begin having second thoughts. (Dennett 1978a, p. 302)
23 Dennett is in fact picking up on an ambiguity in de Sousa’s paper. De Sousa defines assent as sincere assertion, or a mental abstraction from such assertion, and identifies belief proper with a disposition to assent (1971, pp. 59, 64). But in places he also writes as if assent is a form of judgement which can initiate a state of belief proper. Thus he writes, expressing both views: ‘Assent to p by X either serves to incorporate p, or shows that p is already included in the set of sentences taken by X to be true. Such inclusion constitutes the necessary and sufficient condition for X to believe that p [i.e. to believe proper that p].’ (de Sousa 1971, p. 59, italics in the original) There is clearly some ambiguity here: it is hard to see how assent, understood as sincere assertion, could initiate a state of belief proper, understood as a disposition to assert. (Indeed, on one reading of the quoted passage, de Sousa is claiming that assent is necessary for belief proper, in a way that assertion is plainly not necessary for a disposition to assert.)
But such second thoughts, he insists, do not constitute a change of mind. That would involve breaking off the deal and reneging on one’s commitment to buy. Cognitive changes of mind, Dennett suggests, are similar. As language users, we have a desire to categorize sentences as ‘true’ or ‘false’. Making up one’s mind about a matter of fact involves going beyond this generic desire and making a commitment to (‘betting on’) the truth of a particular presented sentence. The state resulting from such an act of epistemic commitment is what Dennett calls an opinion. Opinions, he claims, are not beliefs, but states of commitment, and, like other commitments, they can outlast the beliefs and desires that originally prompted their formation – with the result that we judge one way while believing and acting another. (Dennett suggests that something of this sort happens when we fall into akrasia or self-deception: 1978a, p. 307.) Although Dennett emphasizes the active dimension of opinion formation, he is careful to add that not all opinions are the products of active decision. We can also collect sentences unthinkingly, he notes – often because we see them as sure bets (pp. 307–8).

From our perspective this account has some attractions. As I mentioned, Dennett’s conception of belief is very like that of strand 1 belief (and if he were to make the relatively minor modification of treating belief as graded, the similarity would be closer still). And Dennett’s opinions have strong similarities to strand 2 beliefs – being flat-out, active, language-involving, and with a special link to occurrent thought (Dennett claims that at least some episodes of ‘occurrent belief’ are in fact acts of opinion formation). Moreover, Dennett’s view of opinion can be thought of as a broadly behavioural one, in the sense defined earlier. Dennett does not make the point himself, but it is implicit in his account. To form an opinion is to make a behavioural commitment – to bet on a sentence (I shall say more about the nature and extent of this commitment shortly). Now, a behavioural commitment can be thought of as a kind of disposition – a disposition with a particular sort of basis. If one is committed to A-ing, then one will be disposed to A precisely because one believes oneself to be committed to A-ing and desires to honour this commitment – or, if belief and desire are graded, because one attaches a high probability to the proposition that one is committed to A-ing and a high desirability to honouring the commitment.24 And, as before, we can think of the commitment as realized in those states. (In this case there is less scope for multiple realization since the cited beliefs and desires are necessary for the existence of the commitment – though if the realizing states are graded, they may vary in degree from case to case.)
24 There is, it is true, a sense in which commitment need not involve the attitudes mentioned. If I have publicly committed myself to A-ing, then I remain publicly committed to it, even if I subsequently forget all about it. In this sense commitment is an historical property. What I am interested in here, however, is what we may call effective commitments – commitments which are psychologically real and guide behaviour. Such commitments do require the attitudes described.
So opinions are high-level behavioural dispositions, realized in the agent’s beliefs and desires, in line with the behavioural view. As I suggested earlier, this is an attractive position for a two-strand theorist – offering an economical account of the relation between the two strands and promising a solution to the Bayesian challenge. Moreover, unlike some other versions of the behavioural view, this one allows for the active adoption of states at the higher level (opinions, in this case). The problem for other versions, I pointed out, was that a decision to adopt a belief would not bring about the necessary changes at the lower, realizing, level – changes, for example, to one’s estimate of the target proposition’s probability or informativeness, or to one’s epistemic desirabilities. But there is no similar objection to active opinion adoption, understood as involving a form of commitment. Assuming that I have a general desire to discharge my epistemic commitments, all that will be required, in order for me to possess the opinion that p, is that I should come to believe (or to be highly confident) that I have an epistemic commitment to p. And I shall form that belief automatically as soon as I make the commitment. The suggestion that higher-level states are behavioural commitments thus allows us to reconcile the behavioural view with the possibility of active belief adoption. I shall develop this idea at greater length in the next chapter.

It is true that there is still Williams’s challenge to address: what is there to stop us from forming opinions for pragmatic reasons and regardless of their truth? Dennett does not consider this – and, as we shall see in a later chapter, the issue is in fact rather complex – but there is a plausible answer available to him. He might point out that forming an opinion involves betting on truth – categorizing a sentence as true. And while we might feign to do this for pragmatic purposes, the result would not be a real opinion, but a fake one. Genuine opinion formation, by contrast, will be driven by epistemic considerations. (Alternatively, Dennett might simply deny the need to address the challenge – pointing out that it is a challenge to the possibility of active belief formation and that opinions are not beliefs.)
Dennett’s account has some attractive features, then. Still, his opinions are not our strand 2 beliefs. For, like Kaplan’s flat-out beliefs, they have a narrowly theoretical role. They are commitments to assert, not commitments to think and act.25 Thus Dennett identifies opinion with the sort of intellectual assent that often lacks real conviction, and writes as if the sentences on which people bet will typically be ones with abstruse or highly theoretical contents. (See, for example, Dennett 1978a, p. 306, 1981b, pp. 73–4n, 1987, p. 207. Dennett’s favourite example of an opinion is a bit of arcane information he once picked up from a play.) Moreover, Dennett insists that the relation between opinion and action is at best indirect:

It is my beliefs and desires that predict my behaviour directly. My opinions can be relied on to predict my behaviour only to the degree, normally large, that my opinions and beliefs are in rational correspondence, i.e., roughly as Bayes would have them. (1978a, pp. 306–7)
His reasons for saying this appear to stem from his commitment to an austere view of belief:

One’s behavior is consonant with one’s beliefs ‘automatically’, for that is how in the end we individuate beliefs and actions. (1978a, p. 307)
This position, however, sits uncomfortably with some of Dennett’s other claims about opinion. He frequently stresses the psychological importance of the state and claims that many conscious mental events involve the formation or recollection of opinions – including occurrent belief and changes and makings up of mind. And, given this, it is odd for him to suggest that opinion has a narrowly theoretical role. For we can make up our minds and entertain occurrent beliefs about all manner of mundane matters – whether beef is safe to eat, say, or the salesman trustworthy, or the weather threatening enough to justify taking an umbrella. And such episodes seem to have significant behavioural consequences. If I make up my mind that eating beef is unsafe, then it is natural to think that this will affect, not only what I say, but what I eat. Dennett might defuse the tension here by denying that conscious mental events like these have a direct influence upon action. There are passages in his writing which suggest such a view (see Dennett 1969, ch. 8, 1978b, 1982, and the discussion of his 1991a in the section below); but it is, I think, an implausible one.
25 Sometimes Dennett even identifies opinions with meta-linguistic beliefs – beliefs to the effect that this or that sentence is true (see, for example, his 1991d, p. 143). This view is at odds with his claim that opinion is a distinct psychological attitude.
(Why, then, does Dennett lay such stress on the importance of opinion? The answer, I think, has less to do with its intrinsic importance than with the misconceptions to which he believes it gives rise. It is a recurrent theme in Dennett’s writing that it is through conflating belief with opinion that philosophers have been led to various erroneous views about belief – among them, that beliefs are linguistically structured items with fine-grained contents and that it is possible to have a massively false set of beliefs (see the references to ‘opinion’ in his 1987 and 1994). In each case, he suggests, it is the opinion system, not the belief system, that possesses the properties in question. So, for example, it is possible to have a massively false set of opinions just because opinions have no direct relation to behaviour. In short, Dennett exploits his theory of opinion in order to defuse intuitions that conflict with his view of belief – which he regards as the really important cognitive state.26 These points are well-taken, but I suspect that the tactical usefulness of opinion theory blinds Dennett to the undeveloped potential in the theory itself.)

There is another worry I want to mention before moving on. It concerns the role of language. Dennett’s talk of ‘betting on sentences’ (which he borrows from de Sousa) suggests that making up one’s mind involves adopting an attitude to a particular natural-language sentence. But this is very implausible. I may remember having made up my mind about some matter without being able to remember precisely the words I used to frame the thought. If I were bilingual, I might even forget which language I had used. In these cases, it seems, what I bet on is not so much a sentence as a proposition. I shall say more about the cognitive role of language in the next chapter.

3.2 Dennett on the Joycean machine

Dennett continues to invoke his account of opinion. In more recent work, however, he has sketched a different, and more ambitious, two-strand theory of mind, developed as part of a theory of consciousness. In his earlier writings on consciousness, Dennett identifies conscious mental states with those that are available to verbal report – a view which suggests that consciousness is simply an access mechanism (Dennett 1978b).
26 Clark offers a similar analysis of Dennett’s claims about opinion (Clark 1990a). See also my 1998a.
In later work, however, he claims that the conscious mind forms a distinct level of mental activity, which is constituted differently from the non-conscious mind and operates on different principles (Dennett 1991a). According to this view, the biological brain is a collection of specialized but unintelligent hardwired subsystems, operating in parallel and competing for control of motor systems. The conscious mind, by contrast, is a softwired, or virtual, system, which we create for ourselves by engaging in various learned behaviours – tricks and habits which effectively reprogram our biological brains. The most important of these tricks, according to Dennett, is that of private speech – talking to ourselves in overt or silent soliloquy. Private speech, Dennett claims, has a self-stimulatory effect: self-generated sentences are processed in the same way as externally produced ones and tend to evoke similar behavioural responses – often with useful results. There are various aspects to this. Asking yourself a question can prompt an instinctive verbal reply containing information which you would otherwise have been unable to access; reminding yourself of the benefits of unpleasant tasks can strengthen your resolve; commenting on your actions can make it easier to recall and evaluate the strategies you have used; repeating encouraging phrases to yourself can help to keep your spirits up – and so on (Dennett 1991a, pp. 194–7, 277–8, 301–2). Once the trick of private speech had been discovered, Dennett suggests, it would quickly have been refined (principally by suppression of the redundant act of vocalization), and a disposition to master it might have been coded into the human genome. As a result, he claims, we have become disposed to develop habits of regular private speech, thereby artificially creating an extra level of cognitive activity, which is both serial and language-involving. Dennett dubs this softwired system the Joycean machine27 and suggests that it performs important executive functions – helping to focus the resources of disparate neural subsystems and to promote sustained and coherent patterns of behaviour (1991a, pp. 228, 277).28

What Dennett is proposing, then, is a two-strand theory of mind, according to which an austerely characterized non-conscious level coexists with a conscious level consisting of serial self-stimulatory acts of inner speech.
27 Referring, of course, to James Joyce and his 1922 novel Ulysses, one of the most famous attempts to record the contents of the human stream of consciousness.
28 There is an interesting anticipation of Dennett’s claims about the Joycean machine in B. F. Skinner’s account of thinking as automatically reinforcing verbal behaviour (Skinner 1957, ch. 19, pp. 432–52).
Like his theory of opinion, this account has attractions from our perspective. In particular, it does justice to the idea that there is a level of cognition that involves the tokening of conscious occurrent thoughts. (By contrast, the two-strand theories we looked at earlier were concerned primarily with standing-state belief, and said little about occurrent thought.) And it offers an attractive view of the relation between the conscious and non-conscious minds. The former is pictured as a virtual system, which is, in effect, the product of its non-conscious counterpart – constituted by various linguistic activities generated at the lower level. (I assume that inner speech qualifies as intentional, in Dennett’s austere sense, just as overt speech does.) We can think of this position as an analogue of the behavioural view. The behavioural view, as developed by Bayesian theorists, identified standing-state flat-out beliefs with high-level behavioural dispositions, realized in lower-level mental states. Dennett can be thought of as making a related claim about conscious occurrent beliefs – identifying them with behavioural episodes, motivated by lower-level mental states. Dennett’s account has other attractive features, too. Since, on his view, conscious thoughts influence behaviour only indirectly, by their effect on non-conscious processes, there is no need to appeal to them directly in the explanation of action, and thus no conflict with an austere view of the non-conscious mind. The account also answers Fodor’s challenge: language acquires a cognitive role through the development of habits of verbal self-stimulation, which exploit response patterns acquired in the course of linguistic interaction with others.

Dennett’s account is plausible, then, and I do not want to deny either the existence or the importance of the processes he describes. The Joycean model is, I think, an important contribution to our understanding of the conscious mind, and the account of the strand 2 mind which I shall develop will have some similarities to it. But I do want to maintain that it is not the whole story. I have three reasons for saying this. First, the Joycean model lacks the resources to explain conscious reasoning. Joycean processes are associative, not computational, and seem more likely to generate trains of fantasy, reminiscence, and idle speculation than sequences of cogent argument. (The adjective ‘Joycean’ is thus highly appropriate for the system Dennett describes – Joyce’s hero Leopold Bloom being particularly given to flights of fancy and nostalgic musings.) And even if self-stimulation did occasionally throw up sequences of rational argument, it would be by chance rather than design. It is not clear how we could deliberately constrain the Joycean machine to produce sequences of rational thought.
Yet we can constrain our conscious minds to engage in rational thought: we can deliberately set ourselves to think through a problem. So self-stimulation cannot be the whole picture.

Secondly, the Joycean model offers an implausible picture of the relation between conscious thought and action. Our conscious thoughts seem to exert a direct and reliable influence on our behaviour. If I consciously decide to perform some immediate action, then this is normally sufficient to produce the action. On Dennett’s account, however, conscious thoughts are acts of self-instruction or self-encouragement, and their success in guiding action depends on our reacting to them in appropriate ways at a non-conscious level. So, for example, thinking to myself that beef is unsafe to eat will not directly move me to avoid eating beef, though it may help to cajole my non-conscious mental processes into seeing that I decline the steak tartar. And unless one is preternaturally suggestible, it is very unlikely that mechanisms like this will secure conscious thought a reliable role in the control of action.

Thirdly, the Joycean model of the conscious mind does not provide any account of conscious standing-state belief.29 Many conscious occurrent thoughts seem to involve the activation of previously formed standing-state beliefs or the formation of new ones, through acts of judgement or making up of mind. Yet on the Joycean model, such thoughts are just one-off self-stimulations, which either evoke some immediate response from low-level systems, or else fade unheeded. Dennett might respond by appealing to his earlier work, identifying conscious standing-state beliefs with opinions and the associated conscious thoughts with episodes in which opinions are recollected or formed. There are some attractions in this view. As we have seen, Dennett adopts a broadly behavioural view of both conscious occurrent thought and opinion – treating the former as an action and the latter as a standing commitment to action. And, given this, it is tempting to integrate the two accounts in the way suggested. It is unclear, however, that an integrated account of this kind is compatible with Dennett’s picture of the Joycean machine. What is the function of those conscious thoughts that are recollections of opinions? Are they self-stimulations? (And if so, does that mean that opinion formation involves a commitment to self-stimulation?) Or do they have some other function? And, as I argued earlier, it is implausible to identify makings up of mind with acts of opinion formation, at least if opinions have a narrowly theoretical role.30
29 As I explained in chapter 2, a conscious standing-state belief is one that is apt to be activated as a conscious occurrent thought. When not activated, such states are, of course, not objects of consciousness.
Dennett gives us part of the story about conscious thought, then, but omits some crucial elements.

I have argued that Dennett’s accounts of opinion and conscious thought are flawed. They have, however, thrown up two important ideas: first, that flat-out belief involves a behavioural commitment, and secondly, that conscious occurrent beliefs are intentional actions, motivated by lower-level mental processes. From our perspective, both ideas are attractive, showing how the behavioural view can allow for active belief adoption and how it can be extended to encompass occurrent thought. In the following chapters I shall build on these ideas, constructing an integrated behavioural view of conscious belief, which treats its standing-state form as a behavioural commitment and its occurrent form as an action which either initiates or partially discharges this commitment. First, however, I want to look at a final precedent, which introduces what will be a key element in the developed picture.

4 Acceptance

In recent years a number of writers have drawn a distinction between belief and acceptance. This has links with the Bayesian distinction between partial and flat-out belief (the latter, too, is often referred to as ‘acceptance’), but there are significant differences – indeed, writers on acceptance typically insist that it is not a form of belief at all, properly speaking. There are a number of independent versions of the belief/acceptance distinction, each addressing different concerns and fostering different conceptions of the two states (for a survey of some of them, see Engel 2000b). However, I want to focus on a particular cluster which draw the distinction in a broadly similar way. According to these writers, belief is a passive state, which is typically formed in response to evidence and other truth-relevant factors. Acceptance, on the other hand, is a deliberative strategy or ‘methodological posture’ (Stalnaker’s phrase), which can be actively adopted in response to pragmatic considerations. To accept a proposition in this sense is to decide to treat it as true – to take it as a premise – for the purposes of certain kinds of reasoning and decision-making.
30 Dennett makes no explicit link between opinion and the Joycean machine (‘opinion’ occurs only once in his 1991a, in a footnote). He does, however, write in much the same terms of both. In particular, he suggests that both opinions and the Joycean machine are the products of memes – culturally transmitted ideas and artefacts which have colonized and reprogrammed the biological human brain (1991a, pp. 209–26, 1993, pp. 229–30).
For example, I might decide that, for the purposes of deciding what food to buy, I shall take it as a premise that beef is unsafe to eat, even if I am not completely convinced that it is. Acceptance of this kind is thus a deliberative attitude, and one that figures in practical reasoning as well as theoretical inquiry. I shall refer to this as the premising conception of acceptance. A number of writers, among them Stalnaker, Bratman, Cohen, and Engel, have developed accounts of acceptance along these lines (Bratman 1992; Cohen 1989, 1992; Engel 1998; Stalnaker 1984; see also the essays in Engel 2000a). I shall focus on Cohen’s, which is the most substantial and influential.31

4.1 Cohen on belief and acceptance

Belief, according to Cohen, is a disposition to entertain ‘credal feelings’. To believe that p is to be disposed to feel it true that p when you consider the matter (Cohen 1992, p. 5). Like other affective dispositions, belief is involuntary and varies in intensity (pp. 6, 11, 26). Acceptance, on the other hand, has nothing to do with feeling. To accept that p is, in Cohen’s words,

to have or adopt a policy of deeming, positing, or postulating that p – i.e. of including that proposition or rule among one’s premisses for deciding what to do or think in a particular context, whether or not one feels it to be true that p. (1992, p. 4)
Acceptance, Cohen explains, differs from supposition, assumption, or pretence. Those are temporary states, which may be entered into on a casual or ad hoc basis. Acceptance, on the other hand, involves a serious commitment to a premising policy and typically defers to some authority or source of data (1992, p. 13).
31 Edna Ullmann-Margalit and Avishai Margalit have also outlined a view similar to this – though they use the term ‘holding as true’ instead of ‘acceptance’. They also draw useful distinctions between this attitude and other, closely related ones, which they label ‘holding true’, ‘holding true come what may’, and ‘holding fast’ (Ullmann-Margalit and Margalit 1992). Of the other uses of the term ‘acceptance’, perhaps the best known is that of John Perry, who employs it to mark a semantic distinction (Perry 1993, ch. 3). Acceptance in Perry’s sense is not a kind of belief state, but a component of one. The content of a belief, according to Perry, is a worldly state of affairs – a conjunction of objects and properties. The acceptance component of a belief is the way the subject thinks of this content (‘the contribution the subject’s mind makes to belief’). Perry claims that this component can be characterized by reference to the sentences a competent speaker would use to express the belief. Though independently useful, Perry’s distinction cuts across that between strand 1 and strand 2 belief. (As Perry points out, animals and pre-verbal children may be said to accept sentences in his sense.)
Cohen does not say exactly what such premising policies involve or how they are executed. The issue is, in fact, rather complex, and I shall return to it at length in the next chapter. The general idea, however, is clear enough: accepting a proposition involves committing oneself to taking it as an explicit premise in one’s conscious reasoning, both theoretical and practical.32 We are able to make such commitments, Cohen implies, because our conscious reasoning is, to some extent, under our personal control; acceptance-based mental processes are, he says, ‘voluntarily guided by consciously accepted rules’ (1992, p. 56).

Cohen identifies a number of properties of acceptance which distinguish it from belief. I shall highlight those that are most important for us.33 First, acceptance is voluntary:

Acceptance . . . occurs at will, whether by an immediate decision or through a gradually formed intention. This is because at bottom it executes a choice – the accepter’s choice of which propositions to take as his premisses. (Cohen 1992, p. 22)
(Cohen uses the term ‘acceptance’ both for the act of adopting a premising policy and for the state introduced by such an act.) We enter an acceptance state by resolving upon a premising policy, and we know what we accept because we know what resolutions we have made. Because acceptance formation is active, Cohen claims, we are responsible for our acceptances in a way that we are not for our beliefs (1992, p. 23). Secondly, acceptance can be pragmatically motivated. We can resolve to treat a proposition as true for a variety of reasons – not only evidential, but also ethical, professional, religious, aesthetic, and so on (pp. 12, 20). For example, professional ethics may oblige a lawyer to accept that their client is innocent, even if they do not believe it. Thirdly, acceptance may be context-relative: we can accept a proposition in one context while suspending judgement or rejecting it in another (1992, p. 13). Thus, the lawyer may accept their client’s innocence when defending them in court, but not when dealing with them in private – say, in deciding whether or not to trust them with personal information. The context-relativity of acceptance follows from its responsiveness to prudential reasons, since a reason for accepting a proposition may have force only within a certain context.34
33
For the claim that acceptance-based mental processes are conscious and serial, see Cohen 1992, pp. 14, 20–7, and 56. For discussion of the role of acceptance in practical reasoning, see Cohen 1992, ch. 2. With some qualifications, the first four of these claims are endorsed by Stalnaker, Engel, and Bratman, too, and thus represent a local consensus about this form of acceptance. The fifth claim, concerning the role of language, is specific to Cohen.
The context-relativity of acceptance follows from its responsiveness to prudential reasons, since a reason for accepting a proposition may have force only within a certain context.[34]

Fourthly, acceptance is a flat-out state. For any proposition, p, and any deliberative context, C, one either has or has not adopted a policy of premising that p in C (1992, p. 115). (There is, it is true, one sense in which acceptance is graded. I assume that we can have varying degrees of attachment to our premising policies – finding some easier to give up than others. But so long as we hold on to a given set of premising policies, our commitment to each of them will be the same.)

Finally, acceptance is language-involving. Cohen insists that, although our acceptances need not be overtly vocalized, they must be linguistically formulated, since accepting a proposition involves feeding it to inferential processes which operate by linguistic transformation (p. 12). Creatures without language are therefore incapable of acceptance.

Cohen also identifies a parallel actively formed conative state – goal adoption – which shares many of the properties of acceptance. To adopt a certain state of affairs as a goal is to commit oneself to a policy of striving to bring it about (1992, pp. 44–5).
[34] David Clarke challenges the claim that acceptance can be context-relative (Clarke 1994). To accept something, Clarke claims, is to give internalized assent to it. Since assent entails belief, and since belief cannot be context-relative, it follows that acceptance cannot be context-relative either. Cases which suggest otherwise are, Clarke claims, misdescribed. He cites an example used by Bratman. Suppose that I am deciding whether or not to embark on a building project, given two estimates of its cost. In this context, Bratman argues, it might be prudent of me to accept that the higher estimate is correct, even if I believe that it is exaggerated and would not accept it in other contexts. For the error of proceeding with the project when I cannot afford to complete it would be much more costly than that of delaying when in fact I can (Bratman 1992). Clarke objects that what I accept in such a case is not that the higher estimate is correct, but that there is a certain, low, probability that it is – a proposition which I believe is true and accept in all contexts. Given the relative cost of errors, Clarke points out, this low probability may nonetheless assume considerable weight in my decision-making, leading me to act as if I accepted the higher estimate (Clarke 1994). I find this objection unconvincing. For as far as conscious acceptance goes, Bratman and Cohen seem to be right. In order to keep things simple, we do often accept claims which we think unlikely to be true – it is just so much easier than trying to make explicit judgements of probability. Clarke must say that in such cases we are mistaken about what is really happening. It may seem to me that I am accepting the higher estimate as true, but what I am really doing is accepting some assessment of the probability of its being true. Presumably this is done at a non-conscious level. But this threatens to render conscious acceptance redundant: if the operative acceptance states are non-conscious ones, then conscious acceptance will be idle. This consequence is, I think, unattractive. For further discussion of the possibility of acceptance without belief, see Clarke 2000 and Cohen 2000.
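To make the cost asymmetry in Bratman's example vivid, here is a worked version with invented figures (they appear nowhere in Bratman or Clarke): suppose the two estimates are £20,000 and £30,000, and that I have £25,000 available. If I premise the lower estimate and it proves wrong, I am left with a half-built project I cannot complete; if I premise the higher estimate and it proves wrong, I have merely delayed a project I could in fact have afforded. Since the first error is far more costly than the second, taking the higher estimate as my premise is the prudent policy, even if I think it improbable that the higher estimate is correct.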
Cohen's distinction between belief and acceptance has obvious similarities with that between strand 1 and strand 2 belief. However, Cohen does not develop the account in a way that meshes with the two-strand framework I have proposed. Although he has a lot more to say about belief and acceptance – about their roles in purposive explanation, speech, inquiry, self-deception, and akrasia – much of it is dependent on his somewhat idiosyncratic view of belief as a matter of credal feeling, and so has little bearing on our concerns. (Thus, for example, Cohen does not consider the Bayesian challenge, since he does not think of beliefs as subjective probabilities.) Moreover, in discussing acceptance, he does not address constitutive questions of the sort in which we are interested. He does not provide any detailed account of what premising involves or how premising policies are constituted, and though he insists that acceptance is voluntary, he does not explain what its voluntariness consists in. (He does not, for example, suggest that acts of acceptance are motivated by the agent's beliefs and desires.) Nor does he give a clear account of the role of acceptance in action. He holds that we act upon our acceptances (1992, pp. 48, 62), but does not explain what this involves, and insists that acceptance, unlike belief, does not have a causal influence upon action (p. 64). (Oddly, however, given his stress on the differences between belief and acceptance and between the correlate states of desire and goal adoption, Cohen allows that some actions have hybrid explanations, citing pairs of beliefs and goals or of acceptances and desires; p. 48.)

However, we do not have to be limited by Cohen's perspective here. His core account of acceptance offers, I think, a very promising model for strand 2 belief, and I propose to detach it from its context and to develop it in the light of ideas introduced earlier – in particular, the behavioural view. Working this out will take up most of the next two chapters, but I shall give a brief indication of the proposed line of development here.

4.2 Strand 2 belief and acceptance

Acceptance in Cohen's sense has many similarities with strand 2 belief: both are flat-out, active, language-involving states which feed into conscious, explicit, active, language-driven reasoning. Acceptance also seems likely to have the sort of cognitive importance we associate with strand 2 belief. If accepting a proposition involves committing oneself to using it as a premise in reasoning and decision-making, then acceptance will be highly salient in the explanation of action and inference.

Moreover, although Cohen does not present it in this way, it is very natural to adopt a behavioural view of acceptance – to regard it as a high-level behavioural disposition, realized in lower-level intentional states and actions. Policies are behavioural commitments, and behavioural commitments, as I noted earlier, are a specialized sort of high-level disposition. Furthermore, executing a policy involves performing various intentional actions, motivated in part by the beliefs and desires that realize the policy. (I shall expand on this characterization in the next chapter.) Thus if we think of premising policies as standing-state acceptances, and the mental actions involved in forming and executing them as occurrent acceptances, then we have an integrated behavioural view of acceptance, which represents the occurrent form of the state as either initiating or activating the standing-state form. The picture that emerges, then, is of the acceptance system as a virtual structure – a 'premising machine', we might say – realized in lower-level intentional states and actions.[35]

[35] The term 'premising machine' is modelled on Dennett's 'Joycean machine' and is used to highlight the similarity between the acceptance system and the virtual machines created by software engineers – word processors or spreadsheets, for example. Just as a virtual machine is created by programming a flexible computational system to display an appropriate high-level functional profile, so a premising machine is created by deploying one's lower-level mental resources to the task of forming and executing premising policies. The term should not be taken to imply automaticity or inflexibility.

Given this, it becomes very attractive to regard strand 2 beliefs as acceptances, and to think of the strand 2 mind as a premising machine, realized in strand 1 states and actions. As I suggested, this view offers an attractive account of the relation between the two strands of belief and promises a solution to the Bayesian challenge. Thinking of strand 2 beliefs as premising policies has other attractions, too. If the actions involved in executing a premising policy are linguistic ones, as Cohen claims, then the proposal yields a solution to Fodor's challenge. This proposal would also explain how strand 2 beliefs can be implicitly involved in explicit reasoning, in the form of suppressed premises and background assumptions. For there are two ways of treating a proposition as true – one by employing it as an explicit premise in one's reasoning, the other by treating it as part of the cognitive background against which explicit reasoning takes place. Cohen focuses on the former activity, but other writers on acceptance stress the latter, too (see Bratman 1992). A final attraction of the proposal is that, unlike others we have considered, it offers a model for strand 2 desire, as
well as for strand 2 belief. Strand 2 desires can be thought of as policies of goal adoption, along the lines Cohen suggests.

Despite all this, however, we cannot simply identify strand 2 belief with acceptance. Indeed, writers on acceptance typically deny that it is a form of belief and stress the differences between it and belief (see, for example, Bratman 1992; Cohen 1992; Engel 1998). Acceptance, they point out, is active, flat-out, responsive to prudential considerations, and context-relative, whereas belief is passive, graded, truth-directed, and context-independent. Now, from our perspective, some of the contrasts here are spurious, reflecting a failure to distinguish the two kinds of belief identified earlier. Thus, the fact that acceptance is active and flat-out distinguishes it from strand 1 belief, but not from strand 2. Other contrasts remain, however. For one thing, belief – whether strand 1 or strand 2 – does not seem to be responsive to prudential considerations in the way that acceptance is. We can accept something because our job requires us to do so, but we cannot believe it simply on that basis. (This is, of course, the basis of Williams's challenge.)[36] And belief, unlike acceptance, does not seem to be context-relative. If I believe something, then I believe it in all contexts, not just when I am engaged in certain kinds of activity.[37]

[36] It is true that prudential considerations can move me to take indirect steps to induce a desired belief in myself – for example, by commissioning a hypnotist to work on me or by practising 'positive thinking'. But they cannot – it is claimed – directly motivate belief formation in the way that they can directly motivate action.

[37] Another contrast sometimes mentioned is that belief, unlike acceptance, is subject to an ideal of agglomeration. Ideally, my beliefs should be mutually consistent, and if I notice a conflict among my beliefs, then this is a cause for concern. Acceptances, by contrast, are not subject to this ideal, since an agent may have sound prudential reasons for accepting incompatible propositions in different contexts.

These considerations, then, suggest that acceptance is not a form of belief at all, but a wholly different attitude. Perhaps the intuitions marshalled in the previous chapter, apparently indicating the existence of an actively formed, flat-out, language-involving form of belief, really point to the existence of acceptance in Cohen's sense, and perhaps, with that notion now in place, we can forget about strand 2 belief? I do not think so. I agree that Cohen's form of acceptance exists, and that some of our conscious explicit reasoning involves acceptances. But this just reinforces the case for recognizing the existence of strand 2 belief. For while it is plausible to think that some of the states involved in
conscious explicit reasoning are not beliefs, it would be perverse to claim that none of them are. Indeed, some would say that conscious occurrent beliefs are paradigm beliefs. Acceptance theory must therefore be supplemented with a theory of flat-out belief – as some writers on the subject recognize (see Bratman 1992, pp. 2–3). This, in turn, suggests that there may not be such a gulf between acceptance and belief after all. For, as far as the phenomenology goes, acceptance-based conscious reasoning seems to involve the same general kind of states and processes as its belief-based counterpart. When we switch from one to the other, we do not seem to be engaging radically different cognitive mechanisms.

How, then, should we think of the relation between acceptance and strand 2 belief? It is certainly true that we cannot identify the two kinds of state. There are some clear-cut instances of acceptance which are not instances of the folk category of belief – or, indeed, of any plausible theoretical reconstruction of it. We would not say that a lawyer believed that their client was innocent if they merely accepted it for professional reasons. However, this leaves open the possibility that strand 2 belief may be a subspecies of acceptance. For example, we might identify strand 2 beliefs with those token acceptances that are motivated by global epistemic concerns rather than local prudential ones. This proposal would also remove the objection to regarding strand 2 belief as actively formed. For it would follow that, although they could be actively formed, strand 2 beliefs could not – as a matter of definition – be prudentially motivated or context-relative. In fact, the issues here are more complex than this, and I shall argue that, while strand 2 beliefs are indeed a subspecies of acceptance, it is not their motivation that distinguishes them, but the deliberative context in which they are active. This is business for a later chapter, however.

CONCLUSION AND PROSPECT

This chapter has introduced some challenges to the proposed two-strand theory and reviewed some precedents for it – focusing in particular on models for strand 2 belief (for a summary of their features, see figure 2). None of the models fitted the bill exactly, but the review threw up some important ideas as to how the theory might be developed and the challenges met. The principal one was the behavioural view, which represents flat-out beliefs as high-level behavioural dispositions, grounded in partial beliefs and desires.
Flat-out belief (confidence view): consists in a threshold-passing level of confidence in its content; identical with a state of partial belief; does not require certainty; not salient in the explanation of inference and action; no special link with conscious thought; not language-involving; cannot be actively formed; epistemically motivated; context-independent.

Flat-out belief (act view): consists in a disposition to act as if its content were true; requires certainty (assuming belief is not context-dependent); salient in the explanation of action; no special link with conscious thought; not language-involving; cannot be actively formed; epistemically motivated; context-independent.

Flat-out belief (assertion view): consists in a disposition to affirm its content; does not require certainty; salient in the explanation only of linguistic actions; associated with conscious occurrent thought; language-involving; epistemically motivated; context-independent.

Opinion: consists in a commitment to affirm its content; does not require certainty; salient in the explanation only of linguistic actions; associated with conscious occurrent thought; language-involving; can be actively formed; epistemically motivated; context-independent.

Acceptance (premising conception): consists in a commitment to use its content as a premise; realized in states of partial belief and desire; does not require certainty; salient in the explanation of action and inference; associated with conscious occurrent thought; language-involving; can be actively formed; can be pragmatically motivated; can be context-relative.

Strand 2 belief*: consists in a commitment to use its content as a premise; realized in states of partial belief and desire; does not require certainty; salient in the explanation of action and inference; associated with conscious occurrent thought; language-involving; can be actively formed; apparently not open to pragmatic motivation (but see chapter 5); apparently context-independent (but see chapter 5).

* This description of strand 2 belief combines elements of the characterization in chapter 2 and of the proposal in part 4 of the present chapter.

Figure 2 Precedents for strand 2 belief
The premising conception of acceptance was also identified as offering an attractive way of developing this view. The strand 2 mind, I suggested, is a 'premising machine' constituted by policies of reasoning and realized in intentional states and actions at the strand 1 level. If this is right, then a worked-out account of the premising machine will be a prerequisite for a developed theory of strand 2 belief. And there is still much work to be done here. Cohen's account of premising is sketchy and underdeveloped. What exactly is involved in premising and goal adoption? What role does language have in the process? How do premising policies influence action? In the following chapter I shall set out to answer these questions and to develop a rounded picture of the premising machine. In chapter 5 I shall then go on to argue that strand 2 beliefs form a subset of premising policies and to show how this view provides an attractive way of fleshing out our two-strand theory of mind and of responding to the challenges facing it.
4 The premising machine

In the last chapter I suggested that there is a level of mental activity which constitutively involves the formation and execution of policies of premising. In the present chapter I shall explore this idea in more detail, describing the shape of these premising policies, the role language plays in them, their relation to lower-level mental states, and their influence on action. Although the account takes its start from Cohen's claims about acceptance and goal adoption, the aim is not to explicate his position, but to find the most satisfactory development of the ideas he introduces.

1 PREMISING POLICIES

In this part of the chapter I shall outline the general shape of various kinds of premising policies and premising dispositions, building on and modifying Cohen's characterizations.

1.1 Acceptance

Recall Cohen's characterization of acceptance. To accept that p is, he says,

    to have or adopt a policy of deeming, positing, or postulating that p – i.e. of including that proposition or rule among one's premisses for deciding what to do or think in a particular context. (Cohen 1992, p. 4)
That is to say, acceptance involves a behavioural commitment; to accept something is to commit oneself to a policy of action. The general idea here is not new: as we saw, Dennett holds that forming an opinion involves embarking on a policy of linguistic action. The commitment in acceptance, however, is of a rather different sort: the actions involved are not overt physical ones, but covert mental ones – acts of deeming, positing, or postulating (though, as we shall see, acceptance involves an indirect commitment to overt action, too). That is, accepting a proposition involves undertaking,
not just to speak as if the proposition were true, but to reason as if it were – to take it as a premise. But what exactly does that involve? How does one go about premising that something is true? Cohen does not say much about the process of premising itself, but the general outline of his view is clear enough. Three features stand out. First, premising is a conscious, voluntary activity (1992, pp. 20–7). The key idea here, I think, is one I discussed in chapter 2 – that we can carry out inferential operations intentionally, at a personal level. Secondly, premising involves the application of learned inference rules – logical, conceptual, or mathematical (pp. 23, 56, 78–9). I think that Cohen would also include rules of practical reasoning here, such as the practical syllogism (see, for example, pp. 62–3). And, thirdly, premising is a linguistic activity – premises and inference rules have to be linguistically formulated, though they need not be overtly vocalized (pp. 12, 78–9).

The picture, then, is this: premising that p involves giving linguistic expression to p, and then consciously manipulating it, either on its own or together with other premises, in accordance with learned inference rules. Among such rules might be one that maps 'All Fs are G' and 'x is F' onto 'x is G', or one that maps 'x is a bachelor' onto 'x is unmarried'.
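Displayed schematically (the layout is mine; Cohen states such rules informally), these two rules license transitions of the following shapes:

    All Fs are G
    x is F
    Therefore, x is G

    x is a bachelor
    Therefore, x is unmarried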
I think that this characterization needs revision – particularly in its emphasis on the need for rules and language. I shall say more about this in the next part of this chapter. But I endorse the general outline: premising, I shall assume, involves consciously and deliberately calculating some of the consequences of one's premises. And acceptance – that is, having a policy of premising – involves committing oneself to doing this, on appropriate occasions, in appropriate contexts. (Like Cohen, I assume that acceptance may be context-relative. I shall say more about this in the next chapter.) I shall also take it that accepters commit themselves to acting upon the results of their calculations. In the case of theoretical reasoning, this will involve accepting any derived conclusions as further premises (I shall discuss practical reasoning shortly).

Cohen somewhat obscures the issue here by focusing on the question of whether, in accepting a series of propositions, we already accept their consequences implicitly. Cohen denies that we do, but insists that we do implicitly accept those propositions which we currently accept to be consequences of our premises. Acceptance, as he puts it, is subjectively, though not objectively, closed under deduction (1992, pp. 27–33). Now, Cohen identifies implicit acceptance with a commitment to explicit acceptance as a premise (p. 30), so this suggests that he thinks that we are committed to accepting as premises those, and only those, propositions which we accepted to be consequences of our premises at the time we originally accepted them. Whether this is the correct reading of his remarks, I am unsure; but I shall, in any case, take a more liberal line. I shall assume that in accepting a series of premises we commit ourselves to accepting, not only those propositions we currently regard as consequences of them, but also any others we may subsequently come to regard as consequences of them, as a result of further reasoning. The commitment involved in acceptance is thus open-ended.

I shall also assume that we do not need to accept that a proposition is entailed by our existing premises in order for us to be committed to accepting it in turn. It is true that we shall regard ourselves as committed to accepting a proposition, and be moved to act accordingly, only if we recognize, at some level, that our existing premises entail it. But it is wrong to insist that we must accept this entailment claim – at least if that means adopting a policy of taking it as a premise; it is sufficient, I shall assume, that we simply believe it in the strand 1 way. Indeed, there would otherwise be a danger of regress. If we must accept that p entails q in order to get from acceptance of p to acceptance of q, then why not say that we must accept that p and (p entails q) entails q in order to get from acceptance of p and p entails q to acceptance of q? If acceptance of the entailment claim is not necessary in the latter case, then why is it in the former?

To sum up, then, I shall assume that we have an open-ended commitment to accept any propositions we believe to be entailed by our premises, whether or not we accept that they are so entailed and whether or not we recognized that they were at the time we accepted the premises. Of course, this commitment is contingent upon a continued acceptance of the premises themselves. If we discover that our premises entail a conclusion which we are not prepared to accept, or which conflicts with other premises we have accepted in the same context, then we have the option of rejecting one or more of the original premises. (I am assuming here that we have a duty to ensure the consistency of the premises we accept within a single context, although we may accept inconsistent premises in different contexts.)

There are some further qualifications I want to make to Cohen's account. First, I shall take it that acceptance formation does not always require a conscious mental act. Accepting a proposition involves adopting a policy of taking it as a premise, but this process can vary in the degree to which it is active. In some cases it involves deliberation and decision; in others, it is unthinking and automatic. Most of us, for example, routinely accept what we are told, unless we have reason to suspect that the speaker is
deceitful or misinformed.[1] Secondly, I shall assume that the commitment involved in acceptance is tailored to our epistemic and practical needs. Thus, we are not committed to calculating any and every consequence of our premises, simply for the sake of it, and regardless of their relevance. Rather, the commitment is to employ our premises intelligently in solving problems we encounter and in furthering inquiries we undertake. Thirdly, I shall assume that, while we have a general commitment to keep track of our premises and to apply them intelligently to the solution of problems we encounter, this commitment is qualified in a way that reflects our cognitive abilities – our acuity in identifying problems, our knowledge of inference rules, our efficiency in recalling relevant premises and goals, and so on. Fourthly, I shall take it that we are committed to using our premises, not only in deductive inference, but also in inductive and abductive reasoning. (I shall say something later about how we might do this – plainly, rule application is not going to take us far.) And I shall assume that in accepting a proposition we commit ourselves to accepting, not only those conclusions we regard as deductively entailed by it, but also any we regard as strongly warranted by it on inductive or abductive grounds.

[1] It may be that Cohen would not disagree with this. Although he insists that acceptance always involves choice, he allows that choices may be made gradually, as well as through immediate decision (1992, p. 22).

Finally, a note on terminology. 'Acceptance' is a useful term, but also a potentially confusing one, having a number of distinct technical uses, as well as a less precise everyday one. In order to emphasize the technical character of my usage, and to distinguish it from other technical uses, I shall add a subscript, writing 'acceptance_p' and 'to accept_p' (short for 'acceptance as a premise' and 'to accept as a premise'). I shall also occasionally speak of adopting a proposition, p, as a premise. This is shorthand for 'adopting a policy of taking p as a premise' – that is, accepting_p it.

1.2 Tacit and implicit acceptance_p

So far, I have focused on what we may call the core form of acceptance_p – the adoption of propositions as explicit inputs to conscious inference. But we can distinguish at least two other kinds which are, in different ways, non-core. First, there are propositions which we have never consciously entertained and adopted as premises, but which we would immediately treat as premises, were we to entertain them. Think, for example, of the propositions that you have never walked on the moons of Mars and that
107,897 is less than 888,976. I shall say that such propositions are tacitly accepted_p, while ones that have been consciously entertained and adopted are overtly accepted_p. Secondly, there are premises which influence our reasoning in a non-explicit way. I noted in the last chapter that there are two ways of taking something as a premise in reasoning. One is to take it as an explicit input to inferential processes, the other to treat it as part of the cognitive background against which explicit inference takes place – to take it for granted. I shall say that propositions employed in the former way are explicitly accepted_p, those employed in the latter implicitly accepted_p.[2] Even though they are not explicitly tokened, implicitly accepted_p propositions still influence our reasoning, shaping the way we process explicitly accepted_p ones. They may do this in two ways. First, they may serve as background assumptions, helping to determine the topic and scope of our explicit reasoning. For example, because I implicitly accept_p that it is safe to go out, I do not explicitly consider the possible dangers of going out, and do not hesitate to form intentions to go out. Secondly, they may figure as suppressed premises. As I noted earlier, conscious reasoning is often enthymematic. We think: 'Looks like rain; I'd better take the umbrella' – omitting the premises 'Rain is unpleasant' and 'Umbrellas protect against rain.' Though not explicitly entertained, these latter propositions are nonetheless implicitly involved in the inferential transition, and we manifest our attachment to them in our disposition to regard the transition as normatively warranted.

[2] Note that implicit acceptance_p in this sense is very different from what Cohen calls 'implicit acceptance'. See above.

A couple of notes by way of clarification. First, the distinction between implicit and explicit acceptance_p is not an absolute one. The same proposition may be implicitly accepted_p in some inferential episodes and explicitly accepted_p in others. For example, I may implicitly accept_p various moral principles in my everyday practical reasoning, but entertain them explicitly if challenged to defend my actions. Strictly, then, classifications of acceptances_p as implicit or explicit should be relativized to inferential episodes. Secondly, it is worth noting how implicit acceptance_p differs from tacit acceptance_p. Both are dispositions rather than policies – in one case to use the content proposition as an explicit premise, in the other to manipulate other premises in accordance with the content proposition. Now, these dispositions will frequently go together: tacitly accepted_p propositions will figure as implicit premises in many inferential episodes. The categories of implicit and tacit acceptance_p do not coincide, however – for
two reasons. First, overtly accepted_p propositions – ones we have previously entertained and adopted – may also be implicitly involved in some inferences (again, moral principles might be an example). Secondly, a proposition can be implicitly accepted_p without being accepted_p in any other way – explicitly, overtly, or tacitly. For example, suppose I am packing for a holiday, debating with myself what to take and what to leave (the example is taken from Crimmins 1992; see also Audi 1982a). Suppose, too, that I am reasoning in a way that manifests an implicit acceptance_p of the proposition that I have plenty of time to catch my plane, although I have not overtly accepted_p that proposition and do not take it as an explicit premise. And suppose, finally, that I do not in fact have plenty of time, and would instantly recognize this if I were to entertain the thought. That is, I am not disposed to take it as an explicit premise that I have plenty of time to catch my plane, and thus do not tacitly accept_p that proposition. Indeed, if anything, I tacitly accept_p the contradictory proposition that I do not have plenty of time, since if that thought were to occur to me I would immediately recognize its truth and take it as a premise. So here we have implicit acceptance_p of a proposition without explicit, overt, or tacit acceptance_p of it, and indeed with apparent tacit acceptance_p of a contradictory one.

1.3 Goal pursuit

According to Cohen, we adopt goals as well as premises – goal adoption being related to acceptance roughly as desire is to belief. Cohen does not say much about goal adoption, but I assume that it, too, involves commitment to a reasoning policy – this time for practical reasoning. (I shall use the term 'goal pursuit' for this state of commitment.) In adopting a goal, x, we commit ourselves to taking x, together with other relevant goals and premises, as input to conscious intentional practical reasoning. (I assume that there are distinctive forms of practical reasoning, not reducible to theoretical ones – the practical syllogism, for example.) This reasoning will tend to generate new sub-goals or prescriptions for immediate or future action, and in adopting goals and premises we commit ourselves to acting upon these results – to adopting the derived sub-goals and to performing the dictated actions, or forming intentions to perform them.[3]

[3] I assume that intentions can be actively formed; see Pink 1996 for defence of this view. I discuss intentions briefly in chapter 8 and suggest that they are themselves a species of goal pursuit.
If we are unwilling to act upon a particular result, then we must go back and revise or repudiate some of the goals and premises from which it was derived. Although goals are flat-out states – one either has or does not have x as a goal – I take it that we can assign our goals relative priorities, and that practical reasoning will sometimes involve establishing or adjusting such priorities, in order to resolve conflicts or take advantage of opportunities. I shall assume that goal pursuit has similar properties to acceptance_p: that the commitment to act upon our goals is open-ended; that we do not have to accept_p that a particular practical conclusion is normatively warranted in order for us to be committed to acting upon it (again, Cohen seems to take a different view: see his 1992, p. 46); that goals can be adopted passively as well as actively; and that the deliberative commitments involved in goal pursuit are tailored to our needs and abilities. I shall also recognize tacit and implicit forms of goal pursuit, similar to those of acceptance_p. One tacitly pursues a goal if one is disposed to employ it as an explicit goal in practical inference, even though one has never consciously entertained and endorsed it. One implicitly pursues a goal if one treats it as part of the background against which explicit practical reasoning takes place – reasoning in ways which take its desirability for granted. (The goal of avoiding unnecessary pain might be an example.) As with the corresponding forms of acceptance_p, I shall assume that tacit and implicit goal pursuit often go hand-in-hand, but can exist independently.

Before moving on, I want to say something about the content of goals. Suppose I have the goal of taking exercise. What is the corresponding content that serves as input to practical reasoning? It is easy to redescribe the goal so that it has a propositional content – as, say, the goal that I take exercise. But this is not the relevant content. When considering how best to satisfy my goal of taking exercise I do not reason from the premise that I currently do take exercise (Davidson 1980, p. 86). The relevant content is, rather, an optative one, which could be expressed by the sentence 'Let it be the case that I take exercise' (Goldman 1970, ch. 4). Following Goldman, I shall refer to this content as an optative proposition and to the content I take exercise as a declarative proposition. In practice, when we express our goals we tend to use sentences which are formally in the declarative mood. We say 'I want to take exercise,' 'I need to take exercise,' 'I must exercise', or something similar. Sentences such as these, then, are potentially ambiguous: they may express either a declarative proposition, which is the content of a belief about a goal (to the effect that I have it or should have it), or an optative proposition, which is the content of the goal itself. And depending which type of content they express, they will have a very different role in inference. Thus, for example, the premises 'I want to lose weight' and 'If I take exercise, then I shall lose weight' warrant the formation of an intention to take exercise only if the former expresses an optative proposition.
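Set out explicitly (the display is mine), the inference runs:

    Let it be the case that I lose weight. (optative premise)
    If I take exercise, then I shall lose weight.
    Therefore, let it be the case that I take exercise.

Read with a declarative major premise ('I currently want to lose weight'), the transition would lack this practical force; it is the optative reading that warrants forming the intention.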
I shall assume that when we entertain such sentences in inner speech we entertain them as interpreted – so that we are never in doubt as to which sort of content is being expressed.

Given the terminology introduced in the previous paragraph, both acceptances_p and goal pursuits may be described as premising policies – the former directed upon declarative propositions, the latter upon optative ones. I shall therefore occasionally use 'premising policies' as a convenient term for both of them, and 'premising' for the activity of using their contents in conscious intentional inference. Note, however, that I shall continue to use 'premise' tout court for the objects of acceptance_p; any ambiguity will be resolved by context.

To sum up, then, accepting_p proposition p or deciding to pursue goal x involves adopting a policy of:

(1) bearing in mind that one has adopted p as a premise or x as a goal, and looking out for problems and inquiries to which it is relevant;
(2) taking p or x as input to conscious intentional inference, in conjunction with other premises and goals one has adopted; and
(3) acting upon the results of these calculations – adopting any derived propositions and goals as further premises and goals, and performing, or forming intentions to perform, any dictated actions.
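Purely by way of illustration, the structure of such a policy can be modelled as a small computational toy. Everything in the sketch below is invented for the purpose (the single inference rule, the data structures, the names), and nothing in it is a claim about how such policies are mentally implemented; it simply makes the three commitments just listed concrete.

```python
# A toy model of a premising policy, for illustration only.
# The rule set, data structures, and names are invented; this is not
# a model of psychological mechanism.

class PremisingPolicy:
    def __init__(self):
        self.premises = set()   # declarative propositions adopted as premises
        self.goals = set()      # optative propositions adopted as goals

    def accept(self, proposition):
        """Commitment (1): adopt and keep track of a premise."""
        self.premises.add(proposition)

    def adopt_goal(self, goal):
        """Commitment (1), conative case: adopt and keep track of a goal."""
        self.goals.add(goal)

    def derive(self):
        """Commitments (2) and (3): calculate consequences of one's premises
        (here by a single rule, modus ponens over ('if', p, q) pairs) and
        accept any derived propositions as further premises."""
        derived = set()
        for premise in self.premises:
            if (isinstance(premise, tuple) and premise[0] == 'if'
                    and premise[1] in self.premises
                    and premise[2] not in self.premises):
                derived.add(premise[2])
        self.premises |= derived
        return derived

policy = PremisingPolicy()
policy.accept('Mustard is innocent')
policy.accept(('if', 'Mustard is innocent', 'Scarlet is the murderer'))
print(policy.derive())   # {'Scarlet is the murderer'}
```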
2 PREMISING AND THE ROLE OF LANGUAGE

I shall now go on to look at the activity of premising itself. What exactly does this involve? How is conscious intentional inference conducted? What procedures are used? A full answer would require empirical investigation and is beyond the scope of the present work, but I shall outline some of the main forms premising activities could take. I shall also consider what role language plays in the premising process and assess Cohen's claim that all premising is language-based.[4]

[4] This section revises, and somewhat corrects, ideas I originally published in Frankish 1998b.
2.1 Rule application

According to Cohen, premising involves applying learned inference rules – logical, conceptual, or mathematical. This is a plausible view. Premising is a conscious intentional activity – something we do and can commit ourselves to doing. That is to say, it involves actively carrying out inferential operations at a personal level, and the obvious way of doing this is by constructing formal arguments, following learned rules of inference.

Before saying more, let me pause to address an immediate objection. To suppose that we apply inference rules, it might be argued, is to suppose that some inferential transitions are unrevisable (unrevisable, that is, without change of meaning), and thus that there are canonical procedures for revising and updating one's belief system in the light of conflicts with new evidence. And it is widely accepted that there are no such inferential transitions. Any inference may be revised, provided one makes sufficiently drastic changes to other elements of one's belief system. And in updating our beliefs, what matters is not that we respect local inference rules, but that we maintain the most stable global configuration in our belief system (the view sometimes referred to as the Quine–Duhem thesis).

This is not a fatal objection, however. The issue, I take it, is not about the possibility of following inference rules. We can certainly do this overtly, in the construction of written or spoken arguments, and I see no reason to deny that we can also do it privately, in the course of silent premising. Even if we are not conscious of the rules we are following, we may still manifest our adherence to them in our attitudes to the arguments we construct – in the fact that we regard them as compelling in virtue of their form, or in virtue of the presence of certain concepts in certain salient positions. The issue, then, is not whether we can and do follow inference rules, but whether we are right to do so, and what status the rules should be accorded. Now, some people may think that inference rules reflect objective semantic facts and that the inferences they license are analytically valid and unrevisable. If the Quine–Duhem thesis is correct, then these people are misguided (and would be still more misguided if they were to base an epistemology on those beliefs). But we do not have to regard inference rules in this light. We might think of them rather as very reliable, yet defeasible, heuristics. That is, we might regard them as conferring extremely strong, but defeasible, warrant on the conclusions they license. On this view, we could still employ inference rules in premising, provided we did not take
The premising machine them to be decisive determinants of our inferential commitments. The fact that a conclusion followed from our premises according to an inference rule would be a very strong reason for regarding ourselves as committed to acceptingp it in turn, but not an absolutely conclusive one, and it would be open to us to reject the conclusion in the light of wider epistemic considerations. (It is important to remember that the act of deriving a conclusion is distinct from the act of acceptingp it as a further premise, and that the former can be performed and the latter withheld.) If we take this view, then the commitments involved in acceptancep should be understood to be qualified accordingly. I think we can grant, then, that it is legitimate to employ inference rules and that they will have an important role in premising. Does it follow that premising will be a linguistic activity? Cohen argues so. There is, he says, an ‘a priori conceptual requirement’ that premises must be linguistically formulated – though not necessarily overtly vocalized: Premisses and rules of inference have to be conceived in linguistic terms . . . That is how logic can get to grips with inference and formulate its principles as rules for linguistic transformation. (1992, p. 12) To take it as a premiss that p one needs to be able somehow to spell out or articulate the proposition that p, as is done in oral communication, in sub-vocal speech, or some other convention-bound way. How else can one’s conclusions be supposed to be tied to the component elements of one’s premisses? How else can they be exhibited as a transformation of – or derivation from – those premisses according to logical, conceptual, or mathematical rules? (1992, pp. 78–9)
The thought, I take it, is this. Premising involves consciously applying learned inference rules. But these rules will specify formal operations – for example, that a conditional and its antecedent jointly entail the conditional’s consequent, or that a proposition of the form x is a bachelor entails one of the form x is unmarried. And it is hard to see how we can apply such rules unless we have access to the forms of our premises – unless we can represent them in some medium to whose formal properties we have conscious access. The obvious candidate for such a medium is natural language. This may be too swift, however. There may be a way of applying inference rules directly, without manipulating sentences. (In fact, I am sceptical of the possibility, at least for creatures with minds like ours; but I shall state the case for it, before briefly mentioning my reservations.) Suppose for the sake of argument that we can entertain conscious propositional thoughts 99
Mind and Supermind non-linguistically, without articulating them in sub-vocal speech.5 And suppose, too, that we have conscious knowledge of some formal inference rules. Then it might be possible for us to apply these rules directly to our non-linguistic thoughts. It is true that we would not have perceptual access to the forms of these thoughts, but we might, nonetheless, have a kind of reflective access to them. For example, suppose I entertain a conscious non-linguistic thought with the content If Colonel Mustard did not commit the murder, then Miss Scarlet must have done. Then, reflecting on this thought, I may note that it has a conditional form (assuming I know what a conditional is, of course). And suppose, too, that I know that a conditional and its antecedent jointly entail the conditional’s consequent. Then I have the apparatus to engage in some rudimentary premising. If I decide to acceptp that Colonel Mustard did not commit the murder, then I shall know that I have premises which jointly entail that Miss Scarlet must have done the deed, and shall thus regard myself as committed to acceptingp that proposition, too. In short, premising could be driven by conscious reflection on one’s premises and the norms to which they are subject, rather than by the manipulation of representations of them. The case for the linguistic dependency of premising is thus not as clearcut as Cohen supposes. Still, as I said, I am sceptical of the proposal just outlined. I have two reasons for this scepticism. First, it is not clear that the process described is one that can come under voluntary control – as it must if it is to form part of a premising policy. How could we intentionally produce the required sequences of conscious thoughts, except in the course of intentionally producing linguistic expressions of them? Secondly, and more fundamentally, I suspect that the initial assumption on which the proposal is based is false. A strong case can be made for the view that we can have conscious awareness of our propositional thoughts only if those thoughts are linguistically articulated (see Carruthers 1996b and 1998). This is not the place to set out the argument, however, and for present purposes I shall assume that no a priori case for the linguistic dependency of premising has been established. The issue just raised may not, in any case, be of central importance. For even if premising does not have to be language-based, it is likely that it will 5
There is some evidence that this is so. I am thinking of the data from Russell Hurlburt’s experiments in sampling the passing contents of people’s conscious minds (Hurlburt 1990, 1993). Hurlburt discovered that, while all subjects reported entertaining verbalized thoughts, most also reported entertaining some wordless propositional thoughts. This is not conclusive, however, and it can be argued that the latter reports are due to confabulation. See Carruthers 1996b, ch. 8.
100
The premising machine in practice be so – at least in so far as it involves applying learned inference rules. The reason is this. Non-linguistic premising would require explicit conscious knowledge of inference rules, of the sort that might be acquired through formal instruction. But knowledge of inference rules can also be of a less explicit kind, acquired in the course of learning to engage in rational debate with one’s peers. Suppose, for example, that I have learned to spot utterance patterns that instantiate modus ponens and to classify them as good argumentative moves, even though I have never been taught the rule explicitly and have never articulated it to myself. Suppose, too, that I have learned to regulate my own argumentative productions in accordance with the rule – again without explicit instruction and without having articulated the rule. Thus if I assert a conditional and its antecedent, I immediately regard myself as licensed to assert the conditional’s consequent, too, and as obliged to refrain from making assertions that are incompatible with it. In this case, it would be appropriate to say that I know modus ponens. Knowledge of this kind is sometimes described as tacit or procedural, though in the present context I suggest that we think of it as belonging to the strand 1 mind. Now, non-conscious strand 1 knowledge of this kind could guide premising activities. If I have a non-conscious grasp of modus ponens, then I shall naturally be disposed to manipulate my premises in accordance with the rule. If I have adopted a conditional and its antecedent as premises, then I shall regard myself as committed to adopting the conditional’s consequent as a further premise, and to rejecting any propositions that are incompatible with it. Just saying the premises over to myself might prompt me to supply the dictated conclusion. In this way it would be possible to execute a premising policy without drawing on explicit theory at all. However, premising activities guided in this way will require a linguistic medium. Non-conscious knowledge of inference rules is acquired and employed in the course of rational debate with our peers, and it will be defined over the elements of the language employed in that activity. It will, for example, be knowledge that the application of a certain word licenses the application of a certain other one, or that sentences of that form (demonstratively identified) normatively warrant sentences of that form. And in order to apply knowledge of this kind it will be necessary to formulate our premises linguistically. Of course, natural language is not the only representational medium we can employ in interpersonal debate. We can also use artificial symbol systems, such as those of mathematics and logic, and might acquire a non-conscious knowledge of rules 101
Mind and Supermind defined over their elements. But the bulk of our non-conscious knowledge of inference rules will be defined over the words and structures of natural language and will be applicable only to linguistically formulated premises. I think that we can conclude, then, that premising strategies which draw on non-conscious knowledge of inference rules will usually be languagebased. Moreover, such strategies are likely to be much more widespread than ones drawing on explicit knowledge. People tend to acquire skills procedurally before they begin to theorize them. (Cohen overlooks this, claiming that inference rules have to be consciously accepted, even if their application subsequently becomes automatic: Cohen 1992, pp. 23, 56.) Certainly, many people can classify presented patterns of inference as good or bad without being able to say precisely what their goodness or badness consists in. Indeed, I suspect that even those who do possess explicit conscious knowledge of inference rules will generally rely on nonconscious knowledge of them in executing their premising policies. I, at any rate, rarely entertain conscious thoughts about the propositions I have acceptedp or the inferential norms to which they are subject. Typically, I just entertain their contents. (This is not to say that I entertain those contents non-linguistically, of course – indeed, so far as the phenomenology goes, just the opposite seems to be the case.) I have focused on rules of a very strict kind, but we might also make use of looser ones – rules of thumb and heuristics. (I am thinking of ones of which we have personal knowledge, of course, not ones that are applied at a sub-personal level.) An obvious example is in inductive reasoning, where we might apply the rule of thumb that if a large number of representative samples of Fs are G, then all Fs are G. Now, if these rules specify formal operations, then the same considerations will apply to them as to the more strict kind: they will typically be known non-consciously and applied unreflectively in the course of linguistic activities of various kinds. However, not all rules of thumb will specify formal operations, and those that do not will require reflective application. We shall not be able to apply them simply by manipulating sentences, but shall need to reflect consciously on the rule and its applicability in the present situation. In these cases, a linguistic medium will not be needed – except in so far as it is necessary for conscious thought itself. To sum up: inference rules will have a central role in premising, and their application will typically be a language-involving process, at least in the case of formal rules. But does premising always involve rule application? 102
The premising machine Could other methods be employed? Cohen does not consider the question, but there are in fact several other options. 2.2 Self-interrogation The fundamental commitment in adopting a premise is to take it as input to conscious intentional inference, and, as I suggested in chapter 2, we can think of inference as intentional even in cases where it does not involve carrying out explicit procedures. We often deliberately try to work something out, even though we do nothing more explicit than ‘thinking’ – perhaps furrowing our brow the while, or staring at the ceiling. We can do this sort of thing at will, and it seems to be a genuinely intentional activity, though it is not easy to say exactly what it involves. The best way to think of it, I suggest, is as a sort of self-interrogation. We are, as it were, challenging ourselves to come up with a solution to a problem, just as another person might challenge us, and deliberately focusing our nonconscious reasoning abilities onto the task. Self-interrogation like this is, of course, a self-stimulatory activity of the sort which Dennett depicts as central to the conscious mind. Now, it would be possible to use self-interrogation to advance a premising policy. Suppose I have acceptedp a range of propositions – that Colonel Mustard is not the murderer, that Miss Scarlet is not the murderer, that Dr Green is not the murderer. I ask myself ‘Suppose all that were true – what would follow?’ and set to thinking in the way just described. After a short while a conclusion pops into my head: ‘Professor Plum must be the murderer.’ Provided I am happy that this conclusion really is warranted by my premises, I shall then regard myself as committed to adopting it as a further premise. I could thus sustain a premising policy by intentional self-interrogation, without applying inference rules.6 6
In earlier discussions of this topic, I spoke of this process as involving a kind of selfsimulation (see my 1998b). Many philosophers, and some psychologists, believe that our skill in psychological prediction and explanation derives from an ability to run cognitive simulations. If we want to work out what another person is likely to think or do, they claim, we try to simulate their thought processes – pretend to share their beliefs and desires, let our inferential system run ‘off-line’ (so that its outputs are not passed to memory or motor control), and wait to see what conclusions or decisions we come to (Goldman 1989, 1992; Gordon 1986; Harris 1989, 1992; Heal 1986, 1995). In earlier work, I suggested that we could employ a similar technique to calculate the consequences of our premises. The idea was that we could suppose that our premises were true, run an off-line simulation, and see what conclusions we came to. This, I suggested, is what we are doing in the sort of self-interrogatory episodes described in the text. I am now wary of this suggestion,
103
Mind and Supermind The advantage of this technique is that it would extend the scope of premising policies to include types of inference that are not rule-governed. In particular, it would enable us to employ our premises in abductive reasoning – reasoning to the best explanation of a set of data. This sort of reasoning is central in scientific inquiry, and if we hold (as Cohen does) that acceptancep is the correct attitude for scientists to adopt towards their data, then we need to show how premising policies can encompass it. Yet abductive reasoning is not a matter of applying inference rules. It involves determining which hypothesis provides the neatest, most economical, most elegant explanation of a data set, and there are no simple algorithms for doing this. Indeed, it is hard to see how one could deliberately set about making abductive inferences, except by the sort of self-interrogatory process just described – by concentrating on the accepted data and trying to focus one’s mind on coming up with the best explanation of it. Self-interrogation also has its limitations, however. It will not always throw up the right answer, and there might be interfering factors at the non-conscious level which distort the outcome of the process. Perhaps the thought that Professor Plum is the murderer comes to me, not because it is warranted by my premises, but because I have taken an instinctive dislike to the man and have a non-conscious desire to see him behind bars. It would thus be dangerous to rely uncritically on self-interrogation. Indeed, such reliance would threaten to undermine the point of acceptancep and goal pursuit. I shall say more about this later, but I assume that one important benefit of adopting premising policies is that doing so affords us personal control of our reasoning and of the methods employed in it. To rely exclusively on self-interrogation to guide our premising policies would be to forgo this benefit and to limit the usefulness of the policies. Given this, the sensible course would be to use self-interrogation cautiously, treating its deliverances as hypotheses for subsequent evaluation rather than as established conclusions. This is, after all, how conscious abductive reasoning typically works. We try to think of the best explanation of our data, an idea occurs to us, and we set about evaluating it – comparing it with rival explanations, tracing out its consequences, however. For if we can run simulations at will, then simulation must be an intentional, personal-level process. But if so, then it looks very like a sort of temporary premising policy – a policy of taking the simulated beliefs and desires as premises and goals. And then the suggestion that we can execute premising policies by running simulations threatens to become circular. It is for this reason that I now speak of self-interrogation rather than self-simulation. Of course, it is possible that self-interrogation itself involves simulatory processes at a sub-personal level. That is a matter for empirical investigation, however.
checking for conflicts with other accepted hypotheses, and so on. If we decide that it is unsatisfactory – as most are – then we reject it and continue the search. If we are happy with it, then we endorse it and use it as a basis for further inquiry. We could think of this extended sequence of actions and events as constituting a single conscious inferential process, operating upon previously adopted premises and terminating in a further act of premise adoption.

Would language have a role in the process of self-interrogation? It certainly could have: self-interrogation could take the form of a question-and-answer session in sub-vocal speech. Indeed, this is the form we should expect it to take. Since we routinely rely on verbal interrogation as a way of acquiring information from others, it would be natural for us to develop habits of verbal self-interrogation, instinctively questioning ourselves and supplying answers (see Dennett 1991a, ch. 7). Introspective evidence strongly confirms the existence of such habits. Indeed, I am tempted to go further and argue that self-interrogation requires language. How could one intentionally frame questions and articulate responses except by producing utterances expressing them? I shall not press the matter, however. If we can entertain conscious propositional thoughts non-linguistically (which in any case I doubt), then perhaps some of them could take an interrogative form and perhaps these could, in turn, prompt non-linguistic responses.

I have been concerned here with the use of self-interrogation as a method of working out the consequences of our premises, but it is worth noting that the technique will also have other uses in the execution of premising policies. In particular, we might use it to help us retrieve relevant premises prior to employing them in inference. So, for example, in trying to work out who murdered Dr Black, I might begin by asking myself what information is relevant to the problem, striving to recall premises that will be useful in the inquiry. I suggested that we have a general commitment to keep track of our premises and apply them to solving problems we encounter, so it will be important to engage in regular self-interrogation of this kind.7
7 Chris Hookway alerted me to the point made in this paragraph.
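Before moving on, it may help to display the shape of this cautious procedure schematically. The sketch below is purely illustrative: the helper names generate_candidate and is_satisfactory are placeholders of mine for the non-conscious generative processes and conscious evaluative checks described above, and no part of the account depends on the details.

```python
# A toy rendering of cautious self-interrogation: deliverances of the
# non-conscious system are treated as hypotheses for evaluation, not as
# established conclusions.

def self_interrogate(data, generate_candidate, is_satisfactory, max_tries=100):
    """Return the first spontaneously generated hypothesis that survives
    conscious evaluation, or None if the search is abandoned."""
    for _ in range(max_tries):
        hypothesis = generate_candidate(data)   # an idea 'occurs to us'
        if hypothesis is None:
            return None                         # nothing more comes to mind
        if is_satisfactory(hypothesis, data):   # compare rivals, trace out
            return hypothesis                   # consequences, check conflicts
        # unsatisfactory, as most are: reject it and continue the search
    return None

# Toy usage: a biased hunch is filtered out by the evaluation step.
candidates = iter(["Professor Plum is the murderer",    # an instinctive dislike...
                   "Colonel Mustard is the murderer"])  # ...then a better fit
answer = self_interrogate(
    data={"the accepted clues"},
    generate_candidate=lambda data: next(candidates, None),
    is_satisfactory=lambda h, data: h == "Colonel Mustard is the murderer",
)
print(answer)   # 'Colonel Mustard is the murderer'
```

The point of the sketch is only to make vivid where the personal-level control lies: in the evaluation step, not in the generation step.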
2.3 Minimal premising

So far, I have followed Cohen in assuming that premising involves intentional inferential activity – actively working out the consequences of our premises and goals in some way, even if only by self-interrogation. But premising can also take a truncated form in which no active inference is required. Sometimes there is no need for us actively to work out the consequences of our premises and goals, since they occur to us spontaneously. For example, suppose I am just about to help myself to a doughnut when I consciously recall my goal of losing weight and immediately realize that it follows from this – together with the implicit premise that doughnuts are fattening – that I should refrain. That is to say, there is a spontaneous passage from a consciously recalled goal to a consciously entertained conclusion. And if I am satisfied that the conclusion is warranted, then all I have to do in order to advance the pursuit of my goal is to act upon it. The same thing can happen with theoretical inference. The solution to a problem may pop into one's head spontaneously, sometimes long after one has ceased trying to find it. Although no active inference is involved in these cases – we do not do anything to generate the conclusion – the process can still be thought of as under broadly intentional control. We need to recognize a spontaneous thought as an inference from our premises and goals, satisfy ourselves that it really is warranted by them, and then intentionally act upon it – whether by adopting further premises and goals or by performing overt actions. I shall call this process minimal premising. Minimal premising resembles self-interrogation in exploiting non-conscious inferential processes, though no deliberate stimulation of them is involved. And, as with self-interrogation, it will be important to use it with caution – monitoring our spontaneous inferences, evaluating them, and being prepared to reject them if they do not appear warranted. Unlike self-interrogation, however, minimal premising does not seem to require language (not, at least, if we allow the possibility of non-conscious propositional thought), though spontaneous inferences may of course occur in inner speech, and their subsequent evaluation may involve language-driven inferential processes of the kinds described earlier.

2.4 Extended techniques

The procedures mentioned so far will, I think, be the most common ones in everyday private reasoning. But there are other, more extended, inferential techniques that could be used in executing premising policies, some of them involving overt activities and the manipulation of external aids. We could run thought-experiments, draw diagrams, visualize scenes,
consult authorities, debate with colleagues, use calculating devices, and so on; the list is open-ended. The use of extended techniques like these is, however, perhaps more common in the course of formal theoretical inquiry than in casual everyday reasoning.

2.5 Conclusion

Cohen's conception of premising is too narrow. As well as applying inference rules, we can also make use of self-interrogation, minimal premising, and various other techniques. To judge by my own experience, conscious thinking draws promiscuously on all of these procedures – sometimes switching between them in a single train of thought. In thinking through a problem I may alternate between self-interrogation and the construction of formal arguments, with simple steps filled in by minimal premising. Cohen is wrong, too, about the role of natural language in premising. There is no conceptual requirement for premising to be language-based, though in practice it will often be so. Self-interrogation and the construction of arguments are likely to take place in inner speech, and minimal premising could do so, too. Note that this is not to say that we shall always use the same linguistic forms to express our premises. Acceptancep, I take it, is fundamentally an attitude to a proposition rather than a sentence, and we might express the content of an acceptancep in different ways on different occasions. We might even express it in different languages. The claim, then, is that our premises will typically have some linguistic vehicle, not that they will have a unique one. In sum: inner speech is a natural, though not essential, vehicle for conscious inferential activities – and it is likely that these activities will, in practice, depend heavily upon it. This conclusion harmonizes well with the view that strand 2 beliefs and desires are premising policies – providing an explanation of how these states could constitutively involve natural language (Fodor's challenge). I shall say more about this in the next chapter, when I respond to the challenges facing the proposed two-strand theory.

3 The premising machine

Now that we have a better idea of what is involved in acceptancep and goal pursuit, I want to turn to some broader issues. I suggested in the previous chapter that states of acceptancep and goal pursuit constitute a distinct level of mentality – a 'premising machine' – which is realized in the
non-conscious, strand 1 mind. In this part of the chapter I shall defend that claim in detail. I shall also consider how the premising machine influences action and compare it with the Joycean machine, as described by Dennett.

3.1 The behavioural view again

As we saw in the last chapter, it is attractive to adopt a behavioural view of strand 2 belief, which identifies states of this type with high-level behavioural dispositions, realized in partial beliefs and desires. That view promised an attractive account of the relations between the two strands of belief and a solution to the Bayesian challenge. For the same reasons, it is attractive to adopt a behavioural view of strand 2 desire. Now, we also saw that it is attractive to think of strand 2 beliefs and desires as forms of acceptancep and goal pursuit respectively, so it is important to show that we can adopt a behavioural view of these states, too. In fact, the behavioural view harmonizes very nicely with the conceptions of acceptancep and goal pursuit we have developed. The case was foreshadowed in the previous chapter, but it is worth spelling it out in detail, as the considerations involved will be important for what follows.

To begin with, acceptancep and goal pursuit are policies, and policy possession is a dispositional state. If one has a policy of A-ing, then one will be disposed to engage in regular A-ing. However, possessing this disposition is not sufficient for having the policy; one can be disposed to stammer when speaking in public without having a policy of doing so. To have a policy of A-ing, A-ing must be something one can do intentionally and one must be disposed to do it intentionally. Now, intentional action, I shall assume, is action that is motivated by one's beliefs and desires, so to have a policy of A-ing, one must possess beliefs and desires which make it rational to engage in regular A-ing. Not every such set of beliefs and desires will do, however. One can be rationally disposed to do something on a regular basis – say, to eat chocolate biscuits – without having a policy of doing it. The difference is one of motivation. The motivation for a policy-related action is not simply the desirability of the action itself or of its immediate outcome. If I have a policy of A-ing, then I shall be disposed to engage in regular A-ing even if A-ing has no intrinsic or immediate rewards. The reason, of course, is that I believe that adherence to a policy of A-ing will bring some long-term benefit and want to secure that benefit. And this, I suggest, is what is distinctive of policy-related action: it is motivated by the desirability of adhering to the policy which
dictates it – where this adherence is expected to bring some benefit of its own. (Of course, policy-related actions may also be intrinsically rewarding or bring immediate benefits, and thus be doubly attractive, but that is just a bonus.) In short, policy-related action is reflexively motivated: having a policy of A-ing involves being disposed to A because one believes oneself to have a policy of A-ing and wants to adhere to it (or, if A-ing is a non-basic action, being disposed to perform actions which cause or constitute A-ing, and being so disposed because one believes oneself to have a policy whose fulfilment requires the performance of these actions, and wants to adhere to it). This is, I think, close to an analysis of what it is to have a policy of A-ing: provided one has the knowledge and skills required to execute the policy, possessing the beliefs and desires just mentioned is both necessary and sufficient for having a policy of A-ing.

It follows that having a policy of premising will involve being disposed to engage in premising activities because one believes oneself to have a policy of engaging in them and wants to adhere to it. So when we take our premises and goals as inputs to conscious intentional inference, we do so because we believe our premising policies require us to do this and want to adhere to them. And when we adopt the conclusions of our inferences as further premises or goals, or perform the actions they dictate, we do so again because we believe we have premising policies which commit us to doing these things and want to adhere to them. Note that this does not mean that we must consciously entertain these beliefs and desires – even when the activities they motivate are conscious. Actions can be consciously performed even if the beliefs and desires that motivate them are not themselves consciously entertained. Consider typing, for example. As I write, I am conscious of pressing various keys on my computer keyboard. I press these keys because I desire to type certain words and believe that pressing them will achieve that. But I do not consciously entertain those beliefs and desires as I type; I just think about the content of what I am typing. Much the same goes, I suggest, for premising. Typically, all we think about at a conscious level is the content of our premises, goals, and conclusions. The attitudes which drive our premising activities reveal themselves in our attitudes to these contents – in the fact that we regard ourselves as committed to our premises and goals, and bound to act upon them. (This is not to say, of course, that we never give conscious thought to how and why we conduct our premising activities. We might do so, for example, in the course of careful theoretical reasoning.)
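Since the analysis just offered has a compact conditional form, it may help to set it out schematically. The following toy model is illustrative only: representing beliefs, desires, and skills as sets of labelled sentences is my own simplifying assumption, adopted purely to display the shape of the conditions.

```python
# A toy rendering of the proposed analysis of policy possession.

from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    desires: set = field(default_factory=set)
    skills: set = field(default_factory=set)   # things the agent can do intentionally

    def has_policy(self, a: str) -> bool:
        """One has a policy of A-ing iff one can execute it, believes oneself
        to have a policy of A-ing, and wants to adhere to it. Note the
        reflexivity: the constituting belief is about the policy itself."""
        return (a in self.skills
                and f"I have a policy of {a}" in self.beliefs
                and f"adhering to my policy of {a}" in self.desires)

agent = Agent(skills={"premising that p"})
agent.beliefs.add("I have a policy of premising that p")
agent.desires.add("adhering to my policy of premising that p")
assert agent.has_policy("premising that p")
```

As the following paragraphs note, the realizing states may equally be graded: replacing the set-membership tests with thresholds on probabilities and desirabilities leaves the structure of the analysis unchanged.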
To sum up: the core forms of acceptancep and goal pursuit can be analysed as high-level behavioural dispositions, grounded in our beliefs and desires. Again, I think that it is appropriate to talk of realization here. The premising states exist in virtue of the underlying beliefs and desires, and generalizations defined over the former hold in virtue of more basic ones defined over the latter. (Our premises and goals have the effects they do because our beliefs and desires about them have the effects they do.) Moreover, these realizing beliefs and desires do not have to be flat-out ones; they could equally well be graded probabilities and desirabilities. If you are sufficiently confident that you are committed to a policy of premising that p and attach a sufficiently high desirability to adhering to this policy, then you will be disposed to premise that p. And provided there is the right sort of counterfactual dependency between this disposition and those states of probability and desirability, the disposition will count as being the product of them, and thus as a manifestation of the policy in question. Thus, if we are antecedently committed to the view that we have non-conscious partial beliefs and desires of the strand 1 kind, as well as acceptancesp and goal pursuits, then the natural view of how the two sets of states are related is that the latter are realized in the former, in line with the behavioural view.

It may be asked why I talk of realization here rather than identity. After all, I claimed that premising policies can be analysed in terms of the cited beliefs and desires, so why not identify the policies with those states? The answer is that the analysis is silent as to the nature of the realizing beliefs and desires. They might be partial or flat-out, conscious or non-conscious, and so on. In the human case they are, I maintain, typically non-conscious and partial, but they might have been otherwise. Furthermore, if they are partial, they may vary slightly in strength from individual to individual and in the same individual over time. (Provided they are strong enough to sustain the policy, this will not matter.) A degree of multiple realizability is thus possible.

It is perhaps worth stressing here that when I talk of the beliefs and desires in which an acceptancep or goal pursuit is realized I am referring to those which constitute the state. These should be distinguished from those which motivate it, and whose relation to it is causal rather than constitutive. As I mentioned earlier, we may be moved to adopt and maintain a premising policy for a variety of reasons – epistemic, prudential, moral, and so on – and individuals who acceptp the same proposition or pursue the same goal may do so for very different reasons. But in each case the policy state itself
will be realized in broadly the same way – in a belief that one is committed to the policy and a desire to adhere to it, together with beliefs about how to execute premising policies.

I shall briefly consider some objections before moving on. First, doesn't the view just outlined require us to ascribe some implausibly complex contents to non-conscious mental states? Is it really plausible to suppose that we have non-conscious beliefs with contents such as 'I have a policy of premising that p'? There are two points to make here. First, although the posited beliefs are complex ones, they are not significantly more complex than others which it is widely agreed we can hold non-consciously. For example, much social interaction depends on the possession of complex beliefs about the thoughts and attitudes of our companions – beliefs which we rarely entertain consciously. There should, then, be no objection in principle to positing non-conscious beliefs about premising policies. Secondly, the grounds for thinking that we do in fact possess such beliefs are abductive: beliefs of this sort are required for premising policies, and the hypothesis that we have premising policies is the best explanation of how we could have conscious, active, flat-out beliefs of the strand 2 kind. As we shall see, the hypothesis that conscious beliefs are premising policies has further attractions, too. So the fundamental response here is 'suck it and see': is the hypothesis warranted by its overall theoretical fruitfulness?

But is the possession of beliefs about premising policies compatible with an austere view of the non-conscious mind – with the view that non-conscious beliefs are simply multi-track behavioural dispositions? Could a person's behavioural dispositions be subtle enough to warrant the ascription of mental states with such contents? The short answer is that it depends on what sort of behaviour we consider. If we focus on overt behaviour only, then it is doubtful that we shall need to credit people with beliefs about premising policies. But if we include private, non-overt actions – acts of judgement, inner speech, and active inference – and the subject's reports of these actions, then the case is different. For the thesis developed here is precisely that these actions are best interpreted as stages in the formation and execution of premising policies, motivated by beliefs and desires of the sort mentioned. I shall say more about this in the next chapter, when I address the Bayesian challenge.

Secondly, what should we say about cases where we give conscious thought to our premising activities – where we treat propositions as premises and goals because we consciously believe that we have policies of doing so and consciously desire to adhere to them? If our conscious beliefs
and desires are acceptancesp and goal pursuits, as I claim, then won't it follow that in these cases there are two levels of acceptancep and goal pursuit, the first realized in the second and the second realized in non-conscious states – giving us a three-level structure? This is a rather baroque situation, but I grant its possibility. In practice, I suspect, cases like this will be rare, and control of the policies that constitute the upper level of acceptancep and goal pursuit will soon shift down to passive non-conscious processes, collapsing the three-level structure into the standard two-level one. In any case, my central claim stands: states of acceptancep and goal pursuit are realized directly or indirectly in strand 1 states.

Finally, why must acceptancep and goal pursuit be policies? Might we not be consistently disposed to take certain premises and goals as inputs to conscious intentional reasoning and to act upon the results, yet without actually having a policy of doing so? And if so, would there be any reason to refuse these dispositions the titles of acceptancep and goal pursuit? I concede both points: I think that it is possible to have such dispositions and see no reason for not regarding them as forms of acceptancep and goal pursuit. This is still consistent with the behavioural view, of course. Provided the dispositions in question were rational ones, they would still count as being realized in the beliefs and desires which gave rise to them, in line with the behavioural view. However, these forms of acceptancep and goal pursuit will, I think, be uncommon. It is hard to see why anyone would be disposed to make regular use of a proposition as a premise, unless they believed, at least non-consciously, that following this course of action would be of value – that is, unless they had a policy of following it. It is not as if premising activities are likely to have any intrinsic satisfactions – unlike, say, eating chocolate biscuits. In what follows I shall focus on policy-based forms of acceptancep and goal pursuit, rather than dispositional ones. However, most of what I have to say will apply equally to both kinds (the exceptions being claims about the possibility of actively adopting these states, which depend on features unique to the policy-based varieties).

So far I have been concerned with explicit acceptancep and goal pursuit, but we can extend this model to encompass the tacit and implicit varieties. These are straight dispositions rather than policies. Tacitly to acceptp a proposition or pursue a goal is to be disposed to treat it, upon consideration, as an explicit premise or goal; implicitly to acceptp a proposition or pursue a goal is to be disposed to process one's explicit premises and goals in ways that take it for granted. But these dispositions, too, will typically be rational ones, realized in complexes of partial beliefs and desires. So,
for example, one will count as tacitly acceptingp that p if one attaches a high desirability to adopting obvious truths as explicit premises and possesses probabilities and desirabilities which dispose one to classify p, on consideration, as an obvious truth. And one will count as implicitly acceptingp that p if one attaches a high probability to p and has probabilities and desirabilities which dispose one to manipulate one's explicit premises and goals in ways that would be rational if the propositions one regards as highly probable were in fact true. (These are not offered as analyses of tacit and implicit acceptancep, just as indications of how they might typically be realized. Note, too, that there may also be context-relative varieties of the states, where the truth of the content proposition is not important.) Similar considerations apply to tacit and implicit goal pursuit.

We can further extend the model by identifying the occurrent forms of acceptancep and goal pursuit with behavioural episodes in which premising policies are formed or executed – that is, with makings up of mind and episodes of conscious intentional inference of the sort described earlier in this chapter. This move is a natural one to make – the paradigm forms of occurrent thought being precisely those episodes in which propositions are consciously entertained for endorsement or use in inference. We thus have an integrated behavioural view of acceptancep and goal pursuit, which represents their standing-state forms as behavioural commitments and their occurrent forms as actions which either initiate or discharge these commitments.

I conclude, then, that the conceptions of acceptancep and goal pursuit developed in this chapter harmonize well with the behavioural view. The upshot of this is that we can think of the system of acceptancesp and goal pursuits as forming a distinct level of mentality – a premising machine – whose states and processes are realized, directly or indirectly, in strand 1 mental states and actions.

3.2 Deciding to premise

I now want to return briefly to a matter raised in the previous chapter. In discussing the behavioural view of flat-out belief, I pointed out that there is a prima facie tension between the claim that flat-out beliefs are realized in lower-level mental states and the claim that they can be actively formed, since it is not clear how the required changes in the realizing states could be actively brought about, or how it could be rational to bring them about, even if possible. I also pointed out, however, that if flat-out beliefs
are states of commitment, then a solution to this problem presents itself. For in this case one of the key realizing states will be a belief (or high confidence) that the relevant commitment has been made, and we can form that belief (or create that confidence) reliably and rationally, simply by making the commitment. Assuming the other realizing states are already in place, this will be sufficient to introduce the flat-out belief. Now since I have argued that acceptancesp and goal pursuits are realized in other mental states, a similar problem arises for the claim that these states can be actively formed. Fortunately, however, a similar response is available. Acceptancesp and goal pursuits are policies, and policy possession, like other forms of commitment, is partially constituted by a belief in its own existence. As we saw, having a policy of A-ing involves believing that one has a policy of A-ing, and we can reliably and rationally generate this belief simply by adopting the policy. (This is not to say that we actively form the belief – all we do is adopt the policy. But adopting the policy will produce the belief.) Provided we already have a general desire to adhere to any policies we have adopted, and know how to execute this one, we shall thereby come to possess the policy. (For convenience I write as if the beliefs and desires in which our premising policies are realized are unqualified ones, but the same considerations apply if they are, as I claim, graded. We simply speak of high confidence and high desirability rather than of belief and desire tout court.)

It may be objected that if the belief that one has a policy of A-ing is partially constitutive of that policy, then the act of adopting the policy will constitutively involve the formation of the belief, and so cannot be said to generate it, at least not in a causal sense. The objection is understandable, but trades on an ambiguity in the phrase 'adopting a policy', which has what we may call narrow and broad senses. In the narrow sense it means making a datable commitment to a policy, by, say, sincerely uttering some performative formula; in the broad sense it means coming to have a continuing allegiance to a policy, usually as a result of making a datable commitment to it. Once we distinguish these senses, the objection vanishes. When I talked of an act of policy adoption generating the belief that one has the policy, I was using the term in the narrow sense, to refer to the making of a datable commitment, and it is only in the broad sense that policy adoption constitutively involves believing that one has the policy. (Of course, in adopting a policy in the narrow sense we almost always adopt it in the broad sense, too; the point of making a commitment is precisely to generate a continuing allegiance. But it is possible for the link
to fail; one might, for example, make a commitment and then instantly forget that one has done so.)

The reader may ask what is involved in making a datable commitment to a premising policy. The answer, I think, is that it too involves a performative utterance or an internalized version of one. Thus, acceptingp that beef is unsafe to eat might involve uttering to oneself 'Beef is unsafe to eat', with the intention (usually non-conscious) of thereby committing oneself to a policy of premising that beef is unsafe. This, I suggest, is what we are doing when we judge that something is true or make up our minds about something. Henceforth, when I speak of adopting a premising policy, I shall usually be using the term in its broad sense, to refer to the acquisition of a continuing allegiance to a policy through the making of a datable commitment like this.

The comments above may prompt a second objection. I have claimed, not only that we can actively form strand 2 beliefs, but also that we can do so in a direct, non-instrumental way. So if strand 2 beliefs are acceptancesp, then I need to show that acceptancesp, too, can be formed in this way. And on the account just outlined it is not clear that they can. All we can do directly is commit ourselves to the relevant premising policy; we just have to hope that this commitment in turn generates the strand 1 belief that we are committed to the policy and so introduces a continuing acceptancep state. Acceptancep formation is thus indirect. I concede this; there is a sense in which acceptancep formation is not direct. It is not, however, the sense that is relevant here. In claiming that we can form strand 2 beliefs in a direct way I meant that we can form them by one-off mental acts of judgement or making up of mind, and the intended contrast was with the view that we can form them only by employing external means, such as subjecting ourselves to indoctrination (by forming beliefs here I mean actively forming them, of course). And in this sense acceptancep formation does count as causally direct. One forms an acceptancep simply by performing a mental act – committing oneself to an appropriate premising policy. And although this act needs to have certain downstream causal effects in order to generate a continuing acceptancep state, these are of a minimal and routine kind. No psychological or physiological manipulation is involved, but simply the production of a belief (or of high confidence) in the proposition that one is committed to the relevant premising policy. And in normal circumstances, sub-personal processes will generate this belief automatically as soon as one makes a commitment to the policy.

I conclude, then, that there is no reason to deny that states of acceptancep and goal pursuit can be actively formed in the relevant sense. This is not, of course, to say that they are always actively formed. One can come to possess a policy without having made a datable commitment to it. Most of us, for example, have a policy of respecting the laws of the land, though few of us have made an explicit commitment to it. Similarly, one can come to have a premising policy without having explicitly endorsed it. Think, for example, of cases where one automatically acceptsp something one reads or hears.
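The causal route just described, a datable act of commitment which routinely produces the belief that sustains the continuing policy, can also be displayed schematically. Everything in the sketch is an illustrative assumption of mine: the class and method names are placeholders, and the direct belief-update inside commit stands in for the routine sub-personal processes mentioned above.

```python
# Narrow vs broad policy adoption. A datable act of commitment (narrow
# sense) normally produces the belief that partially constitutes
# continuing possession of the policy (broad sense).

class Premiser:
    def __init__(self):
        self.beliefs = set()
        # The general desire to adhere to one's policies is assumed
        # already to be in place, as the text requires.
        self.desires = {"adhering to my premising policies"}

    def commit(self, p: str) -> None:
        """Narrow-sense adoption: a one-off act, e.g. sub-vocally uttering
        'p' with the intention of committing oneself to premising that p.
        In normal circumstances, sub-personal processes then generate the
        constituting belief automatically (modelled as a direct update)."""
        self.beliefs.add(f"I am committed to premising that {p}")

    def accepts(self, p: str) -> bool:
        """Broad-sense possession: the continuing acceptance-p state."""
        return (f"I am committed to premising that {p}" in self.beliefs
                and "adhering to my premising policies" in self.desires)

ann = Premiser()
assert not ann.accepts("beef is unsafe to eat")
ann.commit("beef is unsafe to eat")      # the one-off mental act...
assert ann.accepts("beef is unsafe to eat")  # ...yields broad-sense possession
```

A failure of the link, committing and then instantly forgetting, would correspond to a case in which the act occurs but the belief-update it normally produces does not.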
3.3 The premising machine in action

I turn now to the role of acceptancep and goal pursuit in controlling action. Here the picture I have been developing has an important and far-reaching consequence. It is that our premises and goals influence our behaviour in virtue of our strand 1 beliefs and desires about them. This was implicit in my earlier characterization of acceptancep and goal pursuit. For, as I noted, these states carry an indirect commitment to overt action. In adopting premises and goals, we commit ourselves to performing any actions we may come to believe they dictate. So if we believe that our premises and goals mandate a certain action, and want to honour our commitment to them, then we shall be motivated to perform the action. In other words, we shall act upon our premises and goals because we attach a high desirability to sticking to our premising policies. Of course, the beliefs and desires which drive acceptancep-based action are typically not themselves conscious – they are non-conscious, strand 1 states. But they support conscious thought and make it effective in action.

The reader may suspect some sleight-of-hand here. In discussing the confidence view of flat-out belief in the last chapter, I pointed out that it would be irrational to treat a contingent proposition as certain, since it would involve being willing to stake everything upon it for no gain. But if it would be irrational to treat a contingent proposition as certain, how could it be rational to treat it as an unqualified premise? Won't it come to much the same thing? Likewise, how could it be rational to treat an outcome as an unqualified goal? Won't it involve being willing to take absurd risks to secure it? I concede the main point here. It would certainly be unwise to act on our premises and goals uncritically in all contexts. There will be situations where there is little to gain from acting on a certain premise or goal and much to lose, and in such situations it will be prudent to refrain from doing so. This will not necessarily involve abandoning the premise or goal in question. We might continue to endorse it and to use it as a basis for
action in less momentous contexts. It is just that in these cases the desire to act on it would be overridden by other desires (for health or security, say). In effect, the premising machine would suffer a temporary breakdown as the mechanisms which translate its states into action ceased to operate. Of course, it will be important not to make a habit of failing to act upon our premises and goals – to do so would be to erode our commitment to them, undermining the foundations of the premising machine. But occasional failures will not be too serious. (Indeed, it is plausible to think that premising policies come with built-in exception clauses – that when we adopt a premise or goal for use in our deliberations, we tacitly exclude those deliberations where it would obviously be dangerous or absurd to rely on it.) When failures of the premising machine occur, we often respond by making the reasons for caution explicit – making appropriate revisions, qualifications, or additions to our premises and goals, and so getting the machine up and running again.

There are, then, two distinct, but compatible, strategies of psychological prediction and explanation available to us. If you know what propositions I acceptp and what goals I pursue, then you could predict my behaviour directly from them. If you know that I acceptp that beef is unsafe to eat and pursue the goal of avoiding food that is unsafe to eat, then you will predict that I shall avoid eating beef. Alternatively, you could make the same prediction on the basis of my strand 1 beliefs and desires concerning my acceptancesp and goals. Thus, if you know (1) that I am highly confident that I am committed to premising that beef is unsafe to eat and to pursuing the goal of avoiding food that is unsafe to eat, (2) that I am highly confident that this premise–goal pair warrant the avoidance of beef, and (3) that I attach a high desirability to adhering to my premising policies and thus to performing any actions warranted by my premises and goals, then, again, you will predict that I shall avoid eating beef. Each predictive strategy would have its advantages. The acceptancep–goal strategy would be quicker and easier; the strand 1 strategy more comprehensive and reliable. (It would predict a wider range of actions and could predict failures of the premising machine, due to interfering factors at the strand 1 level.)
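The two predictive routes can be set side by side schematically. The sketch below is illustrative only: the string representations of contents, the particular threshold of 0.9, and the function names are assumptions of mine, and real strand 1 prediction would draw on a far richer body of states.

```python
# Two compatible routes to the same prediction. Strategy 1 reads the
# prediction off the agent's premises and goals; strategy 2 derives it
# from strand 1 attitudes *about* the premising policies.

def predict_from_premises(premises, goals):
    if "beef is unsafe to eat" in premises and "avoid unsafe food" in goals:
        return "will avoid eating beef"
    return "no prediction"

def predict_from_strand1(confidence, desirability, threshold=0.9):
    committed = confidence.get("committed to premising that beef is unsafe "
                               "and to the goal of avoiding unsafe food", 0)
    warranted = confidence.get("this premise-goal pair warrants avoiding beef", 0)
    adhering = desirability.get("adhering to my premising policies", 0)
    if min(committed, warranted, adhering) >= threshold:
        return "will avoid eating beef"
    return "no prediction"   # e.g. a temporary breakdown of the machine

# Both strategies agree in the normal case:
print(predict_from_premises({"beef is unsafe to eat"}, {"avoid unsafe food"}))
print(predict_from_strand1(
    confidence={
        "committed to premising that beef is unsafe "
        "and to the goal of avoiding unsafe food": 0.95,
        "this premise-goal pair warrants avoiding beef": 0.95,
    },
    desirability={"adhering to my premising policies": 0.95},
))
```

Note that only the second route leaves room for a 'no prediction' outcome when one of the realizing attitudes is too weak, mirroring the point that the strand 1 strategy can predict failures of the premising machine.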
There will be a similar choice of explanatory strategies available to you. In retrospectively explaining my actions you might refer either to my acceptancesp and goal pursuits or to my strand 1 beliefs and desires concerning those states. Again, there will be a trade-off: acceptancep–goal explanations will be easier to formulate but not always appropriate (not all actions are the product of conscious reasoning from premises and goals), while strand 1 explanations, though always appropriate, will often be complex. But doesn't the existence of the second explanation undermine the first? If an action can be adequately explained in terms of the agent's strand 1 beliefs and desires, doesn't this mean that it is redundant to offer an acceptancep–goal explanation for it as well? No – no more than the fact that an action can be explained in physical terms means that it is redundant to offer a belief–desire explanation for it. The two are pitched at different levels. Our acceptancesp and goals come to have the role they do in the aetiology of action precisely in virtue of our strand 1 beliefs and desires concerning them. We might think of the different explanations as forming part of a layered framework of explanatory levels – physical, biological, intentional. Just as some events, in addition to having physical and biological explanations, also have belief–desire explanations, so some have yet a fourth kind – an acceptancep–goal explanation. I believe that this outline of the role of acceptancep and goal pursuit in action provides the basis for a robust response to the Bayesian challenge. I shall spell this out, and deal with some objections to it, in the next chapter.

3.4 Dennett compared

The account of the premising machine developed here has some similarities to Dennett's account of the Joycean machine. Both are models of the conscious mind, and both represent it as a 'virtual' mind, formed through the exercise of certain personal skills – in particular, that of inner verbalization. I want to argue, however, that the premising model is superior to Dennett's, which, as we saw in the previous chapter, has some serious weaknesses.

The first problem for Dennett concerned conscious reasoning. Joycean processes are associationist in character, and it was unclear how they could be constrained to instantiate sequences of coherent reasoning. The premising model, by contrast, provides an explanation of this. The key claim is that premising involves the exercise, not only of verbal skills, but also of meta-representational and inferential ones. On this view, we not only speak to ourselves, but also adopt attitudes towards our inner verbalizations and perform explicit inferential operations upon them. In particular, we treat some as expressions of premises and goals and manipulate them so as to construct chains of reasoning, using a variety of inferential skills. Conscious reasoning can thus be genuinely computational, though the
computations in question are carried out intentionally, at a personal level. Note again that I am not claiming that the second-order attitudes which guide our premising activities are themselves conscious. Typically they are not – indeed, we might find it hard to say what they are. This is compatible with our nonetheless possessing them and with their playing an important role in shaping the direction of our conscious linguistic thinking.8

The second problem for Dennett concerned the role of conscious thought in the production of action. I pointed out that it was implausible to identify conscious thoughts with Joycean processes, since the latter can influence action only indirectly, by evoking conditioned responses, while conscious thought can have a much more direct and reliable role in the generation of action. Again, the premising model offers a better account. On this model, conscious thoughts are not self-stimulations, but the inputs to, and outputs from, episodes of conscious reasoning, and they influence action in a way that reflects this. If we calculate that our premises and goals mandate the performance of a certain action, then we shall be moved to perform the action precisely because we think that they mandate it. It is true that on this view, as on Dennett's, conscious thoughts depend for their effectiveness on our lower-level attitudes towards them. There is an important difference, however. On Dennett's model, the efficacy of conscious thought depends on our responding appropriately to self-instruction or self-encouragement, and we may not always do this. On the premising model, by contrast, it depends on our having settled meta-cognitive attitudes – in particular, a general desire to adhere to our premising policies. Provided these attitudes are in place, we shall be reliably and consistently disposed to act on the outputs of our conscious practical reasoning.

The third and final problem for Dennett was that his model offered no account of standing-state conscious belief and consequently no account of those conscious episodes which involve the formation or activation of such states. Again, the premising model fares better. Standing-state acceptancesp are premising policies, and the associated occurrent thoughts are either decisions to adopt premising policies or recollections of previously formed ones, prior to the employment of their contents in conscious reasoning.
8 Note that while associationist processes have a limited role in the process of conscious reasoning itself, they may play an important role in determining its topic and scope – for example, in determining which problems we focus on and which argumentative patterns we take an interest in. (Thanks to Chris Hookway for this point.)
I conclude that some central features of the conscious mind are better accounted for on the premising model than on the Joycean one. This is not to deny that Joycean processes occur. As I said in the previous chapter, I agree that some of our conscious mental life involves self-stimulatory activities of the sort Dennett describes. But much of it does not. Indeed, the paradigm instances of conscious thinking – conscious reasoning and decision-making – do not. These consist of premising activities.

3.5 Acceptancep and meta-representational belief

There is another comparison I want to make before closing this chapter. As we have seen, acceptancep involves possession of strand 1 beliefs with meta-representational content – beliefs about propositions or representations of them. Other writers, too, have argued that there is a kind of belief, or belief-like state, which is distinguished by meta-representational content. Keith Lehrer also uses the term 'acceptance' for such a state. Acceptance in Lehrer's sense is an 'amalgamation' of a first-order belief or representation with a positive higher-order evaluation of it and has a functional role in inference and action related to theoretical matters (Lehrer 1991). According to Lehrer, acceptances are usually made for some specific purpose and, unlike beliefs, they can be actively formed (1975, 1986). Dan Sperber, too, has argued for the existence of a class of reflective beliefs which have meta-representational content. To have the reflective belief that p is to believe that p is true or well warranted. Such beliefs, Sperber claims, are acquired by conscious inference or the verbal communication of complex ideas, and are typically employed in conscious, deliberate thinking (Sperber 1996, 1997).

Now, these states are different from acceptancep. Although acceptancep does constitutively involve meta-representational attitudes, the attitudes in question are different from those involved in acceptance and reflective belief. Acceptingp that p does not involve believing that p is true or well warranted, but that one has a policy of taking it as a premise. And the acceptancep state cannot be identified with this meta-representational belief. The latter is, rather, just one element in the acceptancep's realizing base – together with a desire to adhere to one's premising policies and beliefs about how to discharge them. It is, of course, consistent with this that higher-order evaluations will often play a role in the formation of acceptancesp; we may decide to adopt a proposition as a premise precisely
because we believe that it is true or well warranted. But the resulting acceptancep state does not consist in a belief of this kind.

Do we need to recognize the existence of acceptance and reflective belief, in addition to acceptancep? It is certainly possible to have a strand 1 belief to the effect that a certain proposition is true or well warranted. However, it is implausible to identify the states Lehrer and Sperber describe with strand 1 beliefs of this kind. From a functional point of view, those states are much more like acceptancep – the product of decision and conscious inference, and associated with theoretical inquiry and conscious deliberate thinking. It may be, then, that the intuitions which point to the existence of acceptance and reflective belief should really be seen as relating to acceptancep. There is another consideration in favour of this view. It is implicit in Lehrer's and Sperber's accounts that the states they describe have a role in inference and action like that of first-order belief (see, for example, Lehrer 1990b, p. 35; Sperber 1996, pp. 87–9). Accepting or reflectively believing that p involves adopting a higher-order attitude towards p, but the premise these states supply to inference is simply p. The embedded content is stripped out for use in inference, and some account is needed of how this happens. There is no similar puzzle with acceptancep: the meta-representational attitudes underlying the state motivate personal inferential activities directed upon the embedded proposition. Acceptancep theory thus supplies something that is lacking in the others. I suspect, then, that there is no need to posit meta-representational attitudes of the kind proposed, and that when we speak of a conscious active doxastic state that is responsive to reflective evaluation, it is acceptancep that we have in mind. (It is, of course, possible for an acceptancep itself to have meta-representational content; but that is a different matter.)

Conclusion and prospect

This completes my account of the premising machine. It may be worth reminding the reader of the status of the account and how it was derived. The aim was to provide a model for the strand 2 mind, as part of the process of fleshing out the two-strand theory of mind sketched in chapter 2. This model needed to have various features. It was crucial that it should: be compatible with an austere Bayesian view of the strand 1 mind; allow for the possibility of active belief adoption and active inference; accord language a cognitive role; and reflect our intuitions about the nature and efficacy of conscious thought. The account developed is, I claim, the best
one for the job. That is to say, the claim that we run premising machines in our heads is the best explanation of how we could have minds of the strand 2 type. The next chapter will make all this more explicit, showing how we can apply the premising model to develop our two-strand theory. I shall start by arguing that strand 2 beliefs and desires are states of the premising machine, thus transforming our original two-strand framework into a two-level one, with the second level realized in the first. I shall then show how this way of developing the theory supplies robust responses to the challenges outlined at the beginning of chapter 3 – a theoretical fruitfulness which will form further evidence for the existence of the premising machine.

I shall conclude this chapter by introducing some new terminology. I claim that the premising machine constitutes a level of mentality, so it will be appropriate to have a mentalistic term for it. In earlier work, I referred to it as the 'virtual mind' and to strand 2 belief as 'virtual belief' (see Frankish 1998b). This terminology was intended to capture the idea that the premising machine is a softwired feature of the mind, formed through acquiring and applying premising skills. The terminology has drawbacks, however. 'Virtual' can suggest 'not real', and the term 'virtual belief' has already been appropriated to refer to a kind of tacit belief (Crimmins 1992). Instead, I shall refer to the premising machine as the supermind, and to the non-conscious mind as the basic mind – terminology which reflects the idea that the states of the former supervene on those of the latter. (In fact, I make the stronger claim that they are realized in them, but since realization can be thought of as logical supervenience, the terminology is not inapt.) My claim, then, will be that the human mind, as pictured by a reconstructed folk psychology, consists of two levels – a non-conscious basic mind consisting of strand 1 states and processes, and a conscious supermind consisting of acceptancesp and goal pursuits, of which strand 2 beliefs and desires are subspecies. Supporting these levels there will almost certainly be a level of sub-personal cognition (for neatness' sake we might call it the 'sub-mind'), about whose nature folk psychology is silent. The relations between these various levels and the neurological states and processes which ultimately underpin them all are depicted in figure 3.

Figure 3 The structure of the human mind:
Supermind (Conscious)
  ⇓ realized in
Basic mind (Non-conscious)
  ⇓ realized in
Sub-mind (Sub-personal cognition)
  ⇓ realized in
Neurology
(Folk-psychological descriptions and explanations are pitched at the supermind or basic-mind level; folk psychology involves no claims about the nature of the sub-personal and neurological levels.)
5 Superbelief and the supermind

This chapter shows how we can apply the two-level framework developed in the previous chapter to flesh out the two-strand theory of mind outlined earlier. The first part argues that strand 2 beliefs and desires are premising policies, belonging to what I called the supermind. The second part then shows how this view enables us to provide robust answers to the challenges posed at the start of chapter 3. The final part of the chapter discusses the functions of the supermind and the benefits its possession confers.

1 Superbelief and superdesire

I have suggested that strand 2 beliefs and desires are acceptancesp and goal pursuits respectively, and I shall now develop and defend that suggestion. The bulk of this part of the chapter is devoted to strand 2 belief, which raises some difficult issues.

1.1 Doxastic and non-doxastic acceptancep

We saw in chapter 3 that there is a strong case for regarding strand 2 belief as a form of acceptancep. We also saw, however, that we cannot simply identify strand 2 belief with acceptancep, since a proposition can be acceptedp without being believed. For example, a lawyer – call her Ally – might acceptp that her client is innocent for professional purposes without actually believing that they are. (We might say that she has a professional belief in her client's innocence, but no more.) Acceptancep can be, as I shall put it, non-doxastic. It is compatible with this, however, that some acceptancesp are beliefs: the class of acceptancesp might divide into two subclasses – one of non-doxastic attitudes, like Ally's, the other of genuinely doxastic ones, which can be identified with strand 2 beliefs. This possibility is often neglected in the literature on the premising conception of acceptance. Writers on the topic tend to focus on the non-doxastic form of the state
and on the features that distinguish it from genuine belief (see Bratman 1992; Cohen 1992; Engel 1998). And though most acknowledge that these features are not essential to acceptancep – that acceptancep may possess the distinguishing features, not that it must – few have much to say about the possibility of a genuinely doxastic form of the state.1 In what follows I shall aim to correct this. I shall approach the task by looking first at non-doxastic acceptancep – taking Ally's professional belief as a paradigm case – and trying to identify exactly what it is that disqualifies it from beliefhood. I shall then propose that we identify strand 2 beliefs with those acceptancesp that lack the disqualifying feature or features. The issue here – which properties are compatible with beliefhood and which are not – is a broadly conceptual one, and I shall address it by consulting our semantic intuitions – by asking which states we would, and which we would not, ordinarily be prepared to class as beliefs. The relevant intuitions are, naturally, those concerning strand 2 belief, and I shall focus on those, though for convenience I shall speak of 'belief' tout court.

We cannot be sure, of course, that this procedure will yield a clear-cut result – that the class of acceptancesp will divide cleanly into doxastic and non-doxastic varieties. Our semantic intuitions may not be sharp enough. Nor, even if they are, is there any guarantee that the resulting taxonomy of acceptancesp will be a theoretically interesting one. In these cases the right move might be to stipulate: either to make the concept of belief more precise, or to introduce a new and better-motivated taxonomy of acceptancesp. I would not be opposed to such a move. I think that philosophers of mind have focused too narrowly on belief, ignoring the broader and more fundamental category of acceptancep. And I suspect that a developed taxonomy of acceptancep will recognize a variety of categories in addition to those of doxastic and non-doxastic. All the same, I do not think that the concept of strand 2 belief is redundant, and I shall aim to show that it picks out a class of acceptancesp that is both well defined and theoretically interesting.

Finally, let me stress that the claim I am making is the product of abductive inference, not conceptual analysis. I characterized strand 2 belief in broadly functional terms, and, as I explained at the end of the previous chapter, the claim that our strand 2 beliefs are premising policies is offered as the best explanation of how we could have states with this functional profile consistently with our also having strand 1 beliefs.
1 An exception is Engel, who explicitly connects the two states in his more recent writing (Engel 1999). See also Stalnaker 1984, ch. 5.
(Of course, there is a completely separate question as to how premising policies are constituted – a question addressed in the previous chapter.) So I do not rule out the possibility that in other creatures beliefs of the strand 2 kind could be realized in a different way. (In fact, I am tempted to make a stronger claim. I suspect that on any other view of how strand 2 beliefs are realized it would be very hard to explain how they can be actively formed and how they can involve natural language – questions which, as we shall see, can be neatly answered on the hypothesis that they are premising policies. This is merely speculation, however, and I shall not pursue the question further here.)

1.2 False starts

What, then, disqualifies some kinds of acceptancep from the status of beliefhood? I shall begin by looking at three features of acceptancep commonly cited as distinguishing it from belief: absence of truth-directedness, susceptibility to voluntary control, and context-relativity (Cohen, Bratman, and Engel all stress these).2 I shall consider each in turn.

It is common to say that belief is truth-directed – that it aims at the truth (Williams 1970). Professional beliefs, such as Ally's, on the other hand, aim at utility or practical success, and thus do not qualify as beliefs in the strict sense. We need to be careful, however. For there is a sense in which professional beliefs are truth-directed: they involve treating their contents as true – at least for some purposes, in some contexts. They have, as I shall put it, a cognitive orientation to the world. So if beliefs are distinguished by their truth-directedness, then there must be more to truth-directedness than cognitive orientation.3
2 Another feature sometimes cited is lack of subjection to an ideal of integration: we should aim to make our beliefs mutually consistent, but have no similar duty to ensure the consistency of our acceptancesp (see Bratman 1992; Engel 1998). This is, however, merely a consequence of the context-relativity of acceptancep. We can acceptp incompatible propositions in different contexts, because what we acceptp in one context is kept separate from what we acceptp in another. I assume that we have a duty to ensure the consistency of propositions acceptedp within a single context.
3 David Velleman makes the same point, in slightly different terminology, in his 2000, ch. 11. As Velleman points out, the widely used phrase mind-to-world direction of fit implies both cognitive orientation and truth-directedness. For a state with content p to have mind-to-world direction of fit is for it to represent p as obtaining and to be subject to failure if p does not obtain. See also his 1992.
What more? Well, to begin with, there is a normative aspect to the claim. One thing we mean, in saying that belief aims at truth, is that it has truth as a norm of correctness – that belief is misplaced if it is directed upon a content that is not true. And in this sense a professional belief such as Ally's does not aim at truth. Although Ally treats it as true that her client is innocent, she does not care – at least in her professional capacity – whether or not they really are innocent, and would not regard her attitude as misplaced if they were not. (I assume that there is nothing unethical in a lawyer's acceptingp their client's innocence for the purposes of defending them. I take it that the job of a lawyer is not to present the truth, but to present the best possible case for their client. It is compatible with this, of course, that their activities contribute to a larger process which has truth as its aim.) So, it seems, one thing that debars attitudes like Ally's from beliefhood is their lack of truth-directedness.

As far as it goes, this is, I think, perfectly correct. But it does not so much solve our problem as redescribe it. The problem now becomes that of saying why Ally's professional belief does not have truth as a norm of correctness – why it counts as a correct non-doxastic acceptancep, rather than an incorrect belief. What is it about a belief that makes it truth-directed in the normative sense? One possible answer, advocated by David Velleman, appeals to aetiological considerations (Velleman 2000, chs. 1 and 11). Beliefs have truth as a norm of correctness, Velleman argues, because they are formed and maintained by mechanisms whose function is to track the truth:

Belief aims at the truth in the normative sense only because it aims at the truth descriptively, in the sense that it is constitutively regulated by mechanisms designed to ensure that it is true ... If a cognitive state isn't regulated by mechanisms designed to track the truth, then it isn't a belief: it's some other kind of cognition. (Velleman 2000, p. 17)
This is not to say that all beliefs are true, of course; the regulatory mechanisms may misfire. But it explains why they ought to be true; if a belief is false, then the regulatory mechanisms have not done their job properly. On this view, then, one thing that distinguishes Ally's attitude from belief is that it was not produced in the right way; it was formed in response to considerations of professional duty, not truth.

The problem with this view, however, is that it is simply not true that a state counts as belief only if it is regulated by truth-tracking mechanisms. It may be the case that beliefs are typically the product of such mechanisms, but folk psychology countenances the existence of rogue beliefs, produced and sustained by processes that are not designed to track the truth. We find nothing incoherent in the idea that some of our beliefs are sustained by emotional pressures, wishful thinking, hypnotism, or other irrational influences. Indeed, it is coherent to claim that many of our beliefs are of a rogue kind – the work, say, of a malicious demon intent on deceiving us. It may be objected that rogue belief formation must exploit the mechanisms of normal belief formation, and thus that rogue beliefs are at least indirectly the product of truth-tracking mechanisms. I see no reason to concede this, however. It is conceivable that a demon might implant beliefs directly into our minds, by-passing all the normal mechanisms of belief formation. Of course, rogue beliefs might not be able to survive detection – once we recognize a rogue belief for what it is, we may be moved to eradicate it – but they might survive undetected for long periods. If this is right, then regulation by truth-tracking mechanisms cannot be necessary for beliefhood. And if truth-directedness in the normative sense is necessary for beliefhood, then it cannot depend on regulation by truth-tracking mechanisms. So we have not yet identified the fundamental difference between Ally's attitude and belief – what it is that makes her attitude a respectable non-doxastic acceptancep, rather than a rogue belief.4

A second putative difference between non-doxastic acceptancep and belief is that the former, unlike the latter, is under voluntary control. Ally can decide to acceptp that her client is innocent for professional purposes, but cannot decide to believe that they are. So, it seems, susceptibility to voluntary control is one thing that distinguishes her attitude from belief. This is too swift, however. We need to be clear about what it means to say that a propositional attitude is under voluntary control. If it means that we can actively adopt the attitude, by deciding to do so, then I hold that strand 2 belief is under voluntary control. I set out some reasons for this view in chapter 2 and shall defend it later in this chapter. If, on the other hand, we mean that we can adopt the attitude irrespective of truth considerations, then I agree that belief is not under voluntary control. We cannot decide to believe whatever we like. However, it is not clear that this reflects any conceptual truth about belief.

4 Another reason for denying that regulation by truth-tracking mechanisms is necessary for beliefhood is that the claim appears incompatible with first-person authority. We normally take sincere first-person avowals of belief to be authoritative. Yet we are not authoritative about the processes which sustain our beliefs. For example, I believe that my best friends are honest – and, indeed, have good reason to think they are. But I am not sure that these reasons are what sustain my belief. For all I know, it may be sustained by emotional factors and might persist even if the reasons were to vanish. Yet in entertaining this possibility I am not entertaining the possibility that my attitude to my friends' honesty is not one of belief. It seems, then, that the status of my attitude as a belief cannot be dependent on facts about the processes that sustain it.
For, as I stressed above, we countenance the existence of rogue beliefs that are formed and sustained without regard to their truth. It is not clear, then, that susceptibility to voluntary control, even in this second sense, is incompatible with beliefhood, though there may be reasons why belief is not in fact susceptible to it. I shall return to these issues later in this chapter.5

The third feature sometimes cited as distinguishing non-doxastic acceptancep from belief is context-relativity. Belief, it is claimed, is context-independent. Or, more precisely, non-indexical belief is. It is possible to believe that it is hot here in one context and not in another, since the thought determines different propositional contents in different contexts. It is also possible to adopt different attitudes to the same proposition when it is presented to us under different linguistic guises. (To take a famous example, I might accord belief to the proposition expressed by the sentence 'Superman can fly', while withholding it from that expressed by 'Clark Kent can fly', even though these propositions are – arguably – identical.) But if I believe a certain propositional content under a certain mode of presentation, then I believe it under that mode of presentation in all contexts, not just when I am engaged in certain activities or inquiries. Ally's professional belief, by contrast, is not context-independent in this way: she acceptsp that her client is innocent when acting in her professional capacity as a lawyer, but does not acceptp the same proposition under the same mode of presentation when acting as a private citizen. So its context-relativity sets her professional belief apart from the genuine article. A genuinely doxastic acceptancep, by contrast, would have to be context-independent.

I think that this is on the right lines, but the claim will not do as it stands. First, we need to be clear about what kind of contexts are involved here. They are not external ones – social or environmental. Ally need not acceptp her client's innocence at every moment she is physically present in court, nor need she cease to acceptp it the moment she leaves. She may plan her client's defence while at home and think about personal matters while in court – acceptingp her client's innocence in one case, but not in the other. The contexts involved, then, are not external ones, but internal deliberative ones. Now, given this, what does it mean to say that belief is context-independent? The obvious answer is that it means that beliefs are available for use as premises in all deliberative contexts. But then it is not true that belief is context-independent.

5 For further discussion of these issues and of the distinction between different kinds of voluntary control, see my 'Deciding to believe again' (in preparation).
Suppose that as well as acceptingp that her client is innocent, Ally also believes that they are guilty. Clearly, this cannot mean that in all deliberative contexts she will take it as a premise that they are guilty, since in deliberating about how to defend her client, she will take it as a premise that they are innocent. Thus, her belief is not available as a premise in all deliberative contexts and is not completely context-independent. It may be objected that Ally will retain a standing-state belief in her client's guilt in all contexts, even while deliberating about how to defend them. But there is no asymmetry here with her professional belief. For Ally does not repudiate her acceptancep of her client's innocence when deliberating about personal matters. She remains committed to taking it as a premise in her professional deliberations, and we can think of this commitment as constituting a standing-state acceptancep of her client's innocence, which Ally retains in all contexts. We cannot, then, distinguish belief from acceptancep simply by saying that one is context-independent, the other not.

1.3 Belief as unrestricted acceptancep

None of the features considered so far affords a clean-cut distinction between the non-doxastic and doxastic forms of acceptancep, and I want to propose a slightly different approach. The fundamental difference between the two, I want to argue, concerns the deliberations in which they are employed, though it is not best characterized in terms of context-relativity and context-independence. Rather, I shall speak of restricted and unrestricted acceptancep. I shall say that an acceptancep is restricted if it is available only in deliberations of a certain type, and that it is unrestricted if it is available by default in all deliberations. The phrase 'available' is shorthand here. In saying that an acceptancep is available in deliberations of a certain type, I mean that its possessor is disposed to take its content as a premise in relevant deliberations of that type. (This is not to say that they will actually take it as a premise in every such deliberation: they may fail to recall it or to recognize that it is relevant.) Thus Ally's acceptancep of her client's innocence is restricted since she is disposed to take its content as a premise only in her professional deliberations. I say that unrestricted acceptancesp are available by default in all deliberations, since I want to allow that their availability is defeasible. There are two aspects to this.6 First, an unrestricted acceptancep will not be available in deliberations where an incompatible restricted acceptancep is available. That is, restricted acceptancesp take priority over unrestricted ones (if they did not, then there would be little point in making them). For example, suppose that Ally has an unrestricted acceptancep of her client's guilt. This will be available in all her deliberations except her professional ones – for the purposes of which she has formed the restricted acceptancep that her client is innocent. However, many of her other unrestricted acceptancesp – concerning the law, legal procedure, the facts of the case, and so on – will be available in her professional deliberations, since she has formed no incompatible restricted acceptancesp on these topics.7 Secondly, an unrestricted acceptancep may be suspended in a certain class of deliberations, even if no incompatible restricted acceptancep has been made. For example, in her professional deliberations Ally might suspend her unrestricted acceptancep of her client's guilt, even if she has not formed a restricted acceptancep of their innocence.

6 The rest of this paragraph draws in part on ideas in Bratman 1992, pp. 9–11.
7 It may be objected that Ally might invoke her unrestricted acceptancep of her client's guilt in her professional deliberations. For example, she might reason that, since her client is in reality guilty, they will probably be unable to answer certain questions convincingly, and thus that she had better avoid those questions in her cross-examination. (I owe this example to Chris Hookway.) The moral of this, I think, is not that Ally's unrestricted acceptancep is available for use in the same deliberations as her restricted one, but rather that the deliberations in which her restricted acceptancep is available need to be specified more precisely. We might identify them with those in which Ally regards herself as professionally bound to take her client's innocence as a premise. (These, in turn, might be identified with those which could be openly avowed in court.) In general, it may not always be easy to specify the range of deliberations in which an acceptancep is available; indeed, the agent may not be consistent in the matter. This does not, however, undermine the core distinction between restricted and unrestricted acceptancep.
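This default-and-override structure can be made concrete. The following is my own toy formalization of the availability rules just stated – the names, the crude 'not-' treatment of incompatibility, and the example propositions are all mine, for illustration only, not part of the original account:

```python
# Toy model of premise availability: restricted acceptances take priority
# over incompatible unrestricted ones, and unrestricted acceptances can be
# suspended in particular classes of deliberation.
from dataclasses import dataclass, field
from typing import List, Optional, Set

def negation(p: str) -> str:
    # Crude incompatibility: 'not-P' is incompatible with 'P'.
    return p[4:] if p.startswith("not-") else "not-" + p

@dataclass
class AcceptanceP:
    content: str
    restricted_to: Optional[Set[str]] = None        # None = unrestricted
    suspended_in: Set[str] = field(default_factory=set)

def premises(acceptances: List[AcceptanceP], deliberation: str) -> List[str]:
    """Contents the agent is disposed to take as premises in a deliberation."""
    restricted = [a.content for a in acceptances
                  if a.restricted_to is not None and deliberation in a.restricted_to]
    unrestricted = [a.content for a in acceptances
                    if a.restricted_to is None
                    and deliberation not in a.suspended_in
                    and negation(a.content) not in restricted]  # priority rule
    return restricted + unrestricted

# Ally's situation as described in the text:
ally = [AcceptanceP("not-client-guilty", restricted_to={"professional"}),
        AcceptanceP("client-guilty"),                  # unrestricted
        AcceptanceP("facts-of-the-case")]              # unrestricted, unopposed
assert premises(ally, "professional") == ["not-client-guilty", "facts-of-the-case"]
assert "client-guilty" in premises(ally, "personal")
```

The `suspended_in` field corresponds to the second kind of defeasibility above: an unrestricted acceptancep bracketed in a class of deliberations even where no restricted rival has been formed.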
Now, restricted acceptancep is not belief. We would not say that a person believed that p if they were prepared to take it as a premise only in a restricted set of deliberations. Belief, as we commonly understand it, is just not compartmentalized in that way. So here is one thing that distinguishes Ally's professional belief from genuine belief – it is restricted, not unrestricted. Can we then identify belief with unrestricted acceptancep? This would seem to get the range of uses right. We rely on our beliefs in all deliberations except those where we have made some countervailing restricted acceptancep. There is a problem, however. For it seems possible to acceptp something in an unrestricted way without actually believing it. For example, suppose that a highly insecure person – call him Woody – has been persuaded by his therapist to engage in a policy of positive thinking, which involves taking it as a premise in reasoning on all relevant topics that he is a confident and capable person.
Woody might adopt this policy – thereby unrestrictedly acceptingp that he is confident and capable – yet without actually believing that he is confident and capable. So belief is not the same as unrestricted acceptancep. What distinguishes Woody's attitude from genuine belief? It might be suggested that it is motivation: his acceptancep was motivated by therapeutic concerns, not epistemic ones. We have already ruled this out, however. For, as we saw in the last section, the doxastic status of an attitude is not dependent on facts about its origin. (For all I know, many of my beliefs may be motivated by non-conscious therapeutic concerns.) In fact, I think that the answer is more straightforward. It is that Woody's acceptancep is not completely unrestricted. For there is a particular class of deliberations in which Woody would not rely on his therapeutic acceptancep. The deliberations in question are defined, not by their topic, but by the agent's criteria for the selection of premises. They are ones where their sole or overriding criterion is truth. I shall refer to such deliberations as truth-critical with respect to premises, or TCP for short. Consider, for example, a scenario where Woody is required to predict his own behaviour in various imaginary situations, and where there are large penalties for false predictions (the accuracy of the predictions to be verified by unspecified but highly reliable means). If he is rational, Woody will want to take only truths as premises in the ensuing deliberation, and will not take it as a premise that he is a confident and capable person – assuming he has no reason to think that proposition true. So his therapeutic acceptancep is not completely unrestricted, since it does not extend to TCP deliberations such as this. Only if an acceptancep is available in such deliberations, I suggest, does it qualify as a belief. Availability in TCP deliberations is a necessary condition for belief.

Moreover, I want to claim that it is a sufficient condition, too. The only thing that disqualifies Woody's acceptancep from beliefhood is that it is not available for use in TCP deliberations. But couldn't an acceptancep be available in TCP deliberations and yet be restricted in other ways, and so fail to qualify as a belief? I do not think so. Any proposition which is available in TCP deliberations will also be available by default in all others – that is, available in all deliberations save those where it has been suspended or where an incompatible restricted acceptancep is available. For even in non-TCP deliberations we usually want most of our premises to be true. (A TCP deliberation, by contrast, is one where we want all of them to be true.) This reflects the fact that our reasoning is typically about the real world, rather than a fantasy world.
(Indeed, even fantasy worlds are constructed against a background of reality: we take them to be like the real world except in those respects where they are specified to differ; see, for example, Lewis 1978.) So acceptancesp that are available in TCP deliberations will be available by default in all other deliberations too – that is, will be unrestricted. (This explains, incidentally, why I describe belief as unrestricted, rather than as restricted to TCP deliberations. For beliefs are available both in TCP deliberations and, by default, in non-TCP ones. A therapeutic acceptancep such as Woody's, on the other hand, although widely available in non-TCP deliberations, is not available in TCP ones.)8

Some points of clarification and qualification. First, why say that the crucial deliberations here are ones that are truth-critical with respect to premises? Why not say that they are ones that are truth-critical with respect to their goal (TCG) – that is, where the aim is to arrive at only true conclusions? Would this not do as well? No. For we may acceptp a proposition in the course of truth-directed theoretical inquiry without actually believing it – as, for example, when we pursue a reductio argument or continue to acceptp the best available empirical theory even though we know that it will probably turn out to be false. That is, availability in TCG deliberations, unlike availability in TCP ones, is not sufficient for belief. (Nor, indeed, is it necessary. We may not only fail to believe some of the propositions we acceptp in theoretical inquiry, but actually believe their contradictories.)

Secondly, in claiming that an acceptancep counts as a belief only if it is available in TCP deliberations, I do not mean that we must actually draw a distinction between TCP and non-TCP deliberations in order to possess beliefs – still less that we must do so consciously. Rather, my claim is that, if we do distinguish between TCP and non-TCP deliberations, either consciously or non-consciously, then an acceptancep counts as a belief only if we are disposed to rely on it in deliberations which we regard as TCP. I shall assume that if an agent does not distinguish between TCP and non-TCP deliberations (say, because they lack the conceptual resources to do so), then all their acceptancesp that are not otherwise restricted count as beliefs.

8 Bratman outlines a picture similar to this one. He says that an agent's beliefs provide the default cognitive background for deliberation, to which adjustments can be made in particular contexts, either by bracketing believed propositions or by accepting additional ones that are not believed (Bratman 1992). Note, however, that Bratman's notion of acceptance is rather different from mine, being closer to what I call implicit acceptancep (see chapter 4 above).
I shall also assume that classifications of deliberations as TCP or non-TCP are usually made at a non-conscious level. Thus, when I speak of a person desiring to take only truths as premises in some deliberation, the desire in question should generally be understood to be of the non-conscious, strand 1 kind.

Thirdly, I shall take it that claims about the availability of acceptancesp are relativized to relevant deliberations. So when I say that an unrestricted acceptancep is available in TCP deliberations, I mean in relevant ones (or, more precisely, in ones the agent regards as relevant). The main factor in determining relevance will, of course, be content, but other factors may be involved, too. In particular, I want to mention accuracy. Consider a case outlined by David Clarke (Clarke 1994, 2000). Suppose I measure my table and find it to be 3 feet 11 3/16 inches long. Yet suppose, too, that in some deliberations – say, when calculating how much cloth to buy to cover the table – I accept (in the everyday sense) that the table is 4 feet long. Clarke notes that this might be cited as an example of acceptance without belief, but goes on to argue that this would be to misdescribe the case. Acceptances, he claims, are made to within an implied degree of accuracy. So what I accept when buying the cloth is that the table is 4 feet long plus or minus an inch – which is consistent with the prior measurement and which I believe to be true. Now, Clarke's comments concern our everyday notion of acceptance, but I shall assume that acceptancesp, too, are made to within an implied degree of accuracy, and that they are relevant only in those deliberations where a similar degree of accuracy is assumed. So in the case described by Clarke I could legitimately be said unrestrictedly to acceptp, and thus to believe, both that the table is 3 feet 11 3/16 inches long and that it is 4 feet long, provided that these acceptancesp have different implied degrees of accuracy and that I rely on each in those TCP deliberations where the corresponding degree of accuracy is assumed.
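The consistency claim here is a matter of simple arithmetic (the check is mine, not Clarke's or the original text's):

\[
3\,\text{ft}\ 11\tfrac{3}{16}\,\text{in} = 47\tfrac{3}{16}\,\text{in} \approx 47.19\,\text{in},
\qquad
4\,\text{ft} \pm 1\,\text{in} = [47\,\text{in},\,49\,\text{in}],
\]

so the measured length falls (just) inside the interval implied by the 4-foot acceptance, and I can coherently rely on either figure, depending on the degree of accuracy the deliberation assumes.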
Fourthly, it needs to be emphasized that in order for an acceptancep to count as a belief it is not necessary for its possessor to treat it as certain – to be prepared to rely on it in TCP deliberations no matter what the risks. As I pointed out in the previous chapter, there will be occasions when it would be foolish to rely on our acceptancesp and when prudential considerations will override our desire to fulfil our premising commitments. For example, suppose I believe that the gun in my desk drawer is unloaded. And suppose I am now offered a small sum of money for taking the weapon, aiming it at the head of a loved one, and pulling the trigger. In deciding whether or not to accept this offer, I might, quite reasonably, refrain from relying on the proposition that the gun is unloaded. I shall take it, however, that this in itself would not be incompatible with continued belief in that proposition – provided I remained committed to taking it as a premise and did not hesitate to rely on it in other TCP deliberations where there was less at stake. What would be incompatible with continued belief, I assume, would be my taking some other, incompatible, proposition as a premise in TCP deliberations in preference to it.

It may be objected that there are counter-examples to this last claim. For example, in the situation described I might choose to err on the side of caution and take it as a premise that the gun is loaded, though still believing that it is not. Or I might consider the chances of my being mistaken about the gun and make an explicit calculation of the risks involved. Neither case, however, would constitute a genuine counter-example. In choosing to err on the side of caution I would be ceasing to regard the deliberation as TCP – ceasing to have truth as my overriding criterion for the selection of premises – and the deliberation would therefore no longer be a criterial one for belief possession. And in reflecting on my belief about the gun and the chances of its being mistaken, I would be moving to a meta level where the first-order belief itself was no longer relevant. This deliberation, too, would therefore not be a criterial one for possession of the belief.

With these qualifications and clarifications, I conclude that beliefs (strand 2 beliefs, that is) are unrestricted acceptancesp. Note that this conclusion vindicates the claim that strand 2 beliefs form a clearly defined and theoretically interesting class. We have already seen that the class can be clearly defined. It is true that there will be borderline cases – cases where we take a proposition as a premise in some relevant TCP deliberations but not in others – but most will fall squarely within or without the class. The theoretical interest of the class follows from the fact that our unrestricted acceptancesp form, in Bratman's phrase, the default cognitive background to our conscious reasoning – the set of propositions which, unless we have special reason to do otherwise, we take as premises in our conscious practical and theoretical reasoning (Bratman 1992). Information about a person's strand 2 beliefs will thus provide the starting point for anyone wanting to predict their conscious actions and inferences.

There is an alternative way of expressing the claim that beliefs are unrestricted acceptancesp. Recall Woody. Woody has been persuaded to take it as a premise in reasoning on all relevant topics that he is a confident and capable person.
Let us say that he has adopted that proposition as a general premise and that he has a general acceptancep of it (general since it is not restricted to deliberations on particular topics). Yet this general acceptancep does not extend to TCP deliberations. Why not? Why is Woody not disposed to rely on the premise that he is confident and capable in relevant TCP deliberations? If he has a policy of taking that proposition as a premise in reasoning on all relevant topics, then he should be at least inclined to take it as a premise in relevant TCP deliberations. What blocks this inclination? The answer, of course, is that in TCP deliberations Woody's overriding criterion for selection of premises is truth, and he has no reason to think it true that he is confident and capable. That is to say, what prevents him from relying on his general acceptancep in TCP deliberations is that he does not have enough confidence in its truth. If his confidence had been sufficiently higher, then he would have gone ahead and relied on it. The upshot of this is that a general acceptancep will be unrestricted if, and only if, its possessor has high confidence in it – high enough for them to be willing to rely on it in TCP deliberations. So another way of expressing our conclusion would be to say that strand 2 belief is general acceptancep plus high confidence: strand 2 belief is highly confident general acceptancep.
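In an obvious shorthand (the notation is mine, not the author's): writing GA_s(p) for 's has a general acceptancep of p', c_s(p) for s's degree of confidence in p, and θ_s for the threshold above which s is willing to rely on a premise in TCP deliberations,

\[
\text{Believes}_2(s, p) \iff GA_s(p) \,\wedge\, c_s(p) \geq \theta_s .
\]

The two conjuncts correspond to the premising element and the confidence element whose interplay is exploited below in the discussion of conjunctive closure.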
Finally, a word about tacit and implicit strand 2 beliefs. These, too, can be identified with a subclass of the corresponding forms of acceptancep, defined by their relation to TCP deliberations. Thus, one counts as having a tacit strand 2 belief that p if, despite not having previously considered the matter, one would immediately treat p as a premise in TCP deliberations, were one to consider it. And one counts as having an implicit strand 2 belief that p if one takes p for granted in TCP deliberations, manipulating one's explicit premises in ways that assume its truth.9

9 Why not treat attitudes such as Ally's and Woody's as simulated beliefs, rather than as independent forms of acceptancep? (See, for example, Recanati 2000.) There are two main reasons. First, Ally and Woody are not simulating belief, if by that we mean mimicking the inferential processes associated with belief. The premising policies they follow differ in scope from belief and have their own independent purpose. Secondly, to describe their attitudes as simulated beliefs would be to suggest that belief is the primary category and non-doxastic acceptancep a derivative; and that is, I think, wrong. Both belief (of the strand 2 kind) and non-doxastic acceptancep are subspecies of the more fundamental category of acceptancep. There is also a terminological reason for avoiding talk of simulation: the notion of belief simulation as it figures in much recent psychological literature is of a sub-personal process, whereas premising is a personal one.
1.4 Elucidations and objections

The account just outlined sheds light on some of the issues discussed earlier, concerning truth-directedness and the role of pragmatic factors in belief formation (again, 'belief' here means 'strand 2 belief'). I want to make three points. First, the account does not rule out the possibility of rogue beliefs, formed and sustained by processes that are not sensitive to truth considerations. Whether or not an acceptancep counts as a belief depends on its cognitive role – restricted or unrestricted – not on its origins.

Secondly, the account explains why belief has truth as a norm of correctness – why it aims at truth in the normative sense. Believing that p involves being disposed to take p as a premise in TCP deliberations – that is, in deliberations where we want to take only truths as premises. So believing that p involves being disposed to do something which we want to do only if p is true. So if we are rational, we shall want to believe that p only if p is true. That is to say, we shall regard belief as having truth as a norm of correctness. On this view, then, the claim that belief aims at truth follows from the general constraints of practical rationality. The alternative formulation reinforces the point. Believing that p involves having high confidence in p, and this confidence will be misplaced if p is not true – even if it is justified, given the available evidence.

Thirdly, the account clarifies the role of pragmatic motives in belief formation. If the account is correct, then pragmatic motivation is not incompatible with beliefhood. Believing a content p involves taking p as a general premise and having high confidence that p is true. And while pragmatic motives cannot make us more confident of p (or should not, if we are rational), they can induce us to adopt p as a general premise. So if we antecedently have high confidence in p, then pragmatic motives can induce us to adopt p as a general premise, and thereby to form the belief that p. Thus although pragmatic motives cannot generate belief on their own – we cannot believe anything we please – they can nonetheless play a crucial role in belief formation. I shall say more about this later in the chapter.

I think the features just noted are attractive ones. I turn now to some possible objections to the account. First, isn't the notion of strand 2 belief that has emerged a gerrymandered one – part premising policy, part state of confidence? And doesn't this undermine the distinctness of the two strands of belief I have sought to distinguish? No: my claim is that strand 2 beliefs are premising policies and strand 1 beliefs degrees of confidence. It is true that, on this account, high confidence is a possession condition for the particular type of premising policy that constitutes a strand 2 belief, so strand 2 belief will not be completely independent of strand 1. But this is not a new claim. As I emphasized in the previous chapter, our premising policies are realized in our strand 1 beliefs and desires. The confidence element which distinguishes those premising policies that count as beliefs from those that do not is just another element in this realizing base.
Secondly, isn't my account open to the same objection I raised against the confidence view of flat-out belief, outlined in chapter 3? Consider again the paradox of the preface. An historian – call him Andy – has compiled a long factual work. Suppose that Andy believes every statement in his book, taken individually: he has adopted each of them as a general premise and has – justifiably, given the available evidence – high confidence in each of them. Now, if rational belief is closed under conjunction, then Andy ought also to believe the conjunction of these statements, which is equivalent to the claim that none of the statements is false. Suppose, however, that he does not believe that claim. For he has – again, justifiably, given the available evidence – very low confidence in it. So, on this account, rational belief is not closed under conjunction, and we must abandon conjunctive closure. Yet in chapter 3 I argued against abandoning it and rejected the confidence view because it required its abandonment.

My response here is to deny the soundness of the argument. The confidence view requires us to abandon conjunctive closure, since it treats high confidence as both necessary and sufficient for belief, and thus makes it impossible for a rational agent to adhere to conjunctive closure, given that rational confidence depletes over conjunction. But the present account treats high confidence only as necessary for belief, and is consequently compatible with respect for conjunctive closure. Consider Andy again. It is true that he cannot make it the case that he has high confidence in the conjunction of statements in his book (or if he could, ought not to do so, given the evidence available to him). But he can still bring his beliefs into line with conjunctive closure. For he can change his attitude to the conjuncts. Again, he cannot adjust his confidence in them (it would need lowering this time), but he can cease to take them as general premises, and thus cease to believe them. Or, perhaps more usefully, he can modify their contents, by attaching explicit probabilities to them.

But surely, if conjunctive closure holds, then Andy ought not to believe any of the individual claims in the first place? If he is committed to conjunctive closure, then in assenting to those claims, he would be committing himself to assenting to their conjunction. And if he knows that he would be unable to do that, then he should withhold belief from at least some of the conjuncts. In other words, making retrospective adjustments to one's belief set, when violations of conjunctive closure occur, is not enough; we should anticipate such violations and withhold assent from propositions whose conjunctions we could not believe.10 And since rational confidence is not preserved over conjunction, it follows that a rational agent will have a very small set of beliefs indeed.

10 Thanks to Chris Hookway for drawing my attention to this point.
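How quickly rational confidence depletes is worth seeing in figures (the numbers are mine, purely illustrative). If Andy's book contains n = 500 statements, each held with confidence 0.99, and the statements are treated as roughly independent, his rational confidence in their conjunction is

\[
P\Big(\bigwedge_{i=1}^{500} s_i\Big) = \prod_{i=1}^{500} P(s_i) = 0.99^{500} \approx 0.0066,
\]

which is why high confidence in each conjunct can coexist, quite properly, with very low confidence in the conjunction.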
Now, this is really an objection to conjunctive closure itself, rather than to the present account. For in fact it seems quite reasonable to believe a set of propositions without being prepared to believe their conjunction – without, that is, being prepared to believe that none of one's beliefs is false. Yet dropping conjunctive closure also has unpalatable consequences, as I explained in chapter 3. In the end, I suspect, we shall have to accept that there is a genuine tension in our attitudes here and that we cannot hold on to all of our pre-theoretical intuitions. (One option would be to relativize conjunctive closure to our epistemic needs – to say that we are committed to believing a conjunction if we have some epistemic need to make a judgement on it, but not otherwise.) I shall not pursue this matter further here. Note, however, that the existence of a tension in our attitudes to conjunctive closure is in fact a strong point in favour of the account of belief proposed here. For the account explains it – explains why it seems both reasonable and unreasonable to hold beliefs like Andy's. According to the account, believing that p involves both having high confidence in p and being committed to premising that p. And depending which element we focus on, we get a different conclusion about Andy. If we focus on the confidence element, then we conclude that Andy ought not to believe the conjunction, since rational high confidence is not preserved over conjunction. If we focus on the premising element, on the other hand, then we conclude that Andy ought to believe the conjunction, since he is committed to taking its conjuncts as premises in classical reasoning, and one thing that involves is being prepared to conjoin them. Thus, far from being embarrassed by the tension in our attitudes to conjunctive closure, the present account actually predicts it.

Thirdly, is it true that belief always involves high confidence? Take Andy again. Isn't there a sense in which he does believe the conjunction of claims in his book? After all, he would be prepared to read the book aloud, sincerely asserting each sentence and implicitly conjoining it to the preceding ones. Wouldn't this performance manifest a belief in the conjunction of claims in the book?11

11 I owe this example to Peter Carruthers.
I concede this. I think that there is a sense in which Andy believes the conjunction of claims in his book. But it is a different sense from the one I have been trying to articulate, and one that is less central to folk usage. Andy's attitude to the conjunction is, I suggest, one of restricted acceptancep – restricted to deliberations conducted in the context of disinterested theoretical inquiry. In those deliberations Andy acceptsp each claim in the book and is prepared to acceptp their conjunction too – that is, to defend the work as a whole. Theoretical acceptancep of this kind is similar to flat-out belief as understood by defenders of the assertion view, except that it involves premising its content in theoretical inquiry, rather than merely affirming it. And as with that attitude, deciding which propositions to acceptp in this way will involve trading off informativeness against probability – some highly informative propositions being theoretically acceptablep even though improbable. Now, acceptancep of this kind is not belief in the strict sense, since it does not extend to TCP deliberations; but I think that it qualifies for the title in a looser sense, in virtue of the fact that, unlike genuinely non-doxastic acceptancep, it has truth as an appropriate dimension of assessment. Other things being equal, we want our theoretical acceptancesp to be true, although some improbable propositions are theoretically acceptablep in virtue of their informativeness. We might call this attitude theoretical belief.

It may be helpful at this point to have a summary of the different forms of acceptancep that have been distinguished and the relations between them. This is provided in figure 4.

1.5 Desire and pursuit of the goal

I turn now to desire. Here we can be much more brief. I suggested that strand 2 desires are goal pursuits, just as strand 2 beliefs are acceptancesp. I have not spelled out the case for this view in detail, but it is parallel to the one for belief. In the context of the proposed two-strand theory, the hypothesis that strand 2 desires are goal pursuits is the best explanation of how we could have states with their distinctive functional profile.
[Figure 4 Varieties of acceptancep (the brackets indicate the deliberations in which the named acceptancep is available). The diagram orders the varieties from less to more restricted: general acceptancep with high confidence (superbelief), general acceptancep without high confidence (Woody), theoretical acceptancep (Andy), professional acceptancep (Ally), and other restricted acceptancesp; the deliberation brackets shown are TCP deliberations, theoretical deliberations, professional deliberations, and other deliberations. More restricted acceptancesp take priority over less restricted ones.]
There is, moreover, a straightforward argument for the identification of strand 2 desires with goal pursuits, based on the relation between the latter and acceptancesp. Goal pursuit stands to acceptancep as desire to belief – each pair interacting in the same way in practical reasoning. So if strand 2 beliefs are acceptancesp, then strand 2 desires must be goal pursuits.

But do all goal pursuits count as desires, or only some? That is, can there be goal pursuit without desire as there can be acceptancep without belief? If so, then a paradigm case should be a professional desire. For example, suppose that for professional purposes Ally has adopted the goal of securing her client's acquittal, even though she believes they are guilty and as a citizen would like to see them imprisoned. In the terminology introduced for acceptancep, she is engaged in a restricted pursuit of the goal of securing her client's acquittal and an unrestricted pursuit of the goal of seeing them imprisoned (unrestricted goal pursuits, like unrestricted acceptancesp, being available for use in all deliberations except those where they have been suspended or where incompatible restricted ones are available). And by analogy with acceptancep, only the latter should count as a genuine desire.12

12 Unrestricted acceptance is defined by availability in TCP deliberations. What are the parallel deliberations for desire? They are, I suggest, ones where the subject's criterion for the selection of optative premises (goals) is intrinsic desirability – where the subject wishes to take only intrinsically desirable states of affairs as goals. (By this I mean states that are subjectively intrinsically desirable – intrinsically desirable to the subject, rather than according to some independent norm. For discussion of the aim of desire, see Velleman 1992.)
The analogy does not hold, however. For folk psychology would not refuse the title of desire to Ally's restricted goal pursuit. We might, it is true, say that she does not really desire her client's acquittal – does not desire it for its own sake. But we would nonetheless be prepared to describe her attitude as an instrumental or extrinsic desire – a desire which subserves some more fundamental desire. That is, although folk psychology makes a distinction among goal pursuits corresponding to that between beliefs and non-doxastic acceptancesp, it treats it as a distinction between types of desire, rather than a distinction between desires and other non-desiderative forms of goal pursuit. ('Extrinsic' does not mean the same as 'restricted', of course, but the class of extrinsic desires will be broadly co-extensive with that of restricted goal pursuits. Extrinsic desires will be available only in deliberations where their pursuit furthers the ends they subserve – and will thus be restricted in my sense. Conversely, restricted goal pursuits will typically serve some further end – why else would one form them? – and will thus count as extrinsic.) In short, the folk concept of desire is more flexible than that of belief (we do not recognize instrumental or extrinsic varieties of belief), and we can straightforwardly identify strand 2 desire with goal pursuit.

To sum up: strand 2 beliefs are unrestricted acceptancesp, intrinsic strand 2 desires are unrestricted goal pursuits, and extrinsic strand 2 desires are restricted goal pursuits. Strand 2 beliefs and desires are thus supermental states – states of the supermind, realized in basic-level mental states and actions. At last we can drop the terminology of strand 1 and strand 2. Since strand 2 beliefs and desires are supermental states, I shall refer to them as superbeliefs and superdesires. Similarly, I shall refer to strand 1 beliefs and desires as basic beliefs and desires.13

13 Note that 'superbelief' is not synonymous with 'strand 2 belief', though it is co-referential, at least in the human case. By 'superbelief' I mean 'unrestricted acceptancep', and, as I explained earlier, the claim that strand 2 beliefs are acceptancesp is the product of abductive inference, not conceptual analysis. Note, too, that my use of the term 'basic belief' is quite different from its epistemological use, as a term for beliefs that are epistemically foundational. To say that a belief is basic, in my sense, is simply to say that it is a state of the basic (strand 1) mind; there is no implication about its epistemic status.
2 Challenges met

This part of the chapter shows how thinking of strand 2 beliefs and desires as supermental states can resolve the challenges set out in chapter 3. The answers are implicit in what has gone before, but I shall spell them out in detail and address some objections.

2.1 Meeting the Bayesian challenge

I begin with the Bayesian challenge. This concerns the rationality of action motivated by flat-out beliefs. How can it be rational to act upon flat-out beliefs and to ignore degrees of confidence? Indeed, how can it be possible, given an austere view of partial belief? If actions are precisely things that have explanations in terms of the agent's partial beliefs and desires, then how can flat-out belief have any influence at all upon action? The same goes for flat-out desire: how can it be rational to act upon flat-out desires, and how can it be possible, given an austere Bayesianism?

Now, this challenge assumes that motivation by flat-out beliefs and desires is incompatible with motivation by partial ones – that actions cannot be simultaneously motivated both by flat-out states and by partial ones, and thus cannot be simultaneously responsive both to the norms of classical practical reasoning and to those of Bayesian decision-making. But if flat-out beliefs and desires are supermental states, then this assumption is false. For as we saw in the previous chapter, supermental states influence action in virtue of our basic beliefs and desires about them. We act on our premises and goals because we attach a high desirability to adhering to our premising policies and to performing the actions they dictate. Thus in acting upon our superbeliefs and superdesires we are not departing from Bayesian norms, but adhering to them, and the resulting actions can be justified both on classical grounds, as dictated by our superbeliefs and superdesires, and on Bayesian grounds, as warranted by our probabilities and desirabilities. For the same reason, belief in the efficacy of flat-out belief and desire is compatible with an austere view of the basic mind. Austere Bayesianism dictates that all intentional actions will, as a matter of conceptual necessity, have explanations in Bayesian terms. But, again, on the present view, this is compatible with some of them also having classical explanations in terms of flat-out superbeliefs and superdesires.
Superbelief and the supermind Bayesian challenge is defused. What goes for superbelief goes for nondoxastic acceptancep too, of course. Since our non-doxastic acceptancesp become effective in virtue of our underlying basic attitudes, belief in their efficacy is compatible with a commitment to austere Bayesianism. Let us look at a couple of objections to this account. First, is it not highly counter-intuitive to claim that some of our actions have two intentional explanations – and even more so to claim that one of these explanations cites beliefs about premising policies and their commitments? Surely the reason I forgo the Spaghetti Bolognese is simply that I believe it is unsafe – not that I believe that my premising policies commit me to forgoing it? I concede that these claims are counter-intuitive. But then they would be counter-intuitive if the general thesis developed here is true. For the thesis is that we mistakenly suppose there to be just one kind of belief, when in fact there are two. And, given this, we shall simply never look for dual intentional explanations. Moreover, where an action does have a dual explanation, we shall be extremely unlikely to hit on the basic one first. For there will always be a much simpler and more intuitive one available, citing first-order superbeliefs and superdesires – in the restaurant case, ones about Spaghetti Bolognese and beef. (Of course, we shall not describe this explanation as a supermental one – we shall speak simply of beliefs and desires – but we shall be referring to superbeliefs and superdesires.) Not only will this explanation be simpler and more intuitive than the basic one, but it will be the one the agent would avow if questioned – reflecting the content of their conscious reasoning. In short, we shall systematically overlook the existence of those basic explanations of actions that cite beliefs about premising policies. It is not surprising, then, that we find the claim that such explanations exist counter-intuitive. Secondly, is this account really compatible with an austere view of basic belief? For on an austere view, all there is to possessing a basic belief is having the appropriate behavioural dispositions, as revealed by behavioural interpretation. And it is not clear that behavioural interpretation will license attributions of the postulated beliefs about premising policies. Take my change of order in the restaurant. Surely, interpretational constraints would require you to ascribe to me the simplest basic beliefs and desires that could adequately explain my behaviour – in this case, ones about Spaghetti Bolognese, beef, and health, not ones about premising policies and their commitments? This is too hasty. It is important to remember, first, that the interpreter does not aim to make sense of isolated actions, but of patterns of activity extending over time, and, 145
Mind and Supermind secondly, that the various mental events and actions involved in the formation and execution of premising policies will also figure among the data for interpretation. So in the restaurant case what we have to explain is not just that I change my order, but that I do so after consciously thinking to myself that Spaghetti Bolognese contains beef and that beef is unsafe, and consciously calculating that these premises warrant a change of order – these episodes themselves being part of a larger complex of conscious private thinkings and calculatings. When this wider context is taken into account, I claim, the simplest interpretation of my action will be one that adverts to beliefs of the kind mentioned. The objector may respond that interpreters can take account only of publicly observable behaviour, and that from this perspective there will be no grounds for crediting me with beliefs about premising policies. As far as my overt behaviour goes, it will be impossible to tell whether I superbelieve that beef is unsafe or simply believe it strongly at the basic level. There are two responses to this. First, it is not true that beliefs about premising policies will have no observable manifestations. They will, for example, indirectly reveal themselves in our reports of our conscious mental lives. Thus in the restaurant case the data for interpretation might include, not only that I change my order, but also that I say that I changed it as a result of consciously thinking to myself that Spaghetti Bolognese contains beef and that beef is unsafe. Given the right context, the best interpretation of an utterance like this may be one which refers to premising operations and which involves the ascription of beliefs about premising policies and their commitments. Secondly, even if it were true that the postulated beliefs had no behavioural manifestations, it would not follow that they did not exist. On an austere view, beliefs consist in behavioural dispositions, and the existence of a behavioural disposition does not depend on its being detected, or even detectable. If there can be undetectable behaviour, then there can be undetectable behavioural dispositions. One thing the previous objection does underline, however, is that I am assuming a realist view of the private mental events involved in the formation and execution of premising policies – episodes of inner speech, judgement, conscious inference, and so on. And it may be objected that this assumption is in tension with my commitment to an austere view of the basic mind. After all, austere theorists are anti-realists about internal mental events and processes. This objection misses the point, however. For the events mentioned are not internal to the basic mind, but manifestations of it – basic-level actions. It is true that these same actions are internal to the 146
Superbelief and the supermind operations of supermind – they are stages in the processing of superbeliefs – but I have not advocated an austere view of the supermind. (Indeed, as we shall see in the following chapters, one of the great attractions of the theory developed here is that it allows us to combine an austere view of the basic mind with a rich view of the supermind.) Still, it may be objected that the very notion of a private mental act is a dubious one. With no public criteria for their successful performance, such acts are, many people feel, of doubtful ontological standing. Now, it may be wise to be suspicious of essentially private acts of the sort posited by some philosophers of action – acts of volition or willing. But the private acts invoked here – acts of policy adoption, argument construction, self-interrogation, and so on – are of a more familiar and unobjectionable kind. They are not essentially private, but internalized or abbreviated versions of overt actions, mostly of a linguistic kind, and may involve slight physiological changes similar to those which occur when the fullblown versions are performed. Even such a thoroughly austere theorist as Dennett recognizes the existence of private behaviour of this kind – including inner verbalization, calculation, and visualization (Dennett 1991a, p. 197). Before closing, I want to add some remarks on rationality. On the view outlined here an action can be assessed for rationality at two different levels and in accordance with two different sets of norms: at the supermental level, in accordance with the norms of classical practical reasoning, and at the basic level, in accordance with the norms of Bayesian decisionmaking. Now, given an austere view of the basic mind, our actions are guaranteed to be rational at the basic level, since attributions of basic attitudes are constrained by an assumption of rationality. They are not, however, guaranteed to be rational at the supermental level. In working out the consequences of a set of premises and goals we may go astray. We may misapply an inference rule, or apply an invalid one, or assume that the response to a self-interrogation is correct when in fact it is not. Indeed, we may go wrong blatantly and systematically. And if we act upon the conclusions of such faulty reasoning, then the resulting actions, qua supermental, will be irrational. But are these claims compatible? Is the possibility of supermental irrationality compatible with the impossibility of basic-level irrationality? The answer is yes. For the basic-level motives for an action will be very different from the supermental ones. The basic-level explanation will appeal to the agent’s confidence that the action was warranted by their premises 147
Mind and Supermind and goals and to the desirability they attach to adhering to their premising policies. If these attitudes are sufficiently strong, then the action will be justified on Bayesian grounds, even if it is not in fact warranted by the relevant premises and goals. Of course, in such cases the agent’s confidence that the action is warranted will be misplaced, but it may nonetheless be justified, given the evidence available to them – including the past reliability of their conscious reasoning. There is another possible objection here. I suggested that one way we may go wrong in our conscious reasoning is by relying uncritically on selfinterrogation. When we ask ourselves what follows from a set of premises, the answer that comes to mind may be wrong. But since such answers are produced by non-conscious processes, does it not follow that there can be irrationality at the basic level, contrary to the austere view I have advocated? The objection is natural, but rests on a conflation of the two strands I am distinguishing. Suppose I consciously entertain a range of premises, interrogate myself, and then consciously acceptp whatever comes to mind as a conclusion. And suppose, too, that this conclusion is not in fact warranted by the premises. Now we can think of this sequence of events as constituting a supermental inferential process. The inference is unwarranted, so I am guilty of irrationality at the supermental level. But it does not follow that the production of the answer cannot be rationalized at the basic level. The error may be attributable to misinformation – about the meaning of the premises, say, or about their inferential powers. Or it may reflect the influence of pragmatic considerations. I may, for example, have a strong basic desire to acceptp a certain false conclusion at the supermental level, which justifies its production in response to the self-interrogation. I conclude that a commitment to austerity at the basic level is compatible with the existence of irrationality at the supermental level. This hints at a powerful application for the present theory. For there are notorious difficulties in reconciling an austere view of the mind with the possibility of irrationality, including self-deception and akrasia (see, for example, Davidson 1982). These difficulties reflect the restrictions of a single-level view of the mind, and the two-level theory outlined here promises a solution. I shall return to these issues in chapter 8. 2.2 Meeting Williams’s challenge I have claimed that we can actively adopt beliefs through one-off acts of judgement or making up of mind. I called this claim activism. Williams’s 148
challenge was to defend activism in the face of a powerful argument against it. The argument went like this. Actions are things that are responsive to practical reasons. So if we could actively adopt beliefs, then we would be able to adopt them for practical reasons. But if we could adopt beliefs for practical reasons, then we would be able to adopt them at will: with strong enough pragmatic motivation we would be able to believe anything we wanted, regardless of truth considerations. And we cannot do this – indeed, according to Williams, it is necessarily the case that we cannot. So we do not have the power to adopt beliefs actively.

Now, one way to rebut this argument would be to deny that belief adoption can be pragmatically motivated. Some defenders of activism are tempted by this, suggesting that belief adoption might be responsive to distinctively theoretical reasons, just as other actions are responsive to distinctively practical ones (see Montmarquet 1986; Walker 1996). But if the suggestion is that acts of belief adoption are responsive only to theoretical reasons, then it is mysterious. How could an action be responsive to some kinds of reasons and not to others? It may be replied that the expressions we use to refer to the active adoption of beliefs – 'judging', 'making up one's mind', and so on – contain an implicit reference to aim. To judge a proposition true is to adopt it as an object of belief with the aim of thereby acquiring a true belief. So by definition judgement is motivated by theoretical reasons. This may be correct, but it does not disarm the anti-activist argument. For why could we not adopt a proposition as a belief with some other aim in view? Granted, we might not be prepared to call the act a judgement, but what we call it is neither here nor there. The question is whether or not it would be possible, if activism were true, and I can see no reason to deny that it would.

There is another way to block the anti-activist argument. This is to concede that belief adoption can be pragmatically motivated, but deny that it can be practised at will – that is, without regard to truth considerations. Having a high degree of confidence in a proposition might be an enabling condition for the act of forming a belief in it – no matter how that act was motivated. But, again, how could this be so? What would there be to stop us from adopting beliefs in which we had low confidence? Suppose I have very strong pragmatic motives for believing that my boss's jokes are funny, even though I have never found them so. Then if I were able to adopt beliefs actively, why could I not go ahead and adopt the belief that my boss's jokes are funny? Of course, if I reflected on what I was doing, I would recognize that I was violating a fairly basic epistemic norm, to
the effect that one should not believe things without reason to think them true. But this would not necessarily deter me from adopting the belief. For if the truth or falsity of the belief were of no particular consequence to me, and the pragmatic motives for having it strong, then I might decide to overlook the epistemic impropriety and adopt it anyway.

It is here that the present theory offers a solution. Superbelief is unrestricted acceptancep; to superbelieve that p is to have a policy of taking p as a default premise in all relevant deliberations, including TCP ones. Now, we can actively adopt a policy of this kind, by committing ourselves to it. And we can make this commitment for practical reasons. We cannot, however, make it towards any proposition at all. For in order to execute the policy we would have to be prepared to take the target proposition as a premise in TCP deliberations – that is, in deliberations where our overriding criterion for selection of premises was truth. And we have very little choice as to what we are prepared to take as premises in such deliberations. If my overriding criterion for selection of premises in a certain deliberation is truth, then, assuming I am rational, I shall be prepared to take proposition p as a premise in that deliberation only if I regard p as very likely to be true – or at any rate more likely than any of the alternatives. The constraint here is simply one of practical rationality. No matter how much it might be to my advantage to believe p, it cannot be to my advantage to take p as a premise in deliberations where I want to take only truths as premises, unless I think that p is likely to be true. Thus, unless I have high confidence in a proposition – high enough to render me willing to take it as a premise in TCP deliberations – I shall be unable to sustain a policy of acceptingp it unrestrictedly. And I shall thus be unable to make a serious commitment to such a policy. (Even if I did not realize that I would be unable to sustain the policy, experience would soon teach me otherwise, rendering any commitment to it futile.) High confidence will thus be an enabling condition for unrestricted acceptancep, and hence for superbelief adoption. This, of course, reflects our alternative characterization of superbelief as highly confident general premising.

It may be objected that pragmatic motives might override our desire to take only truths as premises in TCP deliberations, and so enable us unrestrictedly to acceptp propositions in which we have low confidence. Now, it is certainly true that pragmatic considerations may induce us to take improbable propositions as premises in deliberations where we would otherwise have preferred to take only highly probable ones. So, for example, a desire to please my boss may induce me to take it as a
premise in my everyday reasonings that he is witty and amusing. But this is just to say that pragmatic considerations may lead us to reclassify certain deliberations as non-TCP ones. And the reclassification will not in itself alter the doxastic status of our premises. Whether or not we superbelieve a proposition depends on whether or not we are prepared to take it as a premise in those deliberations we regard as TCP – whichever these happen to be. And while pragmatic factors may affect which deliberations we regard as TCP, they will not alter our readiness to use a given proposition as a premise in those we do regard as such.[14]

I conclude that if we restrict activism to superbelief, then Williams's challenge can be rebutted. Superbeliefs can be adopted actively and for pragmatic reasons, but cannot be adopted without regard to their truth. (Does this mean that they can be adopted 'at will'? It depends on what we mean by 'at will'. If we mean 'without regard to truth considerations', then the answer is no. If we mean 'for pragmatic reasons', then the answer is yes.)

Finally, a note on pragmatically motivated belief adoption. Is this just a theoretical possibility, or something we actually practise? It is not easy to tell, since the states that motivate acts of belief adoption will typically be non-conscious ones. I suspect, however, that pragmatic motives do often play a role. Frequently, we do not make up our minds about a matter, even though we have enough evidence to do so, until pragmatic factors force a decision. We can be highly confident that something is true without having actually made up our minds that it is – that is, without having committed ourselves to a policy of taking it as a premise. We may simply not care whether or not it is true and have no strong desire for true beliefs for their own sake. In such a case it may take pragmatic considerations to induce us to come off the fence and commit ourselves to a flat-out belief on the matter. These motives might be of various kinds – prudential, moral, financial, and so on. But most often, I suspect, they will be connected with the pragmatics of inquiry and practical deliberation. By forming superbeliefs we equip ourselves with premises which we can feed into conscious reasoning, and the desire to engage in such reasoning and to secure the benefits of doing so will thus be a powerful motive for forming superbeliefs.
[Footnote 14: This claim may need some qualification. For example, pragmatic factors may affect our readiness to take epistemic risks, and thus our willingness to adopt premising policies in the first place, or to stick with those we have already adopted.]
2.3 Meeting Fodor's challenge

Fodor's challenge was directed at the cognitive conception of language – the view that natural language is constitutively involved in some central cognitive processes. The challenge was to explain how sentences of inner speech could occupy the causal roles of thoughts, especially if language is a modularized peripheral system. Here we can be brief, since all the argumentative work has already been done in the previous chapter. According to the account developed here, the cognitive role of sentences of inner speech is determined by our personal-level attitudes towards them. A sentence acquires the causal role of a belief or desire when we endorse it and commit ourselves to taking its content as a premise or goal in our conscious reasoning. It is true that it may not be absolutely necessary to articulate our premises and goals linguistically, but it will always be possible, and doing so will facilitate premising activities and extend their range. Indeed, as I noted above, it is plausible to think of premising activities as internalized, self-directed versions of overt linguistic activities. Thus the present account not only shows how the cognitive conception could be true, but renders it plausible that it is true – reinforcing the evidence of introspection.

The account is, moreover, compatible with the claim that the natural-language system is a modularized peripheral one. On this view, the language system is not physically integrated with other cognitive systems; its integration occurs at a personal level – through the attitudes we adopt towards our linguistic productions and the use we make of language in argument construction and self-interrogation.

The account is also compatible with the view that language is a relatively recent development in our evolutionary history. Most writers agree that grammatical language evolved some time within the last quarter of a million years, and some would place its emergence as late as 50,000 years ago. This presents a problem for defenders of the cognitive conception. If language originally had an exclusively communicative function, then there would not have been much time for it to be co-opted to play a cognitive role as well – not, at least, if this would have involved substantial changes to the brain. On the present view, however, this problem is avoided. The supermind is not a hardwired system, but a softwired one, formed by internalizing certain kinds of linguistic behaviour and adopting various attitudes towards our inner verbalizations. And consequently, its evolution would not have involved the emergence of substantial new neural structures. There might,
it is true, have been some adaptations to facilitate premising – for example, we might have developed an innate disposition to engage in inner speech (see chapter 8 for more remarks on this). But the structures involved need not have been very elaborate, and no substantial neural reorganization would have been required. The present account thus supports a version of the cognitive conception which is, from an evolutionary perspective, very attractive.[15]

2.4 The role of consciousness

In addition to resolving the three challenges posed in chapter 3, the account developed here also sheds light on a question raised in chapter 2, concerning the cognitive role of consciousness. As I noted there, it is natural to think that conscious occurrent thoughts influence action precisely in virtue of the fact that they are conscious. Thus, in the example from chapter 2, I changed my route to work precisely because I consciously recalled that roadworks were due to start; the conscious recollection was essential to the belief's efficacy. But it is not clear how this could be. How could my being aware of one of my mental states make a difference to the state's causal role?

The present account provides an answer. If conscious occurrent beliefs are superbeliefs, and if superbeliefs influence action in the way suggested, then conscious recollection will be a precondition for such influence. To superbelieve that p is to be committed to using p as a premise in one's conscious reasoning. And being consciously entertained is a precondition for being employed in conscious reasoning. Superbeliefs can influence reasoning and action specifically as superbeliefs only if they are consciously recalled.

The reader may want to object here. For I claim that our premises and goals influence action in virtue of our non-conscious beliefs and desires about them. Why, then, is conscious recall necessary? If we have a strong non-conscious desire to act upon our premises and goals, and can non-consciously calculate what actions they warrant, then why can we not act upon them without conscious thought? The question is natural, but misrepresents the proposal. The commitment involved in adopting premises and goals is not simply to act upon them, but consciously to work out what actions they dictate and then act upon them. The commitment is to calculate and act, not just to act. Indeed, it is not clear how we could
[Footnote 15: For more discussion of the evolution of language-involving thought, see my 2000.]
commit ourselves to acting as if a proposition were true, without also committing ourselves to consciously working out what actions it dictates. How would we set about discharging the commitment? This is not to deny, of course, that conscious reasoning can exploit non-conscious processes through self-interrogation and minimal premising, or that some premises and goals influence our conscious reasoning in an implicit way, without being themselves consciously recalled. But I do deny that premising can be wholly non-conscious.

Still, it may be objected that this whole story is an incredibly over-intellectualized account of the role of conscious thought. Do we really need to have second-order attitudes to our conscious thoughts in order for them to influence our behaviour? Take my thought about the roadworks: can't it just move me to act – like that? Why must the mechanism run through second-order beliefs about it and its inferential powers? Well, I do not claim that it has to. My claim is at most one of sufficiency: the mechanisms I have described are one way in which our conscious occurrent thoughts could move us to act. It is an empirical matter whether the account is in fact true. The account is more than mere speculation, however; for it is motivated and constrained by a desire to accommodate a number of common-sense intuitions about conscious reasoning, judgement, flat-out belief, and so on.

3 The supermind

My response to the Bayesian challenge rested on the claim that we attach a high desirability to forming and acting upon premising policies. But why should we do this? What is the point of the enterprise? Bayesian decision theory is, after all, a global theory of rational decision-making. If I attach a sufficiently high probability to the proposition that beef is unsafe, then Bayesian theory will dictate that I should act as if it were true. What do I gain by superbelieving it too? The same goes for desire, too, of course. What do we gain by superdesiring a particular outcome, in addition to attaching a high desirability to it? To put it another way, what is the function of the supermind – what benefits does its possession confer? The question invites empirical research, and a full answer is beyond the scope of the present work. However, there are some obvious points that can be made, and the final part of this chapter is devoted to making them.

Note that, since conscious beliefs and desires are supermental states, in thinking about the function of the supermind we are at the same time
thinking about the function of conscious thought. This is, I believe, a helpful way to approach the latter issue. It is often assumed that the distinctive function of conscious thought must stem precisely from its being conscious. This, of course, reflects what I called the unity of belief assumption; it is assumed that conscious and non-conscious thoughts are states of the same kind, distinguished only by the fact that one group is conscious, the other not. Thinking of conscious thoughts as belonging to a distinct kind of mentality offers a different and, I suggest, better perspective.[16]

3.1 Behavioural control

Much of the time we run on auto-pilot, our behaviour guided by non-conscious processes to which we have no direct access. Think, for instance, of the processes that guide one's moment-to-moment behaviour when driving or making one's way through a busy shopping centre. Often this is all for the best – behaviour that is fluid and spontaneous when non-conscious can become awkward and ill-timed when it is the object of conscious thought. And if our non-conscious mental processes were optimal, then we could leave everything to them. We would always act fluidly and spontaneously, without hesitation. But of course they are not optimal. Sometimes we are perplexed, unsure what to do next; sometimes we find ourselves disposed to do something rash; and sometimes we get stuck in a rut, repeating responses that have failed before. Such situations invite supermental activity. We need to switch to manual control and engage in conscious reasoning and decision-making. Often this will involve making up our minds what to think or want – what premises to adopt and what goals to pursue – as a preliminary to deciding what to do.

For example, suppose that you are ordering food in a restaurant. You go through the menu, but find yourself perplexed. It is not that you cannot find anything you like; rather, there are too many things you like, and it is hard to say which of them you like most. But given the pressures of time and etiquette, you have to choose something. So you make up your mind – plump for one of the suitable dishes, adopt the securing of it as a goal, and order it. Even if the choice was not optimal, it was better to
[Footnote 16: It is perhaps worth stressing that I am here treating the supermind simply as the system of superbeliefs, superdesires, and other premising policies. In this sense there is of course a lot more to the conscious mind than the supermind – including perception, sensation, imagination, and self-stimulatory activities of the sort Dennett describes.]
make it than to continue in perplexity.[17] We often have to make similar decisions about what premises to adopt. Is it safe to park here? Is there enough time to call at the office on the way to the station? Is the salesman telling the truth? It is difficult to be sure, but we may have to make up our minds in order to get moving. So here we have a general strategy for overcoming perplexity: identify some relevant propositions, of which we feel reasonably confident, and some relevant goals, which we are content to pursue, and feed them into simple decision procedures, such as the practical syllogism (a schematic instance is given below). With a bit of luck, this will yield a clear prescription which will get us going again. A similar strategy can help when we find ourselves stuck in a rut or disposed to do something rash or inappropriate. Again, we need to take conscious control, analyse the problem, identify some key goals and premises, and use them as a basis for decision.
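To illustrate, here is a schematic instance of the practical syllogism (the example is mine, not the author's):

Goal: to be at the station by 9.00.
Premise: the only way to be at the station by 9.00 is to take a taxi.
Conclusion: so take a taxi.

Once the premise and goal have been explicitly adopted, the simple decision procedure yields a definite prescription, and deliberation can stop.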
[Footnote 17: Dennett uses a similar example to illustrate how the need to give our desires a linguistic form can force us to make them far more specific than they would otherwise have been (Dennett 1981b).]
Note that once we have decided to adopt a particular premise or goal, we shall generally want to stick with it and to use it as a basis for future deliberation. This is not because the decision affects our assessment of the premise's probability or the goal's desirability, but simply because psychological consistency is itself valuable. Continually changing one's mind is a recipe for renewed confusion and perplexity. (Incorrigible vacillators can become incapable of acting.) The need to overcome perplexity and suppress inappropriate responses thus leads us naturally to form premising policies and to adhere to them once formed. Systematically followed through, the practice creates a distinct level of cognition which can be activated whenever non-conscious processes let us down.

3.2 Cognitive control

A second benefit of possessing a supermind is that it gives us control over our own cognitive processes, allowing us to shape their direction and style. We have very little control over our non-conscious minds. Non-conscious cognition is directed to immediate behavioural control, and we can influence its course only indirectly, by controlling the stimuli we receive, as in self-interrogation and other self-stimulatory activities. Nor do we have any control over how non-conscious reasoning is conducted – over the information used or the methods employed. We cannot check for biases or prejudice and cannot evaluate or modify the procedures used. Indeed, there is strong evidence that non-conscious cognition depends on a variety of hardwired modules, which evolved for dealing with specific cognitive tasks and whose operations are highly inflexible (see, for example, Barkow et al. 1992; Hirschfeld and Gelman 1994; Mithen 1996). This is not to deny that we can acquire new non-conscious cognitive abilities. We can, after all, learn to perform complex tasks, such as driving, without conscious thought. But such abilities cannot be acquired directly, as specifically cognitive accomplishments, but only by training ourselves on the relevant behavioural tasks.

It is different with the supermind. We can decide what to think about, and can direct our supermental processes to highly theoretical problems with no immediate behavioural relevance. We can choose when to think, too, and can focus our attention on a problem over long periods of time until we reach a solution that satisfies us. It is true, of course, that our control here is not complete. Thoughts often come to mind unbidden, and tiredness or distraction sometimes prevents us from focusing our minds. But much of the time we can focus them, and many of our most important cognitive achievements result from doing so.

In addition, we can control how we think at the supermental level. We can decide which premises to adopt and which goals to pursue; we can check for prejudice and wishful thinking; and we can adopt special-purpose, context-specific inferential strategies, such as Ally's. We can also evaluate the inferential procedures we use – the rules we follow, the argument forms we regard as valid, and the heuristic devices we employ – and modify them if necessary. If we discover that we have a weakness in some area, then we can set out to correct it by learning new inferential skills – new techniques for processing our premises and goals. Thus, for example, if we are prone to errors in reasoning with conditionals, we can repair the deficiency by learning the truth-table for the material conditional (reproduced at the end of this section). In this way supermental cognition can be debugged, modified, and enhanced. Indeed, if we think of supermental processes as encompassing the manipulation of external aids and artefacts – and I see no reason why we should not – then the scope for enhancement is virtually unlimited. In short, in acquiring a supermind we acquire a highly flexible, general-purpose reasoning system that is open to indefinite improvement and expansion. It is no exaggeration to say that almost all of human science is the product of the supermind. (I say almost all, since there is evidence that some rudimentary scientific knowledge – theory of mind, for example – is innate.)
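For reference, here is the truth-table just mentioned – standard propositional logic, not anything specific to the present theory. The material conditional 'p → q' is false only when its antecedent is true and its consequent false:

p | q | p → q
T | T |   T
T | F |   F
F | T |   T
F | F |   T

Learning the table corrects, among other things, the common tendency to treat 'if p then q' as false whenever p is false.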
3.3 Self-knowledge and autonomy

A third benefit of having a supermind is that it facilitates self-knowledge. As Dennett emphasizes, subjective probabilities and desirabilities are not introspectable features of the mind (Dennett 1978a, pp. 304–5). Degrees of confidence and desirability can be attributed only on the basis of behavioural observation and interpretation, and we are in no privileged position to attribute them to ourselves. Indeed, those who know us well may be better observers and interpreters of our spontaneous behaviour than we are. And this means that we are in no privileged position to predict and explain our own spontaneous actions – those, that is, that are generated without conscious thought. The springs of spontaneous action are, in a sense, hidden from us.

Supermental states and processes are different. They are objects of conscious awareness, and we are well placed to describe them – to say what premises and goals we have adopted, what inferences we have made and what decisions we have taken. (This is not to say that we can never be wrong about our supermental states and processes, though I think that we do have a measure of authority here; I shall say more about this in chapter 8.) And this gives us a special understanding of our actions; as superbelievers we can present reasons to ourselves, and then deliberately act upon them. We can act, not only for reasons, but upon them. Here, then, is another benefit of possessing a supermind: it affords us a privileged knowledge of our own minds and a special understanding of our own actions. And, again, this gives us a motive not only for forming supermental states, but for sticking to those we have formed. Continually changing one's mind not only leads to indecision and perplexity, but also undermines our distinctive form of self-knowledge.[18]

As well as affording us self-knowledge, the supermind may also be important to our autonomy and personhood. Several writers have claimed that freedom of the distinctively human kind requires the ability to reflect
[Footnote 18: Both de Sousa and Dennett locate the source of our privileged self-knowledge in our ability to form flat-out, language-involving doxastic attitudes (de Sousa 1971; Dennett 1978a, ch. 16). However, since they think of these states simply as dispositions to assert, this knowledge is of a fairly trivial kind. If all I know, in knowing that I believe that p, is that I am disposed to assert p in answer to the question 'P or not-p?', then what I know is of limited value. If our privileged self-knowledge concerns superbeliefs, on the other hand, then it is of a much more substantial kind, since these states guide action and inference.]
on, evaluate, and to some extent control our mental states and processes (see, for example, Frankfurt 1971, 1987; Lehrer 1980; Pink 1996; Price 1969). Similarly, it has been claimed that being a fully responsible epistemic agent involves being able to reflect on one's beliefs and the norms to which they are subject (see, for example, Burge 1996; McDowell 1998). Although these accounts do not require a two-level framework, they harmonize nicely with the one proposed here. To sustain a supermind one needs much the same abilities as those putatively linked to autonomy – one needs to be able to think about propositions, to adopt reflective attitudes towards them, and to apply rational norms in the construction of arguments. And, given this, it would be natural to see the supermind as the arena in which human autonomy manifests itself. On this view, then, rather than requiring reflection on existing mental processes, autonomy would involve the activation of a distinct level of mentality which constitutively involves the exercise of reflective abilities. Linking autonomy to supermentality in this way also provides a simple explanation of how we can exercise reflective control over our mental states and processes. On a single-level view, this would involve somehow accessing and overriding sub-personal processes.[19] On the proposed two-level view, by contrast, it simply involves regulating the personal activities which constitute the supermind.

This is all I shall say here about the functions of the supermind. There is, of course, a lot more to say on the topic, and scope for empirical research. But enough has been said, I think, to justify the claim that we have a strong motive for forming and executing premising policies.

Conclusion and prospect

This completes the development of my two-level theory of mind. It will be useful to have a name for the theory – I shall refer to it as 'supermind theory'. We have already seen how the theory can resolve some deep tensions in folk psychology, vindicating the existence of a kind of belief that is conscious, occurrent, flat-out, and language-involving, while at the same time doing justice to the attractions of an austere Bayesianism. In the next two chapters I shall move on to consider some further folk-psychological commitments, and show how they, too, can be vindicated
[Footnote 19: Price, for example, speaks of our having 'the power of intervening, consciously and rationally, in our own mental processes, and of altering the course they take' (Price 1969, p. 230).]
by the theory. Throughout, I shall assume an austere Bayesian view of the lower level; but, as I indicated earlier, this part of the theory is detachable. We could jettison the commitment to austerity, while still retaining the general two-level framework and the particular account of the supermind as a premising machine realized in lower-level non-conscious states and processes. The change would somewhat reduce the theory's explanatory power, but would not substantially undermine the proposed applications.
6 Propositional modularity

We have seen how supermind theory can vindicate some problematic elements of folk psychology – among them, the claims that belief can be flat-out, language-involving, and actively formed. I shall now move on to consider some other contentious aspects of the folk outlook and show how these, too, can be vindicated by the theory. The present chapter looks at a thesis known as propositional modularity – a cluster of claims about the way beliefs and other propositional attitudes are stored and processed. The next chapter will then consider a related thesis about conceptual capacities – conceptual modularity, as I shall call it.[1] In each case I shall argue that folk psychology is committed to the thesis in question, building on arguments in the literature, and then go on to show that this commitment can be vindicated at the supermental level. (The case for a folk commitment to propositional modularity and for its supermental vindication has already been partially prefigured in the earlier discussion of rich functionalism, but the present chapter will extend and reinforce it.) Showing that the modularity theses are true of the supermind will also serve to block a certain line of argument for eliminativism – the view that folk-psychological concepts and principles will eventually be eliminated from science – and I shall begin with some remarks on this.

1 The eliminativist threat

This section discusses the nature and scope of the eliminativist threat to folk psychology and describes the line of response I shall advocate.
[Footnote 1: The terms 'propositional modularity' and 'conceptual modularity' were coined by Stephen Stich and Andy Clark respectively (Stich 1983; Clark 1993a). These uses of the term 'modularity' should not be confused with another common use, popularized by Jerry Fodor, to refer to the existence of domain-specific mental subsystems, or modules (see Fodor 1983, and the large recent literature in evolutionary psychology).]
1.1 The nature of the threat

At the outset of this work I claimed that the two-strand theory developed here would help to underwrite integrationism – the thesis that the concepts and principles of folk psychology would be integrated into a developed science of the mind. This claim has already been partially vindicated. In distinguishing different levels of psychological description and explanation, the theory serves to regiment folk psychology and to resolve conflicts within it, thereby removing obstacles to its integration into science.

However, the modularity theses present a threat to integrationism. For they involve claims about the functional architecture of the mind – about the structures and causal processes that support intelligent behaviour. And these claims seem in turn to involve claims about the architecture of the brain. If the modularity theses are true, then – so it seems – our brains must possess structures which realize the functional architecture described by the theses (Ramsey et al. 1990; Davies 1991; Rey 1995). Thus if folk psychology is committed to the modularity theses, it may come into direct conflict with cognitive neuroscience. If we discover that the brain does not possess the required internal architecture, then some important aspects of the folk outlook will be directly falsified. Some people working in cognitive science suspect that this will indeed happen – pointing to connectionist networks, for example, as models of how mental processes could be supported by systems whose internal architecture does not conform to the modularity theses.[2]

It is sometimes assumed that the options here are stark: either cognitive science will completely vindicate the folk theory of the mind or else the theory will be falsified and its concepts eliminated from serious science, as those of other exploded theories have been. The latter outcome, most people agree, would involve an intellectual upheaval of unprecedented extent. However, outright eliminativism may not be the most immediate danger here. The case for it depends upon a strict description theory of reference for mental state terms.[3] The idea is that the folk theory, of which the modularity theses are a part, provides a descriptive profile which fixes the reference of the term 'belief', and that if no states with
[Footnote 2: For discussion of connectionism and its philosophical implications, see Bechtel and Abrahamsen 1991; Christensen and Turner 1993; Clark 1989, 1993a; Macdonald and Macdonald 1995; Ramsey, Stich, and Rumelhart 1991.]
[Footnote 3: William Lycan and Stephen Stich have each emphasized this point: see Lycan 1988, ch. 2; Stich 1996.]
this profile exist, then the term is non-referring. And strict description theories are highly implausible. A description may uniquely fix a reference even if it is inaccurate in some details (Donnellan 1966); nor is it easy to specify in advance which bits of a description are essential and which discardable. And, in any case, there are well-known arguments for thinking that reference is not fixed by description at all, but by causal-historical links (Kripke 1980). (It may be objected that folk-psychological terms are theoretical ones, and that causal-historical theories are not appropriate for such terms. The objection is weak, however. There are numerous cases of scientists retaining theoretical concepts long after they have revised or abandoned the framework which originally supported them. Think, for example, of the concepts of atom, mass, velocity, element, space, time, energy, disease – the list is huge. To insist that such revisions involve the introduction of new terms, coincidentally homonymous with the old ones, is little more than conceptual pedantry. The important question is not whether a term was introduced to fill a theoretical role, but whether it in fact tracks a natural kind. If it does, then later theorizing can correct the original conception.)

So the falsity of the modularity theses would not necessarily mean the elimination of folk-psychological concepts. Indeed, if we accept the coherence of an austere functionalist position, we could retain a version of folk psychology no matter what we discover about the structure of the brain. (Since austere functionalism treats mental states as behavioural dispositions, it has no specific architectural commitments.) What is at risk, then, is not so much the folk ontology as the folk conception of mental architecture and dynamics – the common-sense view of the structure of the mind and of the mental processes that generate action and inference. I shall refer to this threat as that of mitigated eliminativism.

Some defenders of folk psychology hold that we ought to concede mitigated eliminativism. Our overwhelming confidence in the soundness of folk psychology, they argue, shows that we have no strong attachment to contentious architectural claims in the first place. (For a clear presentation of this strategy, see Horgan and Graham 1990.) Now, I agree that there is a strand of folk psychology that lends itself to an austere reading – the strand pertaining to the basic mind. But it would be a mistake to deny that we are committed to architectural claims or that their falsification would be easy to accept. As we shall see, the modularity theses are crucial to some familiar explanatory and descriptive practices – practices which we employ in both the third and the first person. Giving up the theses would
involve conceding that we are seriously mistaken, not only about other people's minds and actions, but also about our own.

1.2 The supermental response

The eliminativist threat, as set out above, depends on the assumption that claims about cognitive architecture entail, or at least strongly warrant, claims about neural architecture. This assumption is rarely questioned, but it is one I shall reject. There is certainly no deductive link between the two sets of claims. Cognitive architectures are characterized in functional terms, and the same functional system could be realized in different substrates, including, in principle, non-physical ones. There is, it is true, a plausible abductive argument here, at least given the assumption of physicalism. Surely, the simplest explanation of how we could have minds with a certain functional architecture is that our brains possess structures which directly realize that architecture. But even this is too hasty. For there is a way of implementing a functional model of the mind which is consistent with physicalism yet which involves no claims about the architecture of the brain. The idea, of course, is to think of the model as applying to the supermind. As I shall show, the supermind exhibits propositional and conceptual modularity. Yet the supermind is realized, not in neural states and processes, but in basic-level mental states and actions. It is true that basic mental states are themselves realized in the brain, but, as I shall also show, the basic mind does not have to exhibit propositional and conceptual modularity in order to support a supermind with those features. The upshot of this will be that the folk model of the mind can be implemented in a way that is indifferent to the internal structure of the brain, and thus that we can endorse the folk architectural commitments without committing ourselves to claims about neural architecture. (This assumes, of course, that the modularity theses do not also apply to the basic mind; I shall return to this point later.)

It may be objected that if there is a plausible abductive inference from the modularity theses to claims about neural structure, then the folk will make it themselves – and thus that folk psychology will involve claims about the brain. But this is to misunderstand the nature of the folk commitment to the modularity theses. The commitment is implicit, not explicit. The truth of the theses is presupposed by certain folk-psychological descriptions and explanations, but the folk do not articulate or explicitly endorse them. Still less do they go in for abductive inference from them.
I shall be claiming, then, that we can have our cake and eat it – endorse the modularity theses without committing ourselves to claims about the structure of the brain. This view has the attraction, not only of blocking the eliminativist argument, but also of reflecting our apparent epistemic situation. We feel we have strong warrant for our folk-psychological descriptions and explanations – particularly for those we apply to ourselves. Yet this warrant fails to transfer to claims about neural architecture – and fails to do so, moreover, even if we accept that folk psychology has architectural commitments. (We do not believe in the possibility of armchair neurology.) There are various ways of accounting for this (see Davies 1998), but the simplest is to deny that claims about cognitive architecture do in fact warrant ones about neural architecture.

2 The case for propositional modularity

This part of the chapter sets out the case for a folk commitment to propositional modularity and defends it against objections. Propositional modularity entails a rich view of the mind, so this section will also serve to confirm that the folk are committed to such a view.

2.1 Ramsey, Stich, and Garon's arguments

Propositional modularity is a thesis about the way beliefs and other propositional attitudes are stored in memory. The case for thinking that folk psychology is committed to it is made most forcibly in a 1990 paper co-authored by William Ramsey, Stephen Stich, and Joseph Garon. The authors summarize the thesis as follows:

    propositional attitudes are functionally discrete, semantically interpretable, states that play a causal role in the production of other propositional attitudes, and ultimately in the production of behaviour. (Ramsey et al. 1990, p. 504)
The bit I want to focus on here is the reference to functional discreteness. (The view that propositional attitudes have semantic content is relatively uncontroversial, and I shall say something about their causal role later.) Ramsey et al. do not explain exactly what they mean by saying that propositional attitudes are functionally discrete, but their subsequent discussion reveals that the claim has two aspects.[4] First, there is the claim that
[Footnote 4: For a more complex taxonomy of varieties of functional discreteness, see Horgan and Tienson 1995.]
propositional attitudes can be individually acquired and lost. As Ramsey et al. point out, we allow that a person can acquire or lose a single belief or memory, without any other cognitive changes taking place:

    Thus, for example, on a given occasion it might plausibly be claimed that when Henry awoke from his nap he had completely forgotten that the car keys were hidden in the refrigerator, though he had forgotten nothing else. (Pp. 504–5)
Secondly, there is the claim that propositional attitudes can be individually active in reasoning and decision-making. A particular belief or desire can be causally active in some reasoning episodes and causally inert in others. (In a later paper Stich identifies this as the central aspect of functional discreteness: see Stich 1991a.) For a state to be causally active in a reasoning episode is, I assume, for some associated event to occur – the activation or accessing of the state – and for this event to play a causal role in the reasoning episode.

The claim that propositional attitudes exhibit this sort of functional discreteness was central to the view I called rich functionalism – the view that beliefs are functional sub-states of the cognitive system which can be selectively activated in the form of occurrent thoughts – and we have already seen some evidence for a folk commitment to it. Ramsey et al. highlight a further piece of evidence, however, derived from folk-psychological explanatory practice. The common-sense view, they point out, is that an agent may have had several beliefs or desires that could have justified a particular action or inference and that it is an empirical matter which one was in fact responsible for it.[5] They give two examples. In the first, Alice has two independent reasons for going to her office – she wants to send an email and also wants to talk to her assistant (and believes in each case that she can do so by going to her office). Either of these desires would be sufficient to induce Alice to go to her office. They are, as Clark puts it, equipotent in respect of producing that action (Clark 1993a, ch. 10). Nevertheless, we accept that she might be moved to go there by just one of them. So, for example, on this occasion her desire to talk to her assistant might be the operative one while her desire to send an email remains causally idle. The second example makes the same point about inference. The butler has told Clouseau that he spent the night at the village hotel and returned by the morning train. But Clouseau believes that the village hotel is closed and that the morning train is out of service. So he has two beliefs, each of which (together
[Footnote 5: Davidson makes the same point in his 1963.]
with his background beliefs) entails that the butler is lying. Again, we accept that Clouseau might infer the butler's mendacity from just one of these beliefs, while the other one lies inert. The moral of these examples – equipotency cases, let us call them – is that folk psychology subscribes to what Stich elsewhere calls a principle of individual efficacy, to the effect that mental states can be individually active in reasoning and behavioural control (Stich 1991b).

This is the core of Ramsey et al.'s case for propositional modularity. The evidence they cite is often discounted, however, and I shall therefore say a little more in defence of the thesis, focusing on the two aspects of functional discreteness just identified.

2.2 Individual acquirability

I begin with the claim that beliefs can be individually acquired and lost – individual acquirability for short. Is folk psychology really committed to this claim? It is true, as Ramsey et al. note, that we often talk of acquiring or losing a particular belief. But should such talk be taken literally? Is it really possible for Henry to lose the belief that his keys are in the fridge without losing any other? Some would say not. The ascription of propositional attitudes, they claim, is holistic, and beliefs and desires always come in clusters (see, for example, Davidson 1970; Heil 1991). We need to be careful here, however. For there are different kinds of holism, and not all of them are incompatible with individual acquirability.

First, there is what we may call framework holism. In order to be able to think about keys and fridges at all, Henry will need to have in place a network of framing beliefs about the nature and function of these things. If he turned out to have absolutely no idea what keys and fridges were, then we should be inclined to refrain from crediting him with beliefs about them. This is not incompatible with individual acquirability, however. For, as Ramsey et al. point out, once the relevant framing beliefs were in place, it would then be possible for Henry to acquire and lose individual additional beliefs about keys and fridges without any other cognitive changes occurring (Ramsey et al. 1990, p. 505).

A stronger thesis is what I shall call ascriptive holism.[6] We ascribe mental states to people on the basis of their behaviour – crediting them with the
[Footnote 6: This paragraph draws on discussions with George Botterill, from whom I borrow the term 'ascriptive holism'.]
beliefs and desires that best rationalize their actions. But there is no single pair of beliefs and desires, or even small set of pairs, which uniquely rationalize a particular action. It all depends on what background beliefs and desires we attribute to the agent: given a suitable background, almost any belief can rationalize any action. So beliefs and desires cannot be ascribed singly; individual ascriptions always take place against a background of many others. Ramsey et al. can concede this, however. To say that we cannot ascribe beliefs singly is not to say that they cannot be acquired and lost singly. There is nothing special about belief here. Take any complex functional system – a motor-car engine, say. In general, it will not be possible to isolate and identify individual components of the system without making assumptions about the identity of other neighbouring components ('If that is the fuel lead, then this must be the air inlet'). But it is compatible with this that each component is individually detachable, and in that sense functionally discrete.

A still stronger claim is that, in acquiring a particular belief, one also acquires any other beliefs that are obviously entailed by it, or by it and one's existing beliefs. So, if I come to believe that the car keys are in the fridge, then I shall also come to believe that they are in the kitchen, in the same place as the milk, not in my pocket, and so on. Conversely, in losing a particular belief, one also loses any belief which obviously entails it (otherwise, one would fail to believe an obvious consequence of the entailing belief). Call this claim inferential holism. Now, inferential holism can take stronger and weaker forms, depending on whether it is taken as a contingent claim about human psychology or as a conceptual claim about the nature of belief. Ramsey et al. can concede the contingent claim. Beliefs do, as a rule, tend to come in clusters. This is not incompatible with their being individually acquirable – it may just happen that we usually acquire a large number of these states at a time (or, alternatively, that we rapidly compute the consequences of any belief we acquire). But Ramsey et al. do have to reject the stronger, conceptual, version of inferential holism – strong inferential holism, as I shall call it. That is to say, they need to show that it is possible for beliefs to be acquired individually – that they are the kind of things which can be acquired one at a time, even if they are not usually acquired in this way.

Intuition pulls in different ways here. Strong inferential holism has some plausibility. Could I really be said to have acquired the belief that my keys are in the fridge if I remain uncertain whether or not they are in
the bedroom? Isn't the absence of such uncertainty something close to a possession condition for the belief? On the other hand, as Ramsey et al. note, we do sometimes fail to draw even the most obvious consequences of our beliefs (1990, p. 505). It is quite possible, they point out, for a person to believe that their keys are in the fridge and yet continue looking for them in the bedroom. The belief about the fridge might just slip their mind. We frequently suffer such slips – failures to put two and two together – and they can have serious consequences. (Christopher Cherniak describes the case of Smith, who knows full well that naked flames ignite petrol fumes and that striking a match produces a flame, yet nonetheless goes ahead and strikes a match to see how much petrol is left in his fuel tank: Cherniak 1986, p. 57.) It is not clear how defenders of strong inferential holism can account for this.

To sum up: it is only strong inferential holism that is incompatible with individual acquirability, and while this thesis does have some plausibility, the fact that we acknowledge the existence of slips of mind shows that we are not unequivocally committed to it. I conclude that considerations of holism do not decisively undercut the common-sense presumption in favour of individual acquirability. Note that we can also conclude from this that folk psychology is not unequivocally committed to an austere view of the mind. For if beliefs and desires were multi-track behavioural dispositions, then they would be subject to strong inferential holism. To be disposed to behave as if one's keys are in the fridge just is to be disposed to behave as if they are not in the bedroom. Thus if we take an austere view of the basic mind, as I have proposed, then we must accept that individual acquirability does not hold at the basic level.

2.3 Individual efficacy

The case for a folk commitment to individual efficacy has also been questioned. It is true that we often cite a single propositional attitude, or a small set of them, in explanation of an action or inference. Ramsey et al. interpret these explanations as involving claims about the causal processes that generated the action or inference. But there is an alternative way of interpreting them, advocated by Andy Clark, among others (Clark 1991, 1993a, ch. 10; see also Dennett 1991d; Heil 1991; Jackson and Pettit 1996). Folk-psychological explanations, Clark argues, should be given a counterfactually based reading. He gives an example:
    Suppose my action is that of buying a beer. We can say that it was my belief that the beer was cool and not, say, my belief that dogs have fur that caused me to buy the beer, since in those close . . . possible worlds in which I still believe that dogs have fur but I lack the belief about the beer I don't buy the beer. This counterfact alone seems sufficient to warrant folk psychology in highlighting the beer belief rather than the dog belief in the explanation of that action. (Clark 1993a, p. 212)
That is, to say that it was Clark's belief that beer is cool, and not his belief that dogs have fur, which was responsible for his beer-buying is to say that his action was counterfactually dependent on the presence of the former belief, but not on that of the latter. So construed, the explanation does not involve claims about the causal processes that generated the action and does not assume that beliefs are capable of individual activation. What Clark is proposing here is that we interpret folk-psychological explanations as picking out causally relevant properties rather than causal events – sustaining causes rather than dynamic ones in the terminology used earlier – and that we adopt a counterfactual analysis of causal relevance. Now, I have already conceded that at least some folk-psychological explanations should be construed in this way. In chapter 2 I argued that basic mental states should be thought of as sustaining causes of action and suggested that folk-psychological explanations adverting to such states should be given a counterfactual reading along the lines Clark suggests. I also argued, however, that this treatment is not appropriate for all folk-psychological explanations, and I want to say a little more in defence of that claim.

The crucial issue, of course, is whether a counterfactual reading can deal with equipotency cases of the sort outlined by Ramsey et al. Recall the general scenario. Schematically, an agent S has two beliefs, p and q, each of which provides an independent and equally powerful reason for performing action A. And suppose that we want to say that on this occasion it is p, and not q, that leads S to perform A. Now, according to Clark, this is the case if and only if it is p, and not q, that explains S's performing A, which in turn is the case if and only if:

(1) S would not have performed A if they had not believed that p; and S would still have performed A even if they had not believed that q.

Yet, as Stich points out, (1) might easily be false. It might be the case that S would still have performed A even if they had not believed that p, since they would then have recalled that q, and performed A for that reason instead (Stich 1991b, p. 233). Clark responds that this is to misrepresent the counterfactual analysis. The issue, he claims, is this (using my notation):
    If one belief were absent and all else were equal would the action occur? That is, if the agent did not believe p, and if her q belief played only the same explanatory role as it did in the actual world, would the action occur? (1993a, p. 213)
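Before assessing this reply, it may help to set condition (1) out in the standard notation for counterfactuals. The formalization is my addition, not Frankish's or Clark's: writing $B_S p$ for 'S believes that p', $A$ for 'S performs A', and '$\Box\!\rightarrow$' for the Lewis–Stalnaker counterfactual conditional, condition (1) is the conjunction

$$(\neg B_S p \;\Box\!\rightarrow\; \neg A) \;\wedge\; (\neg B_S q \;\Box\!\rightarrow\; A).$$

On the standard semantics, each conjunct is evaluated at the nearest possible world in which its antecedent holds – which is what yields the reformulation (2) below.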
But, as a construal of (1), this is tendentious. The standard way of evaluating a modal claim is by considering the nearest possible world in which its antecedent is true. So (1) is true if and only if:

(2) The nearest possible world in which S does not believe that p is one in which S does not perform A and the nearest possible world in which S does not believe that q is one in which S does perform A.

Take the first conjunct. Clark must show that the nearest world in which S lacks the p belief and does not perform A is closer to this world than the nearest world in which S lacks the p belief and does perform A. Now, intuitions about modal claims are notoriously variable, but it is surely possible to imagine a counter-example. Consider a case of inference. Suppose that Jill has an intuition that Jack is lying and desperately wants to find a reason to prove it. As it happens, she has two long-standing beliefs, p and q, each of which would, given her other beliefs, independently license this inference. Jill strives to recall some fact that will prove Jack's guilt. Finally, she recalls that p and makes the inference. It is not unreasonable to suppose that in the nearest possible world in which Jill lacks the belief that p, a search of similar length ends with her remembering that q and again making the inference, contrary to (2). (We might suppose that Jill's memory works in an associative way, and that p's association with the present context is stronger than q's, but that q's is stronger than that of all her other beliefs.)

It may be objected that to talk of long-standing beliefs being individually recalled at datable moments is to assume the truth of individual efficacy, and so to beg the question at issue. Perhaps; but all I need to claim is that folk psychology recognizes the possibility of the scenario just sketched. If it does, then the folk-psychological view of the causal dynamics of belief is not captured (or at least not fully captured) by Clark's account.

I conclude that the proposed counterfactual analysis will not warrant the highlighting of one of two equipotent mental states. This is not really surprising. The analysis picks out conditions that were necessary for an action. But to say that two mental states are equipotent in respect of a certain action is to say that each is capable of producing it on its own, given the agent's other mental states, and thus that neither is necessary for its production. It does appear, then, that these cases require the reading
It does appear, then, that these cases require the reading Ramsey et al. suppose. In claiming that only one of two states that could have produced a particular action actually produced it, we are supposing that the activity of one was somehow inhibited or that of the other somehow facilitated – and thus that the states are capable of selective activation. (I suspect that Clark's reference to keeping all else equal in the passage quoted above carries a tacit reference to internal factors facilitating the activity of one state or inhibiting that of the other – a reference which is illicit, given that Clark wishes to avoid treating those states as capable of selective activation.) Since a counterfactual analysis of mental explanations does not support individual efficacy, we can also conclude that individual efficacy does not hold at the basic level – such an analysis being, as I argued earlier, the appropriate one for explanations pitched at this level.

Before moving on I want to say more about cases of absent-mindedness. Such cases, as we have seen, provide indirect support for individual acquirability; but they also provide direct support for individual efficacy. Take the example mentioned earlier, where a person – call her Mary – has a long-standing belief that her keys are hidden in the fridge, yet continues to search for them in the bedroom. The point to stress here is that when a belief slips a person's mind we do not think of them as having lost it, even temporarily. When we are reminded of something that has slipped our minds, we typically acknowledge it as something we had known all along, and often curse our stupidity for failing to recall it. (As Cherniak emphasizes, slips of mind are not changes of mind: see his 1986, p. 58.) The picture we have seems to be of information lying dormant, stored in memory, but not activated and put to use in reasoning and decision-making. Desires, too, can slip our minds, and the same considerations apply to them.

This strongly suggests that we are committed to something like the principle of individual efficacy. If propositional attitudes can remain inactive even on occasions when they ought rationally to influence reasoning and when other relevant attitudes are active, then it seems that they are capable of selective activation and can be subject to selective failures of activation.7 Certainly, it is hard to see how the possibility of absent-mindedness, as we commonly conceive of it, can be reconciled with an austere view of the mind, which treats beliefs as behavioural dispositions.
7 Cherniak suggests that the fact that we talk of slips of mind indicates that we tacitly mark a distinction between two kinds of memory – long-term memory, where the bulk of one's knowledge is stored in a dormant state, and short-term, or working, memory, to which selected items are copied for use in inference. Absent-mindedness can be thought of as a failure to access a particular belief and copy it to working memory (Cherniak 1986, ch. 3).
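The two-store picture described in this footnote can be fixed with a toy model. The sketch below is mine, not Cherniak's, and is offered only as an illustration: beliefs sit dormant in a long-term store, and a belief shapes reasoning only if it is copied into working memory, a step that can fail without the belief being lost.

```python
import random

class TwoStoreMemory:
    """Toy model of the long-term/working memory distinction."""

    def __init__(self, recall_reliability=0.9):
        self.long_term = set()    # dormant beliefs; not lost by mere inactivity
        self.working = set()      # beliefs currently active in reasoning
        self.recall_reliability = recall_reliability

    def try_recall(self, belief):
        """Attempt to copy a belief from the long-term store to working memory.

        A slip of mind is a failure here: the belief remains in long-term
        memory (no change of mind), but is not activated for use in inference.
        """
        if belief in self.long_term and random.random() < self.recall_reliability:
            self.working.add(belief)
            return True
        return False

memory = TwoStoreMemory()
memory.long_term.add("the keys are in the fridge")
if not memory.try_recall("the keys are in the fridge"):
    # Absent-mindedness: the belief is still possessed, merely dormant.
    assert "the keys are in the fridge" in memory.long_term
```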
How could one continue to possess a behavioural disposition while conspicuously failing to display the appropriate behaviour? The austere theorist might bite the bullet and maintain that absent-minded behaviour is simply inexplicable in intentional terms. (Dennett takes this line, suggesting that our talk of slips of mind is confabulation, designed to make sense of senseless behaviour: see Dennett 1982.) But this is surely too strong. There is a perfectly good intentional explanation of Mary's action: she wants to find her keys and believes that she may find them by searching the bedroom.

The evidence from absent-mindedness, then, agrees with that from equipotency cases in supporting a folk commitment to individual efficacy. (Indeed, equipotency cases can be regarded as instances of absent-mindedness – the inactive reasons being ones that have slipped the agent's mind.)

I conclude, then, that there is a strong folk presumption in favour of propositional modularity. As I explained earlier, this is usually taken to involve claims about the structure of the brain. It is assumed that if propositional modularity holds, then beliefs and desires must have discrete neural bases, which can be selectively activated, modified, and removed. Ramsey et al. make this assumption and go on to argue that certain connectionist models of memory do not support discrete storage of this kind and consequently present a threat to folk psychology. Some defenders of folk psychology have challenged Ramsey et al. on the details of connectionist modelling, arguing that the models they discuss do in fact support discrete storage. I shall take a more radical line, however, and argue that propositional modularity can hold whatever the structure of the memory system.

3 Propositional modularity vindicated

This part of the chapter argues that it is possible to endorse propositional modularity without committing oneself to claims about the internal architecture of the brain. It shows that the supermind exhibits propositional modularity and that the basic mind need not itself exhibit propositional modularity in order to support a supermind which does. Functionally discrete supermental states can be realized in an austere basic mind whose states are neither individually acquirable nor individually efficacious. Since propositional modularity involves commitment to a rich view of the mind, the discussion will also confirm that the supermind has a rich functional profile and that a rich supermind can be realized in an austere basic mind.
3.1 Supermental propositional modularity

The case for thinking that the supermind exhibits propositional modularity is simple: supermental states are premising policies, and premising policies exhibit all the relevant features – semantic content, causal role, and functional discreteness.

First, premising policies have semantic content. To adopt a premising policy is to commit oneself to taking a proposition – either declarative or optative – as a premise in reasoning. (As I explained earlier, although we often articulate our premises linguistically, premising policies are fundamentally attitudes to propositions, not sentences, and we may vary the linguistic expression of a premise in response to contextual and other factors.)

Secondly, premising policies have a causal influence on reasoning and action. Executing a premising policy involves various personal events and actions – conscious recollection of the premise, calculation of its consequences, endorsement of these consequences – each of which is a dynamic cause of the next, and, ultimately, of any ensuing action. We can think of these events as stages in the causal role of a belief – its being activated, employed in reasoning, and so on.

Thirdly, premising policies are functionally discrete – that is, individually acquirable and individually efficacious. This third feature is the one we are most interested in here, and I shall devote a separate section to each of its two aspects. I shall focus on policies directed upon declarative propositions (acceptances_p), but the same considerations apply to those directed upon optative ones (goal pursuits).

3.2 Individual acquirability

I begin with individual acquirability. The case for thinking that premising policies can be individually acquired is simple: we can adopt one proposition as a premise – accept_p it, in my terminology – without adopting any other. This goes even for ones that are obviously entailed by the one accepted_p. For example, I can accept_p the proposition that all politicians are dishonest without accepting_p the proposition that my friend Senator Smith is dishonest – despite the fact that the former entails the latter. Of course, in accepting_p the former proposition, I commit myself to calculating its consequences and accepting_p them in turn. But I may not actually calculate any of them at the time I accept_p it. Likewise, I can forget or abandon a premise without forgetting or abandoning any others.
For example, I can abandon the premise that Senator Smith is dishonest while retaining the logically stronger premise that all politicians are dishonest. Of course, if I execute my premising policies diligently, I shall eventually be forced to reinstate the premise about Smith, but I might continue for some time without recognizing the need to do so.8

The upshot of this is that supermental states can be individually acquired and lost, since the premising policies in which they consist can be individually formed and abandoned. We might think of superbeliefs and superdesires as occupying a sort of 'virtual' memory store, to which items can be individually added and from which they can be individually deleted.

But is this compatible with an austere view of the basic mind? The worry is this. Supermental states are realized in basic ones, so the acquisition or loss of a superbelief will involve changes to one's basic beliefs. And if basic beliefs are multi-track behavioural dispositions, then they will be strongly holistic – since, as I noted earlier, such dispositions are not individually acquirable. So when our confidence in one proposition increases, that in those obviously entailed by it will increase too. Does this not pose a threat to individual acquirability at the supermental level?

Take the previous example again. When I accept_p that all politicians are dishonest, I shall become highly confident that I have accepted_p that all politicians are dishonest. Now, the proposition I have accepted_p entails that my friend Senator Smith is dishonest, so if strong inferential holism holds at the basic level, then I shall also become highly confident that I have accepted_p that Senator Smith is dishonest. And this will be sufficient, given my other basic attitudes, for possession of the superbelief that Senator Smith is dishonest. So inferential holism cannot be contained at the basic level, but rises up to infect the supermental one.

There is an obvious fallacy here, however. The proposition that I have accepted_p that all politicians are dishonest does not entail the proposition that I have accepted_p that Senator Smith is dishonest – though it may entail, given my background beliefs and desires, that I ought to accept_p it. I may never have entertained the latter proposition, let alone decided to accept_p it. Of course, the original basic belief does have various genuine entailments – that I have accepted_p a claim about politicians, that I have accepted_p a claim which entails that all senators are dishonest, and so on – and my confidence in these propositions will rise when I accept_p the original proposition.
8 Not only can we fail to accept_p the obvious consequences of a premise; we can accept_p propositions that are incompatible with it. For example, I can accept_p that all politicians are dishonest while simultaneously accepting_p that Senator Smith is honest.
But none of these basic beliefs will introduce a new superbelief (there is a difference between believing that one has accepted_p something which entails p and believing that one has accepted_p p). Similar considerations apply to the loss of acceptances_p. If I abandon the premise that all politicians are dishonest, then I shall lose confidence in the proposition that I accept_p that all politicians are dishonest, and shall probably also lose confidence in various propositions supported by that one. But, again, this will not involve the loss of any other superbeliefs. So the individual acquirability of superbeliefs is not compromised by the existence of strong inferential holism at the basic level.

But shan't we at least tacitly accept_p the obvious consequences of propositions we accept_p? (Tacitly to accept_p a proposition, in my sense, is to be disposed to treat it as a premise, on entertaining it, even though one has not overtly accepted_p it.) If I am highly confident that I have accepted_p that all politicians are dishonest, then, assuming that strong inferential holism holds at the basic level, I shall also be highly confident that I ought to accept_p that Senator Smith is dishonest – since I ought to accept_p the consequences of propositions I have accepted_p and this is obviously one of them. And I shall therefore be disposed to take it as a premise that Senator Smith is dishonest as soon as the thought occurs to me – that is, I shall tacitly accept_p it. Now, I do not think that this conclusion would amount to a serious challenge to the folk view, even if the argument for it were sound, since it is not obvious that the folk commitment to individual acquirability extends to tacit belief. But in any case, the argument is suspect. For even if I am highly confident that I ought to accept_p that Senator Smith is dishonest, it does not follow that I would actually accept_p it, were I to consider the matter. I might find that I simply could not bring myself to accept_p the Senator's dishonesty and opt instead to reject the premise which entails it.

It may be objected that this response applies only to non-doxastic forms of acceptance_p, not to superbelief. For superbelief is unrestricted acceptance_p, and unrestricted acceptance_p requires high confidence. So if I have formed the superbelief that all politicians are dishonest, then I must be highly confident that all politicians are dishonest; and if inferential holism holds at the basic level, then I must also be highly confident that Senator Smith is dishonest. And I shall therefore be disposed to accept_p that proposition unrestrictedly on consideration – in other words, I shall tacitly superbelieve it. For the same reason, it might be argued, I shall also implicitly superbelieve it – that is, be disposed to take it for granted in my conscious deliberations.
(I shall not explicitly superbelieve the proposition, of course – at least not simply in virtue of having high confidence in it. One can have high confidence in a proposition without actually adopting it as a premise.)

Now, again, it is not clear that these claims would pose a serious challenge to the folk view, but they are, in any case, dubious. It is possible to be highly confident of a proposition without being disposed to accept_p it unrestrictedly on consideration. It might be that one's confidence is not high enough, or that it would drop if one were to give conscious thought to the matter. (Of course, if one's existing premises entail the proposition, then in refusing to accept_p it one will incur an obligation to revise or reject some of them.) Likewise, one might be highly confident of something without being disposed to take it for granted in one's conscious reasoning (that is, without implicitly superbelieving it). In general, it would be wise not to take something for granted unless one were very sure indeed of it – more sure, perhaps, than of many things one takes as explicit premises. Explicit premises are, as it were, before one's mind, readily available for revision or rejection, whereas things we take for granted are not, and it might be sensible to have a higher evidential standard for the latter than for the former.

3.3 Individual efficacy

I turn now to individual efficacy. Here I need to show that premising policies can be individually active in reasoning and decision-making. Again, the case is simple. Premising policies are executed at a conscious level, and an accepted_p premise will influence our reasoning and decision-making only if we consciously recall it. (At least, that is the only way it will influence our reasoning in virtue of being an accepted_p premise; our commitment to it may have unintended side-effects.) And premises can be recalled individually. A particular premise may be recalled in one reasoning episode and not recalled in another, even though it is relevant in both. (Here and in what follows 'recall' means 'conscious recall' – recall in the form of a conscious propositional thought, perhaps articulated in inner speech.)

Let us see how this account deals with the cases that provide evidence for individual efficacy, beginning with equipotency scenarios. Take Alice. She has two independent reasons for going to her office – she wants to send an email and also wants to talk to her research assistant. But only one of these desires is responsible for her actually going there on this occasion.
Now suppose that Alice's reasons take the form of superdesires: that she has separately adopted the goals of sending an email and of talking to her assistant. And suppose that each of these goals, together with her other superbeliefs and superdesires, implicit and explicit, independently warrants a trip to the office. Finally, suppose that now Alice recalls just one of these goals – that of talking to her assistant. She recognizes that this dictates a trip to the office, and decides to make one. So, although she has endorsed both goals, and would, if prompted, acknowledge that she is committed to both, she acts upon just one of them. That is, on this occasion only one of her two relevant superdesires is causally active. The Clouseau case can be analysed in a similar way. Clouseau has accepted_p two propositions from which he might infer that the butler is lying, but recalls only one of them and draws the inference from that alone.

Absent-mindedness, too, can be explained in this way. Recall Mary. We want to say that Mary continues to possess the belief that her keys are in the fridge, even though it has temporarily slipped her mind and she is currently searching for her keys in the bedroom. Again, we can identify the dormant belief with a superbelief. Mary has accepted_p the proposition that her keys are in the fridge and would take it as a premise if she were to recall it. On this occasion, however, she has failed to recall it and consequently does not act upon it. So although she continues to superbelieve that her keys are in the fridge, the superbelief remains dormant.

It is important to stress that these explanations are couched entirely in personal terms and do not involve assumptions about the sub-personal level. The claim that premises can be individually recalled is not a theoretical hypothesis, but a datum of experience. We know that individual conscious recall is possible because we experience it all the time – and sometimes rack our brains to produce it. No assumptions are made about the structure of our memory systems or the nature of the processes that generate episodes of conscious recall. Nor are any assumptions made about the basic mind. The account does not assume that basic beliefs and desires can be selectively activated and is compatible with the view that they are merely multi-track behavioural dispositions.

This last claim may provoke an objection. I hold that superbeliefs and superdesires are realized in basic ones – in degrees of confidence and desirability. So how can the former lie dormant without some of the latter doing so, too? Take Mary. If she retains the superbelief that her keys are in the fridge, then she must remain highly confident that she has accepted_p that her keys are in the fridge, since such confidence is partially constitutive of the superbelief.
But if she were highly confident that she has accepted_p that her keys are in the fridge, then she would act as if she were, and go to the fridge. So, it seems, we must allow that her basic belief about the premising policy has slipped her mind along with the superbelief about the keys – and thus that individual efficacy holds at the basic level.

This is too hasty, however. There is a difference between acting as if one is highly confident that p and acting as if one is highly confident that one has accepted_p that p. Acceptance_p involves commitment to a policy of premising, and to act as if one has accepted_p that p is to be disposed to follow a policy of premising that p. And we can suppose that Mary is disposed to follow a policy of premising that her keys are in the fridge. If she were to recall that proposition, then she would employ it as a premise in her reasoning.

It may be objected that if Mary is following a policy of premising that her keys are in the fridge, then she would have recalled that proposition when she needed her keys, since it was clearly relevant to the problem. But this is to overstate the scope of a premising policy. In accepting_p a proposition we do not commit ourselves to recalling it on relevant occasions, but, at most, to trying to do so – by self-interrogation, for example. We do not have sufficient control over our powers of recall to do more. And Mary's current behaviour is compatible with adherence to such a commitment. She might have tried and failed to recall relevant premises, or failed to recognize the occasion as one on which it was appropriate to try. So, again, there is no threat here to our account of the case.
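The shape of this reply can be put schematically. The sketch below is my own illustration, not an implementation of supermind theory: accepting a premise is a standing commitment, deliberation draws only on whatever recall happens to deliver, and a failed recall leaves the commitment intact rather than extinguishing it.

```python
class PremisingAgent:
    """Toy sketch: premising policies executed via fallible conscious recall."""

    def __init__(self):
        self.accepted = set()    # standing premising commitments

    def accept(self, proposition):
        self.accepted.add(proposition)

    def deliberate(self, recalled):
        """Reason only from premises actually recalled on this occasion.

        `recalled` stands in for whatever conscious recall delivers; the
        unrecalled premises remain accepted but play no role here.
        """
        active = self.accepted & set(recalled)
        dormant = self.accepted - active
        return active, dormant

mary = PremisingAgent()
mary.accept("the keys are in the fridge")
active, dormant = mary.deliberate(recalled=[])       # recall fails this time
assert "the keys are in the fridge" in dormant       # still committed, merely dormant
```

On this picture the policy is individuated by the standing commitment, not by any occurrent activation, which is why a failure of recall counts as a failure to execute the policy rather than a loss of it.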
The same considerations apply to Alice and Clouseau. They retain their basic beliefs about the neglected premises, and remain fully disposed to pursue the associated premising policies, even though they have failed to recall them on this occasion.

There remains a problem with Mary's case, however. How do we explain her actual behaviour – searching the bedroom? Superbelief involves high confidence. So if Mary possesses the superbelief that her keys are in the fridge, then she must be highly confident that her keys are in the fridge. Why, then, does she not look in the fridge – regardless of whether or not she recalls that her keys are there? Why is her basic belief in the true location of the keys not sufficient to lead her to the fridge? Again, must we suppose that that belief, too, has slipped her mind?

I think that we can resist this conclusion. There are at least two alternative explanations of Mary's behaviour, consistent with an austere view of the basic mind. First, it may be that her search of the bedroom is driven by supermental activity. Suppose that Mary has consciously formed the hypothesis that her keys may be in the bedroom and wants to test it. If she attaches a sufficiently high desirability to acting on the outcome of her conscious reasoning, this will lead her to go to the bedroom, despite her high confidence that her keys are in the fridge. That is to say, Mary may currently attach such a high desirability to acting on the results of her conscious reasoning that she has effectively ceded behavioural control to her conscious supermind, with the result that her basic belief about the whereabouts of the keys has no effect on her behaviour. (If this seems counter-intuitive, remember that basic mental states are non-conscious ones: Mary would not be directly aware that she is confident that her keys are in the fridge.) An explanation of this sort may be the most appropriate for Cherniak's Smith, who strikes a match to see how much petrol is left in his fuel tank. We may hypothesize that Smith's basic desire to act on his consciously formed plan of striking a light has overridden other basic beliefs and desires that would make for prudence.

The second explanation has it that Mary's behaviour is guided without conscious thought and that she has temporarily forgotten, at the basic level, that her keys are in the fridge – that is, has temporarily lost confidence in that proposition. This would not be an unreasonable view, given her current behaviour, but, of course, it threatens our original analysis of the case. How can Mary be said to retain the superbelief that her keys are in the fridge, if she has low confidence in that proposition? She may still accept_p it, of course, but superbelief proper requires high confidence.

This seems like a serious problem, but it is more apparent than real. The claim that superbelief requires high confidence was merely a corollary of our original account of superbelief as unrestricted acceptance_p, and I think that it needs slight qualification. To accept_p a proposition unrestrictedly is to commit oneself to taking it as a default premise in all deliberations, including ones where one wants to take only truths as premises ('TCP deliberations'). Now, I argued in the previous chapter that a rational agent could not seriously make such a commitment unless they had high confidence in the proposition in question. But it seems possible that, having made the commitment, they might retain it even if their confidence in the proposition flagged occasionally. Suppose that Mary does currently lack confidence in the proposition that her keys are in the fridge, although she remains confident that she has accepted_p it. And suppose, too, that this is a temporary aberration rather than a response to new evidence, and that her confidence in the proposition would immediately rise again if she were to recall it. Then, I think, it is legitimate to say that Mary still unrestrictedly accepts_p the proposition that her keys are in the fridge, even though her confidence in it is currently low.
For if she were to recall the proposition she would regard herself as committed to taking it as a premise in all relevant deliberations, and her confidence in its truth would rise, enabling her to execute this commitment. This means, of course, that she will not be able to execute the commitment unless she recalls the proposition, but recall is in any case a precondition for use in conscious inference. So it is not quite right to say that superbelief requires high confidence; it would be better to say that it requires either high confidence or a disposition to acquire high confidence as soon as the content proposition is recalled. And since Mary meets this condition, we can maintain our original analysis of her case as one in which she retains the superbelief that her keys are in the fridge, but fails to recall and act upon it.9

I conclude that individual efficacy holds at the supermental level and is compatible with austerity at the basic level. Supermental states can be individually active in reasoning and decision-making because the premising policies in which they consist can be individually recalled and executed.

The reader may ask whether we really need all the apparatus of supermind theory in order to support this account of individual efficacy. Isn't all the real work being done by appeal to conscious recall? Couldn't a single-strand theorist say everything I say about the individual efficacy of beliefs – again without making assumptions about the internal structure of the memory system? They could say, for example, that Alice consciously recalled her desire to talk to her assistant, and that this recollection caused her to go to the office, but that she failed to recall her desire to send an email, and so that desire remained causally idle. This is a possible view.10 However, the proposal remains seriously incomplete until supplemented with some account of the nature and cognitive role of conscious occurrent thought. What are conscious occurrent thoughts, and how are they constituted? Do beliefs and desires have to be consciously recalled in order to influence reasoning and action? If not, what is the function of conscious thought, and how is it related to non-conscious mental processes?
9 Christopher Maloney has also argued that we need a two-strand theory of belief in order to account for absent-mindedness (Maloney 1990). His proposal is, however, different from the one sketched here, and involves distinguishing e-beliefs, which are responsive to evidence, and a-beliefs, which control action. Maloney would say that Mary e-believes that her keys are in the fridge, but a-believes that they may be in the bedroom.
10 Suggestions in this spirit have indeed been made in some discussions of connectionist models of memory and reasoning: see Botterill 1994; Clark 1990b; Horgan and Tienson 1995; O'Brien 1991.
It was in response to such questions, among others, that I developed supermind theory, and it needs to be shown that single-strand alternatives can match it in explanatory power.

3.4 Basic propositional modularity?

We have seen that the supermind exhibits propositional modularity. And this means that the folk commitment to propositional modularity can be vindicated, regardless of the internal architecture of the brain. Supermental states are formed and processed entirely at a personal level – the states, events, and skills involved belong to agents, not to their cognitive subsystems. Moreover, as we have also seen, the existence of propositional modularity at the supermental level is compatible with its absence at the basic level: basic mental states need not be functionally discrete in order to support functionally discrete supermental ones. So the threat of even mitigated eliminativism recedes considerably. Any neuroscientific theory that is compatible with the existence of the personal dispositions, actions, and skills which constitute the supermind will be compatible with the existence of propositional modularity at the supermental level.

But who is to say that the folk are not committed to propositional modularity at the basic level, too? If they are, then surely the eliminativist threat will return? I think that this is too alarmist. For the evidence for a folk commitment to propositional modularity relates mainly to episodes of conscious thought of precisely the kind I characterized as strand 2. Think again of cases which support the principle of individual efficacy – cases where only one of two relevant reasons is active in producing an inference or action. Evidence for this sort of thing comes from first-person reports. People often report that they performed an action for this reason rather than that. And they make such reports, I suspect, simply because they remember having consciously entertained the operative reason at some point prior to performing the action. In the absence of such reports, it is not clear that we would have any reason to endorse the principle of individual efficacy. I, at any rate, do not find it plausible to endorse the principle for creatures incapable of conscious thought – to speculate, for example, that when Lassie goes home she might be acting upon only one of two reasons she has for doing so. (I am not suggesting that animals are not conscious – only that they are not capable of conscious thought.)

Or take absent-mindedness. For a belief to slip one's mind is, I suggest, precisely for it to slip one's conscious mind.
(The stereotype of the absent-minded person is, after all, that of the scientist so preoccupied with their conscious thoughts as to neglect mundane matters.) It is worth noting here that there are some things that never slip our minds. For example, it never slips my mind that I have to rotate the steering wheel on my car clockwise in order to turn right and anticlockwise in order to turn left – though I believe this. And it never slips my mind, I suggest, because I never think about it consciously. Again, I have no inclination to attribute slips of mind to animals – to suppose, for example, that it might slip Lassie's mind that the cat is up the tree.

Besides, even if the folk commitment to propositional modularity did extend to the basic mind, we could still neutralize the eliminativist threat by adopting a partial error theory of folk psychology. We could maintain that the folk were right about the commitment, but wrong about its extent. This would be a perfectly legitimate move. As I stressed at the outset, my aims are not narrowly analytic: my goal is not simply to explicate folk psychology – which is, I think, something of a muddle – but to provide a robust theoretical armature for it. A cautious revisionism is thus perfectly in order. It is certainly a more sober response to the eliminativist threat than jettisoning the folk commitment outright.

Of course, this constitutes a vindication of the folk commitment only if we accept supermind theory. Now, the case for accepting supermind theory is simply that it offers the best way of regimenting the descriptive and explanatory practices of folk psychology. And the very ease with which the theory accommodates the folk commitment to propositional modularity is, I suggest, a further consideration in its favour. There is thus a nice consilience between the arguments for a folk commitment to propositional modularity and for its supermental vindication.

Conclusion and prospect

I have argued that folk psychology is committed to propositional modularity and that this commitment can be vindicated at the supermental level, whatever the internal architecture of the brain. In the next chapter I shall consider another problematic folk commitment and show how this, too, can be vindicated in a similar way.
7 Conceptual modularity

The previous chapter looked at the claim that propositional attitudes are functionally discrete. I now want to consider a similar thesis about concepts – conceptual modularity, as it has been dubbed. Again, I shall argue that there is a folk commitment to the thesis and that this commitment can be vindicated at the supermental level, without involving claims about the brain. The chapter concludes with some further remarks about eliminativism and the future of folk psychology.

1 The case for conceptual modularity

Conceptual modularity is a thesis about the nature of concepts, understood as psychological states and capacities. We can think of it as having two aspects, which I shall call modularity of process and modularity of vehicle. Modularity of process is the claim that there is a common capacity involved in each application of a particular concept. So, for example, if I possess the concept bachelor, then I possess some discrete capacity which can be applied in all my various thoughts about bachelors. Modularity of vehicle is the claim that thought tokens have distinctive physical properties, which correspond to their semantic properties and activate the corresponding conceptual capacities. (I shall refer to such physical properties as syntactic ones.) So when I entertain the thought that Cliff is a bachelor, my thought token has a physical property which is correlated with the semantic element bachelor and activates my capacity for thinking about bachelors.

Understood in this way, as a conjunction of modularity of process and modularity of vehicle, conceptual modularity amounts to the thesis that there is a language of thought. This thesis has often been defended as an abductive inference from folk psychology – an explanation of how folk-psychological generalizations could be true of us (Fodor 1975, 1987), but some writers have argued that there is a more direct folk commitment to it.
In particular, Martin Davies has set out what he calls an 'a priori-ish' argument for a language of thought (Davies 1991, 1992, 1998; see also Rey 1995). The following section outlines this argument, focusing on the version in Davies 1991.1

A note on terminology. There is an ambiguity in the word 'concept' that is worth flagging. The term is often used for psychological capacities – ways of thinking about properties or objects. But it is also used for abstract items – for the semantic components of propositions, which stand to words roughly as propositions do to sentences. In the sentence 'One must grasp the concept bachelor in order to understand propositions containing the concept bachelor', the first use is psychological, the second abstract. I shall use the word in both senses, relying on context to disambiguate.

1.1 Davies's argument

Davies begins by distinguishing thought content from mere information content of the sort possessed by tree-rings, written records, thermostats, and so on. The key difference between the two, he suggests, is that thought content is conceptualized. In order to entertain thoughts about an object or property one must have mastered the concept of that object or property. And concept mastery, Davies claims, involves possession of a general ability: to possess the concept F, one must know what it would be for an arbitrary thing to be F – must know, that is, of any suitable object with which one is acquainted, what it would be for that object to be F. (I say 'suitable' in order to rule out category errors.) So, to be able to think that one object, a, is F, one must be able to think – that is, to entertain and evaluate the thought – that b, c, d, or any other suitable object one can think of, is F, too. Following Evans, Davies refers to this as the Generality Constraint.2

It is a consequence of the Generality Constraint that conceptualized thought is closed under recombination of its conceptual elements. If one can think that a is F and that b is G, then, barring category errors, one can also think that a is G and that b is F.
1 Note that the terminology employed here – of modularity of process and modularity of vehicle – is mine, not Davies's, though it reflects the structure of Davies's argument.
2 This view of concept mastery is sometimes referred to as neo-Fregean, since it aims to address roughly the same range of concerns as does Frege's theory of senses. See Evans 1982.
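Closure under recombination can be pictured concretely. In this toy sketch (mine, not Davies's or Evans's), contents are predicate–object pairs, and closure says that the thinkable contents form the full product of the predicates and objects the thinker can handle:

```python
from itertools import product

predicates = ["F", "G"]
objects = ["a", "b"]

# Closure under recombination: if Fa and Gb are thinkable,
# then Fb and Ga must be thinkable too.
thinkable = {pred + obj for pred, obj in product(predicates, objects)}
assert thinkable == {"Fa", "Fb", "Ga", "Gb"}
```

Note that satisfying this condition is combinatorially cheap: a system could realize the product by listing one autonomous representation per cell, and this is just the possibility on which the next step of the argument turns.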
Davies argues, however, that meeting this closure condition in respect of a concept is not sufficient for mastery of it. The fact that a creature can represent a range of systematically related contents, Fa, Gb, Fb, Ga, does not of itself show that it has mastered the concepts involved. For it might have a set of completely autonomous representational capacities: one for Fa, another for Gb, and so on. In such a case, none of the states would manifest the sort of general representational ability that is characteristic of conceptualized thought. To have mastery of a concept, Davies concludes, a thinker must not only be able to represent the appropriate range of contents, but be able to do so in virtue of a common underlying capacity.3 This gives us modularity of process – the claim that there is a common capacity at work in all thought episodes involving the same concept.

Davies goes on to develop an argument for modularity of vehicle – and thus for a language of thought. He begins by adding a claim about the nature of conceptual capacities. Concept mastery, he suggests, is constituted by commitment to a set of inferential principles. To grasp the concept bachelor, for example, is to be committed to various inferential principles governing thoughts about bachelors – among them, that for any individual, x, it follows from the thought that x is a bachelor that x is unmarried. (The inferential principles associated with a concept need not amount to a definition of it, Davies notes.) There will thus be certain input–output patterns in our inferential dispositions which are due directly to our conceptual capacities. If I have mastered the concept bachelor, then I shall be disposed to make an inference from the thought that Cliff is a bachelor to the thought that Cliff is unmarried, and from the thought that Hank is a bachelor to the thought that Hank is unmarried, and so on. And this input–output pattern will be due to the exercise of a common conceptual capacity – my mastery of the concept bachelor. Moreover, Davies insists, this reference to the exercise of a common capacity should be understood in a 'full-blooded' sense: to say that an input–output pattern is due to the exercise of a common capacity is to say that there is a common causal explanation for the existence of the pattern, adverting to a common internal state or mechanism.
3 Davies is here following Evans (Evans 1981, 1982, ch. 4). Evans also suggests that conceptual capacities can be individually acquired and lost. To understand a predicate, 'F', is, he says, to possess a state – the subject's understanding of . . . 'F' – which originated in a definite way, and which is capable of disappearing (an occurrence which would selectively affect his ability to understand all sentences containing . . . 'F'). (Evans 1982, p. 102)
Davies expresses this by saying that the inferential processes in question are causally systematic relative to the pattern.4

From this it is a short step to the claim that thoughts have syntactic properties. For if a process is causally systematic relative to a certain pattern in its inputs and outputs, then the inputs to the process must possess some physical property whose detection triggers the production of the appropriate outputs. Davies provides an illustration. A drinks machine is operated by inserting tokens of various colours and shapes. Square red tokens deliver coffee with milk, square blue tokens deliver coffee without milk, round red tokens deliver tea with milk, and round blue tokens deliver tea without milk. There are several input–output patterns here: square tokens lead to the delivery of coffee, red tokens lead to the addition of milk, and so on. Now, the operation of the machine is causally systematic relative to one of these patterns if there is a common causal explanation for the pattern's existence. For example, there is causal systematicity relative to the square token ⇒ coffee pattern if the machine contains a single coffee-producing mechanism triggered by all square tokens (rather than, say, two autonomous mechanisms, one triggered by square red tokens and producing white coffee, the other triggered by square blue tokens and producing black coffee). Suppose that the machine is indeed causally systematic relative to the square token ⇒ coffee pattern. Then square tokens must possess some common physical property, not shared by tokens of a different shape, which triggers the coffee-producing mechanism. (This property need not be squareness, but it must be some property that is coextensive with squareness among input tokens – a certain weight, say.)
4 Davies offers a variant of this argument, which draws on Christopher Peacocke's work on concept possession (Peacocke 1992). According to Peacocke, mastery of a concept consists in an appreciation of its canonical grounds and commitments – an appreciation of which thoughts warrant its application and which are warranted by its application. For example, a canonical commitment of thinking that someone is a bachelor might be to accept that they are unmarried. In other words, if you have mastered a concept, then you will regard certain inferential transitions as warranted simply by the fact that they involve the concept. This does not, Peacocke emphasizes, mean that you must conceptualize the form of the transition, or have explicit command of a rule which prescribes it. Rather, it is enough that you find these inferential transitions 'primitively compelling' – compelling in virtue of their form. The idea, Davies explains, is that the form of the transition should be causally explanatory: to say that a subject finds a particular inference primitively compelling in virtue of its form is to say that its form enters into a causal explanation of their making it (Davies 1991, pp. 246–7). In other words, there is a common causal explanation for inferences of this form – the processes involved are causally systematic in Davies's sense.
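Davies's machine can be given a toy rendering. The two variants below are my reconstruction, not Davies's own formulation; they are behaviourally indistinguishable, but only the first is causally systematic relative to the square token ⇒ coffee pattern, since only it contains a single mechanism that every square token triggers.

```python
from dataclasses import dataclass

@dataclass
class Token:
    shape: str    # "square" or "round"
    colour: str   # "red" or "blue"

def systematic_machine(token):
    """One coffee-producing mechanism, triggered by every square token."""
    drink = "coffee" if token.shape == "square" else "tea"
    with_milk = (token.colour == "red")
    return drink, with_milk

def unsystematic_machine(token):
    """Same input-output profile, but no common coffee mechanism:
    white coffee and black coffee come from autonomous branches."""
    if token.shape == "square" and token.colour == "red":
        return "coffee", True        # dedicated white-coffee mechanism
    if token.shape == "square" and token.colour == "blue":
        return "coffee", False       # separate black-coffee mechanism
    if token.shape == "round" and token.colour == "red":
        return "tea", True
    return "tea", False

# Identical behaviour; the difference lies wholly in internal organization.
token = Token("square", "blue")
assert systematic_machine(token) == unsystematic_machine(token) == ("coffee", False)
```

In the first variant the square token ⇒ coffee pattern has a single causal explanation; in the second it is, so to speak, a coincidence of two.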
The same goes, Davies concludes, for the inputs to causally systematic inferential processes. If my inferential processes are causally systematic relative to the bachelor ⇒ unmarried pattern, then my thoughts about bachelors must share some common physical property which triggers the mechanism that is responsible for the pattern. The same goes for thoughts about other objects and properties. In general, conceptualized thoughts must possess physical properties which are correlated with their semantic elements and activate the corresponding conceptual capacities. Such properties are by definition syntactic ones.

Davies leaves the argument here, but it is not quite complete. He wants his conclusion to apply to all thoughts, but he has established it only for those that serve as inputs to inferential processes which manifest our conceptual capacities. And not all inferential processes are of this sort (think of inductive inferences, for example). In fact, we can easily move to the wider conclusion. For any conceptualized thought can in principle serve as input to inferential processes which manifest our mastery of its component concepts. If you have a thought involving a concept C, then it is possible for you to make inferences from this thought which directly reflect your mastery of C. Since such inferences require inputs with syntactic properties, it follows that all conceptualized thoughts have such properties.5

1.2 Concepts as skills

How persuasive is Davies's argument? I shall grant the initial moves and the existence of a commitment to modularity of process. The issue I want to focus on is how this commitment should be interpreted. Davies assumes that it involves claims about the sub-personal level: to say that a common conceptual capacity is involved in a set of inferential transitions is, he claims, to say that there is a common internal mechanism which mediates the transitions. The drinks machine example serves to reinforce this interpretation. The example is so set up that there are only two options: either a pattern depends on the operation of a single internal mechanism, or there is no common explanation for its existence.
5 Georges Rey has outlined a similar argument for a language of thought (Rey 1995). To be capable of thought, Rey argues, is to be capable of making inferential transitions between thoughts in virtue of their logical properties, and it will be possible for a thinker to do this only if the logical structure of their thoughts is causally available to them – that is, only if it is marked by syntactic properties. Again, the argument has a general conclusion: if any thought can enter into logical transitions, then all thoughts must have syntactic structure.
So interpreted, the commitment to modularity of process is worrying, since it imposes restrictions on the kind of neural architectures that can support conceptualized thought – and thus means that our status as thinkers could be undermined by discoveries in neuroscience. Indeed, Davies speculates that certain connectionist models of memory may not support genuine concept possession and offers an 'invitation to eliminativism' (Davies 1991).

This is too swift, however. For there are alternative ways of cashing out the demand for modularity of process which do not involve internalist commitments. I shall begin with a proposal by Andy Clark (Clark 1991, 1993a, ch. 10). Clark accepts that thought requires the exercise of distinct conceptual capacities, which can be systematically exercised in a variety of contexts. However, he identifies these capacities with personal skills, rather than internal mechanisms.6 To grasp a concept, F, he suggests, is to possess an overall skill in dealing with Fs – a skill composed of various sub-skills. For example, grasping the concept dog involves possessing a variety of dog-related sub-skills – being able to talk about dogs, interact with dogs, recognize dogs, understand dog-related metaphors, and so on – the range varying somewhat from person to person. These skills, Clark claims, need have no unified internal basis. What unifies them is the existence of an external context – a human interest in dogs as such – in which they are variously employed. Their unity is visible only when we consider the human needs and preoccupations which they subserve – when we examine them through what Clark calls the 'lens of folk-psychological interests'. (Clark compares practical skills, such as a skill at golf. A proficiency at golf comprises a number of disparate sub-skills: driving, putting, chipping, and so on. What unifies these sub-skills is the existence of an external social context – playing golf – in which they are all employed.) From this perspective, Clark claims, the question of whether a bunch of sub-skills constitutes a single capacity will often depend on how they were acquired. Sub-skills acquired in the course of negotiating a unified folk domain will generally be thought of as constituting a single skill. Similar sub-skills acquired in disparate domains will not.

The idea that conceptual capacities are personal skills is, I think, an attractive one. It certainly reflects our epistemic situation: we are very confident that we are concept-possessors, but not at all confident that we have a discrete internal mechanism associated with each item in our conceptual repertoire.
6 As Clark points out, this seems to reflect Evans's view of the matter: see Evans 1982, pp. 101–2.
Clark's way of developing the idea is unsatisfactory, however. For it simply fails to engage with the intuitions that support Davies's argument. Davies is interested in concepts as psychological states and capacities – as ways of thinking about objects and properties – which are employed in episodes of occurrent thought.7 Yet for Clark, conceptual skills are behavioural ones – skills in classifying objects and interacting with them. Indeed, as Clark acknowledges, even a giant look-up table could count as a concept possessor by his lights, despite its complete lack of cognitive processes (Clark 1991). (A giant look-up table is a robot which simulates human behaviour by matching perceptual inputs with behavioural outputs according to a long list of pre-set instructions, and without engaging in reasoning. See, for example, Copeland 1993.)

Now, of course, even if we grant the need for a psychological theory of concepts, we might question whether it must be an inferential-role theory of the sort Davies adopts. This is not the place for a detailed discussion of theories of concept possession – indeed, I suspect that there are several different strands in our talk of concepts, which call for different theoretical underpinnings. But it is not implausible to think that inferential-role theories do capture at least one salient strand of our concept talk. It is hard to deny that one way of acquiring a concept is by establishing inferential links between it and other concepts one already possesses. That is why we can acquire new concepts by consulting a dictionary. In what follows I shall outline an inferential-role theory of concept possession which addresses Davies's concerns and supports conceptual modularity, yet which remains broadly faithful to Clark's view of concepts as skills and does not involve claims about the internal architecture of the brain.

2 Conceptual modularity vindicated

This part of the chapter develops an account of concept possession at the supermental level. I identify supermental conceptual capacities with personal-level inferential skills, and show that, on this view, the supermind supports both modularity of process and modularity of vehicle. As in the earlier discussion of propositional modularity, I also show that these claims about the supermind are compatible with an austere view of the basic mind.
7 Davies notes that his argument presupposes propositional modularity (Davies 1998, pp. 232–3). Evans, too, makes it clear that he thinks of conceptual abilities as exercised in episodic thought (Evans 1982, pp. 100–5).
2.1 Supermental conceptual capacities

Supermental states are premising policies, intentionally executed at a conscious level. As we saw in chapter 4, in order to execute such policies it is necessary to have command of personal inferential skills – skills at working out what conclusions and decisions are warranted by our premises and goals. Central among these skills, I argued, is the ability to construct arguments in inner speech, following learned inference rules, such as modus ponens. As I emphasized, our command of these rules does not have to be conscious and explicit. We may also have a basic-level knowledge of inference rules defined over the words and structures of natural language – a knowledge acquired in the course of interpersonal debate and manifesting itself in an ability to construct valid arguments in overt or private speech. Even if we do have explicit command of inference rules, I argued, we typically rely on non-conscious knowledge of this kind to guide our everyday premising activities.

Now, I suggested that among our inference rules there are what we may loosely call conceptual rules – rules defined over concepts, such as the rule that, for any x, the thought that x is a bachelor licenses the thought that x is unmarried. Again, our knowledge of these rules may be non-conscious. We may never have consciously articulated the rule that x is a bachelor licenses x is unmarried, but simply learned to recognize the word 'bachelor' as a distinct lexical item with a distinct inferential potential and to regard it as licensing the application of 'unmarried'. And, again, we can think of this as involving a basic-level knowledge of rules defined over linguistic items – in the example mentioned, the rule that 'bachelor' licenses 'unmarried'.

Most words are, of course, associated with a cluster of rules of this kind. 'Bachelor', for example, licenses not only 'unmarried', but also 'male', 'adult', and 'human'. And each of these rules can be applied in an unlimited range of contexts. If I know that 'bachelor' licenses 'unmarried', then I shall be able to exploit this knowledge in calculating the consequences of any sentence about bachelors. I shall be able to work out, for example, that 'Cliff is a bachelor' licenses 'Cliff is unmarried', that 'Hank's best friend is a bachelor' licenses 'Hank's best friend is unmarried', and so on. I shall say that, collectively, the cluster of inference rules associated with a particular word constitutes the inferential potential of the concept the word expresses, and I want to suggest that to master a concept is to know, and be able to apply, the inference rules that constitute its inferential potential.
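As a rough illustration of this proposal – a sketch of mine, not anything offered in the text – the inferential potential of a word can be pictured as a cluster of word-level rules applicable to any premise in which the word occurs:

```python
# Inferential potential: the cluster of word-level rules associated with a word.
RULES = {
    "bachelor": {"unmarried", "male", "adult", "human"},
}

def consequences(subject, predicate):
    """Apply every rule in the predicate's cluster to the premise 'subject is a predicate'.

    One cluster of rules serves any subject whatever, which is how a single
    capacity can be exercised in an unlimited range of contexts.
    """
    return {(subject, licensed) for licensed in RULES.get(predicate, set())}

assert ("Cliff", "unmarried") in consequences("Cliff", "bachelor")
assert ("Hank's best friend", "male") in consequences("Hank's best friend", "bachelor")
```

The contrast drawn in section 2.2 below – between knowing the word-level rule and merely storing case-by-case rules for Cliff, Hank, and the rest – corresponds to the difference between a table keyed to words, as here, and one keyed to whole sentences.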
Note that in claiming that concept mastery involves knowledge of conceptual rules, I am not claiming that the inferences licensed by these rules are analytically valid and unrevisable. I assume that a conceptual capacity may be constituted by a cluster of rules without any of these rules being individually necessary to its identity. The view of concept mastery outlined here is thus compatible with the suggestion in chapter 4 that inference rules are highly reliable heuristics rather than semantic axioms. Note, too, that people may differ in the range of inference rules they associate with a given word. I assume, however, that all competent speakers will associate the same core cluster of inference rules with each word, reflecting their shared grasp of the concept it expresses. Finally, note that although I assume that conceptual rules are typically defined over words and applied in inner speech, and though I shall for the moment focus exclusively on this sort of language-dependent concept mastery, I am not assuming that concept possession requires language. I shall say something about the possibility of non-linguistic concept application later.

What I am proposing, then, may be regarded as a more restricted version of Clark's concepts-as-skills story. I am suggesting that we identify conceptual capacities with personal skills – only skills with words, rather than objects. To master a concept is to be able to perform certain inferential operations upon sentences containing words which express the concept, drawing on a knowledge of rules defined over them. And like Clark, I claim that these skills may be composed of a bunch of discrete sub-skills – skills for applying each of the various inference rules associated with a word – which are unified only externally, by the context in which they are employed – the context, that is, of conscious reasoning to and from thoughts involving the word. (This is not, of course, to say that conceptual capacities do not have unified internal bases, just that nothing in their description requires them to be internally unified.) The conditions for possession of a conceptual capacity are thus squarely personal – consisting in an ability to recognize a certain word, knowledge of various inference rules defined over it, and the ability to apply this knowledge in the execution of one's premising policies. This view of concept possession, then, retains the attractions of the concepts-as-skills view while doing justice to the idea that conceptual capacities are cognitive ones, displayed in episodes of occurrent thought. I shall now show that it also supports both modularity of process and modularity of vehicle.
2.2 Modularity of process

Although there may be no common internal state underlying our knowledge of the various inference rules associated with a particular word, there is still a good sense in which this knowledge constitutes a single capacity, which can be employed in a variety of contexts. If we know the inference rules associated with the word ‘bachelor’, then we shall be able to draw on this knowledge whenever we entertain a premise containing the word ‘bachelor’. This is not to say that we shall actually apply any or all of the rules on every such occasion – it depends on whether we think that doing so will further the inquiry we are pursuing at the time – but we shall be disposed to do so, if appropriate. Our knowledge of the rules thus constitutes a common resource available in every thought episode about bachelors. And this, I suggest, is enough to justify the claim that a common conceptual capacity is involved on all these occasions – that is, that modularity of process holds. It may be objected that this is not modularity at all. If conceptual capacities are personal skills, rather than internal mechanisms, then we cannot capture the distinction between someone who has really grasped a concept and someone who merely behaves as if they have. Anyone who displays the appropriate inferential dispositions will count as having mastered the concept bachelor, regardless of why they display them. This objection misrepresents the proposal, however. On the proposed account, displaying the right inferential dispositions is not sufficient for concept mastery. For example, suppose that whenever I entertain a premise of the form ‘x is a bachelor’, I am disposed to derive a conclusion of the form ‘x is unmarried’ – and suppose for simplicity’s sake that this is the only inferential disposition associated with mastery of the concept bachelor. Would it follow that I have mastered the concept bachelor? No; for I might possess this inferential disposition without having learned the rule that ‘bachelor’ licenses ‘unmarried’. I might, for example, have learned a bunch of specific inference rules – that ‘Cliff is a bachelor’ licenses ‘Cliff is unmarried’, that ‘Hank is a bachelor’ licenses ‘Hank is unmarried’, and so on, for all the bachelors I know. Or I might have formed an empirical belief that underwrites the inferences – say, that bachelors are so obnoxious that they never attract mates. In short, two people might make exactly the same range of inferences about bachelors, but only one of them do so in virtue of a knowledge of inference rules defined over the word ‘bachelor’ – that is, in virtue of a mastery of the concept bachelor.
I am suggesting, then, that a common conceptual capacity is at work in a range of inferences if they are all dependent on knowledge of a single conceptual rule or set of such rules. Now, I argued in chapter 4 that the knowledge which guides our conscious reasoning will usually be of the non-conscious, basic-level type, so it is important to show that the proposed account is compatible with an austere view of the basic mind. On such a view, the claim that a person inferred ‘Cliff is unmarried’ from ‘Cliff is a bachelor’ because they believed that ‘bachelor’ licenses ‘unmarried’ amounts to this: that they possessed that belief, understood as a multi-track behavioural disposition, and would not have made the inference if they had not possessed it (for convenience I write here as if basic belief were flat-out). Such a reading is, I suggest, perfectly adequate to support the proposed account. The account does not assume that our beliefs about conceptual rules receive occurrent activation or that they are dynamic causes of the inferences we make. It may be objected that on the proposed reading it becomes vacuous to cite a belief in a conceptual rule in explanation of an inference. On an austere view, to believe that ‘bachelor’ licenses ‘unmarried’ is simply to be disposed to make inferences from ‘bachelor’ to ‘unmarried’, and it is therefore uninformative to explain the inference by reference to the belief. And this in turn means that we cannot sustain the distinction between genuine and apparent concept mastery. Anyone who is disposed to make the inference from ‘bachelor’ to ‘unmarried’ will count as possessing the belief that ‘bachelor’ licenses ‘unmarried’, and it will be impossible to draw a distinction between cases where the disposition is dependent on a belief in the rule and cases where it is dependent on other beliefs. This objection is weak, however. For even on an austere view there is more to possessing the belief that ‘bachelor’ licenses ‘unmarried’ than simply being disposed to make inferences from ‘bachelor’ to ‘unmarried’. The belief is a multi-track behavioural disposition and will manifest itself in various other ways, too. Thus, a person who possesses the belief will not only be disposed to make inferences from sentences of the form ‘x is a bachelor’ to ones of the form ‘x is unmarried’, but will also regard these inferences as licensed simply by the presence of the words ‘bachelor’ and ‘unmarried’ in their premises and conclusions, and be disposed to justify them by reference to that fact. It is possible to possess the inferential disposition without possessing these other dispositions and therefore not vacuous to cite the belief in explanation of an inference of this type. The distinction between genuine and apparent concept mastery can thus be sustained.
A person who is disposed to make inferences from ‘bachelor’ to ‘unmarried’ because they have learned the conceptual rule that ‘bachelor’ licenses ‘unmarried’ will have different attitudes towards these inferences from someone who makes them for other reasons, and will be disposed to justify them in a different way.8 Still, it may be argued that this is not real modularity of process – not, at least, if we adopt an austere view of basic belief. For on that view, the conditions one must meet in order to count as knowing a conceptual rule are themselves just behavioural ones. To say that a person believes that ‘bachelor’ licenses ‘unmarried’ is simply to say that they display the various behavioural dispositions characteristic of a belief in that conceptual rule – including, though not confined to, the disposition to make inferences from ‘bachelor’ to ‘unmarried’. So all we have done is to replace one rather crude behavioural criterion for concept possession (exhibiting a certain inferential disposition) with another, somewhat more complex, one (exhibiting the behavioural dispositions associated with belief in a conceptual rule). This is true, but misses the point. My aim is to show that we can capture the distinction between apparent and genuine concept mastery in personal – essentially behavioural – terms. And the suggestion is precisely that it lies in a difference in the respective behavioural criteria for each. To object to the suggestion on the grounds that it takes this form is to beg the question. But does this amount to a ‘full-blooded’ causal construal of modularity of process, of the sort Davies insists upon? Will there be a common causal explanation for the inferential patterns associated with mastery of a particular concept? Will the inferential transitions be causally systematic in the way Davies describes? Yes: basic mental states are sustaining causes of action, so there will be a common sustaining cause operative in all the inferences we make in virtue of our knowledge of a particular conceptual rule. It may be objected that this is only a limited form of causal systematicity.
8 What if a person possesses more than one basic belief capable of supporting a particular inferential disposition? Suppose, for example, that I believe both that ‘bachelor’ licenses ‘unmarried’ and that bachelors are so obnoxious that they never attract mates. How, on an austere view, could we justify citing one belief rather than the other in explanation of the disposition? This would be an equipotency case, and, as I argued in the previous chapter, a counterfactual analysis will not serve to highlight one, rather than the other, of two equipotent attitudes. This is not a stumbling block, however. For we do not have to insist on an exclusive role for the conceptual rule. We can say that in such cases, the disposition is overdetermined – manifesting both a general conceptual capacity and an empirical belief.
Grasping a concept involves being disposed to apply a cluster of rules (‘bachelor’ licenses ‘unmarried’, ‘bachelor’ licenses ‘male’, etc.), and there will be no common cause at work across applications of different rules belonging to the same cluster. Now, I am not sure that there is any folk commitment to this broader form of causal systematicity, but even if there is, it can be met on the present account. For there will also be a common dynamic cause at work in the inferences we make in virtue of a particular conceptual capacity – and not just in those that involve application of the same rule. We shall be moved to apply the various inference rules we associate with a particular word only if we recognize that the premise we are currently entertaining contains the word in question. The act of recognition is necessary to trigger our disposition to apply the rules and is a dynamic cause of the resulting inferences. So there will be a common cause at work in all inferences of this kind – namely, that they are all triggered by the recognition of a particular word in their inputs. Of course, the considerations here apply only to those inferences that involve the application of rules. And, as I argued in chapter 4, not all conscious intentional inferences will be of this kind. In particular, some of them will involve the exploitation of sub-personal processes through self-interrogation. And there is no guarantee that these sub-personal processes will display causally systematic patterns. This is not an objection to the present account, however. For it was never part of Davies’s claim that all inferential processes exhibit such patterns. The claim was only that some of them do – namely, those deductive inferences which manifest our conceptual abilities (see Davies 1991, pp. 243–7). There is no reason to think that non-deductive inferences will exhibit causally systematic formal patterns. And as I argued in chapter 4, it is precisely such inferences that self-interrogation will help us to make. But can we be sure that we do not rely exclusively on self-interrogation in supermental reasoning? If we do, then the case for modularity of process at the supermental level will collapse. Now, I concede the possibility of this: we could in principle sustain premising policies without knowledge of conceptual rules. It is, however, extremely unlikely that we do. For, as I pointed out in chapter 4, it would be unwise to rely heavily on self-interrogation in executing our premising policies; to do so would be to forgo the element of cognitive control which is one of the chief benefits of having a supermind. Besides, in many cases self-interrogation would be redundant. We all know a huge number of conceptual rules – we know the meanings of the words we use – and this knowledge will guide our inferential activities as a matter of course. If you know that ‘bachelor’ means ‘unmarried man’, then you can work out straight off that bachelor Cliff is unmarried, without bothering to interrogate yourself.
2.3 Modularity of vehicle

Modularity of vehicle is the claim that thought tokens are syntactically structured – that is, that they possess a physical structure which mirrors their semantic structure and activates the corresponding conceptual capacities. Now, supermental thoughts will typically exhibit this sort of modularity. For I have argued that we typically articulate our premises in inner speech. And this means that our thoughts will possess non-semantic properties associated with their semantic elements – images of the vocal movements or sounds involved in articulating the associated words. Moreover, these properties will often play a role in determining how the thoughts are processed. For, as I have repeatedly emphasized, the rules which guide our conscious reasoning are typically ones defined over the words and structures of natural language. And in order to apply such rules we shall need to be sensitive to the linguistic forms of our premises – to recognize their lexical components and grammatical structures. Sentences in inner speech will thus have non-semantic properties which are correlated with their semantic ones and serve to trigger our conceptual capacities – and will thus be syntactically structured in the present sense. (These properties are not strictly physical ones – they are properties of images in inner speech – but since they are non-semantic and have the right causal role, I shall assume that they qualify as syntactic.) I am suggesting, then, that the requirement for syntax can be met at the personal level – that the language of thought is natural language. In a later essay, Davies considers this suggestion and rejects it. He points out that although sentences of inner speech are structured, it does not follow that this structure is causally efficacious in inference:

from the fact that Bruce imagines the sentences ‘A or B’ and ‘not-A’ as structured and moves in imagination to the sentence ‘B’, it does not follow that he performs that transition in virtue of its form. (Davies 1998, p. 246)
This is true, but it does not follow that Bruce does not perform the transition in virtue of its form, either. It depends on whether he has developed a personal-level sensitivity to the forms of his natural language and learned to regulate his inner utterances in a way that implements inference rules defined over these forms – in other words, whether he has learned to construct arguments in inner speech.
We can certainly learn to do this in overt speech, in the course of arguing with our peers, and I see no reason to deny that we can learn to do it in inner speech, in the course of executing premising policies. And if we can, then we can use natural language as a language of thought. I have argued that supermental thoughts will typically have syntax. But do they require it? The case for thinking that conceptualized thoughts require syntactic structure depends on the assumption that conceptual capacities cannot be triggered directly by the semantic properties of thoughts, but only by associated physical properties. This is plausible enough if conceptual capacities are sub-personal mechanisms, as Davies supposes; but what if they are personal skills, as I have suggested? Suppose for the sake of argument that it is possible to entertain conscious thoughts non-linguistically. Then it would seem possible for us, as reflective conscious agents, to be directly sensitive to the semantic properties of our thoughts and to apply inference rules defined directly over those properties. For example, I might entertain the conscious non-linguistic thought that Cliff is a bachelor, consciously reflect that bachelor licenses unmarried, and thereupon move to the thought that Cliff is unmarried. (This assumes conscious explicit knowledge of the conceptual rule, of course.) It may be, then, that we can have structure-sensitive processing without a structured medium of thought, and thus that modularity of process does not in fact require modularity of vehicle. I do not want to insist on this, however. For I am in any case doubtful of the possibility of entertaining conscious propositional thoughts non-linguistically (see my remarks in chapter 4), and am content to let the defence of conceptual modularity rest entirely on the linguistic case.

2.4 Basic conceptual modularity?

I conclude that, as with propositional modularity, the folk commitment to conceptual modularity can be met at the supermental level, consistently with an austere view of the basic mind and without making assumptions about the internal architecture of the brain. Davies’s invitation to eliminativism can thus be declined. This assumes, of course, that there is not a folk commitment to conceptual modularity at the basic level, too. Is that assumption warranted? I think so. If we treat folk psychology as a unified theory, then it will be natural to suppose that any architectural commitments it has are universal.
But once we abandon the assumption of unity, the boot is on the other foot. We now need positive evidence that the commitments do extend to basic belief, and it is not clear that there is any. Conceptual capacities of the kind Davies discusses are manifested in occurrent thought and explicit reasoning, and, as I argued in chapter 2, there is no folk commitment to the existence of states and processes of this kind at the non-conscious level. The point is reinforced by reflection on animal belief, where the considerations that drive Davies’s argument have little or no grip. It is implausible to suppose that animal thought is subject to the Generality Constraint – to suppose, that is, that in order to have beliefs about Fs an animal must be able to frame the thought that an arbitrary object is F. (As Dennett points out, there are surely many creatures which can think that a lion wants to eat them, but cannot frame the thought that they want to eat a lion: see Dennett 1991b, p. 27.) Indeed, it is doubtful whether animals have the capacity to frame thoughts at all – that is, to entertain propositions without endorsing or rejecting them. (This is not to deny that attributions of basic belief may presuppose a certain degree of cognitive flexibility. We might refuse to credit a creature with beliefs about lions unless we thought it capable of possessing a range of beliefs about them – without, however, requiring that it meet the stringent conditions of the Generality Constraint, and still less that it do so in virtue of a common capacity.) Nor, I think, is it plausible to adopt an inferential-role theory of concept possession for animal thought – to hold that for a creature to be able to think about lions it must be committed to a set of inferential principles governing thoughts about lions. The folk commitment to conceptual modularity does not extend to all kinds of belief, then; and given that it can be discharged so smoothly at the supermental level, we may conclude that it applies only to that level.9 Does this mean that there are no basic-level concepts? Not necessarily – basic mentality might involve some weaker form of concept possession. It is attractive to invoke Clark’s account here and identify basic-level concepts with clusters of overt skills. On this view, possessing the basic-level concept lion, for example, would involve possessing an ability to recognize lions in various contexts, interact with lions in various ways, and so on.
9 Of course, even if there were a folk commitment to basic-level conceptual modularity, it would still be open to us to adopt a revisionary approach and argue that the commitment should be restricted to the supermind.
Possession of the concept lion, in this weak sense, might then be a precondition for the attribution of basic beliefs about lions. No architectural assumptions would be involved, however, and there would be no challenge to an austere view of the basic mind. Finally, let me stress that I am not denying that there are modular conceptual capacities at the sub-personal level. Our basic minds may be supported by discrete representational resources which can be activated in a variety of contexts. (The psychological literature on concepts is, I think, best understood as concerned primarily with sub-personal representational resources of this kind.) There might even be a language of thought at the sub-personal level. I do maintain, however, that there is no folk-psychological commitment to the existence of sub-personal structures of this kind. The folk are not concerned with the sub-personal.

3 The future of folk psychology

The moral of this chapter and the previous one is that the threat of even mitigated eliminativism is remote. Although folk psychology does have architectural commitments, these apply only to the supermind and are vindicated entirely at that level. The other strand of folk psychology – the one that concerns the basic mind – likewise involves no claims about neural architecture. For it treats mental states simply as multi-track behavioural dispositions and has no architectural commitments at all. A restructured two-level folk psychology is thus compatible with anything neuroscience may teach us about the structure of the brain. Of course, showing that folk psychology is compatible with neuroscience is not the same as showing that it is true. We might come to dispense with folk psychology, not because it is incompatible with what we know about the brain, but because it is a bad theory of the mind. Indeed, this is what some people suspect will happen. Their worries centre for the most part on folk psychology’s commitment to a view of reasoning as involving the serial manipulation of discrete propositional contents (see, for example, Churchland 1981). This view can indeed seem unattractive. Compared with other methods, serial processing of this kind is slow, inflexible, and fragile. My response here is to refer back to the earlier chapters of this work. The model of the supermind invoked in the present chapter is no ad hoc device to save folk psychology, but, I have argued, the best explanation of a host of intuitions and insights about occurrent thought, flat-out belief, inner speech, judgement, active inference, and so on.
The case for our having superminds has already been made. And, in this light, the putative weaknesses of folk psychology begin to look much less serious. For the commitment to serial propositional reasoning applies only to the conscious, supermental strand of cognition. And it is quite true that conscious reasoning has many limitations. It is indeed a slow, fragile, and highly selective process. Our trains of conscious thought often go astray, falter, or come to a dead end. And there are many kinds of calculation which we find it very hard to perform at a conscious explicit level – calculations of relevance or abductive warrant, for example. In these cases we tend to rely on self-interrogation – co-opting sub-personal processes whose character is unknown to us. (We typically find ourselves unable to explain how we have arrived at judgements of this kind, and often justify them in aesthetic terms – speaking of properties such as elegance and simplicity.) I shall address a final objection before concluding. If folk psychology does not have sub-personal commitments, won’t it follow that some rather bizarre systems will count as having minds? Couldn’t a look-up table meet the conditions for possessing a supermind? And isn’t this a reductio of my claims? For we have a strong intuition that look-up tables do not have minds like ours. I think that this intuition is sound. There is more to having a human mind than behaving like a human. The look-up table mimics overt human behaviour without mimicking human reasoning processes; it produces the right behaviour, but does not produce it in the right way. The moral is that, in saying what it is to have a human mind, we have to say something about the processes that produce our behaviour. But this is what I have done in characterizing the supermind. To possess a supermind is to be disposed to engage in certain conscious inferential activities which are the precursors of overt action. It may be objected that if look-up tables can mimic human behaviour, then they can mimic our inferential behaviour, too – the various actions involved in forming and executing premising policies. After all, I have insisted that there are no a priori constraints on what kind of internal processes can support these actions. So nothing in my account rules out a priori the claim that look-up table architectures could do so. This is true, but it is nonetheless plain, I think, that look-up tables will be incapable of supporting a supermind. For on any substantial theory of consciousness it will turn out that they are not conscious. And if they are not conscious, then they certainly cannot engage in conscious supermental reasoning.
Conclusion and prospect

I suggested at the end of the last chapter that there is a nice consilience between the argument for propositional modularity and that for its supermental vindication. The same goes, I think, for the corresponding arguments concerning conceptual modularity. The case for thinking that there is a folk commitment to conceptual modularity looks more plausible when we see that the commitment can be vindicated at the supermental level, without involving contentious claims about neural architecture. Of course, if supermind theory is true, then this is no more than we should expect; if the theory really is latent within folk psychology, then many aspects of the folk practice should become more intelligible when interpreted in its light. The following, final, chapter will provide further illustrations of this.
8 Further applications

The theory developed in this book is a wide-ranging one, with applications to many other issues in the philosophies of mind and action. If folk psychology does require a two-level framework of the sort outlined, then many existing debates will need re-evaluation. The framework will require new distinctions, permit new explanatory strategies, and cast old problems in a new light. There should also be implications for psychology and cognitive science. In this final chapter I shall briefly consider three areas in which supermind theory may have application: akrasia, self-deception, and first-person authority. In each case my aim will be modest. I shall not attempt to survey the literature or to argue for the superiority of my approach, but confine myself to sketching the application of the theory and briefly indicating how it can resolve some of the puzzles associated with the phenomenon in question. The discussion will also include some remarks on the nature of intention. I shall close the chapter with a section outlining some possible applications of supermind theory in scientific psychology.

1 Akrasia

Akrasia and self-deception are a pair of puzzling phenomena. Though apparently common, they seem to involve the violation of some basic norms of rationality, and it is not obvious how a unified intentional agent could suffer from them. They also present a special challenge to those who take an austere view of the mind. If mental explanation involves a presumption of rationality, as austere theorists maintain, then it is hard to see how attributions of akrasia and self-deceit could be warranted. The only resource for such theorists, it seems, is to adopt what is sometimes called a partitioning strategy – that is, to suppose that the human psyche is partitioned into distinct subagents, each internally rational but in conflict with the other (Davidson 1982; Pears 1984; I borrow the term ‘partitioning strategy’ from Mele 1987a, 1987b).
I want to propose an alternative approach. Both Cohen and Dennett suggest that a two-strand theory of belief is needed in order to explain akrasia and self-deception (Cohen 1992, ch. 5; Dennett 1978a, ch. 16), and in what follows I shall show how the version developed here can provide a robust account of them, compatibly with an austere view of the basic mind. I shall deal with akrasia in this section and with self-deception in the next.

1.1 Akrasia and the supermind

By ‘akrasia’ I mean acting against one’s better judgement. The akratic agent judges that it is better, all things considered, to perform action A rather than action B, and yet, without revising or abandoning that judgement, freely and intentionally performs B. Note that in saying that the akratic agent judges it better to perform action A, I do not mean that they judge it morally better – just that they judge it better for them, given their needs, desires, and interests. Thus a criminal might judge it better to kill a witness rather than let them escape, and would be akratic if they failed to act upon this judgement. In other words, akrasia need not involve a failure to do what is right, merely a failure to do what one’s practical reasoning tells one to do. Note, too, that akrasia is different from inconstancy in judgement. A person who keeps revising their judgements of what it is best to do is inconstant, but not akratic. I think we all recognize akrasia as a condition from which we occasionally suffer. Yet it is puzzling. Why does the agent perform the akratic action, B, given that they judge it better to perform the other one, A? If the akratic action is intentional, then it must be done for a reason (or so a very common view has it). Yet the akratic agent seems to have no reason for performing B. Any reason they might have had has already been outweighed by other considerations favouring A. It seems that the agent both has a reason for performing B and has none – that their action is both intentional and not intentional. (I am using ‘reason’ here in an internalist sense, of course.) Another problem, highlighted by Donald Davidson, is how to reconcile the possibility of akrasia with two plausible principles linking judgement with desire and desire with action. If an agent judges that it would be better to perform action A rather than action B, then they want to perform A more than B; and if an agent wants to perform action A more than action B, and believes they are free to perform either, then they will intentionally perform A if they intentionally perform either. So how can people intentionally act against their better judgement? (See Davidson 1969; the principles mentioned are the ones Davidson calls P2 and P1, in that order.)
There are many ways of responding to these problems (for a survey, see Walker 1989). Some writers see akrasia as resulting from the application of incommensurable values (Wiggins 1980); some argue that it involves the ‘usurpation’ of behavioural control by an unruly desire (Pears 1982); while others endorse a partitioning strategy, positing ‘two semiautonomous departments of the mind’, one favouring the judgement, the other prompting the akratic action (Davidson 1982, p. 300). No consensus has emerged, however, and I want to suggest a new approach, grounded in supermind theory. A central feature of akrasia is that it involves making a practical judgement – a judgement about what it is best to do. And these judgements are typically conscious ones: the evidence for the existence of akrasia is precisely that we are sometimes aware of acting against our conscious judgements. An explanation of akrasia should thus include an account of what conscious practical judgements are and what cognitive role they have – a condition which not all existing theories meet. Here supermind theory can supply what is needed. The theory represents conscious practical judgements as the conclusions of episodes of supermental practical reasoning in which we deliberately calculate what to do in the light of our premises and goals, applying the inferential techniques described earlier and prioritizing or revising our goals as necessary in order to resolve conflicts between them. And this gives us a new perspective on akrasia. For as I argued earlier, the judgements which issue from supermental reasoning influence our behaviour in virtue of our basic beliefs and desires about them. We act upon the conclusions of our supermental reasoning because we believe that these conclusions are dictated by our premising policies and want to adhere to those policies (as before, I use the term ‘premising policies’ to include both acceptancesp and goal pursuits). That is, the efficacy of the supermind depends on the possession of a strong basic desire to adhere to one’s premising policies. This desire can be overridden, however. In some cases the desire to adhere to one’s premising policies will be outweighed by an even stronger desire to perform some other action. Such cases, I suggest, are precisely ones of akrasia. An example may help to illustrate this. Suppose that I am debating what to do tonight. The late movie on television is one I have wanted to see for some time, but I am feeling tired and have to be up early for an important meeting.
I review my goals, including seeing the movie, making a success of my job, and staying healthy. Prioritizing and making some simple inferences, I judge that the best thing is to get an early night. Since I have a strong basic desire to adhere to my goals and acceptancesp, I now become disposed to act upon this judgement. Yet as I am preparing for bed, I idly switch on the television and become engrossed in the movie. I do this, moreover, without revising my judgement. My basic desire to continue watching the movie outweighs my basic desire to act upon my judgement, but does not lead me to revise that judgement. I still judge it best to go to bed, and the awareness that I am failing to act upon this judgement causes me uneasiness and subsequent regret. (Note that in this example, performance of the akratic action was consciously considered and rejected prior to the formation of the judgement – seeing the movie was a goal of mine – but this is not an essential feature of the account. In other cases the akratic action may never have been consciously considered.) I think that this view can resolve some of the puzzles surrounding akrasia and explain our competing intuitions about the condition. To begin with, it yields a straightforward explanation of why the agent performs the akratic action. Take the example just described. I stay up because I have a strong basic desire to see the movie – stronger than my desire to perform any of the alternative actions open to me. So the account vindicates the claim that akratic agents have reasons for their actions – that their actions are intentional. Yet the account also explains why we feel there is a sense in which such actions are performed without reason. For they are intentional only at the basic level. I have no supermental reason for staying up (although the goal of seeing the movie did figure in my conscious reasoning, it was overridden by other considerations). In fact, my action has no supermental explanation at all – the relative weakness of my basic desire to adhere to my premising policies having rendered my supermental states temporarily impotent. (Thus it is wrong to talk of a defeated desire usurping behavioural control, as if the state which generates the akratic action is the same as that which was defeated in practical reasoning. The two states are quite different: the former a basic desire, the latter a superdesire.) It is important to emphasize that I am not claiming that any action which lacks a supermental explanation counts as akratic. Many ordinary non-akratic actions are generated without supermental activity and have only basic-level explanations.
What renders an action akratic is not simply the absence of supermental influence, but the failure of it. Actions are akratic if they are performed in the face of a countervailing supermental judgement. Ones performed in the absence of such judgements are unreflective, but not akratic. This view also allows us to reconcile the possibility of akrasia with Davidson’s two principles, which now turn out to be ambiguous. Take the first principle – that if an agent judges that it would be better to perform action A rather than action B, then they want to perform A more than B. If we identify judgements with the conclusions of episodes of supermental practical reasoning, as I have suggested, then the truth or falsity of this principle depends on whether the phrase ‘want more’ refers to a basic preference or a supermental one. If the former, then the principle does not come out true. As we have seen, I can judge (consciously, at the supermental level) that it is better to perform A rather than B, yet have a stronger basic desire to perform B. If, on the other hand, the principle refers to a supermental desire, then there is a sense in which it does come out true. Of course, since superdesires are flat-out states, they cannot differ in strength in the way that basic desires can: one either pursues a certain goal or does not. However, goals can be assigned relative priority, and in this sense one can superdesire one thing more than another. And on this reading the principle comes out true, at least for the most part. If I judge that it is better to perform A rather than B, then I shall typically assign the goal of A-ing priority over that of B-ing. Davidson’s other principle is also ambiguous. Davidson claims that if an agent wants to perform action A more than action B, and believes that they are free to perform either, then they will intentionally perform A if they intentionally perform either. Now, if the desire here is a basic one, then the principle is true: other things being equal, the stronger basic desire prevails. But there is no incompatibility with akrasia, since on the corresponding reading the other principle came out as false. If the desire is a superdesire, on the other hand, then the second principle is not true. I can assign the goal of A-ing priority over the goal of B-ing, yet perform B all the same. This will happen if my basic desire to perform B is stronger than my basic desire to act on my premises and goals – that is, if I am akratic. Again, then, there is no incompatibility with akrasia. It is true that if I superdesire to perform A more than B, then in so far as my subsequent action is guided by my supermental states, I shall perform A if I perform either. That is to say, understood as a claim about action that is intentional at the supermental level, the principle is true.
There is, then, a non-equivocal reading on which both principles come out as true. There is still no incompatibility with akrasia, however. For all that is ruled out on this reading is the possibility of action that is both akratic and intentional at the supermental level. And I have argued that akratic action is not intentional in this way. The proposed account also highlights the nature of the irrationality involved in akrasia. It is simply a failure to do what one’s conscious reasoning tells one to do. (If we think of conscious practical judgement as constituting the will, then the other name for akrasia – weakness of will – is an appropriate one for this failure.) However, the account does not require us to attribute any basic-level irrationality to the akratic agent. Since their basic desire to perform the akratic action is stronger than their basic desire to act upon their premising policies, it is rational for them to prefer it. The account is thus compatible with an austere view of the basic mind, which treats rationality as a constitutive principle of psychological interpretation. This is not to say that the akratic agent’s basic attitudes are immune from criticism, however. For it may be unwise to value short-term pleasure above adherence to one’s premising policies. There is little point committing oneself to a policy unless one is reasonably confident that one will adhere to it, and a persistent failure to adhere to past policies will erode this confidence. People who are routinely akratic may thus become unable to sustain an effective and coherent supermind (or, more precisely, unable to sustain a supermind capable of effective practical reasoning; they might still be able to maintain its theoretical functions). To sum up, then, supermind theory can provide a robust account of akrasia, compatible with a commitment to basic-level austerity and without the extravagance of a partitioning strategy. Of course, the account applies only to agents with superminds, and it may be objected that this makes it implausible. Surely, we can imagine akrasia occurring in a single-level mind? This is to misconstrue the account, however. It is not offered as a conceptual analysis of akrasia, but as an empirical theory of human akrasia. I am not claiming that akrasia must involve supermental processes, merely that it does in us. The objector may say that even this is too strong. Could we not be akratic simply in relation to our non-conscious basic-level decisions, without any involvement of the conscious supermind? I grant that this sort of akrasia is conceivable (at least if we are prepared to give up an austere view of the basic mind), but see no evidence for its existence. As I mentioned earlier, the evidence for the existence of akrasia is simply that we are sometimes aware of acting against our conscious judgements.1
1.2 Akrasia and intention

So far I have been concerned with the standard form of akrasia, which consists in failure to act upon a practical judgement. It has been suggested, however, that other kinds of akrasia are possible, and, in particular, that we may akratically fail to form, or to act upon, an intention (Mele 1987a, 1992; Rorty 1980). In this section I shall say a little about this kind of akrasia. This will also give me the opportunity to say something about intention itself and how it fits into the two-level framework I have been developing. We use the term intention to characterize both actions and states of mind (Bratman 1987). We speak of actions being done with intentions and of agents having intentions to perform future actions. In what follows I shall be using the term in the latter sense. My concern is with future-directed intentions, since it is in connection with these that the possibility of akrasia arises, and I shall say nothing about intentions manifested in action. (Thus, I shall not address the question of whether acting with an intention must involve having an intention.) Henceforth, ‘intention’ should be understood to mean ‘future-directed intention’, unless otherwise indicated. It is sometimes claimed that intentions are just complexes of beliefs and desires – that to have an intention to perform an action is simply to have certain beliefs and desires relating to the proposed action. In recent years, however, a powerful case has been mounted for regarding intentions as a distinct species of mental state, with a distinctive functional role (Bratman 1987; Harman 1986; Mele 1992). Of these accounts, Michael Bratman’s has been particularly influential.
1 Cohen also appeals to the belief/acceptance distinction in order to explain akrasia. However, his account is the opposite of the one outlined here – representing the akratic action as acceptance-based and the ineffective judgement as belief-based. According to Cohen, the akratic agent ‘has a moral belief that requires him to bring it about that not-p, while he self-indulgently accepts as his maxim the principle of bringing it about that p’ (1992, p. 153). This view seems to be dictated by Cohen’s assumption that we are morally responsible for an action only if it stems from an act of voluntary acceptance. However, it seriously mischaracterizes the phenomenon. Typically, we do not deliberately accept that we shall perform our akratic actions – let alone accept maxims which dictate their performance. Rather, we lapse into them despite ourselves.
According to Bratman, to have an intention to do something is not simply to desire to do it or to believe that one will do it, but to be committed to doing it. Intentions, he argues, are parts of larger action-plans and help us to organize our activities over time and to co-ordinate them with those of others. Intentions can exercise this role, Bratman claims, because they have three features: they are conduct-controlling (if I intend to A now, then I shall normally at least try to A now; in this respect intentions differ from desires, which are only potential influencers of action); they have stability (once formed, intentions tend to resist revision); and they dispose us to reason in ways that will secure their satisfaction (to think about means and preparatory steps, and to ensure their consistency with other intentions one has) (Bratman 1987, pp. 16, 108–9). Bratman refers to this as the planning theory of intention. This account is, I think, an attractive one. And it is attractive to think of intentions, so conceived, as supermental states. The planning and co-ordinating of action typically occurs at a conscious level, and the idea that intentions involve commitment suggests that they are flat-out, personally controlled states, like other supermental ones.2 I suspect, then, that intentions belong exclusively to the supermental level and have no basic-level counterpart. I shall not argue for this view, however, and shall remain officially agnostic about the existence of basic-level intentions. Since we are interested in intention-related akrasia, and since the evidence for this relates to conscious intentions, we can set aside the question of whether there are non-conscious intentions. In what follows, ‘intention’ means ‘conscious intention’.3 I suggest, then, that intentions are supermental states. More precisely, I want to suggest that they are a specialized form of goal pursuit, distinguished from the ordinary kind by two features. First, an intention aims at the performance of an action, or series of actions, rather than the existence of an independent state of affairs. Its content may be very specific, incorporating details of when and where and how the action is to be performed, and may become more specific as our planning proceeds.
2 Indeed, the planning theory of intention and the account of the supermind developed here complement each other well. Bratman notes that the planning theory assumes that planning takes place against a background of flat-out beliefs, and that the theory therefore needs to be supplemented with an account of the nature of such beliefs and of their relation to degrees of confidence (Bratman 1987, pp. 36–7 and p. 166).
Indeed, the planning theory of intention and the account of the supermind developed here complement each other well. Bratman notes that the planning theory assumes that planning takes place against a background of flat-out beliefs, and that the theory therefore needs to be supplemented with an account of the nature of such beliefs and of their relation to degrees of confidence (Bratman 1987, pp. 36–7 and p. 166). I am happy to concede that a primitive kind of intention could exist at the sub-personal level. This might consist simply in the memory of the conclusion of an episode of practical reasoning, stored for subsequent execution when the time is right. But I suspect that fullblown intentions of the sort Bratman discusses are found only at the supermental level. And it is to such states, I think, that everyday talk of intentions refers.
It is because intentions are directed at the performance of actions that they are directly conduct-controlling; desires, on the other hand, dictate actions only in conjunction with instrumental beliefs about how their objects can be realized. Their content also gives intentions a distinctive role in reasoning. Whereas the problem posed by a desire is primarily an instrumental one, that posed by an intention is primarily a planning one. The former requires us to think about which actions to perform in order to bring about the desired state of affairs, the latter about how to arrange our other activities in order to facilitate the performance of the intended action. The second distinguishing feature of intentions is that they have greater stability than other goal pursuits. Ordinary goals are open to revision or rejection at any time. It may be unwise to make a habit of continually changing one’s goals, but there is no reason to regard any particular goal as specially resistant to change. An intention, on the other hand, can perform its function of facilitating long-term planning only if it is resistant to casual revision or rejection. Thus, to form an intention to A, one must not merely take A-ing as a goal, but also commit oneself to maintaining this goal unless strong reasons for changing it present themselves. To sum up: intentions, I suggest, are stable action-oriented goal pursuits.4 It may be objected that this account involves a regress. I claim that intentions are constituted by premising policies. But, surely, adopting a policy involves forming an intention – an intention to perform the actions required by the policy (see Bratman 1987, pp. 87–91; Bratman calls such intentions general intentions, since they are open-ended, rather than being directed at a specific action). So if intentions are policies, then forming an intention will involve forming a second intention, which will in turn involve forming a third, and so on. Any account which has this consequence must be wrong. There are two lines of response to this. The first concedes that policies involve general intentions, but points out that the proposed account applies only to conscious intentions. I can maintain that forming a conscious intention involves forming a further general intention to pursue a premising policy, but insist that this general intention is a non-conscious, basic one, and that such intentions are constituted differently from conscious ones. This would, of course, involve conceding the existence of basic-level intentions. The second option, and the one I tentatively endorse, is to deny that policy adoption must involve forming a general intention.
4 Let me stress that this is not offered as a conceptual analysis of intention, but simply as an hypothesis about how human intentions are constituted.
As I argued in chapter 4, to have a policy of A-ing it is sufficient to believe that one is committed to such a policy, to know what the policy requires, and to desire to adhere to it. There is no need to invoke intentions in an account of policy adoption. This is not to deny that some policies may involve intentions. In particular, conscious ones typically will. (The policies that support supermental states, by contrast, are usually non-conscious.) To form a conscious policy of A-ing is, I suggest, typically just to form the conscious intention to pursue a policy of A-ing. On the proposed account, this intention will itself involve adopting a non-conscious policy of taking the execution of a policy of A-ing as a stable action-oriented goal. That is, conscious policies will be realized in non-conscious second-order policies. Return now to akrasia. I suggested that it is possible to be akratic in the formation or execution of intentions. One might judge it best to perform some future action, yet akratically fail to form an intention to perform it. Or one might form the intention, yet fail to execute it properly – either omitting to do the necessary planning or simply failing to perform the action when the time comes. Now, if intentions are supermental states, then these kinds of akrasia are easily explained. The explanation is the same as for the standard kind: the agent’s basic desire to perform some other action, or to do nothing, is stronger than their desire to adhere to their premising policies. For example, after I have decided that it is best to visit the dentist next week, my reluctance to make the necessary arrangements might override my basic desire to adhere to the policies that dictated the decision, leading me to refrain from adopting the making of the visit as a stable action-oriented goal. Or, after I have adopted that goal and made the initial arrangements, my fear of the dentist’s chair might outweigh my desire to stick to my goals, with the result that I do not turn up for the appointment. As with the standard kind, this sort of akrasia need not involve any basic-level irrationality – the agent does what they desire most to do – though the preferences it manifests are open to criticism. In preferring immediate satisfaction over adherence to their policies, akratic agents will undermine the effectiveness of their supermental processes, making it harder for them to rely on those processes in the future. Episodes of intention-related akrasia of the sort just described should be distinguished from cases in which an agent changes or revises their intentions without sufficient reason for doing so. Such cases are not ones of akrasia, properly speaking, since they do not involve acting against one’s intentions, but they do display a sort of weakness of will.
(Indeed, Holton argues that unreasonable revisions of intention are the paradigm cases of weakness of will: see Holton 1999.) As I emphasized, it is important to persist in one’s intentions, and a failure to do so will undermine their effectiveness. Incontinent intention revision might be thought of as involving a failure to adhere to a meta-policy of persisting in one’s intentions.

2 Self-deception

We often talk of people deceiving themselves – that a partner is faithful, that they do not have a drink problem, that their failure at work is due to the envy of colleagues. Yet the very idea of self-deceit can seem paradoxical. If I deceive you, then I intentionally induce you to believe a proposition which I believe to be false. If self-deceit follows the same pattern, then a self-deceiver is one who intentionally induces themselves to believe a proposition which they believe to be false. But how can this happen? How can I believe that p while also believing that not-p? And how can I intentionally induce myself to believe a proposition I think is false? Surely, any attempt I make will be self-defeating – serving simply to draw my attention to the fact that I think that it is false? There are various ways of responding to these problems. Some writers weaken the interpersonal model, arguing that self-deceivers do not really know the truth about the matter on which they are deceived. Self-deception, they claim, involves believing a proposition in the face of strong evidence for its falsity, but without actually believing that it is false (Canfield and Gustavson 1962; Mele 1983; Penelhum 1964). Others suggest the opposite: that self-deceivers do know the truth, at least non-consciously, but do not really come to believe the falsehood. To deceive oneself that p, they suggest, one need not actually believe that p, but simply be disposed to avow it sincerely (Audi 1982b), or to avoid entertaining the occurrent thought that not-p (Bach 1981). A third group of writers adopt a partitioning strategy, retaining the interpersonal model, but positing distinct subagents within the self-deceiver, one of which deceives the other (Davidson 1982; Pears 1984, ch. 5).5 None of these strategies is without its drawbacks.
5 This brief list is not exhaustive, of course, though it is representative of some of the main lines of response to the problem. For a survey of work on self-deception, see Mele 1987b, and for important collections of papers, see Martin 1985 and McLaughlin and Rorty 1988.
Accounts that weaken the interpersonal model, while they may describe real psychological conditions, arguably fail to come to grips with full-blown self-deceit. Partitioning strategies, on the other hand, have an ad hoc air about them and, if taken literally, involve positing subsystems with implausibly sophisticated motives, plans, and self-monitoring abilities (Johnston 1995). Again, supermind theory offers an alternative approach, which has the robustness of a partitioning strategy without its extravagance. There are two ways in which the theory is particularly well placed to explain self-deception. It can account for the conflicting attitudes involved by assigning them to different levels, and it can explain the intentional element in the process by supposing that the deceptively induced state is a supermental one, formed in response to basic-level desires. Here is what I suggest happens in a typical case. The agent has strong evidence for the falsity of some proposition, p, and a strong basic belief that not-p. However, they also have a strong basic desire that p, feel anxiety whenever they entertain the conscious thought that not-p, and are deeply unwilling to acceptp that not-p and to face up to its epistemic and practical consequences. These desires and anxieties lead them to pursue what I shall call a shielding strategy, which involves manipulating their supermental processes in ways designed to keep the distressing thought at bay. In the weakest case they simply take steps to avoid the conscious thought that not-p (see Bach 1981 for a description of various strategies we can use for doing this). But in full-blown self-deceit they go a step further and adopt p as a general premise (that is, as a premise for deliberation on all relevant topics), thereby ending deliberation on the matter and committing themselves to a view they find comforting. I claim, then, that full-blown self-deception involves a form of pragmatic general acceptancep, in which a proposition is acceptedp for non-epistemic reasons and regardless of the evidence for its truth. To this extent, the self-deceiver is like the positive-thinker who acceptsp that they are confident and capable in order to boost their self-esteem. Where, then, does the element of deceit enter? It enters, I suggest, in the self-deceiver’s attitude towards their acceptancep. The positive-thinker may be fully aware that their acceptancep is pragmatically motivated and unsupported by evidence. The self-deceiver, on the other hand, does not consciously acknowledge this. They either do not explicitly consider the matter or, if they do, think of their attitude as one of justified belief. This is crucial, of course: to acknowledge that they had acceptedp p without evidence would be to reopen the issue of whether p is really true, which is precisely what they want to avoid. Moreover, this lack of conscious awareness is not accidental, but motivated – sustained by a basic desire to maintain the shielding strategy.
accidental, but motivated – sustained by a basic desire to maintain the shielding strategy. It is here that talk of deceit becomes particularly apposite. The self-deceiver unconsciously 'fiddl[es] with the evidential books', as Cohen puts it (1992, p. 145), focusing on evidence that favours p and ignoring or explaining away evidence that counts against it, thus sustaining the illusion that their acceptanceₚ of p is epistemically motivated. Such activities will be particularly important at the time the acceptanceₚ is formed – allowing the commitment to be made without obvious epistemic impropriety.

I suspect that the self-deceiver's basic desire to maintain their shielding strategy will have a further effect, too, contributing to a shift in their deliberative standards which makes their acceptanceₚ appear more like a genuine belief. As I argued in chapter 5, acceptanceₚ without high confidence is not really belief. An agent believes that p (in the strand 2 sense) only if they are disposed to take it as a premise in deliberations where they want to take only truths as premises ('TCP deliberations'). Now, by this criterion, the self-deceiver does not count as believing that p. Since their confidence in p is low, they would not take it as a premise in deliberations where they genuinely wanted to take only truths as premises – either suspending judgement or taking not-p as a premise instead. This would, of course, expose and undermine their shielding strategy – forcing them to acknowledge that they had acceptedₚ p for pragmatic reasons and without evidence. (Why else should they be so reluctant to rely on it when it is important to rely on the truth?) The self-deceiver's basic desire to maintain their shielding strategy is thus in tension with any basic desire they may have to treat a deliberation as TCP, and, if strong enough, may override the latter, leading them to reclassify the deliberation as non-TCP. For example, take a case where one is asked whether it is the case that p and has no reason for deceiving the questioner or concealing one's opinion. In normal circumstances the ensuing deliberation would count as TCP – one would want to take only truths as premises in deciding what to say. But if one is self-deceived with respect to p, then the situation may be different. The desire to maintain one's shielding strategy may override the desire to take truths as premises, leading one to continue premising that p, and so to declare that p.

It is important to stress that the desires mentioned will be basic-level ones and the process will not involve any conscious deceit. At a conscious level the agent will simply entertain the thought that p in, as it were, premising mode – as something they are committed to taking as a premise – unaware that they are doing so in response to a non-conscious
desire to maintain a shielding strategy, rather than out of a concern with truth. The self-deceiver will also typically avow belief in p and tell themselves that they believe it – again, prompted by a non-conscious desire to maintain their shielding strategy and without any conscious insincerity. The upshot of this is that if the self-deceiver's basic desire to maintain their shielding strategy is strong – as it will be in cases where they find the thought that not-p very troubling – then there will be few, if any, deliberations which they regard as TCP (the exceptions being ones where what is at stake is something more important to them than p – life itself, perhaps, or the life of a loved one). In such cases, the self-deceiver will take p as a default premise in most of their deliberations, just as if they genuinely believed it, and their attitude to p will be hard to distinguish from genuine superbelief.

To sum up, then, I suggest that in a typical case of self-deceit, the agent (1) has a strong basic belief that not-p, (2) has a strong basic desire to avoid consciously acknowledging that not-p, and consequently (3) pursues a shielding strategy, which involves adopting p as a general premise, manipulating the evidence so as to prevent conscious acknowledgement that they do not really believe that p, and treating as non-TCP many deliberations which they would otherwise have treated as TCP.

This view offers solutions to the core problems associated with self-deceit. Since the opposing attitudes involved are located at different levels, they do not come into direct conflict with each other, and since the shielding strategy is motivated at a non-conscious level, it is not self-defeating, as it would be if it were conscious. (Indeed, preventing conscious awareness of its own existence is an essential part of the strategy.) Moreover, as with akrasia, the account does not require us to attribute any basic-level irrationality to the self-deceiver and is thus compatible with an austere view of the basic mind. Acceptingₚ p does not involve believing p at the basic level, and though it does involve possessing certain basic beliefs – among them, that one has acceptedₚ p – none of these is inconsistent with the self-deceiver's strong basic belief that not-p. The other aspects of a shielding strategy are also compatible with basic-level rationality. The self-deceiver manipulates the evidence and adjusts their deliberative standards, but the effects of these activities are confined to the supermental level, and it is quite rational for the self-deceiver to engage in them, given their strong basic desire to avoid consciously acknowledging the truth. (As with akrasia, this is not to say that the self-deceiver's basic attitudes
are beyond criticism. It may be unwise to place a high value on shielding oneself from unpleasant truths, and better in the long run to face up to them.)

I am not going to defend this account here, but I shall briefly address a couple of objections to it before moving on. First, it may be objected that the account does not capture full-blown deceit, since it represents the self-deceiver, not as believing that p, but only as acceptingₚ it, and thus weakens the interpersonal model from which we started. My response here is to point out that the weakening involved is slight.⁶ For even if the self-deceiver's attitude to p is not belief, it is very like belief and quite different from the sort of general acceptanceₚ whose status is openly acknowledged. As we saw, a self-deceiver will regard themselves as believing that p if they consciously consider the matter, will avow p without any conscious insincerity, and will take p as a default premise in all or most of their relevant conscious deliberations, including many which they would otherwise have regarded as TCP. Indeed, it may be that their attitude does in fact fall within the extension of the folk concept of belief and thus constitutes an exception to the claim that high confidence is necessary for belief. (This would be in the spirit of our original definition of superbelief as unrestricted acceptanceₚ. Since the self-deceiver is reluctant to treat deliberations as TCP, their acceptanceₚ of p will in practice be almost completely unrestricted.) At any rate, there is a real tension between the self-deceiver's conscious and non-conscious attitudes, and given the non-conscious manipulation involved in supporting the former, it seems quite appropriate to liken their condition to interpersonal deceit.

Secondly, it may be objected that not all self-deceitful thoughts are comforting ones which serve to shield us from distressing truths. We can also deceive ourselves into thinking painful thoughts, as when a jealous person deceives themselves into thinking that their partner is unfaithful, despite having no evidence that they are (Davidson 1985, p. 144). I grant that such cases exist. The proposed account can, however, easily be extended to deal with them. The general nature of the deceit is the same as in the standard case: the agent acceptsₚ the distressing thought as a general premise and then non-consciously manipulates the evidence and adjusts their deliberative standards in order to prevent themselves from consciously
⁶ All accounts involve some weakening of the interpersonal model. In interpersonal deceit the deceived person believes that p while the deceiver does not, and it is impossible for a self-deceiver simultaneously to believe that p and not believe that p – at least in the same sense.
acknowledging what they have done. The difference is primarily one of motivation: the aim is not to shield oneself from a distressing thought, but to expose oneself to one – whether for masochistic reasons or in order to pre-empt future distress.

Finally, a note on the relation between self-deceit and wishful thinking. The latter, I suggest, also involves acceptingₚ a proposition for pragmatic reasons and then non-consciously manipulating things in order to cover one's tracks. What distinguishes it from self-deceit, I suggest, is the degree of confidence the agent has in the acceptedₚ proposition and the extent of the subsequent manipulation required. In self-deceit the agent's confidence in the acceptedₚ proposition is very low, and the need for subsequent manipulation correspondingly high. In wishful thinking, on the other hand, the agent's confidence is somewhat higher, though still not high enough for normal belief, and the need for manipulation less.

3 First-person authority

Another area of application for supermind theory lies in the explanation of first-person authority, where the theory supports a performative account of the phenomenon. In this section I shall briefly outline this account and address some objections to it.

3.1 First-person authority as performative

A person's claims about their own current mental states ('avowals') are usually regarded as authoritative. This is not to say that we would never question them – we might suspect that the speaker is lying or that they are guilty of self-deception. But we do assume that people cannot make straightforward errors about their own mental states, through overlooking or misinterpreting the evidence, as they might in describing their non-mental states. Indeed, avowals do not seem to be made on the basis of observation or interpretation at all; in typical cases we can say what mental states we have straight off, without first checking the evidence. (Or, at any rate, this is a very common assumption. There is, in fact, evidence that people sometimes confabulate when asked to report their beliefs and desires, yet without realizing that they are doing so. I shall say something about these cases later.)

First-person authority is notoriously difficult to explain – at least if we reject the idea that each person's mind is a distinct non-physical realm to
which they have unique and infallible access. If our mental states belong to the public world of physical causes and effects, then how can we be specially authoritative about them – any more than we can about any other aspect of the physical world?⁷ It is sometimes suggested that we possess an inner sense – a self-scanning mechanism which gives us specially reliable access to our mental states (Armstrong 1968; Goldman 1993). However, first-person authority seems to involve something more than reliable access, and it is often suggested that it has a conceptual character. There are various ways of fleshing out this idea. According to expressivist theories, avowals serve to express mental states, rather than to report them, and thus cannot misreport them (Wittgenstein 1953, 1980). Constitutive theories, on the other hand, allow that avowals are reports, but deny that they are empirical ones. According to such theories, in the right circumstances, believing oneself to possess a certain mental state makes it the case that one does possess it: there is nothing more to possessing the state than conceiving oneself to do so. Thus, in the right circumstances, sincere avowals are true a priori (Wright 1989, 1998). Functionalists can make a related move, claiming that it is part of the functional role of mental states to generate accurate second-order judgements about themselves, and part of the functional role of such judgements to be caused only by the states they are about – thus making it a priori that such judgements are true (see Fricker 1998).

These views have their attractions, but also some well-known weaknesses. Expressivist theories have the consequence that first-person uses of mental-state terms have a radically different meaning from other uses; constitutive theories tend to assume an irrealist conception of mental states; while the functionalist theories mentioned impose very stringent conditions for belief possession. Again, supermind theory offers an alternative approach, which also represents first-person authority as having a broadly conceptual character and which has a number of further attractions, too. Note, however, that the suggestion will apply only to beliefs, desires, and intentions. Perception and sensation will require a different treatment.

An important feature of first-person authority is that it holds only for conscious mental states. In order to identify my non-conscious mental states
⁷ The problem becomes even more difficult if we hold that the content of mental states is determined in part externally, by their causal relations to features of the external world. For then it seems that authority about our mental states will require a corresponding authority about aspects of the external world, too. For present purposes I shall set aside this aspect of the problem.
I shall need to observe and interpret my behaviour, and the conclusions I arrive at will not be authoritative. My close friends may be better observers of my behaviour than I am. (Thus avowals are authoritative only if intended as ascriptions of conscious mental states; henceforth I shall take this limitation as read.) Now, I have argued that conscious beliefs and desires are supermental states, and I want to suggest that the authority we have in regard to them depends on distinctive features of the supermind. The key feature is that supermental states are under personal control. We can actively adopt superbeliefs and superdesires by committing ourselves to appropriate premising policies. And I suggest that avowals serve to make or reaffirm premising commitments of this kind. A sincere utterance of 'I believe that p', used in reference to a conscious belief, makes a commitment to acceptingₚ p unrestrictedly (that is, to taking it as a default premise in all relevant deliberations, including TCP ones). It is, in effect, shorthand for 'I hereby acceptₚ p unrestrictedly.' Similarly, an utterance of 'I want x', used to refer to a conscious desire, makes a commitment to taking x as a goal. In these cases, the primary function of the avowal is neither descriptive nor expressive, but performative.

This approach has a number of attractive features. In order to explain I shall need to say a little about how I view performatives. (It is not essential that the reader share this view; the assimilation of avowals to performatives would, I think, remain attractive on other views of performatives, though its attractions stand out particularly well on this one.) A performative utterance is one which performs the action it apparently describes – for example 'I thank you' or 'I promise to be there.' It is sometimes denied that such utterances have truth values, but I shall assume otherwise. A performative utterance, I shall assume, not only performs some act, but also states that the speaker performs it. The case for this view is strong (see Bach and Harnish 1979; Heal 1974; Searle 1989). I shall also assume that performative utterances derive their status from the intentions with which they are made, rather than from special meanings attaching to the terms used, or from special conventions surrounding their use (though some formal performatives are dependent on social conventions). Again, there is a strong case for this view (see Bach and Harnish 1979, 1992). (Note that I am here using the term 'intention' in its other sense, to characterize actions rather than states of mind. There is no implication that performative utterances must issue from previously formed future-directed intentions.)
Now, it is a consequence of the view of performatives just outlined that performative utterances are self-guaranteeing. In sincerely uttering the sentence 'I promise to A', I make a promise to A, and thus bring it about that my utterance is true. And if avowals are performatives, then they, too, will be self-guaranteeing. In sincerely uttering 'I believe that p', I acceptₚ p unrestrictedly, and thus bring it about that I believe that p; the utterance simultaneously asserts that I am a p-believer and makes me one. The performative view thus explains the authority of avowals as a species of the more general phenomenon of self-guaranteeing performativity. This account of first-person authority might loosely be called a constitutive one, in that it holds that a sincere avowal makes it the case that the speaker possesses the avowed state; but it should be clearly distinguished from the constitutive theories mentioned earlier. According to those theories, simply believing oneself to possess a certain mental state is, in the right circumstances, constitutive of the state. On the view outlined here, by contrast, it is not the belief that one possesses a mental state that is constitutive of the state, but the act of committing oneself to a suitable premising policy. The mental state is constituted by the policy and the avowal is constitutive of the mental state only because it makes a commitment to the policy.

The performative view explains other features of avowals, too. Consider, first, a feature of their epistemology. It is often noted that if asked whether we believe a certain proposition, we direct our attention, not to ourselves and our mental states, but outward, to the aspects of the world that make the proposition true or false (Carruthers 1996c; Evans 1982, p. 225; Gordon 1995). If asked whether we believe that beef is safe to eat, we think about beef, not about ourselves. (This does not contradict the earlier claim that first-person belief reports are not based on evidence, since the evidence in question is quite different. The point was that such reports are not based on evidence for the existence of the beliefs reported.) Similarly, if asked whether we want a certain outcome or intend a certain action, we think about the desirability of the outcome or the value of the action, not about ourselves. Now, if avowals are performatives, then it is easy to see why this is so. For in avowing belief in a proposition, we are acceptingₚ it unrestrictedly, and since it will be rational to do this only if we think the proposition likely to be true, it is appropriate to consider the evidence for it before committing ourselves. Similarly, in saying that we desire a certain outcome or intend a certain action, we are committing
ourselves to pursuing the one or performing the other, and it is appropriate to consider their utility before doing so.

The view also resolves a puzzle about the semantics of psychological terms. On the one hand, first-person present-tense uses of psychological verbs seem to possess some special semantic feature which accounts for their authority. On the other hand, we have a strong intuition that psychological verbs have the same meaning in all their various persons and tenses. If I say that I currently believe a certain proposition, then I seem to be saying exactly the same of myself as I would be saying of you if I said that you believed it, or of my past self if I said that I formerly believed it. Now, if avowals are performatives, then this tension is explained. For performative verbs have the same meaning in all their uses, but can be used performatively only in the first-person present tense. My utterance of 'I promise to A' says the same of me as my utterance of 'You promise to A' says of you (namely, that we promise to A), but only the former utterance additionally makes a promise to A. That is to say, first-person present-tense uses of performative verbs are special, not because they have a different meaning, but because they perform an additional action. The same, I suggest, goes for the psychological verbs 'believe', 'desire', and 'intend'. When I say that I believe that p, I am saying the same of myself as I say of you when I say that you believe that p, but when I say it of myself I do something extra – namely, commit myself to a premising policy – which makes the statement true.

3.2 Objections and replies

This is not the place for a full defence of this account, but I shall briefly consider some possible objections to it, in order to help fill in the picture. First, it may be objected that on this account all we can be authoritative about is that we are currently committing ourselves to a premising policy; we cannot be sure that the commitment will produce a continuing allegiance to the policy and thus a persisting superbelief or superdesire. We might make the commitment and then immediately forget that we have done so or suddenly lose the will to discharge it. This is true, but does not amount to a serious limitation on first-person authority. For in normal circumstances a commitment will generate a continuing allegiance of at least some duration. Instant amnesia is rare, and whatever reasons we have for making a commitment will be reasons for discharging it too. Only in pathological cases, then, will a sincere avowal fail to introduce the
corresponding mental state, and it is no surprise that first-person authority may fail in such cases. (This is not to say that we never forget or abandon our premising policies, just that we do not do so immediately upon adopting them. An avowal does not, of course, guarantee that the avowed state will never be lost.)

A second objection is that the proposed account applies, at best, only to some avowals. Some avowals may make premising commitments, but, surely, many others serve simply to report commitments already made – that is, to let others know what we believe or want or intend. And in these cases first-person authority will not hold, since we may misremember our commitments. If asked whether I believe a particular philosophical theory which I thought about at some point in the past, I may misremember the conclusion I came to – perhaps thinking that I decided to acceptₚ it when in fact I did not. Now, it is certainly true that not all avowals serve to make new premising commitments; many, I agree, reflect commitments already made. But even in these cases, I suggest, the avowal still involves commitment. In avowing a mental state we report that we currently possess it – that is, that we are currently committed to the relevant premising policy. And we are not bound to make such a report, even if we are aware of possessing the state in question up to this very moment. For we might change our minds right now – that is, revise or repudiate the premising policy involved. Indeed, being questioned about our beliefs or desires may itself provoke such a change ('Well, I did believe it, but now you ask, I'm not so sure'). The decision to avow a mental state, then, reflects a tacit decision not to change our minds, and recommits us to the relevant policy. Thus, in the imagined case, if I were to avow belief in the philosophical theory under the misapprehension that I had previously acceptedₚ it, then I would thereby acceptₚ it and incur the relevant premising commitments.

Thirdly, it may be objected that for a speech act to constitute a commitment to a premising policy it has to be intended as such, and we simply do not think of avowals in this way. This objection is weak, however. For while it is true that we do not consciously think of avowals as commitments to premising policies, it is possible that we do so non-consciously, at the basic level. This is not to say that avowals are never made with conscious thought, just that their function is dependent on more specific, non-conscious attitudes. At a conscious level, we simply intend to say what we believe; but saying what we believe, I claim, involves producing an utterance with the non-conscious intention of making a commitment to a premising policy.
This intention reveals itself in our subsequent behaviour – in the way that we feel bound to treat the content of the avowed belief as a premise.

This objection prompts a fourth. If the function of avowals depends on our non-conscious attitudes, does this not compromise our title to first-person authority? We might think that we were making a sincere avowal when in fact our utterance was motivated by a non-conscious desire to deceive our hearer, rather than by a wish to make a serious premising commitment. This objection is also weak. For in so far as it trades on the possibility of non-conscious deceit, it applies to any theory of first-person authority. On any account, first-person authority holds only for avowals that are sincere, in the sense of not deceitful, and if we allow that non-conscious deceit is possible, then we must allow that some avowals may be insincere, and hence non-authoritative, even though no conscious deception is intended. If that conclusion is rejected, then the problem lies with the claim that non-conscious deceit is possible, rather than with the account of first-person authority offered here. (In practice, non-conscious deceit of this kind would soon reveal itself to the deceiver; we would soon realize that the avowal we had made was not sincere when we found ourselves unwilling to take the content of the avowed attitude as a premise in our private reasoning.)

Similar considerations apply to self-deceit. I claim that sincerely to avow belief in a proposition is to acceptₚ it unrestrictedly – that is, to commit oneself to taking it as a default premise in all relevant deliberations, including TCP ones. Sincere avowals will thus require high confidence, since this is a prerequisite for premising in TCP deliberations. And, again, we cannot be sure that any given avowal we make is sincere in this way. For all we know, it might have been motivated by a non-conscious desire to maintain a shielding strategy, and the commitment involved might have been restricted to non-TCP deliberations and unaccompanied by high confidence. We may, in other words, be deceiving ourselves. Again, however, this does not constitute a special objection to the present proposal, since on any account, first-person authority will fail when we are self-deceiving. (I am assuming here that self-deceivers do not count as genuinely believing the propositions about which they are deceived. If that assumption is false – and, as we saw earlier, it may be – then the objection does not arise.)

The moral of these considerations is that the authority of avowals is defeasible and, more specifically, that the defeating conditions can be non-conscious. It follows that we cannot be sure that any given avowal we make
is authoritative. That is to say, we are not authoritative about the existence of the conditions under which we are authoritative about our mental states: we do not have second-order authority of that kind. But it remains true that we possess first-order authority: in the right circumstances – circumstances which only rarely fail to obtain – we can pronounce directly and authoritatively on our current conscious mental states, without risk of error through mistaking or misinterpreting the evidence.

Finally, I want to consider a challenge to the very existence of first-person authority. There is evidence that our verbal reports of our mental states and processes are sometimes confabulated. Psychologists have found that it is possible to influence a person's behaviour by means of suggestions and other stimuli which are not obviously relevant to it. When the subjects of these experiments are asked to explain their behaviour, their responses are surprising. They do not mention the stimuli or confess ignorance of their motives, but instead invent some plausible reason for their actions (Nisbett and Wilson 1977).⁸ What is happening in these cases, it seems, is that the subjects' actions are guided by non-conscious processes of which they are unaware, and their reports are attempts to interpret their own behaviour as if from a third-person perspective. So far, there is no conflict here with first-person authority; as I emphasized earlier, we are not authoritative about our non-conscious mental states. The problem is that in these cases the subjects do not realize that their reports are merely self-interpretations – they make them with complete sincerity, just as if they were reporting conscious mental states and processes. So, it seems, avowals can be erroneous. Indeed, for all we know, many of our avowals may be like this, and first-person authority an illusion.

I think that the threat here is specious. The main point to stress is that first-person authority in the strict sense holds only for one's current mental states, and the reports in question are of past ones – of the states and processes that led to a prior action. Now, it is true that the actions are not long past, and the subjects seem to indicate that they still possess the beliefs or desires they cite. But it is not clear that they are wrong about this. For in avowing a belief or desire, I suspect, they will silently endorse
⁸ Similar results occur with subjects who have undergone commissurotomy (surgical severing of the corpus callosum which connects the two halves of the brain). If a command such as 'Walk!' is presented to such a subject in the left half of their visual field, so that it is received only by their right hemisphere, then they will typically comply with it. Yet when asked to explain their action, they will not mention the command, since their left hemisphere, which controls speech, has no information about it. Instead, they will invent some plausible reason for the action, such as that they were going to get a drink (Gazzaniga 1983, 1994).
it, committing themselves to taking it as a premise and to regulating their future behaviour accordingly. So although they did not previously possess the mental states they mention and are wrong to cite them in explanation of their earlier actions, they do now possess them, at the supermental level, and their avowals are therefore accurate. These cases thus present no threat to the existence of first-person authority, at least as understood within the context of supermind theory. (What these cases do show is that we know less than we think about our mental processes. But this is, I think, just because we overestimate what we can know about them – perhaps because we tacitly subscribe to the unity of belief assumption. We are, I assume, usually able to give accurate reports of the reasons for those of our actions that are motivated at a conscious level. Our mistake is to think that we can give similarly accurate accounts of all our actions and to assume that any explanations that come to mind must be true.)

This is all I shall say here in defence of the proposed account. The general idea that avowals are performatives has been canvassed before – most recently by Jane Heal (Heal 1994, 2002). Heal arrives at the view by abductive means, arguing first for a broadly constitutive account of first-person authority, and then developing the idea through an extended comparison between avowals and promises. The picture that emerges is similar in outline to the one proposed here: in self-ascribing the belief that p we also judge that p is true, thereby introducing the ascribed belief. Heal does not flesh this account out in any detail, but it is interesting to note that she gestures at the need for a two-strand theory of belief – drawing a distinction between what she calls natural beliefs and personal beliefs, the former being non-conscious states over which we have no special authority, and the latter, states of a more reflective kind for which a performative account is appropriate. This suggests that Heal's view may harmonize well with the one developed here.⁹

4 Scientific psychology

So far, I have focused on applications within the scope of folk psychology: the phenomena considered have been ones characterized in
⁹ There is another anticipation of the present account in the 'constructivist' theory of self-knowledge proposed by Julia Tanney (Tanney 1996). Tanney suggests that psychological self-description is to some extent a creative enterprise: within certain limits, we are free to choose which self-interpretations to accept – our choices carrying a commitment to act in accordance with the interpretations chosen.
folk-psychological terms and recognized in everyday discourse about the mind. The ease with which supermind theory can explain these phenomena is further evidence that the theory is implicit in folk psychology. But if the theory is, in addition, true – as I have claimed it is – then it should also have applications outside the realm of folk psychology. The theory should illuminate, and be illuminated by, work in various areas of scientific psychology. Exploring these connections will be a separate task, but I shall briefly mention some links here and suggest how they might be developed.

4.1 Dual-process theories

The most obvious link is with 'dual-process' theories of reasoning developed by some psychologists working on reasoning and rationality. Experimental evidence shows that humans consistently make certain basic errors in reasoning tasks, particularly ones involving conditionals and probabilities. These errors seem to reflect certain innate cognitive biases, and subjects continue to make them, particularly when a fast response is required, even when they have been taught the principles governing correct responses and shown themselves able to apply them in other situations. Several writers have suggested that this indicates that humans possess two reasoning systems: a non-conscious system ('system 1'), which is fast, inflexible, and automatic, and which relies on heuristics rather than rules of inference, and a conscious system ('system 2'), which is slow, flexible, and controlled, and which is responsive to verbal instruction (Evans and Over 1996; Sloman 1996; Stanovich 1999). It has been suggested that individual differences in cognitive ability are largely due to system 2 processes, while system 1 processes display little individual variation (Stanovich 1999).

There is a clear correspondence here with the two-level theory set out in this book, with system 2 corresponding to the supermind and system 1 to the basic mind, and I believe that supermind theory can be fruitfully integrated with dual-process theories. In particular, the account of the supermind developed here may provide a framework for thinking about system 2 processes – explaining how they are constituted, how they influence action, and the nature of the states upon which they operate. Experimental work may in turn suggest ways in which the account can be revised and refined. The relation between the basic mind and system 1 is more complex, and at first sight there may seem to be an incompatibility here. The basic mind, as pictured here, is a collection of multi-track
behavioural dispositions (thickly carved functional states), whose attribution carries a presupposition of rationality. System 1, on the other hand, is conceived as a processing system, or suite of processing systems, whose operations sometimes violate rational norms. But the conflict here may be only superficial. We can think of the psychologist's account of system 1 as a description of the sub-personal processes underlying the basic mind. That is to say, we can regard both accounts as focused on the same level of cognition – the folk account serving as a competence theory, which provides an idealized description of the system's behaviour, and the psychologist's as a performance theory, which aims to identify the underlying mechanisms and their particular idiosyncrasies. (This is what we should expect: the basic mind is a folk construct, and the folk have neither great interest in non-conscious cognition nor access to the experimental data needed in order to theorize about it.)

4.2 Evolutionary and developmental psychology

Another broad area of application for supermind theory lies in thinking about the development of the human mind, both in the species and in the individual, and the extent to which our mental capacities are innate. There are obvious implications here for supermind theory and, in particular, for its picture of the conscious mind. If the conscious mind is a virtual structure, as I have suggested, formed by the exercise of various personal abilities, then in order to understand its development we shall need to think about how these abilities develop and how they become co-ordinated in the service of supermentality. The issues here are complex, but I shall offer a few suggestions.

First, I suspect that the elements of supermentality developed independently in the species and continue to develop independently in the individual, only later being co-opted to support a supermind. Consider some of the abilities required for simple language-based forms of acceptanceₚ and goal pursuit. In addition to natural language, these include: (1) private speech – the habit of speaking to oneself; (2) meta-representational abilities – the ability to think of sentences as representations of states of affairs, actual and non-actual; (3) personal inferential skills, including the ability to construct explicit arguments following learned inference rules; (4) meta-cognitive skills – skills in focusing attention, searching memory, keeping track of information; and (5) strategic abilities – the ability to adopt and execute policies of action. It is also likely that supermentality requires
theory of mind. Pursuing a premising policy involves having beliefs about propositions and the attitudes one has adopted towards them. And if theory of mind is required for such beliefs (as it surely is), then it will be required for supermentality. Now, each of these abilities has a more basic function than that of helping to support a supermind. Private speech can help to focus attention and aid memory; meta-representational abilities are required for reading, writing, and other sophisticated forms of language use; inferential skills are needed for rational debate with one's peers; meta-cognitive skills can improve performance on a wide range of tasks; strategic abilities are required for engagement in structured social activities where there is division of labour; and theory of mind is, of course, essential for efficient social interaction. These skills are ones that children acquire in the course of normal development, and there may be an innate component to them. This suggests, then, that the conscious mind is a kludge – a jury-rigged system assembled from pre-existing components designed for other purposes.

But how do we come to unite these disparate skills in the service of supermentality? Does each of us need to learn the trick of premising, or are we innately disposed to master it? I suspect the latter, thanks in part to what is known as the Baldwin effect. The idea is that once a useful skill has been discovered – say, the knack of making a certain tool – then there will be pressure for other members of the discoverer's community to acquire it too. Those who find it easy to acquire the skill will thus have a selective advantage over those who find it hard, and individuals who are innately predisposed to acquire it will gradually come to predominate in the community. Dennett suggests that this happened with inner verbalization, and I suspect that the same is true of the more complex activity of premising (Dennett 1991a, ch. 7). Normal children spontaneously engage in conscious verbalized reasoning, without instruction or encouragement, and if such reasoning involves premising, as I claim, then it is plausible to think that we are innately disposed to form and execute premising policies. Given the advantages of having a supermind, there should certainly have been the sort of selectional pressure needed for the Baldwin effect to get a grip here. Note that this is not to say that supermental abilities will develop in any environment; some quite specific environmental stimuli may be needed for their emergence – including exposure to and engagement in rational debate, in which rules of inference are followed and at least tacitly acknowledged. But I suspect that in any environment which facilitates acquisition of the component abilities required for premising, children
will spontaneously apply these abilities to construct a working supermind. Note, too, that this still leaves room for cultural and individual variation in supermental processes. Even if we are innately disposed to acquire and apply inference rules, which rules we acquire will be to some extent culturally determined. Cultural and linguistic factors may also influence the style of conscious reasoning – cultures with confrontational debating styles tending to promote more analytical styles of reasoning, and languages that highlight logical form facilitating deductive inference.

This view of the conscious mind may help to shed light on a particularly difficult problem in evolutionary psychology. Much recent work in this area has been inspired by the view that cognition is modular – that the mind consists of discrete modules, each specialized for dealing with a particular domain and with its own dedicated inputs, outputs, and processing mechanisms (see, for example, Barkow et al. 1992; Carruthers and Chamberlain 2000; Hirschfeld and Gelman 1994; Mithen 1996; Pinker 1997). Candidate modules include ones for social contracts, theory of mind, biological classification, and simple physics. This approach has proved fruitful, but leaves us with a problem when it comes to explaining the origins of domain-general thinking – thinking which involves uniting ideas from different domains. Clearly such thinking did evolve, since we are capable of it, but it is hard to see how a modular mind could support it – there would not even have been a common medium for the construction of cross-modular thoughts. Nor is it clear how a domain-general reasoning system could have evolved from a modular basis; surely any new cognitive demands could have been met more efficiently by tweaking old modules or adding new ones, rather than by developing a cumbersome general-purpose reasoning system? We might call this the 'hard problem' of evolutionary psychology.

Now, if we identify the domain-general system with the supermind, then this hard problem may become at least a little more tractable. For on this view, the development of domain-general reasoning would not have required substantial changes to the underlying cognitive architecture. As we have seen, a premising machine could have been created by co-opting pre-existing cognitive resources, with natural language serving as the system's representational medium. Even if a genetic disposition to premising emerged, the neural changes involved need not have been elaborate. Of course, the hard problem immediately resurfaces as the question of how a modular mind could support the various abilities required for premising and how it could co-ordinate them effectively; but refocusing the problem
in this way may itself constitute progress. And it is not too hard to see how some of the abilities involved could be supported by modular systems. Meta-representational abilities could be supported by theory of mind or the language system, and strategic skills by social intelligence. Some inferential abilities might also be supported by the language system – taking the form of skills in spotting sentences with certain syntactic features and producing appropriate formal transformations of them. At any rate, there is a potentially fruitful line of research here.¹⁰

4.3 Clinical psychology

Supermind theory may also shed light on aspects of abnormal development. What would happen if the supermind failed to develop normally? What predictions would the theory make, and are they borne out? Recall the remarks in chapter 5 about the functions of the supermind. There I suggested that supermental abilities confer benefits in the areas of behavioural control, cognitive control, and self-awareness. A person with impaired supermental abilities would have deficits in these areas. They would get stuck in behavioural dead-ends, unable to make up or change their minds; they would have little control over the content or style of their thinking; and they would have a poor understanding of their own minds and motivations.

It is very tempting to make a connection here with autism. Autism is a developmental disorder of the brain, characterized by three main areas of impairment: problems in social interaction, communication difficulties, and repetitive behaviour (American Psychiatric Association 1994). It is the last aspect I want to focus on. Autistic people typically exhibit stereotyped movements, insist on following routines, and have a limited range of interests. This aspect of the condition has been relatively neglected in studies. The leading current theory of autism is that sufferers are mindblind – that they have an impaired theory of mind system and consequently have great difficulty understanding other people's actions and predicting their behaviour. This theory is supported by some powerful experimental evidence and provides an attractive explanation of autistic people's difficulties in social interaction and communication (see, for example, Baron-Cohen 1995; Baron-Cohen et al. 2000; Leslie 1991). However, it does not provide
¹⁰ The idea that domain-general reasoning is conducted in natural language and involves the redeployment of existing modular resources is one that has been canvassed in some detail by Peter Carruthers (Carruthers 2002, forthcoming).
a direct explanation of their repetitive behaviour, which is often written off as a side-effect of the condition. Since autistic individuals cannot understand and predict other people's behaviour, it is suggested, they find social situations frightening and resort to repetitive behaviour as a coping mechanism (Baron-Cohen 1989; Carruthers 1996a; for criticism see Turner 1997). In the present context, however, another explanation suggests itself. Perhaps autistic people have difficulty maintaining an effective supermind. This would directly explain their repetitive behaviour. Lacking the power to engage in active premising, they would be at the mercy of their non-conscious minds, unable to override instinctive responses and take active control of their thinking. In contexts where there were clear cues for action this might not matter much. But when confronted with novel situations where creative thinking was required, such people might easily get locked into repetitive behavioural patterns. (There is indeed evidence that autistic people show less repetitive behaviour in situations which are structured for them than in ones where they have to decide for themselves: see Turner 1997.)

This hypothesis explains other aspects of autism, too. Autistic people tend to perform well on routine tasks of the sort which can be executed without conscious thought, but much less well on ones that require reflection (see Frith et al. 1994, and the papers in Russell 1997). They also seem to have very impoverished inner lives – as we would expect if they do not engage in supermental activities. When asked to describe their own mental processes they report little or no inner verbalization or unsymbolized thoughts – in strong contrast to normal individuals (Hurlburt et al. 1994; Frith and Happé 1999). Some advocates of the mindblindness theory explain this by claiming that autistic people are blind to their own minds as well as to those of others (Carruthers 1996a), but the present account suggests a more radical explanation – that they simply do not have conscious mental states, or only fragmentary ones.

This is of course only a suggestion – no more than a proposal for future work. Let me stress, too, that it is intended, not as an alternative to the mindblindness theory, but as a supplement to it. The mindblindness theory offers, I think, a very plausible explanation of the social and communicative difficulties from which autistic people suffer. But autism is a complex condition, and it is not implausible to suppose that it involves more than one underlying impairment. More intriguingly, it may be that impaired supermental abilities result from an underlying impairment in theory of
mind. As I mentioned earlier, in order to form premising policies one needs to think about the mental attitudes one has adopted to propositions, and it is very likely that theory of mind is required for this. If so, then a person with impaired theory of mind would have difficulty, not only in understanding the minds of others, but also in constructing a supermind for themselves.

Conclusion

The topics discussed in this chapter by no means exhaust the potential applications for supermind theory, either in philosophy of mind or in scientific psychology. I have, for example, said nothing about how emotion might fit into the picture. Do we have two levels of emotion, as well as of cognition? Are the cognitive elements in emotion located at the basic level or the supermental level, or both? How do our emotions affect our premising activities? Supermind theory may also have important implications for theories of mental content. And, of course, the applications already sketched require further elaboration and defence. All this is a matter for another time, however. My main aim here has simply been to set the core theory in a wider context, as a way of helping to evaluate it. I think that the ease with which it finds application in a variety of areas is itself very encouraging.
Conclusion

I have made some big claims in this book. I have presented a revisionary view of folk psychology and at the same time a revisionary picture of the architecture of the human mind. I think that the resulting picture is coherent, but am not sanguine enough to believe that I have got everything right (though in the present context I do of course acceptₚ that I have). I am, however, very confident of the fundamental claim that both folk psychology and scientific psychology require a two-strand theory of belief and reasoning, and if I have succeeded in making that claim look credible and given some useful pointers as to how the theory should be developed, then I shall be satisfied.

Finally, I hope that this work will tend to have an irenic effect – showing how some apparently conflicting views about the mind can be reconciled. It is natural to adopt a confrontational style in academic debate: inquiry is about truth, not compromise, and a forensic approach is often the best way to uncover it. It can mislead us, however. Some conflicts are specious, and sometimes we need to step back and take a wider view. I hope that this work has demonstrated the attractions of such an approach in thinking about the mind.
References

American Psychiatric Association. 1994. Diagnostic and Statistical Manual of Mental Disorders (4th edn). Washington, D.C.: American Psychiatric Association.
Anscombe, G. E. M. 1957. Intention. Oxford: Blackwell.
Armstrong, D. M. 1968. A Materialist Theory of the Mind. London: Routledge.
1984. Consciousness and causality. In D. M. Armstrong and N. Malcolm, Consciousness and Causality: A Debate on the Nature of Mind (pp. 103–91). Oxford: Blackwell.
Audi, R. 1973. The concept of wanting. Philosophical Studies, 24: 1–21.
1982a. Believing and affirming. Mind, 91: 115–20.
1982b. Self-deception, action, and will. Erkenntnis, 18: 133–58.
1985. Rationalization and rationality. Synthese, 65: 159–84.
1993. Mental causation: sustaining and dynamic. In J. Heil and A. Mele (eds.), Mental Causation (pp. 53–74). Oxford: Oxford University Press.
Bach, K. 1981. An analysis of self-deception. Philosophy and Phenomenological Research, 41: 351–70.
Bach, K., and Harnish, R. M. 1979. Linguistic Communication and Speech Acts. Cambridge, Mass.: MIT Press.
1992. How performatives really work: a reply to Searle. Linguistics and Philosophy, 15: 93–110.
Baier, A. 1979. Mind and change of mind. Midwest Studies in Philosophy, 4: 157–76.
Barkow, J. H., Cosmides, L., and Tooby, J. (eds.) 1992. The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.
Baron-Cohen, S. 1989. Do autistic children have obsessions and compulsions? British Journal of Clinical Psychology, 28: 193–200.
1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, Mass.: MIT Press.
Baron-Cohen, S., Tager-Flusberg, H., and Cohen, D. J. (eds.) 2000. Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience (2nd edn). Oxford: Oxford University Press.
Bechtel, W., and Abrahamsen, A. 1991. Connectionism and the Mind. Oxford: Blackwell.
Bennett, J. 1990. Why is belief involuntary? Analysis, 50: 88–107.
Bickerton, D. 1990. Language and Species. Chicago: University of Chicago Press.
1995. Language and Human Behaviour. London: UCL Press.
Blackburn, S. 1990. Filling in space. Analysis, 50: 62–5.
Block, N. 1995. On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18: 227–47.
Botterill, G. 1994. Beliefs, functionally discrete states, and connectionist networks: a comment on Ramsey, Stich, and Garon. British Journal for the Philosophy of Science, 45: 899–906.
Botterill, G., and Carruthers, P. 1999. The Philosophy of Psychology. Cambridge: Cambridge University Press.
Braithwaite, R. B. 1932–3. The nature of believing. Proceedings of the Aristotelian Society, 33: 129–46.
Bratman, M. E. 1987. Intention, Plans, and Practical Reason. Cambridge, Mass.: Harvard University Press.
1992. Practical reasoning and acceptance in a context. Mind, 101: 1–15.
Braun, D. 1995. Causally relevant properties. Philosophical Perspectives, 9: 447–75.
Burge, T. 1996. Our entitlement to self-knowledge. Proceedings of the Aristotelian Society, 96: 91–116.
Canfield, J. V., and Gustavson, D. F. 1962. Self-deception. Analysis, 23: 32–6.
Carruthers, P. 1986. Introducing Persons: Theories and Arguments in the Philosophy of Mind. Beckenham, Kent: Croom Helm.
1996a. Autism as mind-blindness: an elaboration and partial defence. In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind (pp. 257–73). Cambridge: Cambridge University Press.
1996b. Language, Thought, and Consciousness: An Essay in Philosophical Psychology. Cambridge: Cambridge University Press.
1996c. Simulation and self-knowledge. In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind (pp. 22–38). Cambridge: Cambridge University Press.
1998. Conscious thinking: language or elimination? Mind and Language, 13: 457–76.
2002. The cognitive functions of language. Behavioral and Brain Sciences, 25: 657–74.
forthcoming. Distinctively human thinking: modular precursors and components. In P. Carruthers, S. Laurence, and S. Stich (eds.), The Structure of the Innate Mind.
Carruthers, P., and Boucher, J. (eds.) 1998. Language and Thought: Interdisciplinary Themes. Cambridge: Cambridge University Press.
Carruthers, P., and Chamberlain, A. (eds.) 2000. Evolution and the Human Mind: Modularity, Language, and Meta-cognition. Cambridge: Cambridge University Press.
Cherniak, C. 1986. Minimal Rationality. Cambridge, Mass.: MIT Press.
Chisholm, R. M. 1957. Perceiving. Ithaca: Cornell University Press.
Chomsky, N. 1975. Reflections on Language. New York: Pantheon Books.
1988. Language and Problems of Knowledge: The Managua Lectures. Cambridge, Mass.: MIT Press.
Christensen, S. M., and Turner, D. R. (eds.) 1993. Folk Psychology and the Philosophy of Mind. Hillsdale, N.J.: Erlbaum.
Churchland, P. M. 1979. Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.
1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78: 67–90.
Churchman, C. W. 1956. Science and decision making. Philosophy of Science, 23: 248–9.
Clark, A. 1989. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, Mass.: MIT Press.
1990a. Belief, opinion and consciousness. Philosophical Psychology, 3: 139–54.
1990b. Connectionist minds. Proceedings of the Aristotelian Society, 90: 83–102.
1991. Radical ascent. Proceedings of the Aristotelian Society, Suppl. 65: 211–77.
1993a. Associative Engines: Connectionism, Concepts and Representational Change. Cambridge, Mass.: MIT Press.
1993b. The varieties of eliminativism: sentential, intentional and catastrophic. Mind and Language, 8: 223–33.
1998. Magic words: how language augments human computation. In P. Carruthers and J. Boucher (eds.), Language and Thought: Interdisciplinary Themes (pp. 162–83). Cambridge: Cambridge University Press.
Clarke, D. S. 1994. Does acceptance entail belief? American Philosophical Quarterly, 31: 145–55.
2000. The possibility of acceptance without belief. In P. Engel (ed.), Believing and Accepting (pp. 31–53). Dordrecht: Kluwer.
Cohen, L. J. 1989. Belief and acceptance. Mind, 98: 367–89.
1992. An Essay on Belief and Acceptance. Oxford: Oxford University Press.
2000. Why acceptance that P does not entail belief that P. In P. Engel (ed.), Believing and Accepting (pp. 55–63). Dordrecht: Kluwer.
Copeland, J. 1993. Artificial Intelligence. Oxford: Blackwell.
Cosmides, L., and Tooby, J. 1996. Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58: 1–73.
Crimmins, M. 1992. Tacitness and virtual beliefs. Mind and Language, 7: 241–63.
Davidson, D. 1963. Actions, reasons and causes. Journal of Philosophy, 60: 685–700. Reprinted in Davidson 1980.
1969. How is weakness of the will possible? In J. Feinberg (ed.), Moral Concepts (pp. 93–113). Oxford: Oxford University Press. Reprinted in Davidson 1980.
1970. Mental events. In L. Foster and J. W. Swanson (eds.), Experience and Theory (pp. 79–101). Amherst: The University of Massachusetts Press. Reprinted in Davidson 1980.
1975. Thought and talk. In S. Guttenplan (ed.), Mind and Language (pp. 7–23). Oxford: Oxford University Press. Reprinted in Davidson 1984.
1980. Essays on Actions and Events. Oxford: Oxford University Press.
1982. Paradoxes of irrationality. In R. Wollheim and J. Hopkins (eds.), Philosophical Essays on Freud (pp. 289–305). Cambridge: Cambridge University Press.
1984. Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
1985. Deception and division. In E. LePore and B. P. McLaughlin (eds.), Actions and Events: Perspectives on the Philosophy of Donald Davidson (pp. 138–48). Oxford: Blackwell.
Davies, M. 1991. Concepts, connectionism, and the language of thought. In W. Ramsey, S. Stich, and D. Rumelhart (eds.), Philosophy and Connectionist Theory (pp. 229–57). Hillsdale, N.J.: Erlbaum.
1992. Aunty’s own argument for the language of thought. In J. Ezquerro and J. Larrazabel (eds.), Cognition, Semantics and Philosophy (pp. 235–71). Dordrecht: Kluwer.
1998. Language, thought and the language of thought (Aunty’s own argument revisited). In P. Carruthers and J. Boucher (eds.), Language and Thought: Interdisciplinary Themes (pp. 226–47). Cambridge: Cambridge University Press.
de Sousa, R. B. 1971. How to give a piece of your mind: or, the logic of belief and assent. Review of Metaphysics, 25: 52–79.
Dennett, D. C. 1969. Content and Consciousness. London: Routledge and Kegan Paul.
1975. Why the law of effect will not go away. Journal of the Theory of Social Behaviour, 5: 169–87. Reprinted in Dennett 1978a.
1978a. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, Mass.: MIT Press.
1978b. Toward a cognitive theory of consciousness. In C. W. Savage (ed.), Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in Philosophy of Science, 9 (pp. 201–28). Minneapolis: University of Minnesota Press. Reprinted in Dennett 1978a.
1981a. Three kinds of intentional psychology. In R. Healey (ed.), Reduction, Time and Reality (pp. 37–61). Cambridge: Cambridge University Press. Reprinted in Dennett 1987.
1981b. True believers: the intentional strategy and why it works. In A. F. Heath (ed.), Scientific Explanation (pp. 53–75). Oxford: Oxford University Press. Reprinted in Dennett 1987.
1982. Making sense of ourselves. In J. L. Biro and R. W. Shahan (eds.), Mind, Brain and Function (pp. 63–81). Brighton: Harvester. Reprinted in Dennett 1987.
1987. The Intentional Stance. Cambridge, Mass.: MIT Press.
1991a. Consciousness Explained. Boston: Little Brown and Co.
1991b. Mother Nature versus the walking encyclopedia: a Western drama. In W. Ramsey, S. Stich, and D. Rumelhart (eds.), Philosophy and Connectionist Theory (pp. 21–30). Hillsdale, N.J.: Erlbaum.
1991c. Real patterns. Journal of Philosophy, 88: 27–51. Reprinted in Dennett 1998.
1991d. Two contrasts: folk craft versus folk science and belief versus opinion. In J. Greenwood (ed.), The Future of Folk Psychology: Intentionality and Cognitive Science (pp. 135–48). Cambridge: Cambridge University Press. Reprinted in Dennett 1998.
1993. Back from the drawing board. In B. Dahlbom (ed.), Dennett and his Critics (pp. 203–35). Oxford: Blackwell.
1994. Self-portrait. In S. Guttenplan (ed.), Companion to the Philosophy of Mind (pp. 236–44). Oxford: Blackwell. Reprinted in Dennett 1998.
1998. Brainchildren. Harmondsworth: Penguin.
Descartes, R. 1984. The Philosophical Writings of Descartes. Cambridge: Cambridge University Press.
Donnellan, K. S. 1966. Reference and definite descriptions. Philosophical Review, 75: 281–304.
Dretske, F. 1989. Reasons and causes. Philosophical Perspectives, 3: 1–15.
Eells, E. 1982. Rational Decision and Causality. Cambridge: Cambridge University Press.
Engel, P. 1998. Belief, holding true, and accepting. Philosophical Explorations, 1: 140–51.
1999. Dispositional belief, assent, and acceptance. Dialectica, 53: 211–26.
(ed.) 2000a. Believing and Accepting. Dordrecht: Kluwer.
2000b. Introduction: the varieties of belief and acceptance. In P. Engel (ed.), Believing and Accepting (pp. 1–30). Dordrecht: Kluwer.
Evans, G. 1981. Semantic theory and tacit knowledge. In S. Holtzman and C. Leich (eds.), Wittgenstein: To Follow a Rule (pp. 118–37). London: Routledge and Kegan Paul.
1982. The Varieties of Reference. Oxford: Oxford University Press.
Evans, J. S. B. T., and Over, D. E. 1996. Rationality and Reasoning. Hove: Psychology Press.
Fishburn, P. C. 1981. Subjective expected utility: a review of normative theories. Theory and Decision, 13: 129–99.
Fodor, J. A. 1975. The Language of Thought. New York: Crowell.
1983. The Modularity of Mind. Cambridge, Mass.: MIT Press.
1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, Mass.: MIT Press.
Foley, R. 1979. Justified inconsistent beliefs. American Philosophical Quarterly, 16: 247–57.
1992. The epistemology of belief and the epistemology of degrees of belief. American Philosophical Quarterly, 29: 111–21.
1993. Working Without a Net. Oxford: Oxford University Press.
Frankfurt, H. 1971. Freedom of the will and the concept of a person. Journal of Philosophy, 68: 5–20.
1987. Identification and wholeheartedness. In F. Schoeman (ed.), Responsibility, Character, and the Emotions (pp. 27–45). Cambridge: Cambridge University Press.
Frankish, K. 1998a. A matter of opinion. Philosophical Psychology, 11: 423–42.
1998b. Natural language and virtual belief. In P. Carruthers and J. Boucher (eds.), Language and Thought: Interdisciplinary Themes (pp. 248–69). Cambridge: Cambridge University Press.
2000. Evolving the linguistic mind. Paper presented at the 3rd International Conference on the Evolution of Language, Paris, France.
in preparation. Deciding to believe again. Unpublished manuscript, The Open University.
Fricker, E. 1998. Self-knowledge: special access versus artefact of grammar – a dichotomy rejected. In C. Wright, B. C. Smith, and C. Macdonald (eds.), Knowing our Own Minds (pp. 155–206). Oxford: Oxford University Press.
Frith, U., and Happé, F. 1999. Theory of mind and self-consciousness: what is it like to be autistic? Mind and Language, 14: 1–22.
Frith, U., Happé, F., and Siddons, F. 1994. Autism and theory of mind in everyday life. Social Development, 3: 108–24.
Gazzaniga, M. 1983. Right hemisphere language: a twenty year perspective. American Psychologist, 38: 525–37.
1994. Consciousness and the cerebral hemispheres. In M. Gazzaniga (ed.), The Cognitive Neurosciences (pp. 1391–1400). Cambridge, Mass.: MIT Press.
Gigerenzer, G. 1991. How to make cognitive illusions disappear: beyond ‘heuristics and biases’. European Review of Social Psychology, 2: 83–115.
Gigerenzer, G., Todd, P. M., and The ABC Research Group. 1999. Simple Heuristics that Make us Smart. Oxford: Oxford University Press.
Goldman, A. I. 1970. A Theory of Human Action. Englewood Cliffs, N.J.: Prentice-Hall.
1986. Epistemology and Cognition. Cambridge, Mass.: Harvard University Press.
1989. Interpretation psychologized. Mind and Language, 4: 161–85.
1992. In defense of the simulation theory. Mind and Language, 7: 104–19.
1993. The psychology of folk psychology. Behavioral and Brain Sciences, 16: 15–28.
Gordon, R. M. 1986. Folk psychology as simulation. Mind and Language, 1: 158–71.
1995. Simulation without introspection or inference from me to you. In M. Davies and T. Stone (eds.), Mental Simulation (pp. 53–67). Oxford: Blackwell.
Harman, G. 1973. Thought. Princeton, N.J.: Princeton University Press.
1986. Change in View: Principles of Reasoning. Cambridge, Mass.: MIT Press.
Harris, P. L. 1989. Children and Emotion: The Development of Psychological Understanding. Oxford: Blackwell.
1992. From simulation to folk psychology: the case for development. Mind and Language, 7: 120–44.
Harsanyi, J. C. 1985. Acceptance of empirical statements: a Bayesian theory without cognitive utilities. Theory and Decision, 18: 1–30.
Hawthorne, J., and Bovens, L. 1999. The preface, the lottery, and the logic of belief. Mind, 108: 241–64.
Heal, J. 1974. Explicit performative utterances and statements. Philosophical Quarterly, 24: 106–21.
1986. Replication and functionalism. In J. Butterfield (ed.), Language, Mind and Logic (pp. 135–50). Cambridge: Cambridge University Press.
1994. Moore’s paradox: a Wittgensteinian approach. Mind, 103: 5–24.
1995. How to think about thinking. In M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications (pp. 33–52). Oxford: Blackwell.
2002. On first-person authority. Proceedings of the Aristotelian Society, 102: 1–19.
Heil, J. 1991. Being indiscrete. In J. Greenwood (ed.), The Future of Folk Psychology: Intentionality and Cognitive Science (pp. 120–34). Cambridge: Cambridge University Press.
Hempel, C. G. 1965. Aspects of Scientific Explanation, and Other Essays in the Philosophy of Science. New York: Free Press.
Hirschfeld, L., and Gelman, S. (eds.) 1994. Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press.
Holton, R. 1999. Intention and weakness of will. Journal of Philosophy, 96: 241–62.
Horgan, T., and Graham, G. 1990. In defense of southern fundamentalism. Philosophical Studies, 62: 107–34. Reprinted in Christensen and Turner 1993.
Horgan, T., and Tienson, J. 1995. Connectionism and the commitments of folk psychology. Philosophical Perspectives, 9: 127–52.
Horst, S. 1995. Eliminativism and the ambiguity of ‘belief’. Synthese, 104: 123–45.
Hume, D. 1739/1888. A Treatise of Human Nature. Oxford: Oxford University Press.
Hurlburt, R. 1990. Sampling Normal and Schizophrenic Inner Experience. New York: Plenum Press.
1993. Sampling Inner Experience in Disturbed Affect. New York: Plenum Press.
Hurlburt, R., Happé, F., and Frith, U. 1994. Sampling the form of inner experience in three adults with Asperger syndrome. Psychological Medicine, 24: 385–95.
Jackson, F., and Pettit, P. 1996. Causation in the philosophy of mind. In A. Clark and P. J. R. Millican (eds.), Connectionism, Concepts and Folk Psychology (pp. 75–99). Oxford: Oxford University Press.
Jeffrey, R. C. 1970. Dracula meets Wolfman: acceptance versus partial belief. In M. Swain (ed.), Induction, Acceptance, and Rational Belief (pp. 157–85). Dordrecht: Reidel.
1983. The Logic of Decision (2nd edn). Chicago: University of Chicago Press.
1985. Animal interpretation. In E. LePore and B. P. McLaughlin (eds.), Actions and Events: Perspectives on the Philosophy of Donald Davidson (pp. 481–7). Oxford: Blackwell.
Johnston, M. 1995. Self-deception and the nature of mind. In C. Macdonald and G. Macdonald (eds.), Philosophy of Psychology: Debates on Psychological Explanation, Volume One (pp. 433–60). Oxford: Blackwell.
Kahneman, D., Slovic, P., and Tversky, A. (eds.) 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kaplan, M. 1981a. A Bayesian theory of rational acceptance. Journal of Philosophy, 78: 305–30.
1981b. Rational acceptance. Philosophical Studies, 40: 129–45.
1995. Believing the improbable. Philosophical Studies, 77: 117–45.
1996. Decision Theory as Philosophy. Cambridge: Cambridge University Press.
Kripke, S. 1980. Naming and Necessity. Oxford: Blackwell.
Kyburg, H. E. 1961. Probability and the Logic of Rational Belief. Middletown, Conn.: Wesleyan University Press.
1970. Conjunctivitis. In M. Swain (ed.), Induction, Acceptance and Rational Belief (pp. 52–82). Dordrecht: Reidel.
Lance, M. N. 1995. Subjective probability and acceptance. Philosophical Studies, 77: 147–79.
Lehrer, K. 1975. Reason and consistency. In K. Lehrer (ed.), Analysis and Metaphysics: Essays in Honor of R. M. Chisholm (pp. 57–74). Dordrecht: Reidel. Reprinted in Lehrer 1990a.
1980. Preferences, conditionals and freedom. In P. van Inwagen (ed.), Time and Cause (pp. 187–201). Dordrecht: Reidel. Reprinted in Lehrer 1990a.
1986. The coherence theory of knowledge. Philosophical Topics, 14: 5–25. Reprinted in Lehrer 1990a.
1990a. Metamind. Oxford: Oxford University Press.
1990b. Theory of Knowledge. Boulder: Westview Press.
1991. Reply to Christian Piller. Grazer Philosophische Studien, 40: 62–9.
LePore, E., and Loewer, B. 1987. Mind matters. Journal of Philosophy, 84: 630–42.
1989. More on making mind matter. Philosophical Topics, 17: 175–91.
Leslie, A. 1991. The theory of mind impairment in autism: evidence for a modular mechanism of development? In A. Whiten (ed.), Natural Theories of Mind (pp. 63–78). Oxford: Blackwell.
Levi, I. 1980. The Enterprise of Knowledge. Cambridge, Mass.: MIT Press.
Lewis, D. 1978. Truth in fiction. American Philosophical Quarterly, 15: 37–46.
Lycan, W. G. 1988. Judgement and Justification. Cambridge: Cambridge University Press.
1990. The continuity of levels of nature. In W. G. Lycan (ed.), Mind and Cognition: A Reader (pp. 77–96). Oxford: Blackwell.
Macdonald, C., and Macdonald, G. (eds.) 1995. Connectionism: Debates on Psychological Explanation. Oxford: Blackwell.
McDowell, J. 1998. The Woodbridge lectures 1997: Having the world in view: Sellars, Kant, and intentionality. Journal of Philosophy, 95: 431–92.
McLaughlin, B. P., and Rorty, A. O. (eds.) 1988. Perspectives on Self-Deception. Berkeley: University of California Press.
Maher, P. 1986. The irrelevance of belief to rational action. Erkenntnis, 24: 363–84.
1993. Betting on Theories. Cambridge: Cambridge University Press.
Makinson, D. C. 1965. The paradox of the preface. Analysis, 25: 205–7.
Malcolm, N. 1973. ‘Thoughtless brutes’. Proceedings and Addresses of the American Philosophical Association, 46: 5–20.
Maloney, J. C. 1990. It’s hard to believe. Mind and Language, 5: 122–48.
Martin, M. W. (ed.) 1985. Self-Deception and Self-Understanding. Lawrence: University of Kansas Press.
Mele, A. 1983. Self-deception. Philosophical Quarterly, 33: 365–77.
1987a. Irrationality: An Essay on Akrasia, Self-Deception, and Self-Control. Oxford: Oxford University Press.
1987b. Recent work on self-deception. American Philosophical Quarterly, 24: 1–17.
1992. Springs of Action. Oxford: Oxford University Press.
Mellor, D. H. 1974. In defense of dispositions. Philosophical Review, 83: 157–81.
Mithen, S. J. 1996. The Prehistory of the Mind: A Search for the Origins of Art, Religion, and Science. London: Thames and Hudson.
Montmarquet, J. 1986. The voluntariness of belief. Analysis, 46: 49–53.
Mumford, S. 1998. Dispositions. Oxford: Oxford University Press.
Nisbett, R. E., and Ross, L. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, N.J.: Prentice-Hall.
Nisbett, R. E., and Wilson, T. D. 1977. Telling more than we can know: verbal reports on mental processes. Psychological Review, 84: 231–59.
O’Brien, G. J. 1991. Is connectionism commonsense? Philosophical Psychology, 4: 165–78.
O’Shaughnessy, B. 1980. The Will: A Dual Aspect Theory. Cambridge: Cambridge University Press.
Peacocke, C. 1992. A Study of Concepts. Cambridge, Mass.: MIT Press.
Pears, D. 1982. Motivated irrationality. Proceedings of the Aristotelian Society, Suppl. 56: 157–78.
1984. Motivated Irrationality. Oxford: Oxford University Press.
Penelhum, T. 1964. Pleasure and falsity. American Philosophical Quarterly, 1: 81–91.
Perry, J. 1993. The Problem of the Essential Indexical and Other Essays. Oxford: Oxford University Press.
Pettit, P. 1993. The Common Mind: An Essay on Psychology, Society, and Politics. Oxford: Oxford University Press.
Piattelli-Palmarini, M. 1994. Inevitable Illusions: How Mistakes of Reason Rule our Minds. New York: John Wiley and Sons.
Pink, T. 1996. The Psychology of Freedom. Cambridge: Cambridge University Press.
Pinker, S. 1994. The Language Instinct: The New Science of Language and Mind. Harmondsworth: Penguin.
1997. How the Mind Works. London: Allen Lane.
Pojman, L. P. 1985. Believing and willing. Canadian Journal of Philosophy, 15: 37–55.
1986. Religious Belief and the Will. London: Routledge and Kegan Paul.
Popper, K. R. 1957. The propensity interpretation of the calculus of probability, and the quantum theory. In S. Körner (ed.), Observation and Interpretation (pp. 65–70). London: Butterworth.
Price, H. H. 1969. Belief. London: Allen and Unwin.
Prior, E. 1985. Dispositions. Aberdeen: Aberdeen University Press.
Prior, E., Pargetter, R., and Jackson, F. 1982. Three theses about dispositions. American Philosophical Quarterly, 19: 251–7.
Ramsey, F. P. 1926. Truth and probability. In F. P. Ramsey, Foundations (pp. 58–100). Atlantic Highlands, N.J.: Humanities Press.
Ramsey, W., Stich, S. P., and Garon, J. 1990. Connectionism, eliminativism and the future of folk psychology. Philosophical Perspectives, 4: 499–533.
Ramsey, W., Stich, S. P., and Rumelhart, D. E. (eds.) 1991. Philosophy and Connectionist Theory. Hillsdale, N.J.: Erlbaum.
Recanati, F. 2000. The simulation of belief. In P. Engel (ed.), Believing and Accepting (pp. 267–98). Dordrecht: Kluwer.
Rey, G. 1995. A not ‘merely empirical’ argument for a language of thought. Philosophical Perspectives, 9: 201–22.
Rorty, A. O. 1980. Where does the akratic break take place? Australasian Journal of Philosophy, 58: 333–46.
Rosenthal, D. 1986. Two concepts of consciousness. Philosophical Studies, 49: 329–59.
1993. Thinking that one thinks. In M. Davies and G. W. Humphreys (eds.), Consciousness: Psychological and Philosophical Essays (pp. 197–223). Oxford: Blackwell.
Rudner, R. 1953. The scientist qua scientist makes value judgments. Philosophy of Science, 20: 1–6.
Russell, J. (ed.) 1997. Autism as an Executive Disorder. Oxford: Oxford University Press.
Savage, L. J. 1972. The Foundations of Statistics (2nd edn). New York: Dover.
Scott-Kakures, D. 1993. On belief and the captivity of the will. Philosophy and Phenomenological Research, 54: 77–103.
Searle, J. R. 1989. How performatives work. Linguistics and Philosophy, 12: 535–58.
1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.
Sellars, W. 1964. Induction as vindication. Philosophy of Science, 31: 197–231.
Skinner, B. F. 1957. Verbal Behaviour. New York: Appleton-Century-Crofts.
Sloman, S. A. 1996. The empirical case for two systems of reasoning. Psychological Bulletin, 119: 3–22.
Smolensky, P. 1988. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11: 1–23.
Sperber, D. 1996. Explaining Culture: A Naturalistic Approach. Oxford: Blackwell.
1997. Intuitive and reflective beliefs. Mind and Language, 12: 67–83.
Stalnaker, R. C. 1984. Inquiry. Cambridge, Mass.: MIT Press.
Stanovich, K. E. 1999. Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, N.J.: Lawrence Erlbaum Associates.
Stein, E. 1996. Without Good Reason. Oxford: Oxford University Press.
Stich, S. P. 1982. Dennett on intentional systems. In J. L. Biro and R. W. Shahan (eds.), Mind, Brain and Function (pp. 39–62). Brighton: Harvester.
1983. From Folk Psychology to Cognitive Science. Cambridge, Mass.: MIT Press.
1990. The Fragmentation of Reason. Cambridge, Mass.: MIT Press.
1991a. Causal holism and commonsense psychology: a reply to O’Brien. Philosophical Psychology, 4: 179–81.
1991b. Do true believers exist? Proceedings of the Aristotelian Society, Suppl. 65: 229–44.
1996. Deconstructing the mind. In S. P. Stich, Deconstructing the Mind (pp. 3–90). New York: Oxford University Press.
Swinburne, R. 1985. Thought. Philosophical Studies, 48: 153–71.
Tanney, J. 1996. A constructivist picture of self-knowledge. Philosophy, 71: 405–22.
Teller, P. 1980. Zealous acceptance. In L. J. Cohen and M. Hesse (eds.), Applications of Inductive Logic (pp. 28–53). Oxford: Oxford University Press.
Turner, M. 1997. Towards an executive dysfunction account of repetitive behaviour in autism. In J. Russell (ed.), Autism as an Executive Disorder (pp. 57–94). Oxford: Oxford University Press.
Ullmann-Margalit, E., and Margalit, A. 1992. Holding true and holding as true. Synthese, 92: 167–87.
van Fraassen, B. C. 1980. The Scientific Image. New York: Oxford University Press.
Varley, R. 1998. Aphasic language, aphasic thought: an investigation of propositional thinking in an a-propositional aphasic. In P. Carruthers and J. Boucher (eds.), Language and Thought: Interdisciplinary Themes (pp. 128–45). Cambridge: Cambridge University Press.
Velleman, J. D. 1992. The guise of the good. Noûs, 26: 3–26. Reprinted in Velleman 2000.
2000. The Possibility of Practical Reason. Oxford: Oxford University Press.
Vygotsky, L. S. 1934/1986. Thought and Language, trans. A. Kozulin (revised edn). Cambridge, Mass.: MIT Press.
Walker, A. F. 1985. An occurrent theory of practical and theoretical reasoning. Philosophical Studies, 48: 199–210.
1989. The problem of weakness of will. Noûs, 23: 653–76.
Walker, M. T. 1996. The voluntariness of judgment. Inquiry, 39: 97–119.
Wason, P. C., and Evans, J. S. B. T. 1975. Dual processes in reasoning? Cognition, 3: 141–54.
Wiggins, D. 1980. Weakness of will, commensurability, and the objects of deliberation and desire. In A. O. Rorty (ed.), Essays on Aristotle’s Ethics (pp. 241–66). Berkeley: University of California Press.
Williams, B. 1970. Deciding to believe. In H. Kiefer and M. Munitz (eds.), Language, Belief and Metaphysics (pp. 95–111). Albany: State University of New York Press. Reprinted in Williams 1973.
1973. Problems of the Self: Philosophical Papers 1956–1972. Cambridge: Cambridge University Press.
Winters, B. 1979. Believing at will. Journal of Philosophy, 76: 243–56.
Wittgenstein, L. 1953. Philosophical Investigations. Oxford: Blackwell.
1980. Remarks on the Philosophy of Psychology, vols. I–II. Oxford: Blackwell.
Wright, C. 1989. Wittgenstein’s later philosophy of mind: sensations, privacy, and intention. Journal of Philosophy, 86: 622–34.
1998. Self-knowledge: the Wittgensteinian legacy. In C. Wright, B. C. Smith, and C. Macdonald (eds.), Knowing our Own Minds (pp. 13–45). Oxford: Oxford University Press.
Zynda, L. 2000. Representational theories and realism about degrees of belief. Philosophy of Science, 67: 45–69.
Author index

ABC Research Group, see Gigerenzer et al.
Abrahamsen, A., 162
American Psychiatric Association, 231
Anscombe, G. E. M., 42
Armstrong, D. M., 14, 34, 219
Audi, R., 39, 95, 213
Bach, K., 213, 214, 220
Baier, A., 21, 72
Barkow et al., 157, 230
Baron-Cohen, S., 231, 232, see also Baron-Cohen et al.
Baron-Cohen et al., 231
Bechtel, W., 162
Bennett, J., 56
Bickerton, D., 22
Blackburn, S., 35
Block, N., 4
Botterill, G., 2, 36, 167, 181
Boucher, J., 57
Bovens, L., 63
Braithwaite, R. B., 15, 66
Bratman, M. E., 28, 81, 82, 83, 85, 86, 87, 125, 126, 130, 133, 135, 209–10, 211
Braun, D., 39
Burge, T., 159
Canfield, J. V., 213
Carruthers, P., 2, 14, 22, 35, 36, 56, 57, 58, 100, 221, 230, 231, 232
Chamberlain, A., 230
Cherniak, C., 54, 62, 169, 172
Chisholm, R. M., 60
Chomsky, N., 58
Christensen, S. M., 162
Churchland, P. M., 2, 200
Churchman, C. W., 66
Clark, A., 2, 27, 57, 76, 161, 162, 166, 169–71, 172, 181, 189–90
Clarke, D. S., 83, 134
Cohen, D. J., see Baron-Cohen et al.
Cohen, L. J., 20, 81–4, 86, 90–2, 93, 95, 96, 99, 100, 102, 104, 107, 125, 126, 204, 209, 215
Copeland, J., 190
Cosmides, L., see also Barkow et al.
Crimmins, M., 95
Davidson, D., 7, 34, 37, 39, 40, 46, 96, 148, 166, 167, 203, 204–5, 207–8, 213, 217
Davies, M., 7, 162, 165, 184–9, 190, 196, 197
de Sousa, R., 31, 67, 72, 158
Dennett, D. C., 2, 3, 4, 22, 34, 35, 36, 37, 38, 40, 43, 44, 46, 55, 71–80, 103, 105, 118–20, 147, 156, 158, 169, 173, 199, 204, 229
Descartes, R., 20
Donnellan, K. S., 163
Dretske, F., 39
Eells, E., 30
Engel, P., 80, 81, 82, 86, 125, 126
Evans, G., 185, 186, 189, 190, 221
Evans, J. S. B. T., 10, 24, 227
Fishburn, P. C., 30
Fodor, J. A., 1, 16, 34, 38, 58, 161, 184
Foley, R., 60, 63
Frankfurt, H., 159
Frankish, K., 76, 97, 103, 122, 129, 153
Fricker, E., 219
Frith, U., 232, see also Frith et al., Hurlburt et al.
Frith et al., 232
Garon, J., see Ramsey et al.
Gazzaniga, M., 225
Gelman, S., 157, 230
Gigerenzer, G., 47
Gigerenzer et al., 30
Goldman, A. I., 16, 30, 39, 96, 103, 219
Gordon, R. M., 103, 221
Graham, G., 2, 36, 163
Gustavson, D. F., 213
Happé, F., 232, see also Frith et al., Hurlburt et al.
Harman, G., 18, 22, 209
Harnish, R. M., 220
Harris, P. L., 103
Harsanyi, J. C., 61
Hawthorne, J., 63
Heal, J., 8, 103, 220, 226
Heil, J., 167, 169
Hempel, C. G., 31
Hirschfeld, L., 157, 230
Holton, R., 213
Horgan, T., 2, 36, 163, 165, 181
Horst, S., 13
Hume, D., 20, 60
Hurlburt, R., 100, see also Hurlburt et al.
Hurlburt et al., 232
Jackson, F., 35, 169
Jeffrey, R. C., 30, 59
Johnston, M., 214
Joyce, J., 77
Kahneman et al., 46
Kaplan, M., 30, 53, 61, 62, 63, 66, 67–8, 69–70
Kripke, S., 163
Kyburg, H. E., 62, 63
Lance, M. N., 29
Lehrer, K., 120, 121, 159
LePore, E., 39
Leslie, A., 231
Levi, I., 61
Lewis, D., 133
Locke, J., 60
Loewer, B., 39
Lycan, W. G., 35, 162
Macdonald, C., 162
Macdonald, G., 162
McDowell, J., 159
McLaughlin, B. P., 213
Maher, P., 53, 60, 61, 62, 67, 70
Makinson, D. C., 62
Malcolm, N., 16
Maloney, J. C., 181
Margalit, A., 81
Martin, M. W., 213
Mele, A., 209, 213
Mellor, D. H., 34, 35
Mithen, S. J., 157, 230
Montmarquet, J., 149
Mumford, S., 34, 35
Nisbett, R. E., 46, 225
O’Brien, G. J., 181
O’Shaughnessy, B., 56
Over, D. E., 10, 24, 227
Pargetter, R., 35
Peacocke, C., 187
Pears, D., 7, 203, 205, 213
Penelhum, T., 213
Perry, J., 81
Pettit, P., 55, 169
Piattelli-Palmarini, M., 46
Pink, T., 95, 159
Pinker, S., 57, 58, 230
Pojman, L. P., 56
Popper, K. R., 35
Price, H. H., 15, 22, 23, 159
Prior, E., 35
Ramsey, F. P., 30
Ramsey, W., 162, see also Ramsey et al.
Ramsey et al., 7, 162, 165–7, 169
Recanati, F., 136
Rey, G., 162, 185, 188
Rorty, A. O., 209, 213
Rosenthal, D., 14
Ross, L., 46
Rudner, R., 66
Rumelhart, D. E., 162
Russell, J., 232
Ryle, G., 34
Savage, L. J., 30
Scott-Kakures, D., 56
Searle, J. R., 14, 220
Sellars, W., 60
Siddons, F., see Frith et al.
Skinner, B. F., 77
Sloman, S. A., 25, 227
Slovic, P., see Kahneman et al.
Smart, J. J. C., 34
Smolensky, P., 25
Sperber, D., 120, 121
Stalnaker, R. C., 64, 81, 82
Stanovich, K. E., 25, 227
Stein, E., 54, 62
Stich, S. P., 2, 47, 54, 161, 162, 166, 167, 170, see also Ramsey et al.
Swinburne, R., 15, 57
Tager-Flusberg, H., see Baron-Cohen et al.
Tanney, J., 226
Teller, P., 66
Tienson, J., 165, 181
Todd, P. M., see Gigerenzer et al.
Tooby, J., 47, see also Barkow et al.
Turner, D. R., 162
Turner, M., 232
Tversky, A., see Kahneman et al.
Ullmann-Margalit, E., 81
van Fraassen, B. C., 67
Varley, R., 33
Velleman, J. D., 126, 127, 142
Vygotsky, L. S., 57
Walker, A. F., 28, 205
Walker, M. T., 149
Wason, P. C., 24
Wiggins, D., 205
Williams, B., 56, 126
Wilson, T. D., 225
Winters, B., 56
Wittgenstein, L., 219
Wright, C., 219
Zynda, L., 46
Subject index

abductive inference, 104–5
absent-mindedness, 16, 169, 172–3, 178–81, 182
acceptance
  in Lehrer’s sense, 120, 121
  in Perry’s sense, 81
  premising conception of, 80–1, 88
  as term for flat-out belief, 59
  see also acceptance-as-a-premise
acceptance-as-a-premise (acceptancep), 80–7, 88, 90–5, 124–40, 141
  active formation of, 82, 92–3, 113–16, 128–9
  behavioural view of, 85, 108–13
  Cohen on, 81–4, 90–2
  context-relativity of, 82–3, 129–30
  deductive closure of, 91–2
  explicit, 94–5
  as flat-out, 83
  general, 135–6, 141, 214
  implicit, 94–5, 112–13
  implicit (Cohen’s sense), 91
  influence on action, 116–18
  as language-involving, 83, 97–107
  and meta-representational belief, 120–1
  non-doxastic, 124–32, 141
  non-policy-based forms of, 112
  occurrent form of, 113
  overt, 93–5
  relation to basic-level mental states, 109–12
  restricted versus unrestricted, 130–6, 141
  and strand 2 belief, 84–7, 124–36
  tacit, 93–5, 112–13, 176
  terminology, 93
  theoretical, 139–40, 141
  see also acceptance; premising; premising policies
act view, 66–7, 88
action
  dual explanations of, 143–5
  role of the supermind in, 116–18, 143–8, 205–7
  two strategies for predicting, 117
  see also akrasia
active belief adoption, 20–2, 55–6, 70, 74, 148–51
  and believing ‘at will’, 128–9, 151
  high confidence as enabling condition for, 149–50
  introduces qualitative attitude, 23
  pragmatically motivated, 137, 149, 151
  see also acceptance-as-a-premise, active formation of; first-person authority
active reasoning, 31–2, 33, 91, 97–107, 190–8
  see also conscious reasoning; premising; supermental reasoning
activism, 55–6
  challenge to, 56
  defended, 148–51
  see also active belief adoption
akrasia, 7–8, 73, 203–13
  compatible with basic-level austerity, 208
  Davidson on, 204–5, 207–8
  intention-related, 212–13
  irrationality of, 208, 212
  supermind theory’s account of, 205–8
animals, 16, 24, 30, 33, 182, 183, 199
anti-realism, 36–7, 49
aphasia, 33
assent, see active belief adoption
assertion view, 68–70, 88, 140
assumptions and suppressed premises, 27–9, 41, 85, 94, 177
austere view, 5, 9, 35–8, 75
  and akrasia, 203, 208
  appropriate for strand 1 mind, 43
  compatible with behavioural view of acceptancep, 111, 145–6
  compatible with rich view of the supermind, 147–8, 175–7, 178–81, 194–6
  and eliminativism, 163
  folk commitment to qualified, 169, 172–3
  involves presumption of rationality, 37, 54, 203
  and mental explanation, 39–40, 47–8
  as realist, 36–7
  and self-deception, 203, 216–17
  see also rich view
autism, 231–3
autonomy, 158–9
avowals, 218–19
  as performative, 219–26
  see also first-person authority
Baldwin effect, 229
basic belief (strand 1 belief), 6, 23–4, 33, 43, 44–5, 50, 142
  not introspectable, 158
  relation to acceptancep, 109–12
  see also non-conscious belief; partial belief
basic mind (strand 1 mind), 6, 33, 43, 50, 122, 122–3, 147–8, 175–7, 178–81, 182–3, 194–6, 198–200, 223–6, 227–8
  austere view of optional, 9, 159–60
  role in akrasia, 205–8, 212
  role in self-deception, 214–17
  supports premising machine, 108–13
basic reasoning (strand 1 reasoning), 33, 45–7, 49, 50
  see also non-conscious reasoning
Bayesian challenge, 5, 52–5, 143–8
Bayesian confirmation theory, 59–60
Bayesian decision theory, 29–30, 45–9, 52
  as interpretative framework, 45–7
  see also Bayesian challenge
behavioural view, 65–70, 73–4
  of acceptancep and goal pursuit, 85, 108–13
  and active belief adoption, 70, 74, 113–16
  defined, 65
  distinguished from austere functionalism, 65–6
  extended to occurrent belief, 78
  integrated version of, 80, 85, 113
  of opinion, 73–4
  promises solution to the Bayesian challenge, 65
behaviourism, 5, 15, 17
  see also austere view; dispositionalism
belief
  active formation of, see active belief adoption
  categorical-state versus dispositionalist views of, 34–6
  and context-relativity, 66, 86, see also under superbelief
  divisions in folk view of, 12–24, 34–8
  implicitly active, 28–9
  language-dependent, 23
  language-involving, 22–3, 56–8
  responsiveness to pragmatic motives, 86, see also under superbelief
  truth-directedness of, see under superbelief
  two strands identified, 23–4
  two-strand views of, 58–89
  as a unitary psychological kind, 12, 24
  see also basic belief; conscious belief; flat-out belief; meta-representational belief; non-conscious belief; occurrent belief; partial belief; reflective belief; standing-state belief; superbelief; theoretical belief
betting on sentences, 67, 73, 76
categorical-state theories, 34–6
causal systematicity, 186–7, 195–6
causation, sustaining versus dynamic, 39
change of mind, see making up of mind
cognitive orientation, 126
conceptual capacities, chapter 7 passim
  basic-level, 199–200
  as general abilities, 185–6
  as inferential commitments, 186, 190, 191–2, 199
  as personal skills, 189–90, 199–200
  sub-personal, 200
  supermental, 191–6
  see also conceptual modularity
conceptual modularity, 7, chapter 7 passim
  absent at basic level, 198–9
  compatible with basic-level austerity, 194–6
  exhibited by the supermind, 190–8
  folk commitment to, 184–90
  as involving sub-personal commitments, 188–9
  see also modularity of process; modularity of vehicle
confidence, degree of, see partial belief
confidence view, 60–4, 88
conjunctive closure, 62–3, 68, 138–9
connectionism, 162, 173
conscious belief, 13–14, 23, 153–4
  as belonging to separate system, 14, 17, 25–6
  defined, 13, 17
  and the Joycean machine, 77–80
  as language-involving, 22, 57
  as requiring occurrent activation, 28–9
  see also superbelief; supermind
conscious reasoning, 25–6
  as explicit, 26
  as language-involving, 32–3, 57
  not wholly Joycean, 78–9, 118–19
  role of suppressed premises in, 27, 28–9
  usually classical, 30–1
  see also active reasoning; premising; supermental reasoning
consciousness, 14
  cognitive role of, 25–6, 153–4
content, 8, 174
  conceptualized, chapter 7 passim
  of goals, 96–7
  information, 185
degrees of confidence, see partial belief
desire, 2, 24, 70
  see also goal pursuit; superdesire
direction of fit, 12, 126
dispositionalism, 34–6
  as a form of functionalism, 34–5
  see also austere view
domain-general thinking, evolution of, 230–1
dual-process theories, 4, 10, 24–5, 227–8
eliminativism, 2, 7, 161–5, 182–3, 188–9, 198, 200–1
  mitigated, 163–4
  and theories of reference, 162–3
equipotency cases, 166–7, 170–2, 177–8, 182, 195
explicit reasoning, 26–9
  defined, 26
first-person authority, 8, 128, 218–26
  and confabulation, 225–6
  other views of, 219
  and self-deceit, 224
  supermental (performative) view of, 219–22
  supermental view of defended, 222–6
  see also self-knowledge
flat-out belief, 17–20, chapter 3 passim
  act view of, see act view
  apparent irrelevance of, 52–5
  assertion view of, see assertion view
  Bayesian approaches to, 59–71
  behavioural view of, see behavioural view
  confidence view of, see confidence view
  does not require certainty, 61
  influence on action, 52–5, 63–4, 68–70, 143–8
  as a linguistic disposition, 67–70
  not context-dependent, 66
  as realized in partial beliefs and desires, 65, 69
  role in scientific inquiry, 59–60
  see also acceptance-as-a-premise; superbelief
Fodor’s challenge, 56–8, 78, 85, 107, 152–3
folk psychology
  architectural commitments of, 7, 165–73
  austere and rich interpretations of, 5, 38
  commitments vindicated, 173–83, 190–200
  divisions in, 4–5, chapter 2 passim
  duality in explanatory practice of, 41–2
  as employing mongrel concepts, 3–4
  integration into science of, 1–4, 10–11, 24, 38, 162–4, 200–1
  and neural architecture, 164–5, 182, 200
  regularization of, 2, 10, 42–4, 47–50, 183
  use of term, 1
functional states, thickly carved versus finely carved, 35–6
functionalism, 34
  austere versus rich forms of, 35–8
  see also austere view; rich view
general premise, 136
Generality Constraint, 185, 199
goal adoption, see goal pursuit
goal pursuit, 83, 95–7, 140–2
  behavioural view of, 108–13
  content of, 96–7
  intention as form of, 210–11
  occurrent form of, 113
  tacit and implicit forms of, 96, 112–13
heuristics and biases, 30, 46–7
high-level dispositions, defined, 65
holism of belief, 167–9
  ascriptive, 167–8
  framework, 167
  inferential, 168–9, 175–6
individual acquirability, 165–6, 167–9, 174–7
individual efficacy, 166–7, 169–73, 177–83
inference rules, 98–103
  concept mastery as involving grasp of, 191–2
  non-conscious knowledge of, 101–2
  as reliable heuristics, 98–9
  role of language in application of, 99–102
inferential potential, 191
inner speech
  comes with semantic interpretations attached, 57
  as medium of thought, see language, as medium of thought
  as self-stimulation, 77
instrumentalism, 36
integrationism, 1–4, 24, 162–4
  see also folk psychology, integration into science of
intention, 209–13
  planning theory of, 209–10
  as a supermental state, 210–12
  see also akrasia
interpretation, 30, 37, 45–7, 145–6
irrationality, 37, 147–8
  compatible with basic-level rationality, 147–8
  see also absent-mindedness; akrasia; self-deception
Joycean machine, 76–80
  compared with the premising machine, 118–20
  influence on action, 79, 119
  and opinion, 79–80
  and standing-state belief, 79–80, 119
judgement, see active belief adoption
language
  cognitive conception of, see language, as medium of thought
  as cognitive tool, 56
  communicative conception of, 57
  evolution of language-based thinking, 152–3
  as medium of thought, 22–3, 32–3, 56–8, 152–3, 197–8
  as modularized, 58, 152
  role in premising, 97–107
  see also assertion view; betting on sentences
language of thought, 184–5
  see also conceptual modularity; language, as medium of thought
look-up tables, 190, 201
lottery paradox, 62
making up of mind, 20–2, 72–3, 75–6, 115, 149, 151, 155–6, 223
  see also active belief adoption; opinion
memes, 80
mental explanation, 38–42
  counterfactual analysis of, 47–9, 169–72
  role of Bayesian idioms in, 47–9
  two levels of, 117–18, 143–5
Mentalese, 57
meta-representational belief, 120–1
minimal premising, 105–6
modularity of process, 184, 185–6, 188–90, 193–7
modularity of vehicle, 184, 186–8, 197–8
modules, 157, 230–1
  see also domain-general thinking
mongrel concepts, 3–4
non-conscious belief, 13–14, 23
  defined, 13, 17
  distinguished from Freudian Unconscious, 13–14
  as lacking an occurrent form, 16–17, 28–9, 44–5
  not language-involving, 44–5
  see also basic belief
non-conscious reasoning, 25–6, 49, 155, 156–7
  interpretable as Bayesian, 45–7
  as non-explicit, 26–8
  not language-driven, 33
non-explicit reasoning, 26–9
  defined, 26
occurrent belief, 14–17, 20
  conscious versus non-conscious, 16–17
  role in action and inference, 16–17, 75–6
opinion, 3, 4, 71–6, 88
  behavioural view of, 73–4
  influence on action, 75–6
  and the Joycean machine, 79–80
  and strand 2 belief, 73, 75–6
partial belief, 17–20
  as behavioural disposition, 45
  as non-conscious, 23
  not reducible to flat-out belief, 18–19
  and rational decision-making, 52–5, 143–8
partitioning strategies, 203, 205, 213, 214
passive reasoning, 32
performative utterances, 220–1
  avowals as, 219–26
  role in acceptancep formation, 115
policy adoption, narrow and broad senses of, 114–15
policy possession, 108–9, 211–12
pragmatism, 53
preface paradox, 62, 68, 138–9
premising, 91, 97–107
  extended techniques of, 106–7
  minimal form of, 105–6
  role of heuristics in, 102
  role of inference rules in, 91, 98–103
  role of language in, 91, 99–102, 105, 106, 107
  role of self-interrogation in, 103–5
premising machine, 5–6, 85, chapter 4 passim
  compared with the Joycean machine, 118–20
  failures of, 117
  influence on action, 116–18
  relation to the basic mind, 108–13
  see also supermind
premising policies, chapter 4 passim
  adoption of, 113–16
  causal role of, 174
  how constituted, 109–10
  nature and extent of, 90–3, 95–7, 153–4
  realized in basic-level mental states, 110–12, 138
  use of term, 97
professional belief, see acceptance-as-a-premise, non-doxastic
propositional modularity, 7, chapter 6 passim
  absent at the basic level, 182–3
  compatible with basic-level austerity, 175–7, 178–81
  exhibited by the supermind, 173–82
  folk commitment to, 165–73
  see also individual acquirability; individual efficacy
propositions, optative and declarative, 96–7
Quine–Duhem thesis, 98
realism, 36–7
  about mental acts, 146–7
reasoning
  classical versus probabilistic, 29–31, 143–4
  divisions in folk view of, 24–33
  language-driven versus not language-driven, 32–3
  two kinds identified, 33
  see also active reasoning; conscious reasoning; dual-process theories; explicit reasoning; non-conscious reasoning; non-explicit reasoning; passive reasoning; unity of processing assumption
reflective belief (Sperber), 120, 121
representation theorems, 30
representational theory of mind, 15
rich view, 5, 35–8
  appropriate for strand 2 mind, 43
  folk commitment to, 16–17, 165–73, 184–90
  and mental explanation, 40–1
  of the supermind compatible with basic-level austerity, 147–8, 175–7, 178–81, 194–6
  see also austere view
scientific inquiry, see theoretical inquiry
scientific psychology, 226–33
self-deception, 7–8, 73, 203–4, 213–18
  compatible with basic-level austerity, 216–17
  and first-person authority, 224
  interpersonal model of, 213
  and positive thinking, 214
  supermind theory’s account of, 214–16
  and wishful thinking, 218
self-interrogation, 103–5, 148, 196–7
  limitations of, 104
  role in abductive inference, 104
  role of language in, 105
  role in retrieving premises, 105
  and self-simulation, 103
self-knowledge, 158
  see also first-person authority
shielding strategy, 214–17
simulated belief, 136
simulation theory, 103, 136
standing-state belief, 14–17
  two types of, 17
strand 1 belief, see basic belief
strand 1 mind, see basic mind
strand 1 reasoning, see basic reasoning
strand 2 belief, see superbelief
strand 2 desire, see superdesire
strand 2 mind, see supermind
strand 2 reasoning, see supermental reasoning
subjective probability, see partial belief
sub-mind, 122, 123
  see also sub-personal psychology
sub-personal psychology, 9, 43–4, 47, 49, 200
superbelief (strand 2 belief), 6, 23–4, 33, 43, 50, chapter 3 passim, 88, 124–40, 141, 142, 143–54
  and acceptancep, 84–7, 124–36
  active formation of, 128–9, 148–51
  and context-relativity, 129–30
  does not require certainty, 134–5
  as dynamic cause of action, 144
  high confidence necessary for, 135–6, 137–40, 180–1, 217
  as individually acquirable, 174–7
  as individually efficacious, 177–82
  influence on action, 143–8, 205–7, see also under acceptance-as-a-premise
  introspectable, 158
  and opinion, 73, 75–6
  relation to basic-level mental states, see under acceptance-as-a-premise
  responsiveness to pragmatic motives, 137, 149, 151
  tacit and implicit, 136, 176–7
  theoretical interest of, 125, 135
  truth-directedness of, 126–8, 137
  as unrestricted acceptancep, 130–6, see also TCP deliberations
  see also acceptance-as-a-premise; conscious belief; flat-out belief
superdesire (strand 2 desire), 85, 140–2
  see also goal pursuit
supermental reasoning (strand 2 reasoning), 33, 50, 97–107, 118–19, 190–8
  control over, 157
  function of, 155–6
  improvability of, 157
  possibility of irrationality in, 147
  see also active reasoning; conscious reasoning; premising
supermind (strand 2 mind), 6, 33, 43, 50, 122, 122–3, 154–9, 173–82, 190–8, 200–1, chapter 8 passim
  and akrasia, 205–8, 212–13
  and autism, 231–3
  and autonomy, 158–9
  components of, 228–9
  cultural and individual variation in, 230
  development of, 228–31
  and domain-general thinking, 230–1
  exhibits individual acquirability, 174–7
  exhibits individual efficacy, 177–82
  exhibits modularity of process, 193–7
  exhibits modularity of vehicle, 197–8
  and first-person authority, 219–26
  functions of, 6, 154–9
  influence on action, 116–18, 143–8, 205–7
  and intention, 210–12
  as a kludge, 229
  as a premising machine, 85
  relation to the basic mind, see under premising machine
  and self-deception, 214–16, 217–18
  and self-knowledge, 158
  and theory of mind, 228–9, 233
  see also premising machine
supermind theory, 159–60
  and dual-process theories, 227–8
  and scientific psychology, 226–33
  status of, 9–11, 121–2, 125–6
syntax, see modularity of vehicle
TCP deliberations, 132–6, 141, 150–1, 215–16, 217
  defined, 132
TGP deliberations, 133
theoretical belief, 139–40
theoretical inquiry, 59–60, 133, 139–40
theory of mind, 228–9, 231, 233
Unconscious (Freudian), 13
unity of belief assumption, 2–3, 226
  basis for, 12, 24, 31
  challenged, 12–24
unity of processing assumption, 3, 26
  challenged, 24–33
virtual belief, 122
virtual mind, 122
volitionism, see voluntarism
voluntarism, 56, see also activism
weakness of will, see akrasia
Williams’s challenge, 55–6, 74, 148–51
wishful thinking, 218