Chapter 1

INTRODUCTION∗

Daniel Vanderveken

In contemporary philosophy as well as in the human and cognitive sciences, language, thought and action are systematically related. The primary function of language, it is held, is to enable human speakers not only to express and communicate their thoughts but also to act in the world. Speakers who communicate are thus viewed as intentional agents endowed with rationality. By choosing to exchange certain words, speakers first of all attempt to perform speech acts of different kinds (acts of utterance, acts of reference and predication, illocutionary and perlocutionary acts) in certain ways (literally or not). They also want to contribute to conversations whose goal is often to change rather than to describe the world they live in. So contemporary logic and philosophy of language study both thought and action. Underlying any philosophy of language there is a certain philosophy of mind and action. The main purpose of this book is to present and discuss major hypotheses, issues and theories advanced today in the logical and analytic study of language, thought and action. One can find in the book major contributions by leading scholars of analytic philosophy, logic, formal semantics and artificial intelligence. Among the fundamental issues discussed in the book are the rationality and freedom of agents, theoretical and practical reasoning, the logical form of individual and collective attitudes and actions, the different kinds of action generation, the nature of cooperation and communication, the felicity conditions of
∗ I am very grateful to Springer’s referee whose critical remarks have greatly helped to improve the book. I also wish to warmly thank my research assistant Florian Ferrand who, with so much care, has produced the camera-ready final typescript and my colleague Geoffrey Vitale for his invaluable help in correcting the introduction. Most of all I want to express my gratitude to my wife Candida Jaci de Sousa Melo for her constant help and encouragement. Grants from the Social Sciences and Humanities Research Council of Canada and the Quebec Foundation for Research on Society and Culture have facilitated the collective work that underpins the publication of the present volume.
D. Vanderveken (ed.), Logic, Thought & Action, 1–24. © 2005 Springer. Printed in The Netherlands.
speech acts, the construction and conditions of adequacy of scientific theories, the structure of propositional contents and their truth conditions, illocutionary force, time, aspect and presupposition in meaning, the dialogical approach to logic and the structure of dialogues, as well as the formal methods needed in logic and artificial intelligence to account for choice, paradoxes, uncertainty and imprecision.

The book is divided into five parts. The first part, Reason, Action and Communication, contributes mainly to the general philosophy of language, mind and action; the second, Experience, Truth and Reality in Science, to the philosophy of science; the third, Propositions, Thought and Meaning, to the logic of language and formal semantics; the fourth, Agency, Dialogue and Games, to the logic of action, dialogues and language games; and the last part, Reasoning and Cognition in Logic and Artificial Intelligence, to the role and formal methods of logic and computer science. Many authors participated in the Decade on Language, Reason and Thought which took place in June 1994 at the castle of Cerisy-la-Salle in France. Our dear and lamented colleague J-Nicolas Kaufmann, to whom this book is dedicated, was present and very active at that conference. We wish to pay homage to him.

According to the Western conception of reason, the rationality proper to human agents basically rests on their capacity to weigh on the scales of the balance of reason their different beliefs, reasons, desires, intentions and goals and, having deliberated, to select the actions best suited to achieving their goals. The classical model of rationality goes back to Aristotle, who claimed that deliberation is about means and not about ends.
So, in the model underlying decision theory, human agents are supposed to have certain primary desires and well-ordered preferences prior to deliberation, and they reason on the basis of these desires and their beliefs about the state of the world in order to form further desires for the means to achieve their ends. However, it often happens that human agents have inconsistent desires that cannot all be satisfied. Moreover, their preferences are not always well ordered before deliberation. They often have to choose between conflicting desires in the process of deliberation. And finally there are the freedom and the weakness of the will. An agent who forms an intention after deliberation can revise or abandon that intention. Previous desires, beliefs and intentions of agents do not seem to cause their future actions. How can we account for such facts in a theory of rationality?

Human agents are by nature social. They share forms of life, speak public languages, create social institutions and act together in the world. What kinds of speech acts do they attempt to perform in conversation? How can we explain in the philosophy of mind their collective attitudes and actions and
their communication abilities? The first part of the book, Reason, Action and Communication, contains a general philosophical discussion of these important questions.

In Chapter 2, The Balance of Reason, Dascal discusses the ideal of a perfectly reliable balance of reason, an ideal challenged by scepticism. He shows that the balance metaphor is compatible with two different conceptions of rationality, both present in Western thought. The first conception expects the balance of reason to provide conclusive decisions in every rational deliberation. The second conception acknowledges the limits of human reason. It is clearly more appropriate for handling uncertainty and the revision of intentions, and more apt to face scepticism. Leibniz, one of the most eminent rationalist philosophers, made a substantial contribution to both conceptions of rationality. Dascal discusses his ideas in detail. He shows how Leibniz came to grips with the balance metaphor. The state of equilibrium of the scales of a balance mirrors the equilibrium of indifference between the arguments for and the arguments against a belief, a decision or an action. Yet an indifference of that kind seems to model arbitrariness rather than rationality. Leibniz, as Dascal stresses, was well aware of the problem. He acknowledged that the balance of reason, when conceived as a metric and digital balance, lies open to the objection raised above, but he worked out another version of the balance of reason to circumvent it. We can conceive of a balance which permits us to compare directly the “values” of what is placed on the scales without reducing them to universal measuring units. A major merit of Dascal’s essay lies in the original response he gives to the new kind of scepticism that pervades the Post-modernist trend today.
Developing Leibniz’ insights, Dascal shows how Leibniz’ revised metaphor of the balance of reason can apply even to what is imponderable and do justice to the idea that there are reasons (for believing, acting or deciding) which incline without necessitating. A new picture of reason emerges in which hard rationality, represented by algorithms, and soft rationality, exemplified by the reasoning of lawyers, can be seen as complementary rather than conflicting. Dascal considers foreground notions which are proper to the reasoning of lawyers (e.g. presumption, burden of proof) and shows that they anticipate Grice’s theory of conversation and the non-monotonic reasoning studied nowadays in Artificial Intelligence.

In the third chapter, Desire, Deliberation and Action, Searle criticizes the classical conception of rationality underlying current analyses of practical reasoning and deliberation in the philosophy of mind and in decision theory. It is wrong to require of rational agents a satisfiable set of desires. It is also wrong to think that an agent who, prior to engaging in a deliberation, already has certain primary beliefs and desires is thereby committed to other secondary desires or intentions. There could be no logic of practical reasoning stating valid principles of inference underlying such commitments of an agent. Searle denounces in detail the mistakes of this conception of practical syllogism. He first explains why desire differs radically from belief in both its logical and phenomenological features. He also briefly describes the nature of intentions and analyzes the relation between desire and action by discussing the nature of reasons for agents to act. In Dascal’s chapter the digital and metric conception of the balance of reason was shown to be inadequate. Searle goes further and identifies the source of the trouble. That conception rests upon the faulty assumption that we can deal with choice, preference and desire without recognizing their intentional character. In Searle’s view, it is a mistake to suppose that the desire must always be the ground for the reason. An acknowledgement of the facts plus the agent’s rationality can motivate the internal desire for an action. So the reason can also be the ground for the desire. Among desire-independent reasons Searle considers previous commitments, obligations and duties of the agent. Searle carefully avoids the common mistake of assimilating an external reason to a physical cause. He argues that intentional causation is very different from physical causation. Prior beliefs, desires and intentions can be reasons for an action. However, they do not really compel the agent to act. There is a certain gap in life between prior intentions and their execution, just as there is a certain gap in the process of deliberation between previous desires and beliefs and the formation of a prior intention. It is remarkable that Searle provides new and independent reasons for Dascal’s idea (borrowed from Leibniz) of a desire which inclines without necessitating. Agents are free.
Their prior reasons and intentions never compel them to act. So they can be weak. And their weakness of will, or akrasia, is not to be confused with self-deception. Searle’s chapter ends with an illuminating account of the formal resemblances and differences between weakness of the will and self-deception.

In order to act together with success, several rational agents sharing a common goal have to cooperate. It is now widely accepted that collective actions are more than the sum of the individual actions of their agents. They require of agents a collective intention and a will to cooperate. But what is the very nature of cooperation in collective actions? How can agents share collective attitudes in general and collective intentions in particular? In the fourth chapter, Two Basic Kinds of Cooperation, Tuomela discusses these questions, which are important for the philosophy of the social sciences. According to him one must distinguish full cooperation
based on a shared collective goal (the “we” mode) and a weaker kind of cooperation that reduces to coordination (the “I” mode). While most current empirical studies concern simple coordination in the “I” mode, Tuomela emphasizes an analysis of full-blown cooperation in the “we” mode. He also explains why shared collective goals tend to work better than shared private goals in most circumstances. Agents have to come to an agreement to solve many coordination problems. A logical lesson should be drawn here: there is no non-circular rational solution to such problems. Mere private rationality will fail. The same remark can be made about coordination dilemmas. Developing an argument that goes back to Hume, Tuomela shows that only shared collective goals can reliably solve the game in a way satisfactory to all the participants.

Human agents use language in order to coordinate their actions in the world. They need to communicate their beliefs, desires and intentions in order to achieve shared collective goals. The basic units of meaning and communication are speech acts of the type Austin called illocutionary acts. Unlike propositions, such acts have felicity rather than truth conditions. In Chapter 5, Speech Acts and Illocutionary Logic, Searle and Vanderveken analyze the logical form of illocutionary acts and their relations with other types of speech acts. Elementary illocutionary acts such as assertions, questions and promises consist of an illocutionary force and a propositional content. Unlike Frege and Austin, for whom the notion of force was primitive, Searle and Vanderveken divide forces into several components (illocutionary point, mode of achievement, degree of strength, propositional content, preparatory and sincerity conditions). Rather than giving a simple list of actual forces, their speech act theory formulates a recursive definition of the set of all possible illocutionary forces.
Moreover, they rigorously define the conditions of successful and non-defective performance of elementary illocutionary acts. Unlike Austin, they distinguish between successful utterances which are defective (like promises which are insincere or which the speaker could not keep) and utterances which are not even successful (like promises to have done something in the past). They also analyze common illocutionary force markers such as verb mood and sentential type, and propose a new declaratory analysis of performative utterances. Finally they show the importance of illocutionary logic for the purposes of an adequate general theory of meaning and for the foundations of universal grammar. Some illocutionary acts strongly or weakly commit the speaker to others. It is not possible to perform these illocutionary acts without eo ipso performing or being committed to other illocutionary acts. Thus commands contain orders and weakly commit the speaker to granting permission. One of the main objectives of illocutionary logic is to formulate the basic
laws of illocutionary commitment. Searle and Vanderveken explain the basic principles of illocutionary commitment. Later, in Meaning and Speech Acts (1990–91), Vanderveken used the resources of proof and model theory in order to formulate the laws of a general semantics containing illocutionary logic. Recently he has extended and generalized speech act theory so as to deal with discourse. In the special issue Searle With His Replies of the Revue internationale de philosophie (2001) he has also shown how to analyze the structure and dynamics of language games with a proper linguistic goal. Verbal exchanges between speakers communicating with each other are standard cases of collective actions. They often consist in joint collective illocutionary acts like debates, consultations and negotiations that extend over a certain interval of the conversation.

In Chapter 6, Comprehension, Communication and Minimal Rationality in the Tradition of Universal Grammar, André Leclerc presents Arnauld and Nicole’s theory of communication. Borrowing from neglected sources such as La grande Perpétuité (1669–1672), Leclerc shows that Arnauld and Nicole were aware of the insufficiency of the code model of linguistic communication, according to which the speaker encodes his thoughts into sentences and the hearer decodes them in order to access the thoughts of the speaker. They fully realized the need to enrich this model with an inferential model of linguistic communication in order to account for the role of implicatures, insinuations and presuppositions. Leclerc provides evidence for the claim that Arnauld and Nicole anticipated Cherniak’s principle of minimal rationality and Grice’s maxim of quantity. He notes that they linked the maxim of quantity to speakers’ lack of logical omniscience. Leclerc’s study is not merely historical, tracing the roots of modern pragmatics; he also comments on passages which can still teach us something important today.
The treatment of metaphors is a case in point. Arnauld and Nicole interestingly explain why words could not acquire a metaphorical meaning in metaphors.

Human reason fully manifests itself in scientific practice. Human agents formulate scientific theories in order to describe, explain and predict what is happening in the world they live in. According to empiricism, scientific theories must be checked against the facts of our experience. We need to confirm or falsify them by observing the world. In order to be true, scientific statements must be empirically adequate. However, the meaning of scientific terms and even the interpretation of the observation sentences that are used for testing scientific statements are theory-laden and depend on conventions which determine their use in a context of verification. There is a real construction of models in scientific theories. Observable phenomena are explained by reference to unobservable processes. Empirically equivalent scientific theories can differ in many aspects. How can we then relate Experience, Truth and Reality in Science? The second part of the book discusses this fundamental question of the philosophy of science. It raises important issues for current empiricist, constructivist and realist views of science.

In Chapter 7, Truth and Reference, Lauener opposes to physical realism a pragmatic kind of relativism about truth and ontology. According to the received view, the question of meaning is prior to the question of truth. Before asking whether an assertive utterance is true or false, one has to understand the meaning of that utterance. Since meaning depends on sense and reference, one is led to think that reference precedes truth. Tarski’s definition of truth reinforces the received view. Tarski equates truth with satisfaction by all sequences of objects of the domain. Quine, however, in Pursuit of Truth (1990) claims that truth precedes reference. Lauener criticizes Quine’s claim and shows that reference plays a primordial role in the determination of truth conditions. According to him, even the meaning of observation sentences is irreducible to stimulus meaning. Their use and interpretation in a context depend on both the senses and denotations of their terms, which are relative to a given linguistic system and conceptual scheme. Scientific activity moreover requires rule-governed intentional illocutionary acts such as assertions, conjectures, conventions and agreements that cannot be accounted for in an austere extensional ontology. Lauener’s objection to the priority of truth over reference leads to general conclusions which are independent of that issue. Quine upholds scientific realism and physicalism as far as truth is concerned, while he advocates relativism as regards ontology. Lauener questions the compatibility of these two positions.
Quine himself was fully aware of the problem. Lauener does not try to reconcile realism for truth with relativism for ontology. He advocates relativism in both cases, but the kind of relativism that he advocates has nothing to do with cultural or subjective relativism. Lauener does not so much challenge realism as he does the holistic view of science as a language-theory conglomerate conceived as a constantly evolving whole. Lauener is not opposed to realism but rather to holism in science and universalism in logic (the view of logic as a language as opposed to the view of logic as a calculus). Lauener does not really advocate a relativistic ontology. He rather advocates a pluralistic ontology: “Since a new domain of values for the variables is presupposed for each context I advocate a pluralistic conception of ontology in contrast to Quine who postulates a unique universe by requiring us to quantify uniformly over everything that exists . . . according to my
method of systematic relativization to contexts (of action), we create reality sectors by employing specific conceptual schemes through which we describe the world.” Lauener’s argument for the recognition of regional ontologies is based on philosophical considerations. It is very interesting to notice that in the next chapter Michel Ghins advances an independent argument supporting the claim that we need to circumscribe domains in science on the basis of purely scientific considerations. This spontaneous convergence between the two chapters is worth stressing.

In Chapter 8, Empirical Versus Theoretical Existence and Truth, Michel Ghins mainly argues in favour of a specific, selective and moderate version of scientific realism in accordance with the common use of the terms “existence” and “truth” in ordinary speech. The actual presence of an object in sensory perception and the permanence of some of its characteristics during an interval of time jointly constitute a sufficient condition, a “criterion”, of the existence of that object in scientific activity as well as in everyday experience. These features also ground the truth of statements about ordinary observable objects and of some physical laws connected to experience. So scientific statements can be accepted as true when they are inductively well established in a certain limited domain of experience. Ghins illustrates his version of scientific realism by considering two particular examples: electromagnetic and gravitational fields, and the crystalline spheres of ancient astronomy. As one might expect, his criterion supports the existence of the fields but not of the spheres.
Hermann Weyl had already compared the different observable manifestations of an electric field with different perceptions of an ordinary object and argued that if we see forces as corresponding to perceptions and different charges as corresponding to different positions of the observers, we are entitled to attribute objective reality to electric fields. Ghins takes advantage of Weyl’s analogy and goes further. Sharing Kant’s criterion of reality (“[reality is] that which is connected with perception according to laws”), Ghins shows that we have good grounds to consider as laws not only true physical statements like the three famous Newtonian laws but also the mathematical laws of the classical mechanics of point-like masses which restrict the domain in which Newton’s laws are true. Such an extension of the coverage of the concept of law is by no means a trivial matter. It enables Ghins to answer Popper’s objection according to which limiting a theory to a given domain would be tantamount to protecting it against adverse evidence. It also provides independent support for Lauener’s position on the question whether human knowledge is an “amorphous and unified language-theory” or a constellation of separate theories, each endowed with its own language and its own ontology.
The scientist’s preference for simpler theories is often seen as springing from aesthetic or pragmatic considerations which have nothing to do with what reality is like. Ghins debunks this view and shows that simplicity is a reliable guide for those who want to know what there is. The opposite view leads to counter-intuitive consequences.

In Chapter 9, Michel Ghins on the Empirical Versus the Theoretical, Bas van Fraassen replies to Ghins’ ideas on existence and truth in science. Van Fraassen basically agrees with Ghins on the central role of experience, the need to reject the myth of the given and the hope for an empiricist philosophy of science. However, as regards existence and truth, van Fraassen considers that one must sharply separate questions of epistemology from questions of semantics and ontology. Sensory perception and invariance are not necessary conditions of existence. Ghins does not give a criterion of existence strictly speaking. He only offers a partial criterion of legitimacy for assertions of the existence of certain objects of reference that we can observe. But perhaps there also exist in the world other sorts of entities which are “transient”, “invisible” and “intangible”. This is not an issue of semantics but of ontology. Is Ghins’ epistemic principle right? Van Fraassen does not answer the question. Both Ghins and van Fraassen view reality from the standpoint of experience. According to Ghins, proponents of a scientific theory are committed to believing in the existence of all those postulated entities that bear a certain relationship to what can be experienced. Directly observable ones are not privileged. Van Fraassen’s empiricism is less moderate: proponents of a scientific theory are only committed to believing in the existence of observable entities.

In contemporary philosophy of language, mind and action, propositions are not only senses of sentences provided with truth conditions.
They are also contents of human conceptual thoughts such as illocutionary acts (assertions, questions, promises) and attitudes (beliefs, desires, intentions). The third part of the book deals with Propositions, Thought and Meaning. The double nature of propositions imposes new criteria of material and formal adequacy on the logic of propositions and formal semantics. One can no longer identify so-called strictly equivalent propositions having the same truth conditions. They are not the senses of synonymous sentences, just as they are not the contents of the same thoughts. Moreover, human agents are not perfectly rational in thinking and speaking. They do not make all valid inferences. They can assert and believe necessarily false propositions. So we need very fine-grained criteria of propositional identity in logic. It is important to take into account the creative as well as the restricted cognitive abilities of human agents. It is also important to consider tense and aspect as
well as presupposition accommodation and the assignment of scope in the understanding of truth conditions. For that purpose, we need a better explication of truth conditions with an account of aspect, tense and presupposition. We also need to take into account the illocutionary forces of utterances. Part three of the book contains logical contributions on these matters.

In Chapter 10, Propositional Identity, Truth According to Predication and Strong Implication, Daniel Vanderveken enriches the formal ontology of the theory of sense and denotation of Frege and Church. His main purpose is to formulate a natural logic of propositions that explains their double nature by taking into consideration the acts of reference and predication that speakers make in expressing propositions. According to his analysis, each proposition is composed of atomic propositions (each predicating a single attribute of objects of reference under concepts). Human agents do not know the actual denotations of most propositional constituents. Various objects could fall under many concepts or could have certain properties in a given circumstance. So they also do not know in which possible circumstances atomic propositions are true. Most could be true in many different sets of possible circumstances, given the various denotations that their attribute and concepts could have in reality. For that reason atomic propositions have possible in addition to actual Carnapian truth conditions. For each proposition one can distinguish as many possible truth conditions as there are distinct sets of possible circumstances where that proposition would be true if its propositional constituents had such and such possible denotations in reality. In understanding a proposition we just know that its truth in a circumstance is compatible with certain possible denotation assignments to its propositional constituents and incompatible with others.
So logic has to distinguish propositions whose expression requires different acts of predication, as well as those whose truth is not compatible with the same possible denotation assignments to their constituents. Consequently, not all necessarily false propositions have the same cognitive value. Some are pure contradictions that we know a priori to be false in apprehending their logical form. We cannot believe them. One can define the notion of truth according to a speaker, distinguish subjective from objective possibilities and formulate adequate principles of epistemic logic in predicative propositional logic. As Vanderveken points out, the set of propositions is provided with a relation of strong implication that is much finer than strict implication. Strong implication is a relation of partial order which is paraconsistent, finite, decidable and known a priori. In the second part of the chapter Vanderveken proceeds to the predicative analysis of modal and temporal
propositions of the logic of ramified time. He uses the resources of model theory and formulates a powerful axiomatic system. He also enumerates valid laws for propositional identity and strong implication. And he compares his logic with intensional and hyperintensional logics, the logic of analytic implication and that of relevance.

In predicating a property of an object, a speaker expressing a propositional content can view the represented fact in different ways: as a state, an event or an unfinished process. There are different ontological categories of fact. Having aspectualized the predication, the speaker has to insert the represented fact into his own time reference, which is distinct from the external time reference of the calendar. Verbal aspect and tense are fundamental to the understanding of the truth conditions of elementary propositions. Their semantic analysis requires a logical calculus. In the late seventies Rohrer edited two collective books presenting a rigorous logical treatment of tense and aspect, showing how one can represent in Montague grammar the temporal structure of verbs and how verbal meaning interacts with the meaning of tense forms and temporal adverbs. In Chapter 11, Reasoning and Aspectual-Temporal Calculus, Jean-Pierre Desclés analyzes aspect and time within the theoretical framework of cognitive applicative grammar, which is an extension of Shaumyan’s Universal Applicative Grammar incorporating combinatory logic and topology. Desclés analyses the fundamental concepts of aspectuality (state, event, process and resultative state) by means of topological notions. He uses open and closed intervals of instants to give a semantic interpretation of aspectual concepts. Aspectual operators are obtained by an abstraction process from semantic interpretations. Curry’s combinatory logic is used to build abstract aspectual operators.
Both the Montagovian and the cognitive applicative approach are rooted in Church’s lambda calculus as regards logic and focus on intervals as regards semantics. Yet there are important differences between the two approaches. In “Universal Grammar” Montague indirectly interprets sentences of natural language via their translation into a formal object language of intensional logic, for which he builds a truth-conditional model-theoretical semantics. In his Cognitive Applicative Grammar, Desclés starts by defining a quasi-topological model of speech operations. Next he expresses model-theoretical concepts in terms of operators of combinatory logic. His formal language is not an object language related to natural language via translation. It is a meta-language which serves to describe natural language. Yet both approaches share a central concern for aspectual reasoning. In natural deduction style, Desclés formulates principles of valid inference that enable us to derive from the sentence “This morning, the hunter killed the deer” conclusions like “Therefore
12
LOGIC, THOUGHT AND ACTION
the deer was killed this morning", "Now, the deer is dead" and "Yesterday, the deer was alive". An interesting feature of Desclés' approach lies in his concern for the speaking act as well as for the dynamics of meaning. He analyzes intricate connections between aspectual-temporal conditions and the learning of lexical predicates. His approach aims at shedding light on the interaction of language activities with other cognitive activities such as perception and action. Desclés also shows that applicative grammar can accommodate speech acts in its own way. The problems raised by presupposition have been a challenge for logicians, linguists and philosophers of language for almost a century. The current notion of presupposition is ambiguous; among pieces of information which are not explicitly stated but taken for granted, one should distinguish between what is "induced" or "triggered" by lexical items or syntactic constructions and what is "already given" but "not marked" because of background knowledge. In Chapter 12, Presupposition, Projection and Transparency in Attitude Contexts, Rob van der Sandt advocates a unified account of presupposition which establishes a straightforward connection between the two kinds of phenomena. The central tenet of his anaphoric theory of presupposition is that one single process underlies both anaphoric binding and presupposition resolution. He treats all presuppositions as anaphoric expressions which are bound by some previously established antecedent. Van der Sandt acknowledges that sometimes the so-called antecedent of the presupposition is missing and has to be supplied by some kind of accommodation. But, contrary to others, he imposes a new constraint on accommodation. When no antecedent is in the offing, accommodation has to insert an identifiable object which can then function as antecedent for the presuppositional anaphor. Accommodation applies to discourse structures.
This leads van der Sandt to work out a theory of presupposition in the framework of Kamp's Discourse Representation Theory. Using discourse representation structures he constructs a class of conditions that encode anaphoric material. Accommodation is implemented by a projection algorithm. When it is applied to a modal sentence containing a definite description, that algorithm yields either the wide or the narrow scope reading of the description, depending on the level at which the accommodation is made. Just as Desclés offered a dynamic conception of disambiguation, van der Sandt gives us a dynamic account of the contrasts between wide scope and narrow scope, or de re and de dicto readings. His projection mechanism has a greater explanatory power than the standard Russellian theory of descriptions. On Russell's account, there is no way to project a description in the consequent of a conditional to its antecedent. According to van der Sandt such a projection is possible. Are there truth value gaps in the case of presupposition failure? Kamp and Reyle's standard verification conditions for discourse representation side with Russell. Van der Sandt revises the verification conditions in such a way that no truth value is assigned when a presuppositional anaphor can neither be bound nor accommodated. This is a significant improvement, in accordance with the Frege-Strawson theory. At the end, van der Sandt comes to grips with the very difficult problems which arise when the pragmatic distinction between the first person and the third person interacts with the semantic distinction between de re and de dicto. He shows how his theory can solve recalcitrant puzzles raised by Kripke and Heim for the presuppositional adverb "too." The concept of assertion has played a crucial role in the development of contemporary logic. It took time for logicians and philosophers to clarify the role of assertion in formalization. In Chapter 13, The Limits of a Logical Treatment of Assertion, Denis Vernant argues that any logical treatment of assertion is limited because the concept requires a pragmatic analysis. The first part of Vernant's contribution analyses Russell's account of assertion from the Principles of Mathematics to Principia Mathematica, while emphasizing its characteristic aporias. Vernant carefully reconstructs the successive stages of Russell's thought on the matter. The second part deals with the pragmatic treatment of assertion which began with Frege's Logische Untersuchungen and was to continue with Searle's definition of assertive speech acts and the formulation by Searle and Vanderveken of illocutionary logic. Vernant defends a solution which is based on Frege's account but which goes well beyond it. Assertion, like judgment, is indeed the acknowledgment of the truth of a propositional content.
Vernant shows that the new treatment of assertion as an illocutionary act (and not a mental state) removes Russell’s aporias. Frege only dealt with propositional negation. However, there is another negation, called illocutionary negation, which applies to force. As Searle and Vanderveken pointed out, the point of an act of illocutionary denegation is to make it explicit that the speaker does not perform a certain illocutionary act. So one must distinguish between the assertion of the negation of a proposition and the illocutionary denegation of that assertion. Vernant shows that already in 1904 Russell had anticipated illocutionary negation in his treatment of what he called denial. Russell’s insistence on denial as the expression of disbelief shows that he had understood the pragmatic complexity of assertion. At the end, Vernant criticizes current speech act theory for neglecting interactions and conversational exchanges between speakers and makes a plea for a multi-agent speech act theory. Speakers perform their assertions
and other individual illocutionary acts with the intention of contributing to the conversation in which they participate. By pointing out the dialogical function of illocutionary acts Vernant shares current concerns for a more general speech act theory adequate for dialogue analysis. As Wittgenstein pointed out, meaning and use are inseparable. Human speakers are agents sharing forms of life whose language-games enable them to act in the world. Their verbal and non-verbal actions are internally related. Human agents first of all make voluntary movements of their own body. In oral speech they emit sounds. Their basic intentional actions generate others in various ways (causally, conventionally, simply, etc.). How do agents succeed in bringing about facts in the world? What is the causal and temporal order prevailing in the world in which they act? As Belnap pointed out, the logic of action requires a theory of branching time with an open future as well as a theory of games involving histories that represent possible courses of history of the world. Such a theory is compatible with indeterminism. How can we formally account for the freedom of will and the intentionality, capacities and rationality of human agents? The fourth part of the book deals with Agency, Dialogue and Games. It is concerned with questions such as: What is the nature of agency? How can we explicate free choice, action in the present and in the future, mental causation, success and failure, and action generation? What is the nature of basic actions? In language use, speakers make utterances and acts of reference and predication; they express propositional contents with forces and perform illocutionary acts which have perlocutionary effects on the audience. How do they succeed in doing all this? Is there an irreducible pragmatic aspect in predication and discourse? What is the logical structure of a dialogue?
The first contribution by Paul Lorenzen to dialogical logic appeared more than fifty years ago. Since then different dialogical systems and related research programmes have been developed. Is there a general framework for the study of the various interactions between dialogue and logic? What kind of rationality do agents manifest in practising language-games? How can they reach outcomes given their knowledge and other attitudes? In a recent book, Facing the Future: Agents and Choices in Our Indeterminist World (2000), Nuel Belnap and co-authors have outlined a logic of agency which accommodates both causality and indeterminism in a conception of ramified time where the set of moments of time is a tree-like frame. There is a single causal route to the past but there are multiple future routes. So agents are free: their actions are not determined. In the indeterminist theory of ramified time, moments representing complete possible states of the actual world are instantaneously world-wide super-events. Because of the global nature of these causal relata (the instantaneous moments), there is a world-wide kind of action at a distance in the logic of agency with branching time. The theory remains non-relativistic and commits us to an account of action-outcomes that makes them instantaneously world-wide. However it is clear that both our freedom and our actions are local matters. They are made up of events here and now that have no effect on very distant regions of the universe. In Chapter 14, Agents and Agency in Branching Space-Times, Nuel Belnap shows how to improve the logic of agency by using the theory of branching space-times, which can account for local indeterminism. For that purpose the cosmological model proposed by Einstein and Minkowski is an invaluable source of insight. This model, in which action at a distance is abandoned, forces us to reconsider our conception of an event. As Belnap observes, "a causally ordered historical course of events can no longer be conceived as a linear order of momentary super-events. Instead, a history is a relativistic spacetime that consists in a manifold of point-events bound together by a Minkowski-style causal ordering that allows that some pairs of point events are space-like related". So the theory of branching space-times articulates better the indeterminist causal structure of the world. In that theory causal relata are point events which are limited in both time-like and space-like dimensions. Now indeterminism and free will are not global but local. Because the theory of branching space-times is both indeterminist and relativist, it is a much better theoretical apparatus for the purposes of the logic of agency. As Belnap shows, logic can now more finely identify persisting agents and also describe their choices concerning the immediate future. Belnap begins his chapter by explaining his basic ideas about choice and agency in branching time.
Next he presents the theory of branching space-times. And then he considers how the two theories can be combined. He discusses interesting new postulates that the logic of agency could adopt as regards the nature of agents, their free choice and how they do things in branching space-times. Belnap's investigations could lead us to an important new theory of games in branching space-times that would describe, as he says, "with utmost seriousness the causal structure of the players and the plays in a fashion that sharply separates (as von Neumann's theory does not) causal and epistemic considerations". One of Belnap's new postulates characterizes causation in branching space-times. Using the notion of transition between an initial event and a scattered outcome event together with the notion of causal loci, Belnap defines the notion of joint responsibility of two agents. He concedes that his account does not cover joint action, which requires the
additional concept of joint intention. Belnap's account of action in terms of causation does not consider at all the intentions of agents. The main objective of the next chapter is to take them into account. In Chapter 15, Attempt, Success and Action Generation, Daniel Vanderveken presents a logic of agency where, as in contemporary philosophy, intentional actions are primary. In his view, any action that an agent performs unintentionally could in principle have been attempted. Moreover any unintentional action of an agent is generated by an intentional action of that agent. As strictly equivalent propositions are not the contents of the same attitudes, the logic of agency should distinguish intentional actions whose contents are different. For that purpose Vanderveken uses the resources of the predicative modal and temporal propositional logic presented in Chapter 9. His main purpose now is to enrich the logic of action with a new account of attempt and action generation. Unlike prior intentions, which are mental states, attempts are mental actions of a very specific kind that Vanderveken analyzes: they are personal, intrinsically intentional, free and also successful. (Whoever tries to make an attempt makes that attempt.) Like intentions, attempts have strong propositional content conditions. They are directed towards the present or the future, etc. Vanderveken explicates these features model-theoretically within ramified time. As before, coinstantaneous moments are logically related in models by virtue of actions of agents at these moments. Now, moments of time and histories are also logically related by virtue of attempts of agents. Attempts have conditions of achievement. Human agents sometimes attempt to do impossible things. However they are rational and cannot attempt to do what they believe to be impossible. Thanks to his account of subjective possibilities, Vanderveken can deal with unachievable attempts.
To each agent and moment there always corresponds in each model a non-empty set of coinstantaneous moments which are compatible, according to that agent, with the achievement of his attempts at that moment. He proceeds to a unified explication of attempt and action. In order that an agent succeed in doing things, it is not enough that he try and that these things occur. It is also necessary that they occur because of his attempt. Vanderveken uses the counterfactual conditional in order to define intentional causation and intentional actions. He explicates how attempts can succeed or fail, which attempts are the most basic actions and how they generate all other actions. Not all unintended effects of intentional actions are contents of unintentional actions, only those that are historically contingent and that the agent could have intended. So many events which happen to us in our life (e.g. our mistakes) are not really actions. Vanderveken accounts for the minimal rationality of
agents in explaining action generation. Agents cannot try to do things that they know to be impossible or necessary. Moreover agents have to minimally coordinate their knowledge and volition in trying to act in the world. He states the basic valid laws of his logic of action. In the usual account of one-place predication, where a general term serves to attribute a property to a particular of an independently given domain of objects, one takes for granted a conceptual framework which relies, among others, on the metaphysics of substance and attribute, and which is, furthermore, dependent on the availability of individuated objects. In Chapter 16, Pragmatic and Semiotic Prerequisites for Predication: A Dialogical Model, Kuno Lorenz considers the prepropositional state where the task of uttering a sentence and expressing a proposition is still to be achieved. He gives a rational reconstruction of the prerequisites for predication within a novel conceptual framework, a dialogical model, that is partly derived from ideas of Peirce and Wittgenstein. By relating the pragmatic and semiotic approaches of Peirce and Wittgenstein to a dialogical methodology, Lorenz presents a sequence of nested dialogical constructions. His purpose is to lead us from modeling simple activity to modeling the growth of more complex activities up to elementary verbal utterances. Lorenz uses dialogue, conceived as a generalized language-game, as a means of inquiry. In the spirit of Nelson Goodman, he says that neither particulars nor properties exist out there. Lorenz argues that the contrast between individuals and universals is not something that we discover by observing the world. It emerges from a process of objectivation which is part of the acquisition of action competence. This process is best understood if we look at it from the perspective of the agent-patient opposition. The agent performs the token of an action and looks at it from the I-perspective.
For him, action is a means to reach a goal. The patient recognizes an action-type and looks at it from a You-perspective. For him, action is an object among others. One has learned an action when one is able to go back and forth from one perspective to the other. This shift of perspective is exemplified in dialogue. Lorenz shows how the move of objectivation from action as a means to action as an object is accompanied by a split of the action into action particulars, whose invariants may be treated as kernels of universalia, and respective wholes, which are closures of the actualization of singularia. Kernels and closures taken together (form and matter in the tradition) make up a particular within a situation. Hence particulars, for Kuno Lorenz, are the product of a dialogical construction. As he puts it, "particulars may be considered to be half thought and half action".
A major innovation of Lorenz lies in the role he gives to the dialogical structure of utterances. He distinguishes between two different functions in acts expressing elementary propositions: the significative function of showing and the communicative function of saying. Communication takes place between the two protagonists of the utterance. In his view, when a speaker makes an act of reference, he shows something to a hearer. Similarly, when he predicates an attribute, he does that for a hearer. And even the ostensive function can involve a communicative component. Lorenz's account of predication is fine-grained. His conceptual framework enables him to distinguish between part, whole, aspect and phase. He gives an account not only of familiar elementary propositions in which a universal is predicated of a particular but also of other propositions in which a particular is seen as a part of a whole. Lorenz's analysis of predication covers both class-membership predication and mereological predication. This is a remarkable advance. The dialogical approach to logic and the theory of language games are part of the dynamic turn that logic took over the last thirty years. In Chapter 17, On How to Be a Dialogician, Shahid Rahman and Laurent Keiff present an overview of recent developments on dialogues and games. Their aim is to present the main features of the dialogical approach to logic. The authors distinguish three main approaches pursuing two targets: (1) the constructivist approach of Paul Lorenzen and Kuno Lorenz (1978) and (2) the game-theoretical approach of Jaakko Hintikka (1996) aim to study the dialogical (or argumentative) structure of logic. (3) The argumentation theory approach of Else Barth and Erik Krabbe (1982) is concerned with the logic and mathematics of dialogues and argumentation. It links dialogical logic with informal logic (Chaim Perelman, Stephen Toulmin).
Two further important lines of research attempt to combine the lines of the two groups: (4) the approach of Johan van Benthem (2001-04) aims to study interesting interfaces between logic and games as models for dynamic many-agent activities, and (5) Henry Prakken, Gerard Vreeswijk (1999) and Arno Lodder stress the argumentative structure of non-monotonic reasoning. Rahman and Keiff describe the main innovations of the dynamic approach from the standpoint of dialogical logic. They give a new content to key logical notions. There are illocutionary force symbols in the object language of dialogical logic. The two players (proponent and opponent) perform illocutionary acts with various forces in contributing to possible dialogues of dialogic. The first utterance is an assertion by the proponent which fixes the thesis in question. That assertion is defective if the speaker cannot defend its propositional content so as to win the game. Other moves have the forces of attacks
and defences. An attack is a demand for a new assertion. A defence is a response to an attack that justifies a previous assertion. The second utterance has to be an attack by the opponent. The third can be a defence or a counterattack by the proponent. And so on. In dialogical logic, as in Frege's Begriffsschrift, force is part of meaning. Utterances serve to perform illocutions with different forces and conditional as well as categorical assertions. Particle rules determine how one can attack and defend formulas containing logical constants, whereas structural rules determine the general course of a dialogue. Dialogical logic can formulate different logical systems by changing only the set of structural rules while keeping the same particle rules. It can also formulate different logics by introducing new particles. Thus classical and intuitionistic logics differ dialogically by a single structural rule determining to which attacks one may respond. The dialogical approach to logic makes it simple to formulate new logics by a systematic variation and combination of structural and particle rules. Notice that the structural rules determine how to label formulas — number of the move, player (proponent or opponent), formula, name of move (attack or defence) — and how to operate with these labelled formulas. The thesis advanced is valid when the proponent has a formal winning strategy: when he can succeed in defending that thesis against all possible allowed criticisms by the opponent. Rahman and Keiff show that the dialogical and classical notions of validity are equivalent under definite conditions. As in illocutionary and paraconsistent logics, speakers can assert in paraconsistent dialogic incompatible propositions without asserting everything. Certain kinds of inconsistency are forbidden by dialogical logic.
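The division of labour encoded in particle rules can be glossed with a toy fragment. The sketch below is our own drastic simplification, not Rahman and Keiff's system: it covers only conjunction and disjunction, omits structural rules, negation and repetition ranks, and settles atoms by a stipulated valuation. It records who chooses at each move: the opponent attacks a conjunction by picking the conjunct to be defended, while the proponent defends a disjunction by picking a disjunct.

```python
# Toy particle rules for the {and, or} fragment. Formulas are nested tuples,
# e.g. ("and", ("atom", "p"), ("atom", "q")). Under these simplifying
# assumptions the proponent has a winning strategy iff the thesis is true
# under the given valuation.

def proponent_wins(formula, valuation):
    op = formula[0]
    if op == "atom":
        # simplifying assumption: atoms are settled by the valuation
        return valuation[formula[1]]
    if op == "and":
        # opponent chooses the attack: the proponent must survive both choices
        return all(proponent_wins(sub, valuation) for sub in formula[1:])
    if op == "or":
        # proponent chooses the defence: one successful choice suffices
        return any(proponent_wins(sub, valuation) for sub in formula[1:])
    raise ValueError(f"unknown connective: {op}")

thesis = ("and", ("atom", "p"), ("or", ("atom", "q"), ("atom", "r")))
assert proponent_wins(thesis, {"p": True, "q": False, "r": True})
assert not proponent_wins(thesis, {"p": True, "q": False, "r": False})
```

The point of the illustration is only the assignment of choices to players; the interesting phenomena discussed in the chapter (intuitionistic restrictions, paraconsistent and connexive dialogic) arise from the structural rules that this sketch leaves out.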
Like relevant logic, connexive dialogic can discriminate trivially true conditionals from those where a determinate kind of meaning links the antecedent to the consequent. Each modal logic is distinguished by the characteristic properties of its accessibility relation between possible worlds. In dialogic, accessibility relations are defined by structural rules specifying which contexts are accessible from a given context. The authors show the great expressive power of dialogic as a frame by presenting a dialogical treatment of non-normal logics in which the law of necessitation does not hold. At the end, they advocate pluralism versus monism in logic. In Chapter 18, Some Games Logic Plays, Pietarinen takes a game-theoretical look at the semantics of logic. Game-theoretical semantics has been studied from both logical and linguistic perspectives. Pietarinen shows that it may be pushed in new directions by exploiting the resources of the theory of games. He focuses on issues that are of common interest for logical semantics and game theory. Among such topics
Pietarinen discusses concurrent versus sequential decisions, imperfect versus perfect and complete versus incomplete information. Furthermore, he draws comparisons between teams that communicate and teams that do not, agents' short-term memory dysfunctions such as forgetting of actions and of previous information, screening and signalling, and partial and complete interpretations. Finally, Pietarinen addresses the relevance of these games to pragmatics and its precursory ideas in Peirce's pragmaticism. The common reference point is provided by the Independence-Friendly (IF) logics which were introduced by Hintikka in the early 1990s. In contrast to the traditional conception of logic, the flow of information from one logically active component to another in formulas of IF logics may be interrupted. This gives rise to imperfect information in semantic games. It is worth investigating, as Pietarinen does, in which senses game-theoretical approaches throw light on pragmatically constrained phenomena such as anaphora. Following Hintikka's idea that language derives much of its force from the actual content of strategies, Pietarinen extends the semantic game framework to hyper-extensive forms where one can speak about strategies themselves in the context of semantic games that are played in a move-by-move fashion. He further argues that Peirce's pragmatic and interactive study of assertions antedates not only the account of strategic meaning, but also Grice's programme on conversational aspects of logic. The borderline between decision theory and game theory is one of the more lively areas of research today in the philosophy of action. Over the last ten years, the economist Robert Aumann renewed epistemic logic with his account of common knowledge, and philosophers and logicians such as Cristina Bicchieri, Richard Jeffrey, Wlodek Rabinowicz and Jordan Howard Sobel have made decisive contributions to the analysis of rational action.
In Chapter 19, Backward Induction Without Tears?, Sobel focuses on a kind of game whose solution hinges on a well-known pattern of reasoning: backward induction. The rules of the game under scrutiny are described in the following passage: "X and Y are at a table on which there are dollar coins. In round one, X can appropriate one coin, or two. Coins she appropriates are removed from the table to be delivered when the game is over. If she takes two, the game is over, she gets these, and Y gets nothing. If she takes just one, there is a second round in which Y chooses one coin or two. Depending on his choice there may be a third round in which it is X's turn to choose, and so on until a player takes two coins, or there is just one coin left and the player whose turn it is takes it."
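The backward-induction reasoning that the chapter scrutinizes can be computed directly for this game. The sketch below is a minimal model under our own payoff bookkeeping (each player simply maximizes the number of coins she collects): it evaluates each position from the end of the game backwards and recovers the standard conclusion that a rational X takes both coins in round one.

```python
# Backward induction for the coin game described above. A position is
# (coins left on the table, whose turn it is); its value is the pair of
# final payoffs (X's coins, Y's coins) under ideally rational play.

from functools import lru_cache

@lru_cache(maxsize=None)
def solve(coins, mover):          # mover: 0 = X, 1 = Y
    if coins == 1:                # one coin left: the mover simply takes it
        payoff = [0, 0]
        payoff[mover] = 1
        return tuple(payoff)
    # option 1: take two coins and end the game
    take_two = [0, 0]
    take_two[mover] = 2
    # option 2: take one coin and hand the move to the other player
    rest = solve(coins - 1, 1 - mover)
    take_one = [rest[0], rest[1]]
    take_one[mover] += 1
    # the mover picks whichever option maximizes her own payoff
    best = take_two if take_two[mover] >= take_one[mover] else take_one
    return tuple(best)

# Backward induction recommends that X take both coins at once:
assert solve(10, 0) == (2, 0)
```

At every position with two or more coins, taking one coin yields the mover exactly one coin (since the opponent, by the same reasoning, ends the game next move), so taking two is always at least as good; this is the regress that makes the game a test case for conditions like "knowledge compounded robustly forward of resilient rationality".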
Sobel distinguishes between weak and strong solutions to a game. A weak solution shows that players who satisfy certain conditions resolve a game somehow, without explaining how. By contrast, a strong solution shows how players reach the outcome. Conditions determine the level of rationality ascribed to the game players. Game theorists disagree about the rationality that should be granted to players, even in a theory which is intended to reflect the behaviour of idealized players. Consider the backward-induction terminating game described above. The question arises whether ideally rational and informed players in that game satisfy a strong knowledge condition. Rabinowicz would give a negative answer. He claims that it is not reasonable to expect the players to be stubbornly confident in their beliefs and incorruptible in their dispositions to rational behaviour. On the contrary, Sobel says that once we have granted that the game players are resiliently rational, we should also admit that past irrationality would not exert a corrupting influence on present play. Even though he does not share Rabinowicz's view, Sobel wonders about the possibility of finding an intermediate solution which would be less demanding than his initial condition — the condition of knowledge compounded robustly forward of resilient rationality — but which nevertheless "would enable reasoning on X's part to her choice to take both coins and end the game". He argues that ideally rational and well-informed players in the game would not have a strong solution to the game unless they satisfied demanding subjunctive conditions (involving counterfactual conditionals) which are not significantly different from "knowledge compounded robustly forward of resilient rationality". One can find in Sobel's contribution original and deep ideas on ideal game-theoretic rationality. Thus he investigates the consequences of holding prescience to be an ingredient of game-theoretical rationality.
Reasoning and computation play a fundamental role in mathematics and science. A primary purpose of logic is to state principles of valid inference and to formulate logical systems where as many logical truths as possible are provable by effective methods. The last part of the book, Reasoning and Computation in Logic and Artificial Intelligence, contains discussions on the matter. It is well known that material and strict implication, which are central notions for the very analysis of entailment and valid reasoning, lead to paradoxical laws in traditional logic. Among the so-called paradoxes of implication there is, for example, the law that a contradiction implies any sentence whatsoever. Do inconsistent theories really commit their proponents to asserting everything? This part of the book presents paraconsistent and relevant logics which, like intuitionist logic, advocate rival conceptions of implication and of valid reasoning. It also discusses important issues for artificial intelligence. Human agents take decisions and act in situations where they have an imperfect knowledge of what is happening, and they do many things while relying on imprecise perceptions. They are not certain of data and they can revise their conclusions. Which new methods should logic and artificial intelligence use in order to deal with uncertainty and imprecision in computing data? Developing insights due to Vasiliev and Jaśkowski, da Costa and Asenjo invented paraconsistent logic. In Chapter 20, On the Usefulness of Paraconsistent Logic, Newton da Costa, Jean-Yves Béziau and Otávio Bueno examine intuitive motivations to develop a paraconsistent logic. These motivations are formally developed using semantic methods where, in particular, bivaluations and truth-tables are used to characterize paraconsistent logic. The authors then discuss the way in which paraconsistent logic, as opposed to classical logic, demarcates inconsistency from triviality. (A theory is trivial when every sentence in the theory's language is a theorem.) They also examine why in paraconsistent logic one cannot infer everything from a contradiction. As a result, paraconsistent logic opens up the possibility of investigating the domain of what is inconsistent but not trivial. Why is it desirable to rescue inconsistent theories from the wreck? The reason is that in practice we live with inconsistent theories. From 1870 to 1895 Cantor derived important theorems of set theory from two quite obvious principles: the postulate of extensionality and the postulate of comprehension. Yet around 1902, Zermelo and Russell discovered a hidden inconsistency in the second principle. However, when the shaky foundations of set theory were brought to light, mathematicians and logicians did not abandon the whole body of set theory.
They decided instead to search for a way of correcting the faulty postulate and found several solutions (Russell's theory of types, Zermelo's separation axiom, etc.). This historical fact shows that an inconsistent theory can be useful, that working mathematicians do not derive anything whatever from an inconsistency, and that we need a logic if we want to continue to use reasoning during the span of time which elapses after the discovery of an inconsistency and before the discovery of a solution which removes the inconsistency. The inventors of paraconsistent logic intended to provide such a logic. Da Costa, Béziau and Bueno briefly consider applications of paraconsistent logic to various domains. In mathematics they consider the formulation of set theory, in artificial intelligence the construction of expert systems, and in philosophy theories of belief change and rationality. With these motivations and applications in hand, the usefulness and legitimacy of paraconsistent logic become hard to deny.
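The point that a contradiction need not trivialize a theory can be illustrated with a small calculation. The sketch below uses Priest's three-valued logic LP rather than da Costa's own bivaluation semantics (an assumption made purely because LP's truth-tables are compact): once "both true and false" is a designated value, A ∧ ¬A no longer entails an arbitrary B, while ordinary inferences such as conjunction elimination remain valid.

```python
# A paraconsistent counterexample to "ex contradictione quodlibet", computed
# in Priest's logic LP (not the system of the chapter under discussion).
# Values: 1 = true, 0.5 = both true and false, 0 = false.
# Designated values (those that count as "asserted"): 0.5 and 1.

from itertools import product

def neg(v):      return 1 - v        # negation swaps true and false, fixes "both"
def conj(v, w):  return min(v, w)    # conjunction takes the worse value

def entails(premise, conclusion, atoms=("A", "B")):
    """Valid iff no valuation makes the premise designated and the
    conclusion undesignated. premise/conclusion map a valuation to a value."""
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        valuation = dict(zip(atoms, vals))
        if premise(valuation) >= 0.5 and conclusion(valuation) < 0.5:
            return False
    return True

contradiction = lambda s: conj(s["A"], neg(s["A"]))   # A and not-A
arbitrary     = lambda s: s["B"]

# Classically valid, paraconsistently invalid (counterexample: A = 0.5, B = 0):
assert not entails(contradiction, arbitrary)
# Ordinary reasoning survives, e.g. conjunction elimination:
assert entails(lambda s: conj(s["A"], s["B"]), lambda s: s["A"])
```

This is exactly the demarcation the chapter stresses: the inconsistent valuation A = 0.5 is designated without forcing every sentence to be designated, so inconsistency is separated from triviality.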
According to relevance logic, what is unsettling about the so-called paradoxes of implication is that in each of them the antecedent seems irrelevant to the consequent. Following the ideas of precursors such as Ackermann and Anderson & Belnap, relevance logicians tend to reject laws that commit fallacies of relevance. The most basic system of relevance logic is the system B+ that Paul Gochet, Pascal Gribomont and Didier Rosetto consider in Chapter 21, Algorithms for Relevant Logic. Their main purpose is to investigate whether the connection method can be extended to that basic system. A connection proof proceeds like a refutation constructed by tableaux or sequents: it starts with the denial of the formula to be proved and, by applying reduction rules that stepwise decompose the initial formula, attempts to establish that this denial leads to a contradiction. The connection method, which has recently been extended to modal and intuitionistic logic, is much more efficient than the sequent calculi and the tableau method. So it is very useful to extend it to other non-classical logics, especially those used in artificial intelligence. Gochet, Gribomont and Rosetto begin their chapter by presenting the basic axiomatic system B+ of relevance logic. They also briefly present Bloesch’s tableau method for B+ and then adapt Wallen’s connection method to the system B+. The authors give a decision procedure which provides finite models for any satisfiable formula of system B+. They also prove the soundness and the completeness of their extension. This is an important logical result. Extensional languages with a purely denotational semantics are of very limited interest in cognitive science. Intensional object languages are needed in artificial intelligence as well as in philosophical logic and semantics to deal with the thoughts of agents who are often uncertain.
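Returning for a moment to the paradoxes of implication mentioned above: two stock examples can be checked mechanically by classical truth tables. The sketch below assumes nothing beyond classical two-valued semantics; it shows why the formulas are classically valid, which is exactly what relevance logicians such as the authors of system B+ find objectionable.

```python
from itertools import product

def implies(a, b):
    # Classical material implication: false only when a is true and b false.
    return (not a) or b

def is_classical_tautology(formula):
    # Check the formula under every assignment to the atoms p and q.
    return all(formula(p, q) for p, q in product([True, False], repeat=2))

# Two "paradoxes of implication": both are classical tautologies even though
# the antecedent seems irrelevant to the consequent.
positive_paradox = lambda p, q: implies(p, implies(q, p))  # p -> (q -> p)
ex_falso         = lambda p, q: implies(p and not p, q)    # (p & ~p) -> q

print(is_classical_tautology(positive_paradox))  # True classically
print(is_classical_tautology(ex_falso))          # True classically
# Relevance logics reject both as theorems: q plays no role in the
# first formula, and p plays no role in the second.
```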
However, many natural intensional properties of artificial and natural languages are hard to compute algorithmically. In Chapter 22, Logic, Randomness and Cognition, Michel de Rougemont shows that randomized algorithms are necessary to represent intensions adequately and to verify certain specific relations in computer science. There are two main intensional aspects to take into consideration in artificial intelligence, namely the complexity and the reliability of data. When data are uncertain, the advantage of randomized algorithms is very clear according to de Rougemont, for both the treatment of uncertainty and the complexity of the computation can then be improved. de Rougemont concentrates on the reliability of queries in order to illustrate this advantage. This important contribution to “exact philosophy” fits in with the previous chapter, in which complexity issues were also raised. Perceptions play a key role in human recognition, attitudes and action. In Chapter 23, Computing with Numbers to Computing
with Words — From Manipulation of Measurements to Manipulation of Perceptions, Lotfi Zadeh provides the foundations of a computational theory of perception based on the methodology of computing with words. There is a deep-seated tradition in computer science of striving for progression from perceptions to measurements, and from the use of words to the use of numbers. Why and when, then, should we compute with words and perceptions? As Zadeh points out, there is no other option when precision is desired but the needed information is not available. Moreover, when precision is not needed, the tolerance for imprecision can be exploited to achieve tractability, robustness, simplicity and low solution cost. Notice that human agents have a remarkable capability for performing a wide variety of actions without any need for measurements or computations. In carrying out actions like parking a car and driving, we employ perceptions — rather than measurements — of distance, direction, speed, count, likelihood and intent. Because of the bounded ability of sensory organs to resolve detail, perceptions are intrinsically imprecise. In Zadeh’s view, perceived values of attributes are fuzzy and granular — a granule being a clump of values drawn together by indistinguishability, similarity, proximity or functionality. In this perspective, a natural language is a useful system for describing perceptions. In Zadeh’s methodology, computation with perceptions amounts to computing with words and sentences drawn from natural language that label and describe perceptions. Computing with words and perceptions provides a basis for an important generalization of probability theory. Zadeh’s point of departure is the assumption that subjective probabilities are, basically, perceptions of likelihood. A key consequence of this assumption is that subjective probabilities are f-granular rather than numerical, as they are assumed to be in standard bivalent-logic-based probability theory.
In the final analysis, Zadeh’s theory could open the door to adding to any measurement-based theory the capability to operate on perception-based information. Fuzzy logic, even more than relevance and paraconsistent logics, had to overcome deep-seated prejudice and hostility. Nowadays the hostility has vanished and the merits of fuzzy logic are widely recognized. Like other non-standard logics, fuzzy logic brings together concern for logic, for thought and for action.
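The core idea of computing with words can be glimpsed in miniature: a linguistic label such as “fast” is modelled as a fuzzy set of values, and a perceived quantity is matched to the label that fits it best. The sketch below uses simple triangular membership functions and hypothetical speed granules of my own choosing; Zadeh’s full methodology (f-granularity, generalized constraints) goes far beyond this.

```python
def triangular(a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical granules for perceived driving speed, in km/h.
# Each word labels a clump of values drawn together by similarity.
speed_words = {
    "slow":   triangular(0, 30, 60),
    "medium": triangular(40, 70, 100),
    "fast":   triangular(80, 120, 200),
}

def describe(speed):
    """Return the word whose fuzzy set the measured speed fits best."""
    return max(speed_words, key=lambda w: speed_words[w](speed))

print(describe(55))   # "medium": membership 0.5, versus 0.17 for "slow"
print(describe(130))  # "fast": membership 0.875
```

Note that the granules deliberately overlap: a speed of 55 km/h belongs partly to “slow” and partly to “medium”, reflecting the intrinsic imprecision of perception that Zadeh emphasizes.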
I
REASON, ACTION AND COMMUNICATION