Philosophy of Language and Linguistics: Volume I: The Formal Turn; Volume II: The Philosophical Turn 9783110330472, 9783110330106


English Pages 708 [426] Year 2010




Piotr Stalmaszczyk (Ed.) Philosophy of Language and Linguistics Volume I: The Formal Turn


Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de

North and South America: Transaction Books, Rutgers University, Piscataway, NJ 08854-8042; [email protected]

United Kingdom, Ireland, Iceland, Turkey, Malta, Portugal: Gazelle Books Services Limited, White Cross Mills, Hightown, Lancaster LA1 4XS; [email protected]

Delivery for France and Belgium: Librairie Philosophique J. Vrin, 6, place de la Sorbonne, F-75005 Paris. Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18. www.vrin.fr

2010 ontos verlag, P.O. Box 15 41, D-63133 Heusenstamm
www.ontosverlag.com
ISBN 978-3-86838-070-5

No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for the exclusive use of the purchaser of the work.

Printed on acid-free paper, ISO-Norm 970-6
FSC-certified (Forest Stewardship Council)
This hardcover binding meets the International Library standard
Printed in Germany by buch bücher dd ag

Contents

Philosophy of Language and Linguistics: The Formal Turn. Preface (Piotr Stalmaszczyk) 1
Philosophy, Linguistics and Semantic Interpretation (Christian Bassac) 17
An Unresolved Issue: Nonsense in Natural Language and Non-Classical Logical and Semantic Systems (Elżbieta Chrzanowska-Kluczewska) 43
Varieties of Context-Dependence (Tadeusz Ciecierski) 63
The Logos of Semantic Structure (Marie Duží, Bjørn Jespersen and Pavel Materna) 85
The Good Samaritan and the Hygienic Cook: A Cautionary Tale About Linguistic Data (Chris Fox) 103
The Meaning of Multiple Quantified Sentences: Where Formal Semantics Meets Psycholinguistics (Justyna Grudzińska) 119
The Hybrid Theory of Reference for Proper Names (Filip Kawczyński) 137
On the Nature of Statistical Language Laws (Agnieszka Kułacka) 151
Vagueness and Contextualism (Joanna Odrowąż-Sypniewska) 169
The Myth of Semantic Structure (Jaroslav Peregrin) 183
Scalar Implicatures, Communication and Language Evolution (Salvatore Pistoia Reda) 199
Semantics and Contextuality: The Case of Pia’s Leaves (Stefano Predelli) 215
Is Logico-Semantical Analysis of Natural Language Expressions a Translation? (Jiří Raclavský) 229
Beyond the Fregean Myth: The Value of Logical Values (Fabien Schang) 245
Modal Calculus of Illocutionary Logic (Andrew Schumann) 261
‘Subjectivity’ in Philosophy and Linguistics (Barbara Sonnenhauser) 277
Gottlob Frege, Philosophy of Language, and Predication (Piotr Stalmaszczyk) 295
Order (William J. Sullivan) 315
Asymmetrical Semantics (Mieszko Tałasiewicz) 329
Truth: An Anti-realist Adequacy Condition (Luca Tranchini) 347
Belief Reports: Defaults, Intentions and Scorekeeping (Giacomo Turbanti) 363
On Truth in Time (Bartosz Więckowski) 381
Index of names 411
Index of subjects 413

Piotr Stalmaszczyk
University of Łódź
[email protected]

Philosophy of Language and Linguistics: The Formal Turn. Preface

… it is a task of philosophy to break the power of words over the human mind
(Frege, Begriffsschrift, Preface VI)

0. Introduction

Recent renewed interest in the philosophy of language has resulted in several important handbooks (e.g. Prechtl 1998; Taylor 1998; Lycan 2000; Miller 2007), anthologies of classical studies (e.g. Martinich, ed. 2007; 2009), and collections of in-depth overviews (e.g. Devitt and Hanley, eds. 2006; Lepore and Smith, eds. 2006). According to Davies (2006: 29), the “foundational questions in philosophy of language concern the nature of meaning, understanding, and communication”, which basically means that “philosophers are interested in three broad aspects of language: syntax, semantics and pragmatics” (Martinich 2009: 1). Papers gathered in this volume investigate the complex relations between philosophy of language and linguistics, viewed as independent disciplines which nevertheless influence one another. In the development of philosophy of language over the last 100 years several different stages, or ‘turns’, may be distinguished, from early associations with metaphysics and logic, through assimilation of developments in analytic philosophy and formal logic, to influences from modern linguistics and cognitive science. The ‘linguistic turn’ [1] resulted from ‘linguistic philosophy’, understood as “the view that philosophical problems are problems which may be solved (or dissolved) either by reforming language, or by understanding more about the language we presently use” (Rorty 1967: 3). [2] Oversimplifying somewhat, it may be claimed that attempts at reforming language led to considerable development and application of formal tools in linguistic analysis, and hence triggered the ‘formal turn’, whereas elucidations concerning language use may result in the ‘philosophical turn’. An important feature of the ‘formal turn’ is associated with Richard Montague’s rejection of “the contention that an important theoretical difference exists between formal and natural languages” (Montague 1970a: 188). Montague’s approach to grammar resulted in what may be dubbed ‘formal philosophy of language’, in which syntax, semantics and pragmatics are considered branches of philosophy of language. Furthermore, Montague considered it possible to “comprehend the syntax and semantics of both kinds of languages within a single natural and mathematically precise theory” (Montague 1970b: 222). This volume investigates the ‘formal turn’, initiated by Gottlob Frege, with further developments associated with the work of Bertrand Russell, (early) Ludwig Wittgenstein, Rudolf Carnap, Jan Łukasiewicz, Kazimierz Ajdukiewicz, Alfred Tarski, W. V. O. Quine, Richard Montague, Donald Davidson and Pavel Tichý, to name the most prominent philosophers and logicians. [3]

[1] The term is usually associated with the influential collection of essays edited by Richard Rorty (1967). Nevertheless, Dummett (1993: 5) sees the origins of the linguistic turn already in Frege’s early work.

[2] It needs to be added, however, that in the retrospective essay included in the 1992 edition of his anthology, Rorty almost apologizes for attracting philosophers’ excessive attention to language, and remarks: “What I find most striking about my 1965 essay is how seriously I took the phenomenon of the ‘linguistic turn,’ how portentous it then seemed to me” (Rorty 1992: 371).

[3] A subsequent volume will tackle the ‘philosophical turn’ (Stalmaszczyk, ed., forthcoming). The distinction between the ‘formal’ and the ‘philosophical’ is of necessity arbitrary: papers in the first volume offer deep philosophical insights, and likewise, in articles from the second volume rigorous formalism is often implicit (if not explicit). The editor alone is responsible for the division of the contributions. Another volume, focused on the ‘cognitive turn’, is also planned.

1. The formal turn and Gottlob Frege

A possible – though indirect – way of approaching vital issues in philosophy of language is to view them from the perspective of other disciplines, such as, for example, logic. Inquiry into language from the perspective of logic has often resulted in interesting theoretical claims, incorporated subsequently into linguistic and philosophical research. One of the most important logicians whose influence upon the philosophy of language and modern linguistics, especially semantics (but also pragmatics), has been profound was Gottlob Frege (1848-1925), the co-founder of analytic philosophy. Frege is most often associated with developments in modern predicate logic, with devising a symbolic language for logic (the Begriffsschrift), providing the seminal analysis of the meaning of an expression, a semantic analysis of identity statements, and formulating the context principle; he is also credited with putting forward the assumptions that led to the formulation of the compositionality principle. [4] Paul Pietroski has recently remarked that Frege “bequeathed to us some tools – originally designed for the study of logic and arithmetic – that can be used in constructing theories of meaning for natural languages” (Pietroski 2004: 29-30). The ‘Fregean tools’ still prove useful in analyzing not only the fundamental issues of sense and meaning, but also such traditional linguistic notions as predication, and his “philosophy of language […] remains intensely vital today. Not since medieval times has the connection between logic and language been so close” (Mendelsohn 2005: xviii). Frege did not consider spoken language a sufficiently precise instrument for logic. He pointed to the need for creating a language made up of signs, the concept-script (or ‘ideography’), clear of any double meaning; he also claimed that “[t]he main task of the logician is to free himself from language and to simplify it. Logic should be the judge of languages” (Letters to Husserl, 1906, 303). [5] Elsewhere Frege stated that “[i]nstead of following grammar blindly, the logician ought rather to see his task as that of freeing us from the fetters of language” (Logic, 244). As observed by Carl (1994: 54), this “struggle against language and grammar” directs the logician’s concern to the issue of the thought expressed by a sentence, and aims at discovering the “logical kernel”. Papers in this volume show Frege’s long-lasting legacy, especially in different attempts at investigating formalized language; however, equally interesting and fruitful are numerous recent developments departing from Fregean logic and semantics.

[4] The classic study on Frege and philosophy of language is Dummett (1981). For more recent discussion of Frege’s influence upon modern thought, including linguistics and philosophy of language, see Kenny (1995), Beaney (1997), Sainsbury (2002), Pietroski (2004), Burge (2005) and Mendelsohn (2005).

[5] Frege’s words were echoed, in a somewhat different methodological context, by Jan Łukasiewicz, who elucidated the idea of formalism and observed that every “scientific truth, in order to be perceived and verified, must be put into an external form intelligible to everybody”, and that this aim “can be reached only by means of a precise language built up of stable, visually perceptible signs. Such a language is indispensable for any science” (Łukasiewicz 1957: 15).

2. Contents of the Volume

The collection brings together contributions by philosophers, logicians and linguists, representing different theoretical orientations but united in outlining the common ground necessary for further research in philosophy of language and linguistics.

Christian Bassac considers some relevant proposals aimed at giving an appropriate semantic interpretation of a sentence. He shows that although philosophers and linguists agree that some kind of logical form is needed, they differ – often considerably – in their conception of this logical form. Bassac demonstrates that Categorial Grammars (in the tradition of Ajdukiewicz and Leśniewski) and the Lambek Calculus, initially developed as logical formalisms, despite their limitations provide adequate intuitions to allow the emergence of systems that obviate the “translation” problem which both philosophers and linguists face in the construction of the logical form. A substantial part of the article discusses Linear Logic (an enrichment of the Lambek Calculus), and shows how this formalism deals with three precise linguistic phenomena: non-adjacent types, morphological agreement and coercion phenomena.

Elżbieta Chrzanowska-Kluczewska tackles the issue of nonsense in natural language as well as in non-classical logical and semantic systems. Her general claim is that the “philosophy of modern linguistics” eventually ought to determine what specific “philosophies of language” are needed in solving linguistic problems. She inquires about the utility of non-standard systems of logic developed by the Lwów-Warsaw school of philosophical and mathematical logic (J. Łukasiewicz, S. Leśniewski, Cz. Lejewski, S. Jaśkowski, A. Mostowski) in considering the intricacies of natural semantics, especially in analysing fictional discourse and figuration. She observes that certain problems in stylistics and poetics disclose the limitations of more traditional methodologies. The case study is devoted to catachresis, a far-fetched metaphor difficult to interpret or verging on nonsense. Chrzanowska-Kluczewska shows that Possible-Worlds Semantics and Game Semantics are among the more interesting paradigms that can be postulated to deal with this figure (or linguistic phenomenon). Both these models draw heavily from philosophy and logic.

Tadeusz Ciecierski investigates varieties of context-dependence and context-sensitivity. He observes that an adequate theory of context-sensitivity must take into account the fact that several properties of linguistic signs are somehow context-dependent. In his paper, Ciecierski sketches a functional approach to context-sensitivity and combines it with the idea of the parametrization of context, an idea that dates back to the work of Y. Bar-Hillel, D. Scott and R. Montague. He compares the functional taxonomies of contexts with two other approaches present in the philosophical literature: John Perry’s distinction between pre-semantic, semantic and post-semantic contexts, and Eros Corazza’s non-functional distinction between narrowly and broadly conceived context.
Ciecierski’s work also briefly discusses three concepts of derivative context-dependence: definitional, relational, and analytical.

The contribution by Marie Duží, Bjørn Jespersen and Pavel Materna is a plea for a realist procedural semantics. Their approach stands at variance with both denotational semantics (such as model theory) and pragmatist semantics (such as inferentialism). The authors propose a neo-Fregean semantics, and argue in favour of a robustly realist notion of semantic structure based on the hyperintensional procedural semantics of Transparent Intensional Logic (as developed by Pavel Tichý). The paper shows how to decompose the respective semantic structures of a mathematical sentence and two empirical sentences by means of annotated trees in keeping with a ramified type theory. The resulting logos of semantic structure turns out to consist in what is known as logical form, which details the logical operations involved in combining meaning-imbued atoms or simples into meaning-imbued molecules or complexes. This notion of semantic structure offers a solution to Russell’s old problem of the unity of the proposition.

The principal assumption made by Chris Fox is that a formal analysis of obligations should seek to account for obligations as expressed in natural language. In particular, he does not assume that the primary objective is to formalize some pure, abstract notion of obligation, and then attribute any difficulties to the imperfect nature of natural language. In his discussion Fox takes into consideration the so-called Good Samaritan Paradox (as formulated by Arthur Prior), and argues that the problematic aspects of the behaviour of “deontic” examples are in reality specific instances of essentially non-deontic phenomena. The author also advocates taking into consideration results from linguistics when using natural language examples to motivate analyses in formal semantics and philosophical logic.

Justyna Grudzińska focuses on the linguistic phenomenon of multiple quantification. Her principal aim is to defend the grammatical ambiguity hypothesis. She argues against the position of a unitary semantics, in which ambiguous sentences encode a single interpretation and a pragmatic (context-dependent) derivation is essential in arriving at the other readings. In her attempt to provide a plausible account of semantic (scopal) ambiguity, Grudzińska uses a combination of approaches and methods, including theories of grammar (such as Chomsky’s Government and Binding theory), tools of mathematical logic, and results of psycholinguistic experiments. She shows that instead of giving a traditional enumeration of sentence interpretations or word senses, the idea is to relate senses to one another in one coherent structure. This line of thinking is represented in both computational linguistic and psycholinguistic approaches.

Filip Kawczyński presents the main ideas of the Hybrid Theory of reference for proper names (as defined by Gareth Evans). The theory arose as a response to Descriptivism and to Kripke’s Causal Theory. The Hybrid Theory agrees with Descriptivism that the intentional content associated with a name serves a significant function in determining the reference of the name; it follows the Kripkean theory in holding that there are communication chains of reference-exchange. The author offers some additions to the theory, especially connected with what he calls “mock names”, i.e. expressions that look like proper names but are in fact nothing more than abbreviations of descriptions used attributively.

Agnieszka Kułacka considers the nature of language laws, with particular focus on statistical language laws. She discusses the notion of a law of science and describes the types of laws with regard to language laws, paying attention to qualitative and quantitative, synchronic and diachronic, deterministic and statistical, and empirical and theoretical laws. She exemplifies the contemporary methods of investigating language laws with the Menzerath-Altmann law, which states that the longer the language construct, the shorter its constituents. On the syntactic level, the law can be read as stating that, statistically, the longer the sentence (measured in the number of its clauses), the shorter the average length of its clauses (measured in words). The author verifies the law for fragments of Polish and English syntax.

Joanna Odrowąż-Sypniewska discusses vague expressions (such as “tall”, “rich”, “bald”). One of the most characteristic features of such expressions is that they seem to be tolerant: if two objects differ only marginally in the relevant respect, then if one is in the extension of a given vague predicate, the other should be as well.
This feature makes vague expressions susceptible to sorites paradoxes (such as the Bald Man paradox or the Heap paradox). Recently a new – contextualist – account of vagueness has been put forward that is supposed to solve the paradox. The author assesses two contextualist theories of vagueness (Fara’s interest-relativity solution and Shapiro’s conversational solution). She shows their deficiencies and suggests that subvaluation is the most adequate logic for the contextualist account proposed by Shapiro. The logic underlying the subvaluation theory is a paraconsistent logic: whereas supervaluationists argue that borderline statements are devoid of truth value, subvaluationists claim that statements concerning borderline cases are both true and false.

Jaroslav Peregrin challenges the myth of semantic structure, that is, the common contention that behind the overt, syntactic structure of an expression there lurks a covert, semantic one (also referred to as the logical form). According to the author, this myth has been influenced by both Russell’s concept of logical form and Chomsky’s investigation into levels of representation. Peregrin claims that this contention results from a mere confusion, and that the usual notion of semantic structure, or logical form, arises when certain properties of the tools of linguistic analysis are unwarrantedly projected into what is analyzed. He also stresses that when natural language is translated into a logical language of a very simple structure, discrepancies are bound to arise, but turning this fact into a fact about language is tantamount to exchanging the train of empirical linguistics for that of speculative metaphysics.

Salvatore Pistoia Reda deals with scalar implicatures. According to the Grice-inspired view, derivation of scalar implicatures occurs at the root of the sentence, after the compositional computation. On the other hand, theorists like Chierchia argue that scalar implicatures can occur in embedded contexts, in parallel with the semantic computation. The first approach is referred to as globalism, the second as localism. A third approach, by Recanati, shares with the localists the account of embeddability, and with the globalists the pragmatic interpretation of implicatures. Adopting an evolutionary standpoint, the author argues against Recanati’s approach. He also discusses the recent linguistic debate on language evolution. According to Chomsky, considering language as an adaptation for communication is far too vague a claim to be addressable, and it fails to recognize the distinction between questions of computation and questions of evolution. Recanati fails to recognize the same distinction, and thus the author rejects his mixed approach.

Stefano Predelli defends a response to a classic contextualist argument against the traditional paradigm in truth-conditional semantics. According to that argument, traditional semantics fails to take into account certain truth-conditionally relevant forms of contextuality, not reducible to classic forms of either ‘pre-semantic’ or ‘meaning-governed’ contextual dependence. The response presented by Predelli grants the contextualist contentions that, in the cases under discussion, ambiguity, ellipsis, indexicality, or classic Gricean maneuvers are of no relevance. However, he counters that the intuitions put forth by the contextualists are naturally assimilable to standard forms of pre-semantic contextuality, and are thus not problematic from the traditional semantic standpoint.

Jiří Raclavský rejects in his contribution the thesis that logico-semantical analysis of natural language amounts to mere translation into a formal language. He claims that this thesis has at least two unacceptable consequences. Firstly, to explain the meaning of the formal expression which translates a natural language expression, one has to translate it into yet another language, and thus an infinite regress of translations arises. Secondly, the translation does not disclose the meaning (it indicates only the sameness of meanings), which is a serious drawback because the semanticist’s aim is to explicate meanings. The discussion is conducted with reference to Tichý’s Transparent Intensional Logic. In addition to the criticism of the translational thesis, the author offers an alternative explanation of the typical findings of semanticists (juxtapositions of natural and formal expressions) which fits the idea that logico-semantical analysis of natural language should provide pairs of a natural language expression and its meaning.

Fabien Schang investigates the ‘Fregean Axiom’ (as described by Roman Suszko), according to which the reference of a sentence is a truth value. In contrast to this referential semantics, Schang constructs a use-based formal semantics, Question-Answer Semantics, a non-Fregean many-valued logic in which the logical value of a sentence is not its putative referent but the information it conveys. In this formal approach the meaning of any sentence is an ordered n-tuple of yes-no answers to corresponding questions. This algebraic semantics departs from Frege’s truth-valuations while making use of logical values as the referents of sentences.
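The general idea behind such answer-tuple semantics can be made concrete with a small sketch. The two questions, the four values, and the connective definitions below are illustrative assumptions only, following Belnap-style four-valued logic as one familiar instance; they are not a reconstruction of Schang’s own Question-Answer Semantics:

```python
# A toy "question-answer" semantics: the logical value of a sentence is an
# n-tuple of yes/no answers to corresponding questions. Here n = 2, with the
# questions "can the sentence be affirmed?" and "can it be denied?".
# The connectives below follow Belnap-style four-valued logic; this is a
# sketch of the general idea, not Schang's actual system.

from typing import NamedTuple

class Value(NamedTuple):
    affirmed: bool  # answer to "can the sentence be affirmed?"
    denied: bool    # answer to "can the sentence be denied?"

TRUE  = Value(True, False)   # affirmed only (classical truth)
FALSE = Value(False, True)   # denied only (classical falsity)
BOTH  = Value(True, True)    # glut: affirmed and denied
NONE  = Value(False, False)  # gap: neither affirmed nor denied

def neg(v: Value) -> Value:
    # Negation swaps the answers: denying p amounts to affirming not-p.
    return Value(v.denied, v.affirmed)

def conj(v: Value, w: Value) -> Value:
    # A conjunction is affirmable if both conjuncts are, deniable if either is.
    return Value(v.affirmed and w.affirmed, v.denied or w.denied)

# On the two classical values the connectives behave classically:
assert neg(TRUE) == FALSE
assert conj(TRUE, FALSE) == FALSE
# The non-classical values show the extra expressive room of the tuples:
assert neg(BOTH) == BOTH
assert conj(TRUE, BOTH) == BOTH
```

The point of the sketch is only that, once values are tuples of answers rather than a single Fregean truth value, connectives operate on the answers componentwise, which is the structural feature the paper exploits.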

Andrew Schumann discusses the modal calculus of illocutionary logic. The aim of illocutionary logic is to explain how context can affect the meaning of certain special kinds of performative utterances. A logic of speech acts cannot satisfy the Fregean approach in general, hence the need for a non-Fregean formal analysis. Additional support for this claim comes from social constructivism (developed by Berger and Luckmann), according to which the content of social acts and the content of performances of propositions are not physical facts; therefore performances cannot be evaluated as either true or false. Schumann therefore introduces a many-valued interpretation of illocutionary forces, understood as modal operators, and constructs a non-Archimedean valued logic for formalizing illocutionary acts. This formalization could be applied in model-theoretic semantics of natural language and in natural language programming.

Barbara Sonnenhauser focuses on the notion of ‘subjectivity’ from a primarily linguistic (but also philosophical) perspective. She observes that in dealing with subjectivity, linguistics is confronted with basically the same problems as philosophy. These problems stem mainly from the underlying dualistic thinking rooted in classical Aristotelian logic. Her paper proposes a triadic redefinition of subjectivity in terms of Gotthard Günther’s transclassical logic and Charles S. Peirce’s triadic sign conception. This redefinition of subjectivity and its application to language is exemplified with the category of ‘indirectivity’ in Bulgarian. In this account, the conception of emerging subjectivity in language differs from traditional approaches: subjectivity is not to be located in meaning components; it does not consist in the relatedness of a specific element’s meaning or interpretation to some subject. It rather arises out of the sign process itself – more specifically, out of linguistic elements triggering a process of reflection, of abductive reasoning.

Piotr Stalmaszczyk discusses in his paper selected problems in the philosophy of language and linguistics, as exemplified by Frege’s approach to functions and arguments. He investigates the relevance of this approach for contemporary linguistics, in particular generative grammar. Though Fregean semantics is not concerned with natural language categories, Frege’s line of reasoning (especially the distinction between saturated and unsaturated functions) may be applied to analyzing predication as a grammatical relation. In addition, the paper offers a preliminary classification of predication types into thematic, structural and propositional predication.

William J. Sullivan observes that linguistic theories during the past three centuries, partly because of their interests, have generally failed to notice that the linear order in linguistic output (texts) is a problem that requires explanation. Recent interdisciplinary research has shown the non-linear nature of the cognitive store, making the problem of (non)linearity even more pressing. Sullivan attempts to demonstrate that a black-box analysis of a problem of anataxis in Russian shows that a relational network approach resolves the problem within the linguistic system itself.

The paper by Mieszko Tałasiewicz is an attempt to sketch out a theoretical paradigm capable of accommodating a number of competing accounts of various problems belonging to the theory of language. The author refers to this paradigm as ‘Asymmetrical Semantics’, since, as he observes, an adequate semantics of a natural language must be asymmetrical: it must describe differently the correspondence relations between language and world and between world and language. He further distinguishes two kinds of discourse situations: A-situations, where the object is given and the use of a word is a reaction to it, and D-situations, where the word is given and the corresponding object has to be identified. A-semantics and D-semantics are, respectively, the semantic theories which adequately describe the correspondence relations in A-situations and D-situations. Asymmetrical Semantics is not in itself a solution to any of the specific problems, but rather a way of ordering the existing solutions and defining the scope of their application.
A definitive assessment of the usefulness of this paradigm would require that many of the existing approaches be analyzed with respect to their relationship to the principle of classification contained in the paradigm.

Luca Tranchini attempts to show that the notion of truth naturally finds its place in the anti-realist, proof-theoretic approach to semantics. He starts from Dummett’s analysis of the so-called paradox of deduction, namely the tension between two crucial features of inference: validity and usefulness (or epistemic fruitfulness). He reconsiders the principle according to which an inference is valid if and only if it preserves the truth from the premises to the conclusion. In the light of the independent account of the notion of validity offered by anti-realism, the principle can be taken as the cornerstone of an explication of the notion of truth in the proof-theoretic framework.

Giacomo Turbanti discusses two dynamic approaches to semantics, Discourse Representation Theory and Default Semantics. He shows that they provide effective tools for representing how speakers handle meanings in linguistic practices. The paper applies these approaches to the analysis of belief reports. Despite their benefits, the theories that support these representational advances may themselves be question-begging from a philosophical point of view. Turbanti further claims that Brandom’s remarks on the normative character of intentional content offer an important contribution, bringing into focus how such representational improvements can be developed into fully acceptable answers to philosophical questions about semantics.

Bartosz Więckowski outlines in his paper an aboutness-free account of truth in time and develops an associative substitutional semantics for the first-order tense-logical fragment of English. In associative semantics, the truth of an atomic sentence is explained in terms of the mutual matching of the semantic values (associates) which are associated with the terms from which that sentence is composed. The associative account is used for the semantical analysis of a selection of temporal constructions which are problematic from the point of view of presentism (i.e., the ontological theory that only present entities exist), as they prima facie seem to involve reference to, or quantification over, non-present entities. The associative analyses suggested are in agreement with presentism and do not encounter the difficulties to which a denotational reading of the problem sentences gives rise; moreover, they are ontologically parsimonious, compositional (down to the subatomic level), and comparatively faithful to the surface structure of the problem sentences.


Acknowledgments

Papers gathered in this volume were submitted and, in most cases, presented at the first International Conference on Philosophy of Language and Linguistics, PhiLang2009. The conference was held in Łódź in May 2009, and organized by the Chair of English and General Linguistics at the University of Łódź. I am grateful to all invited participants for stimulating presentations and discussions, and to my colleagues from the organizational committee (Ryszard Rasiński, Jerzy Gaszewski, Krzysztof Kosecki, Janusz Badio) for help with organizational issues. I am grateful to Mr Ryszard Rasiński for comprehensive editorial assistance. Finally, I wish to thank Dr Rafael Hüntelmann for advice and for accepting the volume for publication with Ontos Verlag.

References

Beaney, Michael 1997. Introduction. In: Michael Beaney (ed.), 1-47.
Beaney, Michael (ed.) 1997. The Frege Reader. Oxford: Blackwell Publishers.
Burge, Tyler 2005. Truth, Thought, Reason. Essays on Frege. Oxford: Clarendon Press.
Carl, Wolfgang 1994. Frege’s Theory of Sense and Reference. Its Origins and Scope. Cambridge: Cambridge University Press.
Davies, Martin 2006. Foundational Issues in the Philosophy of Language. In: Michael Devitt and Richard Hanley (eds.), The Blackwell Guide to the Philosophy of Language. Oxford: Blackwell Publishers, 19-40.
Devitt, Michael and Richard Hanley (eds.) 2006. The Blackwell Guide to the Philosophy of Language. Oxford: Blackwell Publishers.
Dummett, Michael 1981. Frege: Philosophy of Language (Second edition). London: Duckworth.
Dummett, Michael 1993. Origins of Analytical Philosophy. London: Duckworth.
Frege, Gottlob [1879] 1997. Begriffsschrift (translated by M. Beaney). In: M. Beaney (ed.), 47-78.
Frege, Gottlob [1897] 1997. Logic (translated by P. Long and R. White). In: M. Beaney (ed.), 227-250.

Frege, Gottlob [1906] 1997. Letters to Husserl, 1906 (translated by H. Kaal). In: M. Beaney (ed.), 301-307.
Heck, Richard and Robert May 2006. Frege’s Contribution to Philosophy of Language. In: E. Lepore and B. Smith (eds.), 3-39.
Kenny, Anthony 1995. Frege. An Introduction to the Founder of Modern Analytic Philosophy. London: Penguin Books.
Lepore, Ernest and Barry C. Smith (eds.) 2006. The Oxford Handbook of Philosophy of Language. Oxford: Oxford University Press.
Lycan, William 2000. Philosophy of Language. A Contemporary Introduction. London & New York: Routledge.
Łukasiewicz, Jan 1957. Aristotle’s Syllogistic from the Standpoint of Modern Formal Logic (second enlarged edition). Oxford: Clarendon Press.
Martinich, Aloysius P. (ed.) 2007. The Philosophy of Language, 5th ed. New York: Oxford University Press.
Martinich, A. P. 2009. General Introduction. In: A. P. Martinich (ed.), Philosophy of Language. Volume 1. London & New York: Routledge, 1-18.
Martinich, A. P. (ed.) 2009. Philosophy of Language. Critical Concepts in Philosophy. Volumes I-IV. London & New York: Routledge.
Mendelsohn, Richard 2005. The Philosophy of Gottlob Frege. Cambridge: Cambridge University Press.
Miller, Alexander 2007. Philosophy of Language. Second Edition. London & New York: Routledge.
Montague, Richard 1970a. English as a Formal Language. In: Bruno Visentini et al., Linguaggi nella Società e nella Tecnica. Milan: Edizioni di Comunità, 189-224. Reprinted in: Richmond H. Thomason (ed.), 188-221.
Montague, Richard 1970b. Universal Grammar. Theoria 36, 373-398. Reprinted in: Richmond H. Thomason (ed.), 222-246.
Morris, Michael. Introduction to the Philosophy of Language. Cambridge: Cambridge University Press.
Pietroski, Paul M. 2004. Events and Semantic Architecture. Oxford: Oxford University Press.
Prechtl, Peter 1998. Sprachphilosophie. Lehrbuch Philosophie. Stuttgart: Metzler.
Rorty, Richard 1967. Introduction. Metaphilosophical Difficulties of Linguistic Philosophy. In: Richard Rorty (ed.), 1-39.
Rorty, Richard 1992. Twenty-five Years Later. In: Richard Rorty (ed.), 371-374.
Rorty, Richard (ed.) 1967. The Linguistic Turn. Recent Essays in Philosophical Method. Chicago and London: The University of Chicago Press.
Rorty, Richard (ed.) 1992. The Linguistic Turn. Recent Essays in Philosophical Method. With Two Retrospective Essays. Chicago and London: The University of Chicago Press.

Sainsbury, R. M. 2002. Departing from Frege. Essays in the Philosophy of Language. London and New York: Routledge.
Stalmaszczyk, Piotr (ed.) (forthcoming). Philosophy of Language and Linguistics. The Philosophical Turn.
Taylor, Kenneth 1998. Truth and Meaning. An Introduction to the Philosophy of Language. Oxford: Blackwell Publishers.
Tichý, Pavel 1988. The Foundations of Frege’s Logic. Berlin and New York: Walter de Gruyter.
Thomason, Richmond H. (ed.) 1974. Formal Philosophy. Selected Papers of Richard Montague. New Haven, CT: Yale University Press.

Christian Bassac
University of Lyon 2 (France) & CRTT (EA-656) & INRIA-Signes
[email protected]

Philosophy, Linguistics and Semantic Interpretation

Abstract: This article considers some relevant proposals that have been elaborated to arrive at the semantic interpretation of a sentence. It is shown first that although philosophers and linguists agree on the fact that some kind of logical form is needed, they differ in their conception of this logical form. It is also shown that Categorial Grammars, initially developed as logical formalisms, provide, despite their limitations, adequate intuitions that allowed the emergence of systems that obviate the “translation” problem both philosophers and linguists have to face in the construction of the logical form.

0. Introduction

Although there is agreement among linguists and philosophers that the semantic interpretation of a sentence is derived from some kind of logical form, there is no agreement as regards the nature of this logical form. For philosophers it is a formal object closely connected to extralinguistic phenomena which reveals the ultimate meaning of a proposition, whereas for linguists (at least for those working in the Generative paradigm), the Logical Form1 is a level of interpretation derived from syntactic structure via operations that are syntactic in nature (e.g. quantifier raising). Not only is there disagreement between linguists and philosophers, but philosophers have distinct conceptions of logical form, as shown by Lappin (1991). Nevertheless, whatever the conception of logical form is, philosophical or linguistic, the problem that must be solved is that of the translation from a given form to logical form. Therefore, the aim of this paper is to analyse how this problem can

1 It has become common practice to refer to the Logical Form with capital initials in this paradigm.

be solved. After a brief review of Lappin’s analyses and a quick account of the derivation of Logical Form in Generative Grammar, presented in Section 1, I will examine in Section 2 how Categorial Grammar and Lambek Calculus can be considered as a solution to the “translation problem”, and will present their limitations. In Section 3, I will show how Linear Logic is an enrichment of Lambek Calculus and how it can both improve the adequacy of this formalism and elegantly handle various linguistic phenomena.

1. Logical Form

Before examining to what extent conceptions of logical form differ between philosophers and linguists, it is necessary to state that there is general agreement on the fact that a logical form can be defined, in the first analysis, as an expression that encapsulates the meaning of a sentence. There is also agreement on the basic materials it is made of: these are minimally (for instance in a propositional language) an alphabet made of propositional symbols and the usual connectives (→, ∧, ∨, ¬), and construction rules (the syntax) such that two atoms linked by any of the connectives (or one atom preceded by ¬) yield new formulae of the language. First-order predicate calculus is obtained by adding to the alphabet that defines a propositional language constants, variables, predicate constants and the two quantifiers ∀ and ∃. It is also admitted that a logical form is a representation of the things that are in a model, defined as a necessary object for the interpretation of a language: it expresses the necessary notion of content as opposed to form. Examples of models are provided by models of algebraic structures: models for a group with the operation of addition are ℂ, ℝ, ℤ, or the set of translations of a plane with composition of translations; models for a ring are ℤ, the set of polynomials with coefficients in ℝ with addition and multiplication of polynomials, the set of matrices with zero and additive inverse matrices, etc.
Conversely, an algebraic structure can be considered as a model for a set of elements with some properties. As there may be various languages to consider, there are various models for various languages. Therefore we give here the most

general definition of a model: minimally, a model for a language L is a couple (D, I), in which D is a set and I an interpretation function, such that for every constant c and for every n-ary predicate P:

M = (D, I), with D ≠ ∅
c ∈ L ⇒ I(c) ∈ D
I(P) ⊆ Dⁿ

1.1. In philosophy

The recognition of the necessity of some kind of logical form goes back at least to Leibniz, but it is probably Frege who first expressed the view that there is a discrepancy between what is expressed via a natural language and the real content of a sentence. Here I will follow Lappin (1991) and consider that there are basically three conceptions of logical form among philosophers: the inferential, the epistemic and the ontological conceptions. These will be briefly presented now.

1.1.1. Frege

Frege’s motivations for positing the existence of a logical form are unambiguously linked to the notion of inference. In the preface to his Begriffsschrift (1879: 5) he clearly underlines the deficiency of natural language as regards the inferential process:

My initial step was to attempt to reduce the concept of ordering to that of logical consequence…I had to bend to every effort to keep the chain of inferences free of gaps. In attempting to comply with this requirement…I found the inadequacy of language to be an obstacle…this deficiency led me to the…present ideography. Its first purpose… is to provide us with the most reliable test of the validity of a chain of inferences…

That the syntactic form common to two propositions is not relevant in the inferential process can be shown by (1)-(3) below, in which two

propositions have identical Subject/Predicate structure and yet allow different entailments:

(1) John is clever
(2) Every logician is clever
(3) If Peter is a logician then he is clever

Although (1) and (2) have the same Subject/Predicate form and even the same predicate, (3) is entailed by (2) and certainly not by (1). Conversely, two propositions with different structures can allow identical entailments; Frege therefore left out of the logical form anything that is not relevant to the inferential process.

1.1.2. Carnap

The notion of logical form is of paramount importance too in Carnap’s The Logical Structure of the World (1928). Here the purpose, which is in accordance with the manifesto of the Vienna Circle, is to build a logical language that would express a unified representation of scientific knowledge, via an incremental process starting with objects from which all others are derived. This is what is expressed in §67:

Since we wish to require of our constructional system that it should agree with the epistemic order of the objects, we have to proceed from what is epistemically primitive…from the given, i.e. from experiences themselves in their totality and undivided unity.

The elementary elements of the epistemic system are the autopsychological elementary experiences. The primitive basic relation, which is epistemically primitive too, is the antisymmetric relation of Recollection of Similarity, or Rs (§78), which holds between two elementary experiences when one is recognized as similar (even if only partly so) to the other through the comparison of a memory image. The main idea here is also that some objects are epistemically primary to others. An object type is defined as epistemically primitive relative to another

one that is epistemically secondary to it, if the second is recognized through the mediation of the first. The method is extensional, and the “logistic rendition” is achieved via the usual principles: extralogical concepts are expressed by usual words, and logical concepts by the traditional logical symbols.

1.1.3. Wittgenstein

For Wittgenstein, the logical form of a sentence results from the existence of an isomorphic relation between the world and a language. Consequently, a proposition, as the representation of a state of affairs, contains as many constituents as the state of affairs it represents does. The logical form of a sentence is then shared by reality, as words have an ostensive force (cf. for instance TLP 2.1511). This conception is epitomized by TLP 3.21: “The configuration of objects in a situation corresponds to the configuration of simple signs in the propositional sign”. In a way, reality itself is here a model for the logical form (cf. TLP 2.18).

1.2. In linguistics

In linguistics, more specifically in GB or in the Minimalist Program, the Logical Form (LF) is derived from the level of S-structure via operations that will be presented now; but we first give the motivations for a distinct level of LF.

1.2.1. Motivations for LF

The main motivation for LF is that whereas (4) below is clearly a predication on that day, (5) is not a predication on every day:

(4) That day passed
(5) Every day passed

Consequently, the LF associated with (4) is (4a), that associated with (5) is (5a):


(4a) passed(day)
(5a) ∀x[day(x) → passed(x)]

The same goes with Wh-NPs: (6) is a predication on an accident, whereas (7) is not a predication on what. Consequently the LF of (6) is (6a), that of (7) is (7a):

(6) An accident happened
(7) What happened?
(6a) happened(accident)
(7a) For what x, event(x) ∧ happened(x)

1.2.2. Derivation

The derivation of LF is achieved via Quantifier Raising or NP raising from an S-structure, as shown in (5b) and (7b) below for (5) and (7) respectively:

(5b) IP[[Every day]i IP[ti passed]]
(7b) CP[[What]i IP[ti happened]]

The difference is that in (5b) there is adjunction of the quantified NP to an IP node, whereas in (7b) the Wh-word has been moved to a CP position.

1.2.3. A problem common to both conceptions of logical form

Whatever the conception, philosophical or linguistic (as expressed for instance in the Minimalist Program), the problem is that the logical form is always obtained via a “translation” from a particular object external to the syntax of the proposition: in the Minimalist Program a translation from S-structure, for Frege from a sentence, for Carnap from experiences, for Wittgenstein from objects of the world. We want to show now how Categorial Grammars and Lambek Calculus solve this

“translation problem” and lead to the conception of a syntactic-semantic interface in which the semantics of a sentence is “consubstantial” with its syntax.

2. Categorial Grammars (CG) and Lambek Calculus (LC)

2.1. The intuition

The first paper on CG is that of Ajdukiewicz (1935), who claimed that the problem of “syntactic connection” is of the greatest importance for logic: “syntactic connection” can be defined as the specification of the conditions under which a word pattern constituted of meaningful words is itself a meaningful expression. After admitting that Russell’s theory of types could be a solution to the problem of syntactic connection,2 he developed some results previously obtained by Leśniewski. To ensure that an expression is syntactically connected, two types of categories are first distinguished: basic categories, and functor categories, which are unsaturated or functional signs. Each expression of the language is then associated with a categorial index: for instance, a word forming a sentence of category s with two names of category n as its arguments is associated with the functor index s/nn (the fractional indices are written here with a slash, numerator first). Consequently a hierarchy of possible semantic categories emerges, whose first elements could be:

s, n, s/n, s/nn, s/nnn, …, s/s, s/ss, s/sss, …, n/s, n/n, n/nn, …

For instance, to check whether the formula p∨p→p is well formed, each element receives an index: if p has s as an index, each connective has s/ss

2 Yet Ajdukiewicz also shows that the formula whose word pattern forms the definition of Russell’s antinomy of classes is not syntactically connected, thereby proving first that Russell’s problem boils down to a problem of syntactic connection, and second that syntactic problems are logical problems.

as an index. Each element of the expression p∨p→p is consequently associated with a category as follows:

(8)  p    ∨    p    →    p
     s  s/ss   s  s/ss   s

The main functor is then written first, then the second (Polish notation), then the arguments, so as to yield (9) and its “proper index” (10):

(9)  → ∨ p p p
(10) s/ss, s/ss, s, s, s

The “proper index” of the sequence is the final string of categories obtained. Then a procedure of cancellation from left to right applies, to check whether a combination of indices with a fractional index in initial position is immediately followed by exactly the indices occurring in the denominator of the fractional index. If this is the case, the first index in the sequence is cancelled. This results in a certain number of “derivatives”. The first derivative starting from (10) is then (11):

(11) s/ss, s, s

After application of the same procedure to the first derivative, the next and final derivative, called the “exponent” of the expression, is then:

(12) s
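This cancellation procedure can be sketched in a few lines (an illustration of ours, not from the paper; fractional indices are written numerator/denominator, and only single-letter basic indices are assumed):

```python
# Sketch of Ajdukiewicz-style cancellation (assumptions: indices are
# strings such as "s", "n", or fractions "s/ss"; basic indices are
# single letters, so a denominator "ss" means two arguments of index s).

def cancel_once(indices):
    """Cancel the leftmost fractional index whose denominator is matched
    exactly by the indices immediately following it."""
    for i, idx in enumerate(indices):
        if "/" not in idx:
            continue
        num, denom = idx.split("/")
        following = indices[i + 1 : i + 1 + len(denom)]
        if "".join(following) == denom:
            # replace the functor and its arguments by the numerator
            return indices[:i] + [num] + indices[i + 1 + len(denom):]
    return None  # no cancellation possible

def exponent(indices):
    """Iterate cancellation to a fixed point: the 'exponent'."""
    while (nxt := cancel_once(indices)) is not None:
        indices = nxt
    return indices

# Proper index (10) of p-or-p-implies-p in Polish notation:
print(exponent(["s/ss", "s/ss", "s", "s", "s"]))  # ['s']: well formed
```

Applied to (10), the first cancellation yields (11) and the next yields the exponent s, so the expression is syntactically connected.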

Ajdukiewicz claimed that this procedure can be applied to natural languages too and so did Bar-Hillel (1953). Obviously this procedure corresponds to the building of constituents: a constituent B is formed by a functor A/B combined with a basic category A as indicated below in three equivalent formats, a tree, a rewrite rule, or a demonstration:

(13)
Tree:            B
               /   \
              A    A/B

Rewrite rule:   A, A/B → B

Demonstration:  A, A/B ⊢ B

The problem with this grammar is that the argument of a functor must be on its right-hand side. Consequently Bar-Hillel introduced bidirectionality in the construction of categories, so that if A and B are categories, A/B and A\B are categories too, depending on whether the category needed to build a B is on the right- or on the left-hand side of the functor category.3 Then Lambek (1958) developed this and added a product of categories that will be presented in section 2.2.1 below.

2.2. More formally

2.2.1. Definition

A categorial grammar can now be defined as a 6-tuple {{P},{C},{/},{R},{V},{Lex}} in which:

{P} is the set of primitive categories
{C} is the set of constructed categories
{/} is the category constructor: if X1, X2, …, Xn are categories, then X1, X2, …, Xi-1/Xi/Xi+1, …, Xn ∈ {C}
{R} is the reduction rule X1, X2, …, Xi-1, (X1 X2 … Xi-1)/Xi/(Xi+1 Xi+2 … Xn), Xi+1, Xi+2, …, Xn → Xi; this reduction rule can be understood as the elimination of /

3 The notation used by Bar-Hillel (1953: 48) is not \ but (). Such bi-directional CGs are usually called AB grammars, A for Ajdukiewicz and B for Bar-Hillel.

{V} is a terminal vocabulary
{Lex} is a type assignment {V} → {C}

The language then generated is the closure of {V} under /. Different grammars can be defined depending on whether they accept construction rules other than /. For instance, as was seen above, Bar-Hillel added one more connective, so that the two connectives used are \ and /; then Lambek (1958) added the non-commutative product of types, noted ., which is related to \ and / as follows (Lambek 1958: 163):

(Xi/Xi+1)/Xi+2 = Xi/(Xi+2 . Xi+1)
Xi/(Xi+2 . Xi+1) = (Xi/Xi+1)/Xi+2

Consequently LC can be considered as an evolution of the CGs that started with Ajdukiewicz. What is new is that LC is cast in a natural deduction framework à la Gentzen, by defining the sequent

Xi, Xi+1, Xi+2, …, Xn ⊢ Y,

in which Xi, Xi+1, Xi+2, …, Xn and Y are types, as an equivalent of

(…((Xi . Xi+1) . Xi+2) … . Xn) ⊢ Y,

with the requirement that there must be no empty context in a sequent. LC is also particularly interesting in that, first, it considers functors of type A/B as functions that map an element of category B to an element of category A; second, it allows function composition; and third, it is the link between the first AB grammars and more recent formalisms using proof theory, in which the semantics of a string of natural language is given together with its syntactic form, as will be shown later on. Like any other syntactic theory, CG and LC must account for phenomena that are pervasive in natural languages, like ambiguity, syncretism, or polymorphism. A word consequently can have several types. If, in a CG, for every wi ∈ {V}, Lex(wi) has only one type, then this CG

is a rigid CG; if, for every wi ∈ {V}, Lex(wi) has at most k types, it is a k-valued CG.4

2.2.2. A model for LC

The interpretation of a category being, as usual, the set of words of this category, a model for LC is a semi-group. A semi-group is a monoid5 without identity (or zero) element, which corresponds to the fact that there is no empty context in a sequent in LC. A model for LC is then {M, .}, with . being associative (by definition of a monoid), and such that:

[A\B] = {x : ∀a ∈ [A], a.x ∈ [B]}
[B/A] = {x : ∀a ∈ [A], x.a ∈ [B]}
[A.B] = [A].[B] = {a.b : a ∈ [A], b ∈ [B]}

2.3. Phrase Structure Grammars (PSG) and Categorial Grammars

2.3.1. A comparison

A quick comparison between the formalisms of PSG and CGs is necessary. Lyons (1968: 231) provides such a comparison, showing the isomorphism between the two formalisms for the simple sentence (14):

(14) Poor John ran away

(14) is generated by a PSG as indicated in (15):

4 A rigid CG is then a 1-valued CG.
5 A monoid is a set together with an operation . whose sole property is associativity.

(15)
               S
             /    \
           NP      VP
          /  \    /   \
         A    N  V     Adv
       poor John ran   away

With categories John: n; ran: n/s; poor: n/n; away: (n/s)/(n/s), the same string is generated by a CG as indicated in (16) below:

(16)
poor: n/n   John: n   ran: n/s   away: (n/s)/(n/s)
poor John: n          ran away: n/s
poor John ran away: s

Very interestingly, the categorial typing explicitly expresses the dependence relation between a head and its modifier: the element with the more complex label is the modifier: thus n/n is a modifier of n, and (n/s)/(n/s) is a modifier of n/s, that is, of an intransitive V. More generally, all modifiers have type X/X whenever X is a type. As regards generative capacity, LC and CG are weakly equivalent to context-free PSG, a result demonstrated by Pentus (1993).

2.3.2. Semantics

As was seen above (cf. 1.2.2), in a PSG the semantics of a sentence is located in the LF, which is a level of representation obtained by the application of some syntactic rules to a syntactic form. Thus, the semantics of a sentence is obtained via the mediation of a level of representation. In CGs, and in LC, syntactic categories are closely linked to semantic types: the semantic representation is obtained step by step, simultaneously with the syntactic construction, via a simple morphism from syntactic types to Montague’s semantic types, as indicated in the table below, t being the type of truth values and e the type of entities:

Syntactic type        Semantic type = (syntactic type)*
s                     t
np                    e
n                     e→t
Generalization: (a/b)* = a*→b*

Table 1: A morphism from syntactic types to semantic types
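The morphism of Table 1 can be sketched as a short recursion (representation ours: syntactic types are the strings "s", "np", "n", or pairs (a, b) standing for a/b):

```python
# Sketch (ours) of the Table 1 morphism: s* = t, np* = e, n* = e->t,
# and (a/b)* = a* -> b*.

BASE = {"s": "t", "np": "e", "n": "e->t"}

def star(syn):
    """Map a syntactic type to its Montague-style semantic type."""
    if isinstance(syn, str):
        return BASE[syn]
    a, b = syn                        # syn stands for a/b
    return f"({star(a)})->({star(b)})"

print(star("n"))           # e->t
print(star(("np", "s")))   # (e)->(t): e.g. an intransitive verb of type np/s
```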

Consequently there is no divide here between syntax and semantics, and compositionality is ensured by the step-by-step construction of the semantics; but the problem CGs and LC have to face is that they cannot easily account for some common linguistic phenomena, as will be seen now.

2.4. The inadequacies of AB grammars

The main problem with AB grammars is that they only allow concatenation of adjacent words (types) in a sentence. So we now examine two strategies that were used to account for some precise linguistic phenomena that could not be accounted for in AB grammars. Chronologically these are Bach’s Wrap rule (1979) and function composition, used for instance by Ades and Steedman (1982).6

2.4.1. Wrap

Bach (1979) uses structure-building rules that are not strictly concatenative. Some structure-building rules are concatenative, like RCON (right concatenation) or LCON (left concatenation), defined as:

- RCON(a/b) =def ab
- LCON(b\a) =def ba

6 Function composition is allowed by LC, but what is interesting is the range of precise linguistic phenomena that can be dealt with by Steedman’s approach.

Those that are not concatenative are RWRAP (Right Wrap) and PREPCON (Preposition Concatenation), defined as:

- RWRAP:
  - if a is simple, then RWRAP(a, b) =def ab
  - if a is of the form XP[X W], then RWRAP(a, b) =def X b W
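The two cases of RWRAP can be sketched on word lists (representation ours; a constituent is a list of words whose first element is its head):

```python
# Sketch (ours) of Right Wrap: if a is simple, RWRAP(a, b) concatenates;
# if a has the form X W (head X followed by a remainder W), b is wrapped
# in right after the head.

def rwrap(a, b):
    if len(a) == 1:              # a is simple
        return a + b
    return a[:1] + b + a[1:]     # a = X W  =>  X b W

print(rwrap(["persuade", "to", "leave"], ["Bill"]))
# ['persuade', 'Bill', 'to', 'leave']
```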

This is necessary in order to build constituents like persuade Bill to leave, easy man to please, or too hot to eat. Consequently Bach (1980) is led to the conception of the Transitive Verb Phrase (TVP),7 defined by:

TVP/VP, VP → TVP   and   TVP, NP → VP

A sentence with an object-control V contains a TVP, as shown in (17); a sentence with a subject-control V contains an IVP, as shown in (18):

(17)            S
              /   \
            NP     VP
            |     /   \
          John  TVP     NP
                /  \     |
           TVP/VP   VP  Mary
              |     |
          persuade to go

(18)            S
              /   \
            NP     VP
            |     /    \
          John  VP/VP    VP
                /   \     |
         VP/VP/NP    NP  to go
              |      |
          promise   Mary

7 This accounts for the opposition between TVP (e.g. persuade, with object control, which allows passivization: Mary was persuaded to go) and I(ntransitive)VP (e.g. promise, with subject control, which does not passivize: *Mary was promised to go). More generally, a TVP is made of discontinuous elements that can be considered as a constituent.

- PREPCON: if a is of the form [A Prep] (where A is any category), then PREPCON(a, b) =def A PP[Prep b].

This is necessary to build constituents like depend on Mary, arrive at the decision, proud of his children, angry at Bill.

Another relevant strategy used to build structures from non-adjacent types is that developed by Steedman.

2.4.2. Function composition

The framework in which the analyses by Ades and Steedman (1982) are cast is Combinatory Categorial Grammar, which is characterized by the use of combinators (forward, backward, partial forward, etc.) from combinatory logic. The main tool used is function composition, defined in (19):

(19) For f: A→B and g: B→C, g∘f: A→C, with g∘f = λx.g(f(x))

Ades and Steedman (1982: 547) use function composition to cope with various phenomena, among which extraction from a clausal complement, as in (20):

(20) Who do you think he loves

Here the basic types are np and s, the string he loves is of type s/np as it is a function that maps an np (the extracted element) onto a sentence, and the string do you think is of type s/s as it maps a sentence onto a sentence. The type s/np of the string do you think he loves is obtained by function composition of the previous functions s/s and s/np, and the whole sentence of type s is built by backward function application as indicated in (21) below:

(21)
who: np   do you think: s/s   he loves: s/np
do you think he loves: s/np   (by function composition)
who do you think he loves: s  (by backward function application)
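The two steps of (21) can be sketched with types as (result, argument) pairs (representation ours, so ("s", "np") renders the paper's s/np):

```python
# Sketch (ours) of the steps in (21): composition of s/s with s/np,
# then backward application to the extracted np.

def compose(f, g):
    """Function composition: a/b composed with b/c yields a/c."""
    (a, b), (g_res, c) = f, g
    assert b == g_res, "types do not compose"
    return (a, c)

def backward_apply(arg, f):
    """An argument followed by a matching functor yields its result."""
    res, needed = f
    assert arg == needed, "types do not match"
    return res

do_you_think = ("s", "s")                  # s/s
he_loves = ("s", "np")                     # s/np
rest = compose(do_you_think, he_loves)     # s/np: do you think he loves
print(backward_apply("np", rest))          # s: who do you think he loves
```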

A possible generalisation of function composition is that it is a structure-building rule in which an argument string lacks its rightmost element. Consequently, function composition can also be applied to other phenomena like topicalization, parasitic gaps or coordination of unlikes, as was shown by Steedman (1985). Though interesting as regards compositionality, the range of linguistic phenomena dealt with, and most of all the direct semantic interpretation it allows, LC can also be considered as a mere step towards the logical system that will be presented now.

3. Linear Logic (LL)

3.1. The intuition

LL, as defined in Girard (1995), is an extension of LC that differs from it on the following points:

1) The context in a sequent can be empty, which was not the case in LC (cf. 2.2.1).
2) There are new connectives, whose justifications will be presented below.
3) There is no negation in LC. In LL there is a negation, noted ⊥ (written in superscript position: A⊥), such that the properties in (22) hold:

(AF)F# A AsA# sAFA (sequent with empty context licit) AsA



⊢ A⊥, A

The syntactic analysis of a sentence S made up of words w1 w2 w3 … wn is a typing of S and a demonstration of S, written w1, w2, w3, …, wn ⊢ S. The type corresponding to the meaning of the sentence is derived in a step-by-step fashion from the previous proofs and the terms associated with the vocabulary. Via the morphism of types presented in 2.3.2 above, to each syntactic type there is an associated semantic type. The problem is that in classical logic the rule of weakening, which allows one to extend the context or the succedent of a sequent, and the rule of contraction, which allows the replacement of several occurrences of the same formula in the context or in the succedent by a single one, are adapted to stable truths, or, put differently, to non-causal implication,8 whereas linguistic resources are not stable: once consumed in a derivation they no longer hold. Consequently these rules are rejected in LL. Nevertheless, LL allows the repetition of a formula, but this is stipulated as a particular case and not as a generality in a structural rule: that is the meaning of the exponential !. It means that if a formula is derivable with a unique occurrence of A, then it is derivable too with any number of occurrences of A, hence the rule (23):9

(23)

Γ, A ⊢ Δ
--------- l!
Γ, !A ⊢ Δ

As a consequence, the usual implication of classical logic, A→B, in which A is considered as stable and iterable ad libitum, is replaced in LL by (24):

(24) (!A) ⊸ B

in which ⊸ is the linear implication and the exponential ! indicates that A is freely iterable. This means that implication is causal and expresses an interaction between a cause and a consequence, similarly to the principles of action and reaction in physics.

8 These rules are expressed as:

Γ ⊢ Δ            Γ ⊢ Δ
-------- lw      -------- rw
Γ, A ⊢ Δ         Γ ⊢ A, Δ

Γ, A, A ⊢ Δ      Γ ⊢ A, A, Δ
----------- lc   ----------- rc
Γ, A ⊢ Δ         Γ ⊢ A, Δ

with the mnemonics r and l for right and left respectively, and c and w for contraction and weakening respectively.

9 Only formulae containing ! can undergo contraction or weakening.

3.2. The principles and rules

A consequence of the rejection of weakening and contraction as structural rules is that there are now two conjunctive connectives: one for when both resources are used in a derivation, one for when only one of the resources is used. The former conjunctive is ⊗ (multiplicative conjunction, read times), the latter is & (additive conjunction, read with), and the rules for these connectives are expressed in (25):10

(25)

Γ, A, B ⊢ Δ           Γ, Ai ⊢ Δ
------------ l⊗       ----------------- l&
Γ, A ⊗ B ⊢ Δ          Γ, Ai & Ai+1 ⊢ Δ

Symmetrically, there are two disjunctions that are the duals of the conjunctives: the additive ⊕ (read plus), which expresses the choice between two possible resources, and the multiplicative ℘ (read par), which is linked to negation,11 A℘B being equivalent to A⊥ ⊸ B or B⊥ ⊸ A. Consequently it is the equivalent of classical disjunction, as A∨B ≡ ¬A→B. The table below sums this up:

connective       conjunctive   disjunctive
multiplicative       ⊗             ℘
additive             &             ⊕

Table 2: Conjunctive and disjunctive connectives in LL

10 We give the rules for left introduction only. The same would go for right introduction.
11 It is important to note too that linear implication is not a primitive but is defined as A ⊸ B =def A⊥ ℘ B.

We now show how LL can be applied to some linguistic phenomena, with a view to comparing it to the approaches previously presented and to showing that it is both elegant and efficient.

3.3. Linguistic phenomena

We first provide an example of how a sentence can be derived as a deduction, and then briefly sketch how LL deals with three precise linguistic phenomena: non-adjacent types, morphological agreement and coercion phenomena.

3.3.1. An example

We show how the derivation of a simple sentence like (26) is obtained:

(26) John reads a book

the types being as follows:

book: n
a: n ⊸ np
a book: np
John: np
reads: np ⊸ (np ⊸ s)
John reads a book: s

Each type is introduced as an axiom; then each step of the derivation is introduced. The final sequent obtained is made up of a context, in which all the types of the sentence appear in the order in which they occur in the sentence to be derived, and of a succedent, which is the sentence type, as shown in (27), where the usual mnemonics are attached to each step:

(27)
n ⊢ n    np ⊢ np                               ax
(n ⊸ np), n ⊢ np                               ⊸l
s ⊢ s    np ⊢ np                               ax
(np ⊸ s), (n ⊸ np), n ⊢ s                      ⊸l
np, (np ⊸ (np ⊸ s)), (n ⊸ np), n ⊢ s           ⊸l
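The cancellation steps of a derivation like (27) can be emulated mechanically. The sketch below is a deliberately simplified assumption: it encodes A ⊸ B as a nested pair and lets a functor combine with an adjacent argument on either side, leaving directionality to be handled, as in the sequent calculus, by the order of the context.

```python
IMP = "->"   # encodes linear implication: (IMP, arg, result) stands for arg ⊸ result

def step(seq):
    """Cancel one adjacent argument/functor pair, in either order."""
    for i in range(len(seq) - 1):
        left, right = seq[i], seq[i + 1]
        if isinstance(right, tuple) and right[0] == IMP and right[1] == left:
            return seq[:i] + [right[2]] + seq[i + 2:]   # argument, then functor
        if isinstance(left, tuple) and left[0] == IMP and left[1] == right:
            return seq[:i] + [left[2]] + seq[i + 2:]    # functor, then argument
    return None

def derive(seq):
    """Repeatedly cancel until one type remains, or fail."""
    while len(seq) > 1:
        seq = step(seq)
        if seq is None:
            return None
    return seq[0]

lexicon = [
    "np",                               # John
    (IMP, "np", (IMP, "np", "s")),      # reads: np -> (np -> s)
    (IMP, "n", "np"),                   # a: n -> np
    "n",                                # book
]
print(derive(lexicon))  # "s"
```

A sequence that cannot be fully cancelled, e.g. `["np", "n"]`, yields `None`, mirroring a failed derivation.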

3.3.2. Non-adjacent types

We insisted earlier (2.4) that the main limitation of AB grammars is that they are strictly concatenative and can therefore only combine adjacent types. This led to function composition or Wrap operations (cf. 2.4) in order to solve the problems caused by non-adjacent types. Now LL has a negation governed by De Morgan laws, which swaps the conjoined elements, as indicated in (28) below:

(28)

(A & B)⊥ = B⊥ ⊕ A⊥
(A ⊕ B)⊥ = B⊥ & A⊥
(A ℘ B)⊥ = B⊥ ⊗ A⊥
(A ⊗ B)⊥ = B⊥ ℘ A⊥
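The dualizing-and-swapping behaviour of linear negation can be written out directly. This is a minimal sketch under the assumption that formulas are tuples `(connective, left, right)` and atoms are strings; the representation is illustrative, not a standard one.

```python
NEG, TIMES, WITH, PAR, PLUS = "neg", "⊗", "&", "℘", "⊕"
DUAL = {TIMES: PAR, PAR: TIMES, WITH: PLUS, PLUS: WITH}

def neg(f):
    """Linear negation: dualize the connective and swap the operands."""
    if isinstance(f, tuple) and f[0] == NEG:
        return f[1]                        # involution: A⊥⊥ = A
    if isinstance(f, tuple):
        op, a, b = f
        return (DUAL[op], neg(b), neg(a))  # the swap: (A op B)⊥ = B⊥ op' A⊥
    return (NEG, f)                        # atom: A⊥

# (A & B)⊥ computes to B⊥ ⊕ A⊥, with the operands reversed
print(neg((WITH, "A", "B")))
```

It is precisely this order reversal that the text exploits below to re-establish the adequate order of types in a derivation.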

Consequently, if in a derivation the types to be cancelled are the right types but in the wrong order, transforming a formula containing a conjunctive or disjunctive connective into its dual via De Morgan re-establishes the adequate order and renders the cancellation possible.

3.3.3. Agreement

In accordance with the meaning of the additive connectives & and ⊕, the lexicon can receive morphological features, for instance person and number. To derive the correct sentence he leaves, the words he and leaves receive the features 3rd and sg as follows (cf. Bayer and Johnson 1995: 70 for a similar view):

he: np & (3rd & sg)
leaves: (np & (3rd & sg)) ⊸ s

whereas leave would be:

leave: (np & (((1st ⊕ 2nd ⊕ 3rd) & pl) ⊕ ((1st ⊕ 2nd) & sg))) ⊸ s
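The effect of these feature-decorated entries can be emulated with plain sets standing in for the additive alternatives; the encoding below is an illustrative assumption, with ⊕ rendered as set union and agreement as non-empty intersection.

```python
# Each noun phrase carries the (person, number) pairs it can realize;
# each verb carries the pairs it will accept for its subject.
he   = {("3rd", "sg")}
they = {("3rd", "pl")}

leaves = {("3rd", "sg")}                                   # 3rd & sg
leave  = ({(p, "pl") for p in ("1st", "2nd", "3rd")}       # (1st⊕2nd⊕3rd) & pl
          | {("1st", "sg"), ("2nd", "sg")})                # ⊕ (1st⊕2nd) & sg

def agrees(subject_feats, verb_feats):
    """Agreement succeeds iff some shared feature pair survives."""
    return bool(subject_feats & verb_feats)

assert agrees(he, leaves)        # he leaves
assert not agrees(he, leave)     # *he leave
assert agrees(they, leave)       # they leave
```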

The derivation of he leaves will then be (29):

(29)
3 ⊢ 3    sg ⊢ sg    np ⊢ np                         ax
3 & sg ⊢ 3    np & 3 ⊢ np    np & (3 & sg) ⊢ 3      &l
np & (3 & sg) ⊢ np & (3 & sg)    s ⊢ s              &r, ax
(np & (3 & sg)), ((np & (3 & sg)) ⊸ s) ⊢ s          ⊸l

As can be observed in (29), this is a correct derivation, as the context of the final sequent is the lexicon with its morphological features and the succedent is s.

3.3.4. Coercion

Sometimes the type of an argument is not the one expected by the function it is an argument of: consequently, either the derivation fails or the argument is converted into the right type by an operation of type coercion (cf. Pustejovsky 1995: 111). For instance in (30) below, believe selects a propositional type as its complement, yet (31) is correct, as the NP the book has been coerced into the right type.

(30) Mary believes that he left
(31) Mary believes the book

We are not going into any detail here about the mechanism of type coercion;12 we just want to show how, in LL, the two types can be encoded in a single lexical entry. First, the types for believe are both (32) and (33):

(32) believe: np ⊸ (np ⊸ s)

(33) believe: s ⊸ (np ⊸ s)

12 For details see Pustejovsky (1995: 118).

As & is the additive that allows the choice of either of the joined elements, the type of believe is:

(34) believe: (np ⊸ (np ⊸ s)) & (s ⊸ (np ⊸ s))

Now we replace the elements each formula is made of in the following way: np = A, (np ⊸ s) = C, s = B. The following theorem then holds (cf. Moot 2002: 22):

(34)
A ⊢ A    C ⊢ C    B ⊢ B    C ⊢ C                              ax
A, A ⊸ C ⊢ C    B, B ⊸ C ⊢ C                                  ⊸l
A, (A ⊸ C) & (B ⊸ C) ⊢ C    B, (A ⊸ C) & (B ⊸ C) ⊢ C          &l
A ⊕ B, (A ⊸ C) & (B ⊸ C) ⊢ C                                  ⊕l
(A ⊸ C) & (B ⊸ C) ⊢ (A ⊕ B) ⊸ C                               ⊸r

In the final sequent, the context is (A ⊸ C) & (B ⊸ C), which expresses the type of the verb believe under the replacement convention above, and the succedent is the new type of believe, which now rewrites as (35):

(35) (A ⊕ B) ⊸ C = (np ⊕ s) ⊸ (np ⊸ s)

The lexical entry for believe is thus simpler, as the part repeated in the initial typing, (np ⊸ s), is now shared in the new entry (36) for believe:

(36) believe: (np ⊕ s) ⊸ (np ⊸ s)
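The factored type (np ⊕ s) ⊸ (np ⊸ s) can be mimicked with a curried function that accepts either complement category; this is a sketch of the type discipline only, and the names below are illustrative, not a coercion mechanism.

```python
def make_believe():
    """One entry for believe: complement of type np OR s, then a subject np."""
    def take_complement(comp_type):
        if comp_type not in ("np", "s"):      # np ⊕ s: either member is fine
            raise TypeError("believe expects an np or s complement")
        def take_subject(subj_type):
            if subj_type != "np":
                raise TypeError("the subject must be an np")
            return "s"
        return take_subject
    return take_complement

believe = make_believe()
assert believe("s")("np") == "s"    # Mary believes that he left
assert believe("np")("np") == "s"   # Mary believes the book
```

What (36) buys over (32)/(33) is visible here: one entry, with the shared continuation (np ⊸ s) written once.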

3.4. Syntax and Semantics: the Curry-Howard isomorphism

We have shown how syntactic analysis in LL is an example of natural deduction. Now the most important property of this system is that it expresses a natural link between the two domains of reasoning and typing: a type such as a ⊸ b ⊸ c containing no constants can be read as a formula of propositional calculus containing implication, thus establishing a link

between deduction and types, between proofs and interpretation. More specifically, this link is expressed in (37a) (typed application) and (37b) (modus ponens):

(37a)
A : α → β    B : α
―――――――――――――― →e
(A B) : β

(37b)
α → β    α
――――――――― modus ponens
β

This correspondence, known as the Curry-Howard isomorphism, associates a typed λ-term of type τ with a proof of the formula corresponding to τ. As in LL the syntactic analysis of a sentence of words w1 w2 … wn is a deduction that leads to a proof of w1, w2, …, wn ⊢ s (cf. 4.1), and as every word wi has a corresponding semantic type w*i (cf. 3.3.2), the semantic analysis of the sentence becomes a proof of w*1, w*2, …, w*n ⊢ t. There is then a λ-term corresponding to w*1, w*2, …, w*n expressing the compositional structure of the construction, and consequently the semantics of the sentence is obtained by β-reduction of this λ-term. For instance, the types and λ-terms associated with the words of (26) above are:

book: syntactic type n; semantic type e → t; λ-term λy^e (book^(e→t) y)
a book: syntactic type np; semantic type e; λ-term a_book^e
reads: syntactic type np ⊸ (np ⊸ s); semantic type e → (e → t); λ-term λy^e λx^e (read^(e→(e→t)) x y)

The interpretation of (26) is then (38):

(38)
(λy^e λx^e (read^(e→(e→t)) x y)) (a_book^e)  →β  λx^e (read x a_book)
(λx^e (read x a_book)) (John^e)  →β  read(John, a_book) : t
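The β-reduction chain in (38) can be run literally, since Python's lambdas compose by application just as the λ-terms do. This is a minimal sketch: the constants `john`, `a_book` and the fact set standing in for the relation read are illustrative assumptions.

```python
# Each word denotes a typed function; the sentence meaning falls out by
# function application, the programming analogue of β-reduction.
john, a_book = "John", "a_book"
facts = {("John", "a_book")}          # the extension of 'read', type e -> (e -> t)

read = lambda y: lambda x: (x, y) in facts   # object first, then subject

# (38): apply to the object, then to the subject
partially_applied = read(a_book)      # λx. read(x, a_book)
print(partially_applied(john))        # True
```

No separate translation step intervenes between the syntactic derivation and this evaluation, which is the point the following paragraph makes.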

So the specification of the lexical types, together with that of the way they are combined, obviates any translation process such as that needed by the construction of LF or by the philosophical approaches presented above in 1.

4. Conclusion

We have shown that the "translation problem" that both philosophical and generative approaches had to face in the construction of a logical form receives an elegant solution in LL, a formalism which can be considered a descendant of the CGs first presented in Ajdukiewicz (1935). Another advantage of CGs is that they are generic enough to encode various grammatical formalisms such as TAG (cf. de Groote 2001). Last but not least, another important property of LL, not presented here, is that it allows disambiguation of formulae via proof nets (for details cf. Lamarche and Rétoré 1996).

References

Ades, Anthony E. and Mark Steedman 1982. On the order of words. Linguistics and Philosophy 4, 517-558.
Ajdukiewicz, Kazimierz 1935. Die syntaktische Konnexität. Studia Philosophica 1. English translation in Storrs McCall (ed.) 1967, 207-231.
Bach, Emmon 1979. Control in Montague Grammar. Linguistic Inquiry 10:4, 515-531.
Bach, Emmon 1980. In defense of passive. Linguistics and Philosophy 3, 297-341.
Bar-Hillel, Yehoshua 1953. A quasi-arithmetical notation for syntactic description. Language 29, 47-58.
Bar-Hillel, Yehoshua, Chaim Gaifman and Eli Shamir 1963. On categorial and phrase-structure grammars. Bulletin of the Research Council of Israel 9, 1-16.
Bayer, Sam and Mark Johnson 1995. Features and agreement. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 70-76.
Carnap, Rudolf 1928. The Logical Structure of the World. English translation 1967, Berkeley and Los Angeles: University of California Press.
De Groote, Philippe 2001. Towards abstract categorial grammars. Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, 148-155.
Frege, Gottlob 1879. Begriffsschrift. English translation in Jean van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic. Cambridge, MA: Harvard University Press, 1-82.
Girard, Jean-Yves 1995. Linear logic: its syntax and semantics. In: Jean-Yves Girard, Yves Lafont and Laurent Regnier (eds), Advances in Linear Logic. London Mathematical Society Lecture Note Series 222. Cambridge: Cambridge University Press. http://iml.univ-mrs.fr/~girard/Synsem.pdf.gz
Lamarche, François and Christian Rétoré 1996. Proof nets for the Lambek calculus: an overview. In: Michele Abrusci and Claudia Casadio (eds), Proceedings of the 1996 Roma Workshop on Proofs and Linguistic Categories, 241-262.
Lambek, Joachim 1958. The mathematics of sentence structure. The American Mathematical Monthly 65:3, 154-170.
Lappin, Shalom 1991. Concepts of logical form in linguistics and philosophy. In: Asa Kasher (ed.), The Chomskyan Turn. Oxford: Blackwell Publishers, 300-333.
Lyons, John 1968. Introduction to Theoretical Linguistics. Cambridge: Cambridge University Press.
McCall, Storrs (ed.) 1967. Polish Logic, 1920-1939. Oxford: Oxford University Press.
Moot, Richard 2002. Proof Nets for Linguistic Analysis. PhD dissertation, OTS, Utrecht University.
Pentus, Mati 1993. Lambek grammars are context-free. In: Proceedings of the Annual IEEE Symposium on Logic in Computer Science. IEEE Computer Society Press.
Pustejovsky, James 1995. The Generative Lexicon. Cambridge, MA: MIT Press.
Steedman, Mark 1985. Dependency and coordination in the grammar of Dutch and English. Language 61:3, 523-568.
Wittgenstein, Ludwig 1922. Tractatus Logico-Philosophicus. English translation, London: Routledge and Kegan Paul.

Elżbieta Chrzanowska-Kluczewska
The Jagiellonian University of Kraków
[email protected]

An Unresolved Issue: Nonsense in Natural Language and Non-Classical Logical and Semantic Systems

Abstract: In view of the contribution of the Lwów-Warsaw school of philosophical and mathematical logic to the development of non-standard systems (J. Łukasiewicz, S. Leśniewski, S. Jaśkowski, Cz. Lejewski, A. Mostowski), it seems a scholarly obligation for a Polish linguist to inquire about the utility of such logics in considering the intricacies of natural semantics, especially in analysing fictional discourse and figuration. The unpredictability of natural language and the creativity of its users also teach us that certain problems in stylistics and poetics disclose the limitations of more traditional methodologies. Catachresis, a far-fetched metaphor difficult to interpret or verging on nonsense, will be my case study in this respect. Among the more interesting paradigms that can be postulated to deal with it, Possible-Worlds Semantics and Language-Games Theory/Game Semantics draw heavily from philosophy and logic.

Cold are the crabs that crawl on yonder hills,
Colder the cucumbers that grow beneath,
And colder still the brazen chops that wreathe
The tedious gloom of philosophic pills!
Edward Lear

0. Introduction

The "philosophy of modern linguistics", the theme that indirectly underlies my considerations in this article, ought eventually to determine what specific "philosophies of language" we need in solving linguistic problems. I deliberately say "philosophies", since linguists of different denominations have always listened to various philosophical schools and currents, trying to draw inspiration from different sources. But apart from the philosophical foundations needed in understanding the workings of the human mind as reflected in language, we – as linguists – have always searched for proper help and support from the side of logical systems. Yet even after several centuries of seeking

inspiration in logic in constructing linguistic explanation (going back at least to Aristotle), the interface between logical and linguistic thinking remains a vastly underdeveloped area, bristling with unanswered questions and unsolved problems. For this we have our human language to blame. Natural language is a notoriously difficult phenomenon to describe. I believe it to be a Janus-like, double-faced system, which – consequently – calls for a double explanation. On the one hand, it can generate grammatically well-formed and, semantically speaking, fully interpretable expressions (supported by appropriate sets of syntactic rules and Logical Form);1 on the other (and contrary to Chomskyan idealism about a grammar generating only well-formed strings), it is productive of all kinds of ill-formed and deviant expressions that strain our interpretative capabilities or are barely interpretable at all. As a literary semanticist dealing with the question of different degrees of anomaly and nonsense in language,2 I want to raise anew the question of the extent to which non-classical logics and unconventional semantics can prove useful in analysing two areas of natural language, namely fictional discourse and figuration (the latter pervading also non-literary, ordinary language). It is in these two fields that deviance from syntactic and semantic rules is most prominent, and classical logical systems (typically, standard bivalent propositional and first-order predicate calculi) have long since proved much too simplistic and descriptively inadequate.

1. The sources of non-sense in natural language

The terms sense and nonsense, though superficially obvious and interpreted in ordinary language as meaningfulness and meaninglessness, respectively, are both theory-oriented concepts (cf. Chrzanowska-Kluczewska 2009, in print).
For the sake of our discussion, let me distinguish and very cursorily explain a couple of phenomena that lead to the breach of interpretability in human language.


1 The shape of Logical Form (LF) or, more generally, Semantic Representation (SR) remains an open question. It may, but does not have to, be the LF of the generative paradigm (cf. Hornstein 1995).
2 It is assumed here that meaningfulness and meaninglessness, that is, (non)sensicality in natural language are scalar phenomena, and so interpretability is also a matter of degree.

1.1. Semantic anomaly (semantic oddity)

This is the result of an "unnatural" juxtaposition of parts of speech within phrases and sentences. A short excerpt from Edward Lear's sonnet "Cold Are the Crabs", which opens our discussion, a model piece of Victorian nonsense poetry, displays a whole bundle of what, in the Chomskyan tradition, has been called the violation of selectional restrictions. In what sense can crabs be 'cold'? What exactly are the 'brazen chops', and can chops be 'brazen' and 'cold'? How can chops 'wreathe the gloom'? Can gloom be 'tedious'? Can 'philosophic pills' (if we understand what they are, either literally or metaphorically) be 'gloomy'? Syntactically correct, but with a wrong selection of adjectives and nouns modifying other nouns, the verses also display a wrong selection of semantically non-collocating subjects and predicates, as well as predicates and direct objects, all underlain, from the semantic viewpoint, by the ascription of strange properties to individuals and objects. Lear presents us with a piece of poetry which is odd but not fully meaningless. After all, "the gloom of philosophic pills" hovers above our concerns in this paper.

1.2. Logical deviance

This occurs in natural language as a result of two interrelated phenomena: 1) reference failure and 2) problematic truth-valuation. The use of non-denoting terms (more precisely, terms non-denoting in the real/actual world), the foundational feature of any fictional creation, pervades our ordinary discourse as well, in all instances of hypothesizing, daydreaming, describing counterfactual situations, etc.
A sentence such as "The Unicorn has entered her garden", in which the subject position is filled with a non-denoting term, is not uninterpretable; yet the issue whether it qualifies as false (the Russellian answer), as devoid of truth-value, that is, marked with a truth-value gap (the Fregean and Strawsonian suggestions), or as possessing some intermediate truth-value, has been a prolonged philosophical and logical dispute between the proponents and opponents of bivalence and of many-valued systems of truth assignment (cf. Haack 1996). This mediating value, often symbolized as I (intermediate, indeterminate, also undefined in Kleene's terminology, cf. Haack 1996) or ½ in Łukasiewicz's notation, in some non-standard systems has been called meaninglessness (Bochvar, cf. Haack 1996),

non-significance (Routley 1969), or, too severely perhaps, nonsense (Routley 1969). Judging the "Unicorn sentence" in toto as meaningless, non-significant or a piece of nonsense would sound misleading to the ear of a linguist, for whom the sentence obviously makes sense (is interpretable under appropriate contextual conditions). Sentential values of the kind neither true nor false, i.e. truth-value gappy, or else true-in-fiction, true-by-convention, true-under-special-conditions, seem terminologically more acceptable to the natural-language semanticist. We return to the problem of valuation in Sections 3 and 4.

1.3. Experiments with the syntactic structure

Another potentially nonsense-creating mechanism, explored especially in poetry, is play with the syntactic structure, leading to: 1) structural ambiguity (less severe) or 2) grammatical ill-formedness (more severe in its consequences). Whereas the following instance of G. M. Hopkins's "telescoped" (i.e. shortened) syntax is only structurally ambiguous:

(1) Let him Easter in us, be a dayspring to the dimness of us,

where interpretation will vary slightly depending on the syntactic category assigned to 'Easter', the same kind of heavily elliptical "experimental syntax" used in his famous poem "The Windhover" (Pietrkiewicz 1997: 236):

(2) Brute beauty and valour and act, oh, air, pride, plume, here Buckle!

violates the canonical structure of an English sentence and makes interpretation difficult. Our interpretative efforts in such cases will – by default – go in the direction of restoring a regular, full structure to the sentence with the help of the mechanism referred to as recategorization or, broadly speaking, restructuring.

1.4. Vagueness

This is yet another feature of natural language that can add to obscurity and influence interpretation adversely. In the traditional philosophical and logical approach, vagueness has been related mainly

to the occurrence of imprecise predicates, usually adjectives, for which no clearly delimited denotational sets can be pointed out. This fuzziness of extensional sets is usually connected with adjectival predicates that are gradable, e.g. 'long', 'old', 'baldish', 'blue', etc. This kind of predicate-oriented vagueness can be transferred onto referring expressions (secondary vagueness). Apart from the fuzziness of predicates, the indeterminacy of certain referring expressions functioning as subjects or objects may also create unclear, vague expressions, to wit:

(3a) The students have been opposing the smoking regulations

or

(3b) He really dislikes his students

where the nominal expression 'the students', a case of general reference (a non-natural group of individuals), can receive either a collective or a distributive interpretation. On the former, we mean all the students en bloc, as one single body, "all without exception"; on the latter, the students as a group composed of individuals, "some, even many, but not necessarily all". This kind of indeterminacy raises additional queries about the differences in quantification between formal and natural languages. It has been pointed out by several authors dealing with vagueness over recent years that the extension of vague expressions is contextually regulated. This pragmatic aspect of vagueness, calling for proper perspectivization and contextualization, takes us to the next possible source of decreased meaningfulness in human language.

1.5. Lack of proper context

The absence of pragmatic anchoring/grounding, the failure of (re)contextualization, which is a staple epistemic activity in our transactions with the world, may render any expression, proposition, fragment of a text, etc. partly, if not totally, meaningless.
Perspectivalism, so important in the interpretation of vagueness, is in fact a requirement for the interpretation of natural language as a whole, emphatically insisted upon in all pragmatically and cognitively minded schools of modern linguistics.

1.6. Figuration

Figuration, the use of stylistic devices of various sorts on all levels of linguistic description (phonetic, morphological, syntactic, semantic, graphic), apart from its cognitive (epistemic), rhetorical (persuasive) and aesthetic value, may often function as a nonsense-generating mechanism. By way of illustration, we shall soon turn (Section 5) to a discussion of the methodological obstacles in the description of catachresis, understood as a "bold" metaphor, sophisticated but often close to absurdity,3 difficult to interpret or hardly interpretable at all, thus approaching the limits of language as a "sensical" system. The six above-mentioned factors that influence the degree of meaningfulness/meaninglessness in natural language can be treated as major but not exclusive impediments to interpretation (cf. Chrzanowska-Kluczewska 2009: xiv, in print). The discussion of other related phenomena would take us beyond the main topic of our analysis.

2. The contribution of the Polish school of logic to the development of non-standard systems

Since the larger area of natural language lies beyond the scope of explanation offered to us by logical orthodoxy, for several decades linguists have been turning their attention to the modal extension of classical logic, allowing us to work within the framework of Possible-Worlds Semantics (including GTS, viz. the Game-Theoretical Semantics of J. Hintikka, L. Carlson, J. Kulas, G. Sandu and others), as well as to non-standard (deviant) systems of many-valued logics, narrowly and widely free logics, logics of inconsistency, intuitionistic logics, etc., which have been expected to help (but have not done so conclusively) a literary semanticist in solving the recalcitrant issues of truth-in-fiction and denotation failure, among other problems.
It seems proper for a Polish linguist to inquire about the utility of such systems in considering the intricacies of natural semantics, especially in the light of the contribution, sometimes not clearly realized or even neglected, of the Lwów-Warsaw school of philosophical and mathematical logic to the development of non-standard systems.

3 Contrary to nonsense, absurdity is a non-technical term applied in literary semantics and literary criticism.

2.1. The representatives

Jan Łukasiewicz (1878-1956) was a founding father and one of the best-known representatives of the Lwów-Warsaw school, which flourished over almost the first three decades of the 20th century. After World War II he held a professorship at the Royal Irish Academy in Dublin. A student of the well-known Polish philosopher Kazimierz Twardowski, he was also influenced by Meinongian semantics, which admits of inconsistent objects. In 1920 he proposed a three-valued system, one of the first in Europe at the time (E. Post elaborated his many-valued system in the same year). The philosophical reason he adduced against bivalent systems was that they force determinism and fatalism, whereas a third value for future contingents seemed to him philosophically satisfactory. His intermediate value ½ was, however, difficult to interpret (cf. Blackburn 1997: 222, Malinowski 2006: 14). It is usually assumed that this mediating value represents logical possibility between truth and falsehood. His later many-valued calculi, both with a finite number of logical values and with infinitely many, e.g. running along a continuum between 0 (F) and 1 (T), have received a similar interpretation in terms of different degrees of probability. Whether truth-values can thus be transformed into modality is an involved philosophical issue that cannot be discussed here for obvious reasons. Stanisław Leśniewski (1886-1939), a student of Twardowski and a professor of Alfred Tarski's, co-founded the Warsaw branch of the school. His systems of mathematical logic are non-standard, and his ontology has been classified as a first version of free logic. Stanisław Jaśkowski (1906-1965), a member of the second generation of the school and a student of Łukasiewicz, after World War II a professor and rector of the Nicolaus Copernicus University in Toruń, was also a pioneer in developing free and intuitionistic logical systems; he was one of the first to propose a paraconsistent logic.
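Łukasiewicz's three-valued connectives can be written out directly. The encoding below is a standard presentation of his system with truth degrees 0, ½ and 1 (negation as 1 − p, conjunction as min, disjunction as max, implication as min(1, 1 − p + q)); the Python names are, of course, illustrative.

```python
# Łukasiewicz's three values: 1 (true), 0 (false), 0.5 (the intermediate ½).
def l_not(p):    return 1 - p
def l_and(p, q): return min(p, q)
def l_or(p, q):  return max(p, q)
def l_imp(p, q): return min(1, 1 - p + q)

# A future contingent receives ½. Self-implication p → p stays fully true:
assert l_imp(0.5, 0.5) == 1
# ...while the excluded middle p ∨ ¬p is no longer a tautology:
assert l_or(0.5, l_not(0.5)) == 0.5
```

The second assertion is precisely the feature that made the system attractive against fatalism: a statement about tomorrow's sea battle need not already be true or false today.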
Czesław Lejewski (1913-2001), a student of Łukasiewicz, K. Popper and W. V. O. Quine, in the 1950s worked on non-referring expressions and a version of widely free logic, a very creative formal system which he called unrestricted interpretation. Andrzej Mostowski (1913-1975), a student of K. Gödel and Tarski, later a professor of mathematics at the University of Warsaw and the Polish

Academy of Sciences, was concerned with infinite sets and was one of the first to mention empty domains. It is a great pity that none of them really dealt with natural language, except for some ideas of Lejewski that could be appropriated by researchers dealing with fictional discourse (cf. Woods 1974: 68).

2.2. Non-standard logics

Let us briefly return to some of the systems that lay in the focus of attention of the above-listed scientists and which could be of interest to linguists. Free logic owes its name to K. Lambert, who called such calculi "free of ontological commitments", which means mainly freedom from existential requirements on the denotation of terms. More specifically, a narrowly free logic is any quantificational system whose domain of interpretation can be empty and whose terms need not denote; if they do denote, however, they refer to some actually existing individuals. Lejewski's system can be classified as a widely free logic, as it admits both of non-denoting terms and of terms denoting existent and non-existent individuals alike. It is this very feature which is of interest to researchers of fictional discourse, where existent individuals are often mixed with entia imaginationis within the same domain. Neutral logics come close to widely free systems in this respect (cf. Woods 1974: 71). Lejewski envisaged his unrestricted interpretation in such a way that no existential claims need be made about the referent of every sign. A null set can be accommodated as well, and quantification is over signs rather than over elements of a domain (by which he meant referents of the signs); it can thus be represented as "for some sign x" rather than "there exists an x such that…". Intuitionistic logic, originally developed by the mathematicians A. Heyting and L. E. J.
Brouwer, with Kripke (relational) semantics later added to this programme, can be of interest to linguists for two reasons: 1) the Law of Bivalence does not hold in it, only the Law of Non-Contradiction; 2) a truth-value can be temporarily undecidable for lack of evidence, and it is justification rather than truth that counts for propositions. This system is constructivist in the sense that justification, and ultimately truth-valuation, can be construed with incoming information and evidence. M. Dummett (1991, among others) provided a lot of philosophical support for mathematical and logical intuitionism

and made it more accessible to linguists through his writings on the philosophy of language. Logic of inconsistency tolerates inconsistency, in the sense of contradictions, antinomies and paradoxes, in the system. In the version described by N. Rescher and R. Brandom, which is based on Possible-Worlds Semantics, the Law of Excluded Middle (LEM) and the Law of Non-Contradiction are taken to hold only in standard worlds. Otherwise, we can talk about inconsistent worlds, where P and not-P can obtain simultaneously.4 Their theory of non-standard possible worlds and of two modes of world-fusion (schematization and superposition, cf. Rescher and Brandom 1979: 9 ff.) should be of interest to literary semanticists and theoreticians of literature alike, especially in the analysis of, for instance, such transrealistic genres as Science Fiction, Fantasy and Cyberpunk, where contraries and contradictories often coexist. Paraconsistent logics, in turn, reject a relation of logical consequence in which anything can follow from contradictory premises (the orthodox approach). Thus, paraconsistent logics eschew the situation in which the inference relation is trivial. They accommodate inconsistency "in a sensible manner" and allow inconsistent information to be treated as informative. The first formal paraconsistent logic was the discussive (discursive) logic developed by Jaśkowski (1948).

3. The utility of non-standard systems in describing the intricacies of natural semantics

Possible-Worlds Semantics, a branch of model-theoretic semantics and an extension of modal logic, has, ever since its development in the 1960s and 1970s, considerably influenced our thinking about the worlds of literary fiction. Levin's book on metaphoric worlds of 1988 is only one among several voices that saw in the theory of possible worlds an apparatus worthy of implementation in text studies and literary theory.
The possible worlds (pws) of formal logicians, however, have met with opposition among several linguists, wary of their metaphysical status

4 Rescher and Brandom in their book refer also to Jaśkowski's paper (1948), as well as to his 1936 paper on many-valued systems, republished in English translation in McCall (1967).

(though S. Kripke's conceptual approach to pws has always sounded rational and has been the preferred version of pws ontology). Linguists and literary theoreticians were also unhappy about the austerity of logically envisaged alternative worlds, inhabited by (im)possible individuals but otherwise poorly furnished. Certain strong logical requirements imposed on the construction of pws, such as the compossibility of individuals, as well as the rather unclear and highly subjective relation of accessibility among pws, were also criticized. After some decades of "taming" pws for application to natural-language creations in their entire richness, the pws of modal logicians have turned into text worlds (along the lines set by N. E. Enkvist, L. Doležel, P. Werth, among others) that combine authorial worlds with the recreated worlds of interpreters. Thus, text worlds are no longer underdetermined but have their gaps (spots of indeterminacy) filled in the process called concretization by R. Ingarden (1973) and actualization by W. Iser (1976/1980), both described in the phenomenological perspective. They are also perceived as complex structures consisting of embedded subworlds/scenarios. An even more enriched version of text worlds are the discourse worlds (cf. Stockwell 2002) of cognitive poetics, which are dialogic constructions between the creator of a given world and its interpreters, set in the context of their respective actual worlds. In this conception the formal vision of possible worlds as modal constructs of logicians has traversed a long path, a very telling development of a formal device in confrontation with the exigencies of describing the richness of human imagination and its interpretational skills as reflected in natural language.

3.1. Non-standard truth valuation

It has long been recognized that the area of natural language that can be covered by classical bivalent systems is really very limited (unmodalized declarative sentences only).
The large area of modalized expressions, future contingents, non-factual and counterfactual conditionals, and non-indicatives (questions and imperatives, most notably) requires recourse instead to modal logic, deontic logic, erotetic logic (the logic of questions, cf. Groenendijk and Stokhof 1997, or Wiśniewski and Zygmunt 1997), the Speech Acts Theory, etc. And the question of truth-valuation for fictional discourse, despite a prolonged argument, has not been solved conclusively. It looks as if vast areas of natural language

require a three-valued system, despite several opinions on the logicians' part that non-bivalent systems are too complex and inelegant, that certain theorems cannot be proved in them, and that axioms have to be changed. One such severe criticism of many-valuedness was voiced by S. Haack (1996), who referred to the truth-value gap pejoratively as a truth-value glut. Hintikka's Game-Theoretic Semantics (GTS) also tries to adhere to bivalence, and its games are played only between the verifier and the falsifier. Yet it seems that natural language needs some kind of third value as one of its descriptive devices. Indeed, from the point of view of linguistic considerations it does not matter much whether we call it "neither true nor false" and symbolize it as I, or treat it as a real gap. For future contingents, but also for several of our epistemically not clearly grounded statements, we might need an intuitionistic, constructivist conception of temporarily undecidable truth. In many cases we rely also on warranted assertibility, on truth-by-convention. This is the case in our dealings with fiction, where such an additional value as truth-in-fiction (called also metaphoric, fictional, poetic, allegorical truth) has reappeared in the literature. J. Woods (1974)5 at some point also discussed a 4-valued sentence logic with the values T, F, fictruth, ficfalsity. Yet he ultimately concluded (Woods 1974: 127) that "neither classical logic nor many-valued systems are well-suited to the description of fictionality" and suggested the so-called olim operator O ("once upon a time", "only in fiction"), prefixed to all sentences within a fictional text and followed by a bivalent valuation. And indeed, many-valued systems above trivalence seem really redundant, as is the case with fuzzy logics, where for natural language we find no good interpretation for truth/falsehood measured in fractions or degrees.
The acceptance of Possible-Worlds Semantics may spare us trouble with valuation, since truth-values can now be assigned bivalently relative to particular worlds (this is the solution in Hintikka and Sandu 1994, where metaphoric truth, as distinguished from ordinary truth, is rejected).
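The world-relative alternative can be pictured in a few lines. The sketch below (my own illustration, with an invented mini-model) keeps bivalence at every world and renders a Woods-style olim operator as evaluation at a designated fictional world:

```python
# Bivalent valuation relativized to worlds: each atom is plainly True or
# False at every world, though its value may differ from world to world.
worlds = {
    "actual":  {"holmes_is_a_detective": False},
    "fiction": {"holmes_is_a_detective": True},   # the world of the stories
}

def true_at(world, atom):
    """Plain two-valued truth of an atom at a world."""
    return worlds[world][atom]

def olim(atom):
    """'Once upon a time' / 'only in fiction': evaluate at the fictional world."""
    return true_at("fiction", atom)

assert true_at("actual", "holmes_is_a_detective") is False
assert olim("holmes_is_a_detective") is True  # truth-in-fiction without a third value
```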

⁵ Although Woods's book appeared over three decades ago, it is still one of very few publications by philosophers and logicians wholly devoted to problems encountered within natural language, and it remains of unfailing interest to literary semanticists.

3.2. Game Semantics

With its roots in the dialogische Logik developed by P. Lorenzen and K. Lorenz, Game Semantics found its model-theoretic, possible-worlds version in the GTS of Hintikka and his collaborators. Hintikka's system, applicable basically to the propositional calculus, is based on the idealized conception of a two-person full-information game, in which all the previous moves of the players are known to them and strategies are stable. He also utilizes games of "seeking and finding", especially in connection with quantification. An interesting application of GTS to natural language discourse was proposed by Carlson (1983) under the name of dialogue games. I find this aspect of GTS the most interesting for the discussion of ordinary dialogues, and of fictional creation as well, though it is still in need of further development. In my friendly polemic with Hintikka's GTS (Chrzanowska-Kluczewska 2004), I propose a "softened" version of the conception of language-game,⁶ closer to the Wittgensteinian ideas and devoid of the formal idealism that is, alas, unattainable in natural language.

3.3. Logics of inconsistency and free logics

Such logics, as mentioned above, hold attraction for descriptions of fictional discourse, founded especially on myths, legends and fantasy. In the non-natural worlds invoked in such contexts, remote from the viewpoint of accessibility and peopled with individuals from mixed domains (existent and nonexistent, possible and impossible alike), contradictions are bound to appear constantly, natural and logical laws are violated, etc. In turn, the discussive logic of Jaśkowski holds its import for ordinary discourse analysis, where the assertions put forward in a discourse by particular participants, even if self-consistent, are often inconsistent with the opinions cherished by other interlocutors. A discussive logic is, then, an important attempt at the formal modelling of everyday conversations, debates, arguments, etc.
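The point of such paraconsistent modelling can be made concrete with the three values of Priest-style LP (a system related in spirit, though not identical, to Jaśkowski's discussive calculus; the encoding below is my own sketch): a locally "glutty" assertion is tolerated without every other claim becoming assertible.

```python
# LP values: T (true), B (both true and false), F (false).
# T and B are "designated", i.e. assertible.
T, B, F = "T", "B", "F"
DESIGNATED = {T, B}
ORDER = {F: 0, B: 1, T: 2}

NEG = {T: F, B: B, F: T}   # negation maps a glut to a glut

def conj(a, b):
    """Conjunction takes the lower value in the order F < B < T."""
    return a if ORDER[a] <= ORDER[b] else b

p, q = B, F   # p: a contested claim in the discussion; q: an unrelated falsehood

assert conj(p, NEG[p]) in DESIGNATED   # the contradiction p-and-not-p is tolerated
assert q not in DESIGNATED             # ...yet it does not make everything assertible
```

Classically, no valuation satisfies p and not-p at all, so a contradictory premise set entails everything; here the contradiction is satisfied locally without trivializing the rest, which is exactly the behaviour needed for modelling mutually inconsistent discussion participants.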
Of course, consistency is still a default value, but this kind of modelling allows for a certain amount of inconsistency in the set. In more serious cases, "we may view the logic of inconsistency as a functional equivalent of catastrophe theory in logic in its effort to handle gravely anomalous situations in a systematic and logically cogent way" (Rescher and Brandom 1979: 42). An extremely promising direction in the development of paraconsistent systems is that of the so-called adaptive logics, which adjust themselves to the situation as new information becomes available. Since the set of premises in real life changes dynamically, previously inferred consequences often have to be withdrawn and substituted with new inferences. Such logics try to model the dynamism of human reasoning and the way our conclusions undergo constant change. This is again the problem of the continuous recontextualization that happens in natural language.

⁶ In private correspondence Prof. Hintikka very rightly pointed out to me that the stance of a literary semanticist assumed by me reflects a strongly "metaphorical" approach to the conception of game, whereas GTS is a scientifically-oriented system with its own inner, system-induced requirements to keep. This case clearly indicates the need for a platform on which logicians and linguists could try to understand each other.

4. The case of catachresis

The unpredictability of natural language and the unbounded inventiveness and creativity of natural language users, the fact that we are often "symbol-misusing animals", to use K. Burke's apt saying, disclose the limitations of particular methodologies in stylistics/poetics. Let us now pass on to a brief overview of different methodological solutions to the problem of metaphor construal and interpretation from the perspective of catachrestic constructions. Catachresis,⁷ a difficult, poetic, "one-shot" metaphor often verging on nonsense and absurdity (the Miltonian "blind mouths", the Shakespearean "to elf one's hair", the Learian "tedious gloom of philosophic pills", T. Peiper's vanguard "typhoidal silence", or "obłąkany przez błękit" of Polish linguistic poetry, where the phonetic effects override the semantic meaning and which could be translated as "muddled by mud" rather than "muddled by the azure"), forces us to attempt applying various methodological paradigms to explain its workings.
The Aristotelian Paradigm, also called the Similarity/Comparison/Substitution View of Metaphor, has not lost its import so far, as all the remaining theories are more or less explicitly founded on it. The fact that in a metaphorical interpretation we always try to look first for similarity or analogy makes it a default procedure. Though this approach has often been referred to as the Rhetorical Theory of Metaphor and criticized for its purely linguistic and ornamental bias, some researchers think that Aristotle may well have been conscious of the cognitive foundation of metaphor, in that it reorganizes our conceptual structure by making a previously unnoticed similarity salient. It is also worth emphasizing that the Aristotelian Paradigm is not completely static, since the very verb metapherein indicates movement (to 'carry over', 'transfer', 'change'), as does its Latin equivalent translatio (and the Polish przenośnia as well). Thus not only the widely known cognitive model of G. Lakoff, M. Johnson and M. Turner (1980, 1989) can claim to be dynamically organized. The Rhetorical Model is also directional; Aristotle claimed that metaphors are reversible but that one direction is more interesting and "bolder". The Aristotelian Theory perceives metaphor as a puzzle to be solved. Consequently, the traditional schools of stylistics usually treat metaphors as one or two combined predications with at least one element missing. The idea that certain connotations of metaphorical terms should be suppressed (what the cognitive school of linguistics calls backgrounding) also has roots in this theory. Yet catachrestic metaphors do not fare particularly well under this description. True, they are enigmas to their interpreters, but the search for the similarities apparently underlying their constitutive parts does not always end in success. More often than not, catachresis is founded on the disparity rather than the similarity of its building blocks.

The Interactive (Tensive) Theory of Metaphor is a better methodological instrument for dealing with catachrestic constructions: according to I. A. Richards (1936: 127), a metaphor is based in equal measure on the similarity and the difference of the two concepts brought together.

⁷ The case of catachresis I am discussing here corresponds to Catachresis Two in Chrzanowska-Kluczewska (forthcoming).
Richards highlights the fact that difference may at times eclipse the importance of resemblance. W. Nowottny (1962: 225) goes even further, claiming that difference is a necessary trait of metaphor, for it simply makes it more interesting. No doubt, such metaphorical extremes are especially clearly visible in catachresis, where the degree of tension between two disparate concepts is considerably heightened in relation to more conventional and less challenging metaphors. Nowottny also stresses the emotive aspect of metaphor, which can sometimes override its conceptual import. And, truly, several catachreses, through their incongruity, play upon our emotions more than on our "cool

reason". M. Black (1962: 227) also stresses the dynamic side of the reader's response to non-banal metaphors. Consequently, catachresis may count not only as the most tensive metaphor but also as the most emotionally loaded and the most stimulating for the interpreter. Still, this model fails to capture certain aspects of catachresis.

The Possible Worlds Theory of Metaphor, dating back to the 1980s, when S. Levin (1988) started the discussion of figurative language in the framework of Possible-Worlds Semantics, claims in its main tenet that we cannot understand metaphors correctly against the background of the real/actual world but have to invoke the notion of possible (alternative, fictional) universes of discourse. Since every metaphor is marked with a certain degree of deviance from accepted norms and logical rules, the main issue is to what extent the possible worlds postulated for the interpretation of metaphor should differ from the actual world (Hintikka and Sandu 1994). Catachresis would definitely require a distant world for its interpretation. Interestingly, Hintikka and Sandu's approach to metaphor opens up the possibility of a new branch of linguistic analysis that could be called Possible-Worlds Stylistics.

The Cognitive View of Metaphor, which is in fact a whole bundle of related theoretical variants, relies, apart from the classic tripartite theory of metaphor mentioned before (two domains plus the projecting/mapping mechanism), also on the important notion of the blending of mental spaces (Fauconnier 1994).⁸ Mental spaces, often mistakenly identified with possible worlds, which they are not, play a crucial role in the construal of meanings. Although blending was initially introduced to show how the construal of conventional meanings proceeds, with time it has been recognized as a useful procedure for explaining the process of expressing novel conceptualisations through language.
The conceptual blending of input spaces produces a new quality, so-called blended spaces. The notion of the supporting generic space functions as a sort of tertium comparationis. S. Coulson (2002) enriched the idea of conceptual blending with the notion of the semantic leap: a sudden shift in the creation and interpretation of new mental spaces. Although her suggestion goes in the direction of perceiving semantic leaps as sudden frame-shifting in the creation of humour, I think it might be extended to include some surprising, shocking, unexpected figurative meanings as well. Catachresis is an exemplar of such an unusual juxtaposition, leading to "jumps" in the conceptual operations concerned with the production and reception of "bold" linguistic expressions.

⁸ It would be interesting to compare the cognitive notion of blending with Rescher and Brandom's two modes of world-fusion (schematization and superposition) mentioned in 2.2.

Language-Games Theory/Game Semantics (with its roots in Ludwig Wittgenstein's philosophical writings and in the formally conceived GTS) can prove very useful in approaching those aspects of natural language that possess a playful or gamesome, in a word ludic, character. In my monograph devoted to language-games played by and through the language of fiction (Chrzanowska-Kluczewska 2004), I postulate a taxonomy of games according to the most prominent player, that is, into: a) semantic games of the text-producer, b) pragmatic games of the text-receiver (reader, critic, translator), and c) games of the text/language itself (stylistic figures and rhetorical devices included). Sub specie ludi, catachresis can thus be studied from different perspectives, according to the different strategies assumed by the players. But apart from regular games in which certain rules must be obeyed, I also postulate the category of pathological games. Among them I mention metaphor, which has been generally accepted as a powerful conceptual device but which at times shows its darker side. The idea that figures of language can also serve as a tool of deception has been recurrent in the philosophy of language, to mention only J. Locke, F. Nietzsche or J. Derrida. Derrida brought into focus the free play of signs in natural language, best seen in what he described as unlimited semiosis, in which all signs refer only to one another in the deceitful play of anasemia, where meaning becomes lost and language verges on the absurd.
The unconstrained interaction of denotations and connotations found in loose associations and nonsensical combinations of signs (frequently catachrestic), accompanied by the rejection of extra-linguistic reference, has been described by some critics as a vulgar version of deconstruction, a skewed interpretation of the Derridean description of the semiotic potential of language (cf. Derrida 1967/1999). The language-games theory (in its "soft" version) is the only one among the models discussed so far that is capable of encompassing, under the label of deviant games, the methodological considerations on the border zone of metaphoricalness, to which catachresis in its extreme version obviously belongs.

To conclude, different aspects of catachresis can be accommodated by the various models (including, importantly, Possible-Worlds Semantics and Game Semantics, which capture properties not dealt with by the other theories) as long as the figure does not fade into sheer nonsense. The less striking and challenging it is, the easier it appears to be tackled by the descriptive tools of the particular approaches. The fringes of language occupied by near-nonsensical or totally absurd expressions have always been of marginal concern to linguistic models. For a brief span deconstructionists and postmodernists tried to make nonsensicality a paradigm of human language. In this they have generally failed, but they managed to bring the issue of nonsense⁹ into the focus of attention of linguists and literary theorists.

5. Conclusion

Our, of necessity, very cursory considerations return us to the question whether natural language is partly founded on classical and partly on non-classical logics. This option, together with the alternative that natural language as a whole should be treated non-standardly, as discussed by Dummett (1998: 108 ff.), also reminds us of McCawley's dream, voiced in the heyday of Generative Semantics, of construing a natural-language logic. I would gladly repeat his words here: "I will assume that the linguist's semantic analysis and the logician's have the same subject matter and that the linguist's goals do not conflict with the logician's" (McCawley 1981: 2). Despite superficially insurmountable differences between formal and natural languages, in the name of consilience, which remains a valuable unificationist project for the sciences, I am trying to make a somewhat shy appeal to logicians to turn their attention to human discourse as well, even at the cost of partly resigning the descriptive simplicity and elegance so coveted in formal systems.
We do have a "minimalist" solution to this problem, of course, which is to give up the possibility of a reasonable interface between logical and linguistic description, and some contemporary linguistic schools have been moving in this direction. Still, I consider it a rather unambitious project.

⁹ In our considerations throughout this paper we have not discussed nonsense as related to the Wittgensteinian conception of a totally private language (cf. Mulhall 2008). This is a separate issue, fascinating in itself, but far beyond the limited scope of our analysis.


References

Black, Max 1962. Models and Metaphors. Studies in Language and Philosophy. Ithaca, NY: Cornell University Press.
Blackburn, Simon 1994. The Oxford Dictionary of Philosophy. Oxford: Oxford University Press (Polish edition: 1997. Oksfordzki słownik filozoficzny, ed. by Jan Woleński. Warszawa: Książka i Wiedza).
Carlson, Lauri 1983. Dialogue Games. An Approach to Discourse Analysis. Dordrecht: D. Reidel.
Chrzanowska-Kluczewska, Elżbieta 2004. Language-Games: Pro and Against. Kraków: Universitas.
Chrzanowska-Kluczewska, Elżbieta (2009, in print). What Is (Non)Sense? In: Elżbieta Chrzanowska-Kluczewska and Grzegorz Szpila (eds.), In Search of (Non)Sense. Newcastle upon Tyne: Cambridge Scholars Publishing, ix-xvii.
Chrzanowska-Kluczewska, Elżbieta (forthcoming). Catachresis – a Metaphor or a Figure in Its Own Right? In: Monika Fludernik (ed.), Metaphor After the Cognitive Revolution.
Coulson, Seana 2002. Semantic Leaps: Frame-Shifting and Conceptual Blending in Meaning Construction. Cambridge: Cambridge University Press.
Derrida, Jacques 1967. De la grammatologie. Paris: Minuit (Polish version: 1999. O gramatologii. Warszawa: Wydawnictwo KR).
Dummett, Michael 1991/1994. The Logical Basis of Metaphysics. Cambridge, Mass.: Harvard University Press (Polish edition: 1998. Logiczna podstawa metafizyki. Warszawa: Wydawnictwo Naukowe PAN).
Fauconnier, Gilles 1994. Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge: Cambridge University Press.
Groenendijk, Jeroen and Martin Stokhof 1997. Questions. In: Johan van Benthem and Alice ter Meulen (eds.), Handbook of Logic and Language. Amsterdam & New York: Elsevier; Cambridge, Mass.: MIT Press, 1055-1124.
Haack, Susan 1974/1996. Deviant Logic, Fuzzy Logic. Beyond the Formalism. Chicago & London: The University of Chicago Press.
Hintikka, Jaakko and Jack Kulas 1983. The Game of Language. Studies in Game-Theoretical Semantics and Its Applications. Dordrecht: Reidel.
Hintikka, Jaakko and Gabriel Sandu
1994. Metaphor and Other Kinds of Non-Literal Meaning. In: Jaakko Hintikka (ed.), Aspects of Metaphor. Dordrecht, Boston, London: Kluwer, 151-187.
Hornstein, Norbert 1995. Logical Form. From GB to Minimalism. Oxford & Cambridge, Mass.: Blackwell.
Ingarden, Roman 1931/1973. The Literary Work of Art. Trans. by George G. Grabowicz. Evanston: Northwestern University Press.

Iser, Wolfgang 1976/1980. The Act of Reading. A Theory of Aesthetic Response. Baltimore & London: The Johns Hopkins University Press.
Jaśkowski, Stanisław 1948. Un calcul des propositions pour les systèmes déductifs contradictoires. Studia Societatis Scientiarum Torunensis 1, 55-57.
Lakoff, George and Mark Johnson 1980. Metaphors We Live By. Chicago: The University of Chicago Press.
Lakoff, George and Mark Turner 1989. More than Cool Reason. A Field Guide to Poetic Metaphor. Chicago & London: The University of Chicago Press.
Lambert, Karel 2003. Free Logic: Selected Essays. Cambridge: Cambridge University Press.
Lejewski, Czesław 1954-55. Logic and Existence. British Journal for the Philosophy of Science 5, 104-119.
Leśniewski, Stanisław 1992. Collected Works, ed. by Stanisław J. Surma, Jan T. Srzednicki and D. J. Barnett. Dordrecht & Boston: Kluwer.
Levin, Samuel R. 1988. Metaphoric Worlds. Conceptions of a Romantic Nature. New Haven & London: Yale University Press.
Malinowski, Grzegorz 2006. Logiki wielowartościowe [Many-Valued Logics]. Warszawa: Wydawnictwo Naukowe PWN.
Marciszewski, Witold (ed.) 1988. Mała encyklopedia logiki [Small Encyclopedia of Logic], 2nd ed. Wrocław: Zakład Narodowy im. Ossolińskich Wydawnictwo.
McCall, S. (ed.) 1967. Polish Logic: 1920-1939. Oxford: Oxford University Press.
McCawley, James D. 1981. Everything that Linguists Have Always Wanted to Know about Logic *but were ashamed to ask. Chicago: The University of Chicago Press.
Mulhall, Stephen 2008. Wittgenstein's Private Language. Grammar, Nonsense, and Imagination in 'Philosophical Investigations', §§ 243-315. Oxford: Clarendon Press.
Nowottny, Winifred 1962. The Language Poets Use. London & New York: Oxford University Press.
Pietrkiewicz, Jerzy 1997. Antologia liryki angielskiej 1300-1950 [Anthology of English Lyric Poetry 1300-1950, dual-language edition]. Warszawa: Instytut Wydawniczy PAX.
Rescher, Nicholas and Robert Brandom 1979. The Logic of Inconsistency.
A Study in Non-Standard Possible-World Semantics and Ontology. Totowa, NJ: Rowman and Littlefield.
Richards, Ivor A. 1936. The Philosophy of Rhetoric. Oxford: Oxford University Press.
Routley, Richard 1969. The Need for Nonsense. Australasian Journal of Philosophy 47.3, 367-384.
Stockwell, Peter 2002. Cognitive Poetics. An Introduction. London & New York: Routledge.
Wiśniewski, Andrzej and Jan Zygmunt 1997. Erotetic Logic, Deontic Logic and Other Logical Matters. Logika tom 17 [Logic Vol. 17]. Wrocław: Wydawnictwo Uniwersytetu Wrocławskiego.

Woods, John 1974. The Logic of Fiction. A Philosophical Sounding of Deviant Logic. The Hague & Paris: Mouton.

Internet sources

Curriculum vitae of A. Mostowski. Accessed 7 December 2008 at: http://www.springerlink.com/content/j73726205p274414/
Free Logic. Accessed 11 May 2009 at: http://en.wikipedia.org/wiki/Free_logic
Game Semantics. Accessed 11 May 2009 at: http://en.wikipedia.org/Game_semantics
Intuitionistic Logic. Accessed 11 May 2009 at: http://en.wikipedia.org/Intuitionistic_logic
Jaśkowski, Stanisław. Accessed 7 December 2008 at: http://en.wikipedia.org/wiki/StanislawJaskowski
Lear, Edward. Accessed 21 December 2008 at: http://en.wikipedia.org/wiki/Edward_Lear
Lear, Edward. Cold Are the Crabs. Accessed 21 December 2008 at: http://ingeb.org/songs/coldaret.html
Lejewski, Czesław. Accessed 7 December 2008 at: http://en.wikipedia.org/wiki/Czeslaw_Lejewski
Paraconsistent Logic. Accessed 11 May 2009 at: http://plato.stanford.edu/entries/logic-paraconsistent
Stanford Encyclopedia of Philosophy on-line. Available at: http://plato.stanford.edu/entries

Tadeusz Ciecierski
University of Warsaw
[email protected]

Varieties of Context-Dependence*

Abstract: An adequate theory of context-sensitivity must take into account the fact that several properties of linguistic signs are somehow context-dependent. In this paper I begin by sketching the functional approach to context-sensitivity, i.e. the account which combines the above-mentioned observation with the idea of the parameterization of context. Next, I briefly compare the functional taxonomy of contexts with two other classificatory approaches presented in the recent philosophical literature. Finally, three concepts of derivative context-dependence are introduced and briefly discussed.

0. Introduction

By the value of a semiotic property of an expression x we mean an entity which stands in a given semiotic relation to the expression x (and, sometimes, to the expression x and its user). This use of the term "semiotic value" consciously departs from its functional sense: we are not assuming that an expression stands in a given semiotic relation to at most one entity. To give some examples: the value of the semiotic property of having an intension, as applied to the sentence "Chicago is large", is the proposition that Chicago is large; the value of the semiotic property of conversationally implicating something, as applied to the sentence "John has three daughters", is (among other things) the proposition that John does not have four daughters; the value of the semiotic property of having an illocutionary force, as applied to the sentence "I hereby promise to spare your life", is that of having the illocutionary force of a promise; the value of the semiotic function of expressing the state of the utterer, as applied to the sentence "I am not excited, damn you", would be the utterer's state of being angry, etc. I will assume throughout that most semiotic functions can be described in such a relational way.

The purpose of what follows is to present a certain theoretical approach to the context-sensitivity of linguistic signs and their semiotic properties. The intuitive motivation for this approach consists of two truisms about expressions. First, linguistic signs have many types of semiotic¹ – i.e. syntactic, semantic and pragmatic – properties. Second, many of those properties are somehow context-dependent, i.e. there are certain expression-types such that the values of their semiotic properties vary depending on the circumstances of their use. My aim in this paper is to sketch some theoretical concepts which can be used in an analysis of various types of context-sensitivity. My plan is, first, to make some general remarks about the concept of context-dependence and, second, to introduce three notions of derivative context-sensitivity which one may consider useful tools in analyzing the many varieties of context-dependence.

For more than one reason, the issues discussed in this paper are related to the fashionable discussion between contextualism and minimalism. Nevertheless, I would like to stress that this paper is not a voice in this discussion and, in particular, should not be interpreted as a defense of a contextualist position of any sort. Its origins lie in rather more general considerations concerning the notion of context-dependence.

* I am grateful to Witold Hensel, Joanna Odrowąż-Sypniewska and Piotr Wilkin for helpful comments on an earlier draft of this paper.

¹ Below, I am using the term "semiotic" in its traditional Morris-Carnap sense. Throughout this essay the terms "semiotic/syntactic/semantic/pragmatic property" and "semiotic/syntactic/semantic/pragmatic function" will be used as synonyms.

1. Context-sensitivity

The simplest possible examples of distinct context-sensitive semiotic functions are those of truth value (the extension of sentences) and truth conditions (the intension of sentences). There is an infinite number of sentence-types which have different truth conditions and truth values when used on various occasions; narrowly conceived indexical

sentences provide an excellent case in point. Nonetheless, those two examples are only a drop in the ocean of others. Consider the sentence:

(1) Typhoid fever is a terrible sickness.

Among its semiotic properties one may consider a particular syntactic structure (with a variable-binding operator [the universal quantifier] and a compound sentential function), intensional and extensional structure,² illocutionary force (usually that of assertion, but the sentence can also be used with other illocutionary forces), potential implicatures and presuppositions, and various kinds of pragmatically conveyed information (e.g. that, in the normal assertive use, the speaker believes what is said by (1)). The list, of course, is far from complete. Some of those properties can be assigned to sentence-types, others to particular uses (or utterances) of sentence-types. The syntactic structure of (1) is an example of the first sort. On the other hand, unless the sentence is interpreted as tenseless, its truth conditions and truth value, as well as its intensional and extensional structure, are features of particular utterances of (1). If (1) is interpreted as tenseless, then all those features can (probably derivatively) be assigned to the sentence-type also.³

² "Intensional structure" in the sense of C. I. Lewis (1943-44) and Carnap (1947), "extensional structure" in the sense of Ajdukiewicz (1958); for a more comprehensive formulation see also Ajdukiewicz (1967a) and (1967b), reprinted in Pelc (1979). Both ideas have recently been reinvented by proponents of so-called "structured propositions".

³ There is an interesting general problem noted by authors such as Bar-Hillel (1954) and Prior (1969). One may either take a semiotic property of a given expression to be a feature of expression-types, provided that in each circumstance the expression has this property unchanged (and treat all the semiotic properties as primarily features of utterances or uses), or – like Strawson – claim that there is a categorical difference between things which can be said about expression-types and about utterances of expression-types. This problem deserves detailed and separate analysis, which must be postponed for another occasion. For the sake of this essay, I will assume the correctness of the first approach.

If (1) is

given a tenseless interpretation, all of these properties except syntactic structure are context-sensitive.

An important theoretical approach to context-sensitivity (dating back to the work of Bar-Hillel, Scott and Montague) is based on the idea of a parametrization of context. Every linguistic expression somehow determines the set of parameters that must be specified in order to assign to the expression a specific semiotic property (of which its intension – the proposition expressed – is a paradigm example). Thus, in the case of (1), if the semiotic functions in question are those of intension, extension, and intensional and extensional structure, the set of parameters simply consists of the time of utterance. If other functions are taken into account, parameters such as the speaker's beliefs and presuppositions must also be included. It is convenient to think about this way of representing contexts in terms of the general notion of a maximal pragmatic context: the set of all possible parameters that may determine a particular type of semiotic property of an expression. The paradigmatic example is once again that of intension, for which the maximal pragmatic context consists of parameters like the speaker, the addressee, the time of utterance, the place of utterance, the pointing gesture (directing intention) of the speaker, shared beliefs of the participants of the conversation, the co-text of the utterance, etc.⁴

⁴ One may imagine languages in which parameters like the height above sea level or the air pressure could also be relevant. The fact that only some aspects of the communicative situation are systematically exploited by actual languages suggests a deep connection between human cognitive organization and linguistic competence.

Thus, the sentence:

(2) I am glad to see you.

selects as the relevant parameters of the maximal pragmatic context the speaker, the addressee and the time of utterance, while the sentence:

(3) I am glad to see Bob, Gary and Hillary on my list.

selects the speaker, the time of utterance and the default subject-matter. In both cases, the actual context determines the values of the selected parameters, which – together with widely conceived linguistic rules and fixed facts about the utterance – determine the proposition expressed (the intension of the sentence). The final determination may be more or less indirect, depending on the nature of the linguistic rules involved. The rule for the first-person pronoun identifies the referent with the value of the speaker coordinate, while the rule for adverbs like "yesterday" requires a more complicated "computation" of the reference. Thus: (4) expresses the proposition that Barack Obama is glad to see Dmitry Medvedev on the 7th of July 2009, while: (5) expresses the proposition that, on the 20th of January 2009, Barack Obama is glad to see Bob Gates, Gary Locke and Hillary Clinton on the list of the members of Obama's cabinet.

A similar analysis can be applied to other cases. Let us take, for instance, particularized implicatures. Let us imagine that sentence (2) is uttered to some participant in the New York City Marathon by his coach, and that the participant finished the marathon in last place. It is therefore plausible to assume that the Maxim of Quality is somehow broken and that (2) must conversationally implicate something. But what


exactly is implicated is a complicated and highly context-dependent matter. It may be irony, it may be an allusion to the fact that the coach bet some money on the addressee of (2), and it may be both. Moreover, the way in which the sentence is uttered also determines the attitude of the coach towards the addressee of (2). Accordingly, the utterance of (2) may pragmatically entail that the coach is angry, and that the performance of the addressee is the cause of his anger. It is therefore not only figurative content that is context-dependent, but also the expression of the mental state of the speaker. In all those cases, factors such as the speaker's beliefs and more or less conventional ways of expressing them can play the role of a relevant contextual parameter.

2. Towards a functional taxonomy of contexts

The approach to context-dependence sketched above can be labeled "functional". Instead of speaking of context simpliciter, it speaks of context qua determinant of a particular semiotic property (e.g. context qua determinant of extension, context qua determinant of intension, context qua determinant of intensional structure, context qua determinant of illocutionary force, etc.). Kinds of context are distinguished in virtue of being relevant to the determination of a specific semiotic property. Since contexts are used by agents to determine particular properties of an utterance, proponents of the functional approach may speak equivalently about the typology of uses of context as well as the typology of contexts. This account contrasts with non-functional approaches, which distinguish different types of contexts and context-dependence by recourse to criteria which are not necessarily related to the nature of a particular semiotic feature of linguistic signs.
Such non-functional criteria may appeal to the distinction between systematic and unsystematic kinds of context-dependence, or to aspects of context exploited in the interpretation (in this sense one may speak, for example, about personal, time- and place-relative types of contexts). Most work on context-sensitivity combines functional and non-functional approaches (rightly, I believe). The subtle differences between particular theories of context-sensitivity often consist in the emphasis they put on functional and non-functional properties of language use.

Let me now characterize briefly two approaches to the issue of the taxonomy of contexts – one basically functional and the other basically non-functional. As we shall see, both contain important observations which must be taken into consideration by every adequate theory of context-dependence.

The best example of a mostly functional approach to context-dependence can be found in John Perry’s work (see for example: Perry 1997, 1998 and 2001). Perry draws a distinction between three types of contexts (or uses of context), labeled “pre-semantic”, “semantic” and “post-semantic”. By a “pre-semantic use of context” Perry means whatever it is that we use to “figure out [in] which meaning a word is being used, or which of several words that look or sound alike is being used, or even which language is being spoken”. To this list one may add the syntactic structure of an uttered expression. In this sense, the interpretation of the utterance:

(6) Lenin figured out Stalin a lot faster than Trotsky.

depends on our previous determination of the logical form (deep structure) of sentence (6). In this instance, we contextually use historical knowledge about the so-called Lenin’s Testament to determine the intended syntax of (6).

By a “semantic use of context” Perry means non-accidental uses of the situation in which the utterance occurs, i.e. uses forced by the meaning or function of particular words or subutterances. This type of context use directly determines the content of an utterance – indexical, anaphoric and cataphoric uses of particular words provide an excellent case in point.[5]

[5] Perry also distinguishes between automatic/intentional and wide/narrow types of semantically used contexts. Those distinctions are not relevant to the present discussion.

A post-semantic use of context (also, perhaps more aptly, called a “content-supplemental use of context”; cf. Perry 2001: 44-50) is characterized by Perry as the case in which “we lack the materials we need for the proposition expressed by a statement, even though we have identified the words and their meanings, and consulted the contextual factors to which the indexical meanings direct us”. This concept is used by Perry to isolate the phenomenon of unarticulated constituents – cases in which content is determined by context but no analogous determination is present at the level of the relevant linguistic items, which simply do not occur in the utterance (a large part – if not the whole – of that phenomenon falls under the traditional heading of ellipsis). Utterance (5) could be analyzed in this manner – a list is usually a list of something, and some presuppositions made by participants of the communicative situation in which (5) occurs determine that it is a list of members of Obama’s cabinet rather than a list of people who accepted Obama’s invitation to a private party celebrating his presidential nomination.

This interesting typology of contexts is mostly functional, because the difference between semantic and post-semantic uses of context applies to the way in which content is determined rather than to the type of semiotic property determined (in both cases it is the proposition expressed). But one may easily transform this typology into a functional one without losing the insight of Perry’s observations. The following table provides an illustration of the direction in which the functional characteristics might go – it starts with the semiotic property which is supposed to be context-sensitive, then groups together several sorts of dependence under a common heading:

Semiotic function determined | Type of the use of context
Syntactic structure (e.g. “Flying planes can be dangerous.”) | Pre-semantic use of context
Meaning (e.g. “John has many fashionable habits.”) | Pre-semantic use of context
Language belonging (e.g. “CAR TO CAR.”[6]) | Pre-semantic use of context
Proposition expressed | Semantic use of context (e.g. “I am hungry.”); post-semantic use of context (e.g. “Mary began a book.”)

Table 1: A Functional Perspective on the Contextual Zoo

We have to be careful in developing the typology sketched in the table. One may expect, for example, that the notion of a post-semantic use of context should be naturally extended to conversational implicatures (cf. Recanati 2007: 7), which – according to the classical account – are calculated on the basis of literal meaning (what is said). Indeed, in some sense, implicatures are decoded post-semantically, and the context-sensitivity of figurative content must be in this sense post-semantical. But it is not post-semantical in the sense in which the determination of unarticulated constituents is. This is because the basis of computation – the literal meaning itself – is a category covering both the content determined in a semantical way and the content determined in a post-semantical manner. It is the literal content of an utterance that is contextually determined both in the case of classical indexicals (a semantic use of context) and unarticulated constituents (a post-semantic use of context). Although the contextual determination of both unarticulated constituents and figurative content is performed after the semantic interpretation of an utterance has been completed, the sense of “post-semantical” is different in each case. Paraphrasing Russell, we may say that the case of implicatures is a new beast for our contextual zoo. Let us therefore call this new beast a “pragmatic use of context”.

[6] In writing, the Polish word “car” means “tsar”, while “to” translates (in this context) into “is” – so the whole (written) expression means: “A tsar is a tsar”. The difference between the Polish and English readings is recognizable only in pronunciation.

Another interesting example is that of Carnapian intensional structure.[7] The intensional structure of an expression is the structure isomorphic to its (deep) syntactic structure, with the intensions of atomic (terminal) elements substituted for their corresponding linguistic items. This property of expressions is sometimes determined by its context-sensitive syntax,[8] sometimes by its context-sensitive content,[9] and, in still other cases, by both of them. This shows that intensional structure conceived as a semiotic feature of expressions cuts across the distinction between pre-semantic and semantic uses of contexts. In light of this, our table should evolve and take the following form:

[7] A similar analysis applies to the extensional structure in Ajdukiewicz’s sense.
[8] Let us ignore for a while the important question of the relation between this semiotic feature and its compounds.
[9] The question whether the content determined in the latter case could be the result of the post-semantic use of context is controversial. A positive answer would probably lead to the consequence that the syntax of an expression may be (indirectly) determined by the post-semantically used context. This result may be conceived as an argument against including the semantic values of unarticulated constituents in the intensional structure.

Semiotic function determined | Type of the use of context
Syntactic structure | Pre-semantic use of context
Meaning | Pre-semantic use of context
Language belonging | Pre-semantic use of context
Intensional structure (e.g. “He believes that flying planes can be dangerous.”) | Pre-semantic and semantic uses of context
Proposition expressed | Semantic use of context; post-semantic use of context
Proposition implicated | Pragmatic use of context (e.g. “I am glad to see you.”)

Table 2: A Functional Perspective on the Contextual Zoo with New Beasts

This evolutionary stage of our chart is of course only a first step in the direction of an exhaustive functional typology of contexts. Its aim is only to point in the direction in which such a typology should go.

Let me now turn to a second taxonomy of contexts proposed in the recent philosophical literature. In his stimulating book Reflecting the Mind, Eros Corazza proposed to distinguish narrowly and broadly conceived context (Corazza 2004: 54-58). By narrow context Corazza means that part of the circumstance in which an utterance occurs which helps the participants of the communicative situation to understand widely conceived indexical expressions used in the utterance. There are two types of narrow context: indexical and demonstrative. The concept of indexical context applies to the non-intentional or speaker-independent features of the communicative situation – the place, the time, the agent, etc. The concept of demonstrative context applies to the speaker-dependent (or intentional) features of the communicative situation, like (widely conceived) pointing gestures, together with other aspects which are responsible for determining a particular object as the default referent.[10]

[10] Corazza does not explicitly mention such cases. Nonetheless, I think they should be classified as demonstrative contexts, unless one would like to separate demonstrative uses accompanied by a gesture from demonstrative uses in which the object of reference is somehow default but no gesture (widely conceived) is present. If, after watching the 100-meter dash won by Usain Bolt, someone says “This sprinter is as good as Carl Lewis” (without producing a gesture of any sort), he means Usain Bolt – the sprinter singled out by the situation.

By broad context Corazza means all those aspects of the situation of an utterance which are not explicitly exploited in the semantical rules governing the interpretation of the utterance and its subutterances, but which nevertheless help us to understand the utterance. To use Corazza’s example: being dressed in a particular way may help us to determine whether by a particular use of the word “bank” in the utterance of “I have just come from the bank” one means the financial institution or the embankment. This fact about the situation of utterance is not systematically represented in the meaning of the word “bank”, and this contrasts with the case of indexicals, where the relevant contextual parameter is more or less (vide demonstratives) explicitly mentioned in the meaning rules governing the interpretation and reference-fixing procedures. Both types of context go into the making of the general category of setting – “a scene or scenario underlying the linguistic interchange”.

Corazza’s notion of broad context is theoretically very interesting, partly because it allows us to mark an important distinction between using context as a determinant of a particular semiotic property and using a particular semiotic property as an auxiliary tool in determining the content (or any other feature) of an utterance. For example, Perry’s pre-semantic context is something used to determine the language, syntax and meaning of particular strings of sounds or inscriptions. Of those three features, at least language-relativity can be both determined contextually and used contextually as partially determining the content. Thus the inscription “Ich” could be the first-person German pronoun, since it appears in some of Goethe’s original poems (language-belonging determined contextually), while the presupposition that it is used as a German word determines that a particular proposition is expressed by the use of “Ich” (language-belonging exploited contextually).

Corazza’s typology is not functional in character – it applies only to uses of context qua determinant of content. This is, of course, a justified approach, since communication is centered on this particular feature of utterances. To this the proponent of the functional approach adds that we must not ignore other context-sensitive features of expressions, and must, among other things, do justice to the Austin-Grice revolution. The functional approach should also acknowledge the dual role of semiotic properties that Corazza brings to light. The fact that unstable presuppositions about the character of particular semiotic features can serve as context-determinants (of other semiotic features) makes the functional approach even more interesting – it is function that allows us to make the distinction between using context as a determinant of a particular semiotic property and using a particular semiotic property as an auxiliary tool in determining other semiotic properties of an utterance.

3. Three concepts of derivative context-dependence

Can the multitude of semiotic properties and the multiplicity of context uses be somehow simplified? According to a moderate reductive strategy, one may single out a set of basic context-dependent semiotic properties and a set of derivatively context-dependent semiotic properties.[11] The relevant notion of derivative context-dependence is of course far from univocal – I believe that it can be understood in at least two ways, one of which may be called definitional and the other of which may be labeled relational. To those two concepts one may add a third, which, although interesting in its own right, is not, strictly speaking, relevant to the issue under discussion. Let us label this third notion analytical and begin our presentation with it.

[11] This strategy should not be confused with the approach which postulates a set of basic context-dependent expressions, like that of Cappelen and Lepore (2005). The strategy of Cappelen and Lepore also differs from the functional perspective in treating the basic set of indexical expressions as a basis for the elimination of other contextual phenomena rather than their reduction.

We can say that a particular linguistic construction (expression) is analytically derivatively context-dependent with respect to another linguistic construction (expression) iff: (i) the former is analyzed in terms of the latter; (ii) the latter is context-dependent; (iii) the context for the analysans is not fixed by the analysis.[12] Thus, if an analysis of knowledge ascriptions or statements describing causal relations contains counterfactual statements which are (as is widely assumed) context-sensitive, it follows directly that the knowledge ascription or statement about causal dependence is context-sensitive, unless the analysis somehow fixes the context for the relevant counterfactual.[13] It is important to appreciate the meaning of constraint (iii). For example, Stanley criticizes philosophers who may:

(...) think that there is a prima facie case to be made, from the fact that a certain term t contains in the analysis of what it expresses a property that is expressed by a context-sensitive term t', that t is therefore context-sensitive. (Stanley 2004: 132)

[12] We are assuming that analysis is something general, i.e. common to all particular instantiations of the analyzed construction; otherwise a trivial occurrence of an indexical expression in the analysis of a particular sentence would make this sentence derivatively context-dependent.
[13] Another example of analytical derivative context-dependence could be provided by the case of knowledge ascriptions combined with a theory that presupposes the context-sensitivity of belief attributions (like that of Stalnaker (1999): 150-166; Stalnaker’s theory deserves special attention because of the subtle difference between contexts and derived contexts and the stress laid on the dual role of context in determining the proposition expressed). The relation of the functional approach to Stalnaker’s notion of context set deserves independent study, which must be postponed until a later time.

Stanley uses the examples of the context-insensitive expressions “John’s enemy” and “vacuum”. They contain (explicitly or in the analysis) the context-dependent terms “enemy” (“in one context it may mean an enemy of x, and in another context, an enemy of y”) and “empty” (“the notion of being a vacuum involves being completely empty”).[14] In both cases either the form or the analysis of the expression fixes the interpretation of the phrase in the relevant respect. Thus, in the case of “John’s enemy” the relevant subject is explicitly mentioned, and in the case of “vacuum” the adjective “completely” determines the standard of emptiness that must be taken into consideration. As a result, the context-dependence of “John’s enemy” and “vacuum” is cancelled out.

[14] This example, although it provides a good illustration of the general problem, is by itself problematic – the relational sense of certain noun phrases (e.g. “the/a wife of x”, “the/an enemy of x”, “the/a dog of x”, etc.) semantically requires a complement to be given. This complement is sometimes default in the conversation and – due to this fact – remains unmentioned. It does not follow, from this fact alone, that the relevant noun phrases are context-dependent.

As I have mentioned above, the concept of analytically derivative context-dependence, as applying to linguistic contexts and constructions rather than to features of expressions, does not bear directly on the problem of reducing the number of context-dependent semiotic properties. Matters are different in the case of definitionally derivative context-dependence. In this case, we assume that there are situations where one context-dependent semiotic function could be, at least partially, defined in terms of other context-dependent semiotic functions. As in the case of the first notion of derivative context-dependence, we have to assume here that the definition does not fix the context for the definiens; otherwise Stanley-like criticism applies to arguments to the effect that some definiendum is context-dependent because its definiens is. Roughly speaking, a particular semiotic property is definitionally derivatively context-dependent with respect to another semiotic property iff: (i) the former is defined in terms of the latter; (ii) the latter is context-dependent; (iii) the context for the definiens is neither explicitly nor implicitly fixed by the definition.

Paradigm examples of definitionally derivative context-dependence, namely the intensional structure in Carnap’s sense and the extensional structure in Ajdukiewicz’s sense, have already been mentioned. Others include structured meanings of various kinds and the truth value of sentences. Structured semantic values (intensions, extensions, meanings, etc.) are definitionally derivatively context-dependent with respect both to the (unstructured) semantic value of the whole expression and to the syntactic structure of the expression (provided that: (i) compositionality holds, i.e. two expressions identical with respect to the structure and semantic values of terminal elements are identical with respect to their semantic values simpliciter; and (ii) the syntactic structure happens to be context-dependent). In the case of truth value (the extension of sentences), the definitionally derivative character of its context-sensitivity is the effect of the possibility of defining the concept of truth partially in terms of the relation of reference, which, when applied to natural languages, must be contextually relativized.[15] In any case, those semiotic functions which are definitionally derivatively context-dependent can either be excluded from the functional analysis of contexts, or it must be proved that their context-sensitivity does not come down to the context-sensitivity of their constituent semiotic features.

[15] The other component of the definition is the satisfaction relation, which depends rather on circumstances of evaluation; if one thinks that it is also context-sensitive, she can add that the truth value is doubly derivatively context-dependent in the discussed sense.

The third concept of derivative context-dependence, the relational one, is modeled on the notion of supervenience, or more precisely, on the notion of asymmetric covariance. On any account using this notion of derivative context-sensitivity we assume that some semiotic properties of expressions (as uttered in particular circumstances) may co-vary with other semiotic properties of the very same expressions-cum-contexts. The former properties are supervenient properties of an expression-cum-context, while the latter are subvenient properties of the expression-cum-context. Indiscernibility of subvenient properties entails indiscernibility of supervenient properties. Moreover, this dependence should be asymmetrical – the indiscernibility of previously supervenient properties should not entail the indiscernibility of previously subvenient properties.[16] This condition allows us to exclude uninteresting cases of interdependent semiotic functions. For example, some theorists, e.g. Bar-Hillel and Carnap (1952), introduce the notion of semantic information, defined as a set-theoretical complement of the intension of the sentence. The interdependence of those semiotic properties is theoretically unimportant; moreover, we should probably treat them as definitional variants of a single semantic property. Asymmetric dependence allows us to exclude cases of this sort.

[16] Some people would probably be inclined to treat with ontological seriousness the idea that semiotic functions are properties, and to take them on a par with all the properties that are more or less constantly present in our philosophical and scientific theorizing about the world. Such a person would thus be disposed to reject semiotic eliminativism and instrumentalism – claims presupposing that semantic, pragmatic, and (even) syntactic features of linguistic signs are not real properties at all. Moreover, such a person would probably (intend to) use the term “supervenience” as designating a relation of dependence seriously conceived as metaphysical dependence. I prefer to speak about “asymmetrical covariance” – without excluding metaphysically uninteresting kinds of dependence, such as functional ones. Below I will be using “supervenience” in this more liberal sense.

Let us use the following notation: we shall write ‘⟨α, c⟩’ for “expression-type α as uttered in context c”; ‘⟨α, c, w⟩’ for “expression-type α as uttered in context c and possible world w”; and ‘⟨α, c⟩ =F ⟨β, c*⟩’ for “expression-type α-cum-context c is identical, with respect to the semiotic property F, with expression-type β-cum-context c*” (and similarly with the possible world argument).[17]

[17] The approach sketched in this paper could be restated, without theoretical loss, in the terminology of occurrences – “mere combinations of the expressions with contexts”, cf. Kaplan (1989: 584-585).

Depending on which concept of supervenience is our model, we have at least two sets of postulates for this notion of derivative context-dependence:

Weak relational derivative context-dependence [WRD] – first formulation:
Semiotic property D is derivatively context-dependent with respect to semiotic property B if and only if:
(a) ∀w∀α∀c∀c′ { ⟨α, c, w⟩ =B ⟨α, c′, w⟩ → ⟨α, c, w⟩ =D ⟨α, c′, w⟩ }
(b) ∃w∃α∃c∃c′ { ⟨α, c, w⟩ =D ⟨α, c′, w⟩ ∧ ¬(⟨α, c, w⟩ =B ⟨α, c′, w⟩) }
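The asymmetry that [WRD] demands can be checked mechanically in a toy model. The sketch below is my own illustration, not part of the text: intensions are coded as functions (dictionaries) from worlds to truth values, so that sameness of intension between two occurrences forces sameness of extension in each world, while the converse fails.

```python
# Toy intensions for three expression-cum-context occurrences.
# An intension maps each possible world to an extension (here: a truth value).
intension_1 = {"w1": True, "w2": False}  # occurrence <alpha, c>
intension_2 = {"w1": True, "w2": False}  # occurrence <alpha, c'> (same proposition)
intension_3 = {"w1": True, "w2": True}   # occurrence <alpha, c''> (different proposition)

def same_intension(i, j):
    return i == j

def same_extension(i, j, world):
    return i[world] == j[world]

# (a)-style direction: identity of intensions entails identity of
# extensions within every world.
for w in ("w1", "w2"):
    if same_intension(intension_1, intension_2):
        assert same_extension(intension_1, intension_2, w)

# (b)-style direction: the converse fails. Occurrences 1 and 3 agree in
# extension in w1 but differ in intension (and in extension in w2).
assert same_extension(intension_1, intension_3, "w1")
assert not same_intension(intension_1, intension_3)
print("extension co-varies asymmetrically with intension")
```

This is exactly the pattern the text goes on to describe for the extension/intension pair: extension supervenes on intension within a world, but not vice versa.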

This definition requires that identity with respect to semiotic property B entail identity with respect to semiotic property D (but not vice versa) within every possible world. Thus, we allow the possibility that identity with respect to B-properties across possible worlds does not entail identity with respect to D-properties across those worlds. A good example is provided by the case of the extension and intension of expressions. Two expressions having the same intension when used in two contexts of the same possible world would have the same extension in that world. Meanwhile, that would not generally be the case across possible worlds – identity of intensions in different possible worlds does not entail identity of extensions in those possible worlds. This is because the facts about possible worlds may differ; e.g. even if the sentence “She lives in Warsaw now” expresses in two possible contexts identical propositions about Monica Bellucci, Warsaw and the 25th of January 2009, in one world Monica Bellucci can live in Paris on this particular date, while in another (on the very same date) she can (hopefully!) live in Warsaw.

Strong relational derivative context-dependence [SRD]:
D is derivatively context-dependent with respect to B if and only if:
(a*) ∀w∀w′∀α∀c∀c′ { ⟨α, c, w⟩ =B ⟨α, c′, w′⟩ → ⟨α, c, w⟩ =D ⟨α, c′, w′⟩ }
(b*) ∃w∃w′∃α∃c∃c′ { ⟨α, c, w⟩ =D ⟨α, c′, w′⟩ ∧ ¬(⟨α, c, w⟩ =B ⟨α, c′, w′⟩) }

1 - Grammatical Function Hierarchy: external or surface subject > prepositional object > indirect object (IO) > direct object (DO);[2]
2 - Thematic Hierarchy (Jackendoff 1972): Agent > Experiencer > Theme.

[2] ‘>’ indicates ‘takes scope over’.

The analyses of scope preferences have resulted in the development of two models of the scopal ambiguity resolution process. Some authors argue for the reanalysis-based model (Fodor 1982, Johnson-Laird 1969, Tunstall 1998). In the first stage, a principle of linear order (or C-command at SS) is a primary determinant of scope preferences. The second stage involves integrating this analysis with other sources of information: (i) lexical biases of particular quantifiers to take wide or narrow scope, and (ii) real-world knowledge; these other types of information are used to confirm (or reject) the initial analysis and guide reanalysis. Other authors argue for parallelism (Kurtzman and MacDonald 1993; Filik et al. 2008). On this model, scopal ambiguity resolution is not divided into two temporally distinct stages; multiple constraints are thought to be operative together (a principle of linear order is just one source of constraints), and possible analyses of a sentence are activated in parallel and compete for adoption.

The reanalysis-based model and parallelism make different predictions with regard to the processing and reading times of multiple quantified sentences. The reanalysis-based model predicts that the principles of the grammatical function and thematic hierarchies affect comprehension only after the forward scoping interpretation (corresponding to the linear order of quantifiers) has been reached; if this interpretation is incompatible with subsequent information, further processing follows – and this reanalysis incurs a processing cost. Parallelism predicts that comprehension of ambiguities does not go astray, with a later revision and adjustment stage. Kurtzman and MacDonald argue that when several principles collectively favor one interpretation, that interpretation is built; but if the constraints are in conflict, then competition between the alternative interpretations occurs before one eventually wins – this competition incurs a processing cost.

We will discuss in more detail two on-line studies of scope preferences. Kurtzman and MacDonald (1993) investigated the interaction of the linear order principle and the thematic hierarchy. Filik et al. (2008) examined the interaction of multiple factors, including the principle of linear order and the grammatical function hierarchy.

Kurtzman and MacDonald used ambiguous quantified sentences, e.g. ‘Every kid climbed a tree’, followed by a continuation sentence that was a reasonable continuation of the sentence under just one of its interpretations:

(1a) The trees were in the park.
(1b) The tree was in the park.

The forward scoping interpretation is consistent with (1a), whereas the inverse scoping interpretation is consistent with (1b). Subjects were asked to judge whether the continuation sentence is indeed a reasonable continuation. The key finding of Kurtzman and MacDonald’s studies was that forward scoping was significantly preferred in actives and that there was no such preference in passives. The results obtained indicated that no single principle can account for the scope preferences in both actives and passives, and pointed to the interaction of several principles: in actives both the linear order principle and the thematic hierarchy collectively favor the forward scoping interpretation, whereas in passives the linear order principle and the thematic hierarchy principle are in conflict, resulting in competition between the forward scoping and inverse scoping analyses.

Filik et al. (2008) manipulated three possible determinants of scope preferences: the linear order principle and the grammatical function hierarchy, along with the lexical bias of ‘each’ to take wide scope. They used double object sentences and datives:

(2a) Kelly showed [each/a] critic [a/each] photo.
(2b) Kelly showed [each/a] photo to [a/each] critic.

In double object sentences the indirect object comes first; in datives the indirect object comes after the direct object.
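The forward and inverse scoping readings at issue in these studies can be stated model-theoretically. The following sketch (with an invented mini-model of kids, trees, and a climbing relation; all names are my own illustration) evaluates both readings of ‘Every kid climbed a tree’ in a situation where each kid climbed a different tree, the situation matching continuation (1a):

```python
kids = {"Ann", "Ben"}
trees = {"oak", "elm"}
climbed = {("Ann", "oak"), ("Ben", "elm")}  # each kid climbed a different tree

# Forward scoping (every > a): for every kid there is some tree he or she climbed.
forward = all(any((k, t) in climbed for t in trees) for k in kids)

# Inverse scoping (a > every): there is a single tree that every kid climbed.
inverse = any(all((k, t) in climbed for k in kids) for t in trees)

print(forward, inverse)  # True False
```

In this model the forward scoping reading is true and the inverse scoping reading is false, which is why a plural continuation like (1a) is compatible only with forward scoping.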
The key finding of their studies was that total reading times at a region containing the direct and indirect objects were longer for double object sentences with ‘a-each’ rather than ‘each-a’ order, whereas the effect for datives was reversed. Under the reanalysis hypothesis, comprehenders should experience most difficulty for sentences with ‘a-each’ order, irrespective of the sentence’s construction – when the linear order principle conflicted with the lexical bias of ‘each’ to take wide scope. But the readers experienced most difficulty when the grammatical function hierarchy (indirect object > direct object) conflicted with the lexical bias of ‘each’ to take wide scope. Thus the results obtained were contrary to the reanalysis-based model and pointed instead to the interaction of several principles, with the grammatical function hierarchy being a particularly strong determinant of scope preferences.

The studies of scope ambiguities to date point to the interaction of several structural principles in the course of scope ambiguity resolution. They also show that lexical biases of particular quantifiers to take wide scope have an impact on scope preferences. Still, the experimental findings are few and mixed. Further testing is required to provide more valid and reliable data. This further testing should use measures that reflect comprehension processing more directly: eye tracking, self-paced reading, and other on-line methods. Also, the studies to date are incomplete. Further experimental studies are needed to investigate the time-course by which structural principles and real-world knowledge interact. Evidence for early effects of real-world knowledge would provide strong support for parallelism (see Kurtzman and MacDonald 1993: 274).

In our view, a complete theory of scope disambiguation will have to specify the role of real-world knowledge and contextual information. It has been noted that certain multiple quantified sentences such as (2) admit of more interpretations than can be differentiated by simple use of scope relationships. We claimed that no account that relies on the position of the quantifier in the construction would be successful in predicting the range of interpretations available for multiple quantified sentences.
Similarly, it could be argued that no principle that relies on the position of the quantifier in the construction would allow us to predict the range of scope preferences. Under our hypothesis, grammar determines the class of potential interpretations of multiple quantified sentences corresponding to every possible pre-order of quantifiers. Thus grammar determines all possible dependencies between quantifiers, but contextual information is necessary to determine the choice of an interpretation. It is a corollary of our account of scope that a complete theory of scope disambiguation will have to specify the ‘whens and hows’ of context effects. A further and more speculative corollary of our view is that if there are indeed default interpretations for some multiple quantified sentences, they should be identified with the interpretations adopted for a standard context, one that has to do with our (general) world knowledge.

2.1.2. Neurophysiological evidence

Dwivedi et al. (2008) conducted a study of the scopal ambiguity resolution process using Event Related Potentials (ERPs). ERPs reflect voltage changes in the electrical brain activity associated with cognitive processing. They used the sentence materials found in Kurtzman and MacDonald (1993). Their most interesting finding was that sentences such as ‘Every kid climbed a tree’ with both plural and singular continuations patterned together: they elicited a (statistically significant) long-lasting negative-going wave as compared to unambiguous sentences such as ‘Every kid climbed a different tree’. The results obtained indicate that there is no early preference assigned to scopally ambiguous sentences – the preferences found in earlier research are due to later stages of processing. Moreover, a comparable waveform has been identified as an Nref – a waveform that marks referential ambiguity. Dwivedi et al.’s study has yielded neurophysiological evidence for the ambiguity hypothesis and parallelism. Further studies are of course needed to provide more valid and reliable support.

3. How do human grammars encode ambiguities?

Our theory of scope assignments proposes that a single structure at LF determines a set of interpretations:

For any quantified sentence in which the quantified phrases mutually c-command one another, a single multiple quantified LF-representation represents a uniquely specifiable class of interpretations corresponding to every possible pre-order of quantifiers.

Now, there are two ways of representing the set:
- by enumerating its elements,
- by describing its elements in terms of a property P that pertains to all elements of the set and to nothing else.

In the first case, our grammatical rule for a sentence with two quantifiers would determine a four-element set: two dependent and two group interpretations (complete and incomplete). This approach faces the Combinatorial Explosion Puzzle: the number of distinct interpretations increases sharply as the number of quantifiers in the sentence increases. Consider (5):

(5) Three fanatics have submitted four articles on the race issue to five dailies.

According to Kempson and Cormack, (5) is 19-ways ambiguous, the majority of interpretations being pairwise logically independent. And so a comprehender, when confronted with (5), would have to generate 19 interpretations. Such a prediction is at least implausible, and so this approach does not work as a theory of interpretation for multiple quantified sentences. We adopt the second and more efficient way of representing the set of possible interpretations in terms of a property, that is, correspondence to every possible pre-order of quantifiers. Instead of giving a traditional enumeration of sentence interpretations or word senses, the idea is to relate senses to one another within one coherent structure. This line of thinking is represented in both computational linguistic and psycholinguistic approaches, e.g.: work on underspecified logical forms as a way of characterizing the space of possible semantic interpretations of a sentence (Pelletier and Schubert 1982; Hobbs and Shieber 1987); work on polymorphic representations as a way of characterizing the multiple senses of a word (Hirst 1987; Mineur and Buitelaar 1996); and work on lexical representations as a way of accounting for both lexical and syntactic ambiguities (MacDonald et al. 1994).
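The growth behind the Combinatorial Explosion Puzzle can be made concrete with a small brute-force sketch (an editorial illustration, not part of the paper; the function name `count_preorders` is ours). Counting all pre-orders on n quantifiers gives four readings for two quantifiers, matching the four-element set mentioned above, but already 29 for three quantifiers. Note that this raw pre-order count differs from Kempson and Cormack's figure of 19 for sentence (5), since they count logically distinct interpretations rather than pre-orders:

```python
from itertools import product

def count_preorders(n):
    """Count pre-orders (reflexive, transitive binary relations)
    on an n-element set by brute-force enumeration."""
    elems = range(n)
    offdiag = [(i, j) for i in elems for j in elems if i != j]
    total = 0
    for bits in product([False, True], repeat=len(offdiag)):
        # Every candidate relation contains the diagonal (reflexivity)
        # plus some subset of the off-diagonal pairs.
        rel = {(i, i) for i in elems}
        rel |= {p for p, keep in zip(offdiag, bits) if keep}
        # Transitivity: (i, j) and (j, k) in rel must imply (i, k) in rel.
        if all((i, k) in rel for (i, j) in rel for (jj, k) in rel if j == jj):
            total += 1
    return total

print([count_preorders(n) for n in (1, 2, 3)])  # [1, 4, 29]
```

The rapid growth of this count for larger n is one way to see why enumerating the readings one by one is implausible as a processing model, and why characterizing the set by a single property is the more attractive option.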

References

Abney, Steven P. 1989. A Computational Model of Human Parsing. Journal of Psycholinguistic Research 18, 129-144.
Bach, Kent 1982. Semantic Nonspecificity and Mixed Quantifiers. Linguistics and Philosophy 4, 593-605.
Barwise, Jon 1979. On Branching Quantifiers in English. Journal of Philosophical Logic 8, 47-80.

Carston, Robyn 2002. Thoughts and Utterances: The Pragmatics of Explicit Communication. Oxford: Blackwell.
Cooper, Robin 1983. Quantification and Syntactic Theory. Dordrecht: Reidel.
Dwivedi, Veena D., Natalie A. Phillips, Stephanie Einagel and Shari Baum 2008. The Neurophysiology of Scope Ambiguity. Proceedings of the 2008 Annual Conference of the Canadian Linguistic Association.
Filik, Ruth, Kevin B. Paterson and Simon P. Liversedge 2008. Competition during Processing of Quantifier Scope Ambiguities: Evidence from Eye Movements during Reading. The Quarterly Journal of Experimental Psychology 61, 459-473.
Fodor, Janet 1982. The Mental Representation of Quantifiers. In: Stanley Peters and Esa Saarinen (eds.) Processes, Beliefs, and Questions. Dordrecht: Reidel.
Frazier, Lyn 1979. On Comprehending Sentences: Syntactic Parsing Strategies. Bloomington: Indiana University Linguistics Club.
Grice, Henry Paul 1975. Logic and Conversation. In: Peter Cole and Jerry L. Morgan (eds.) Syntax and Semantics 3: Speech Acts. New York: Academic Press, 41-58.
Hirst, Graeme 1987. Semantic Interpretation and the Resolution of Ambiguity. Studies in Natural Language Processing. Cambridge: Cambridge University Press.
Hobbs, Jerry R. and Stuart M. Shieber 1987. An Algorithm for Generating Quantifier Scopings. Computational Linguistics 13, 47-63.
Ioup, Georgette 1975. Some Universals for Quantifier Scope. In: John P. Kimball (ed.) Syntax and Semantics 4. New York: Academic Press, 37-58.
Johnson-Laird, Philip 1969. On Understanding Logically Complex Sentences. Quarterly Journal of Experimental Psychology 21, 1-13.
Kempson, Ruth M. 1979. Presupposition, Opacity and Ambiguity. In: Choon-Kyu Oh and David A. Dinneen (eds.) Syntax and Semantics 11: Presupposition. New York: Academic Press, 283-298.
Kempson, Ruth M. and Annabel Cormack 1981. Ambiguity and Quantification. Linguistics and Philosophy 4, 259-309.
Kurtzman, Howard S. and Maryellen C. MacDonald 1993. Resolution of Quantifier Scope Ambiguities. Cognition 48, 243-279.
MacDonald, Maryellen C., Neal J. Pearlmutter and Mark S. Seidenberg 1994. Lexical Nature of Syntactic Ambiguity Resolution. Psychological Review 101, 676-703.
MacDonald, Maryellen C. 1994. Probabilistic Constraints and Syntactic Ambiguity Resolution. Language and Cognitive Processes 9, 157-201.
May, Robert 1977. The Grammar of Quantification. MIT Ph.D. dissertation.
May, Robert 1985. Logical Form: Its Structure and Derivation. Cambridge, MA: MIT Press.
Micham, Dennis L., Jack Catlin, Nancy J. Van Derveer and Katherine A. Loveland 1980. Lexical and Structural Cues in Quantifier Scope Relations. Journal of Psycholinguistic Research 9, 367-377.
Mineur, Anne-Marie and Paul Buitelaar 1996. A Compositional Treatment of Polysemous Arguments in Categorial Grammar. In: Kees van Deemter and

Stanley Peters (eds.) Semantic Ambiguity and Underspecification. Stanford: CSLI Publications, 125-143.
Pelletier, Francis J. and Lenhart K. Schubert 1982. From English to Logic: Context-Free Computation of ‘Conventional’ Logical Translations. American Journal of Computational Linguistics 10, 165-176.
Poesio, Massimo 1996. Semantic Ambiguity and Perceived Ambiguity. In: Kees van Deemter and Stanley Peters (eds.) Semantic Ambiguity and Underspecification. Stanford: CSLI Publications, 159-201.
Swinney, David A. 1979. Lexical Access during Sentence Comprehension: (Re)consideration of Context Effects. Journal of Verbal Learning and Verbal Behavior 18, 645-659.
Tanenhaus, Michael K., James M. Leiman and Mark S. Seidenberg 1979. Evidence for Multiple Strategies in the Processing of Ambiguous Words in Syntactic Contexts. Journal of Verbal Learning and Verbal Behavior 18, 427-440.
Tennant, Neal 1981. Formal Games and Forms for Games. Linguistics and Philosophy 4, 311-320.
Tunstall, Susanne L. 1998. The Interpretation of Quantifiers: Semantics and Processing. University of Massachusetts Ph.D. dissertation, semanticsarchive.net.
Zawadowski, Marek 1995. Pre-ordered Quantifiers in Elementary Sentences of Natural Language. In: Michal Krynicki et al. (eds.) Quantifiers: Logics, Models and Computation. Proceedings of the Conference on Quantifiers. Dordrecht: Kluwer, 237-253.

Filip Kawczyński
University of Warsaw
[email protected]

The Hybrid Theory of Reference for Proper Names

Abstract: In this paper I present the main ideas of the Hybrid Theory of Reference for Proper Names. First, I try to define the position of the Hybrid Theory within the discussion about reference. Then I briefly explain the most significant aspects of the theory as they were defined by Gareth Evans. Apart from that, I also offer some additions to the theory. The addition I spend most space on concerns phrases that I call “mock names” – expressions that look like proper names but are in fact nothing more than abbreviations for descriptions used attributively.

0. Introduction

The Hybrid Theory of Reference for Proper Names has arisen as a response to Descriptivism on the one hand and Kripke’s Causal Theory on the other – both facing numerous difficulties. The Hybrid Theory attempts to reconcile some notions of the former with some elements of the latter. However, as we know, arranging familiar concepts into a new order often results in the rise of many brand new ideas; and that is very true of the Hybrid Theory. Although the discussion about the reference of proper names has lasted at least since Frege’s famous paper “On Sense and Reference”, I suppose it is still reasonable to recall the central theses of the two main adversaries in the dispute, i.e. descriptivists on the one hand and causal theorists on the other. I believe that the tension between Descriptivism and the Causal Theory consists primarily in different attitudes towards the role that the intentional content associated with a given name plays in determining the reference of the name. Descriptivists claim that the intentional content is where the crux of the matter of determining reference lies. Such content concerns properties of the object that is the reference of a name. For instance, in the

content which I intentionally associate with the name “Bertrand Russell” it is included that the object which is the reference of that name possesses the property of being the author of “On Denoting”. It seems natural that the linguistic items which can be used to express such intentional content are descriptions (e.g. “the author of ‘On Denoting’”). What is distinctive for the descriptivist point of view is the assumption that the entire intentional content associated with a name ‘N’ – expressible by descriptions – uniquely identifies the object which is the reference of ‘N’. The most important descriptivist thesis is that the description (a single one or a disjunction of many – depending on the version of the theory) associated with ‘N’ determines which object should be considered the reference of ‘N’. As we know from Kripke, Descriptivism is wrong. A few arguments he presented against this theory in his book (1972) are commonly considered compelling.1 There is no need to recapitulate them in detail; in general, they all point at an essential drawback Descriptivism suffers from, which consists in laying too great an emphasis on the role played by intentional content in determining reference, while – as Kripke has shown – it is not the case that intentional content is a crucial factor in that process. In other words, descriptivists demand too much from speakers using proper names – in fact, we extremely rarely associate with a name we use some content which really uniquely identifies its reference. The descriptions we are able to give of some object are usually not distinct enough to identify it in an unambiguous way. Instead of the above descriptivist view, Kripke proposed an idea commonly known as the Causal Theory. According to this view, the mechanism of proper names’ reference consists in causal communication chains of reference-borrowing.
The way in which the chains work seems fairly simple: if I hear someone using the name “Bertrand Russell” as referring to Bertrand Russell, I can borrow the name (and thus incorporate it into my idiolect) and start using it on my own (as a competent user of it). In this

1. The three most famous of those arguments are: the epistemic argument (from the lack of knowledge, e.g. the Feynman case), the semantic argument (the Gödel–Schmidt case) and the modal argument (from unwanted necessity).

theory the intentional aspect of using a name is reduced to a very minimum – namely to a rule which may be called the “Don’t change the reference” rule. The rule says that if I borrow a name from someone else, I should have the intention to use the name with the same reference with which the person from whom I have borrowed the name uses it. As was mentioned above, Kripke reproached descriptivists for putting too much emphasis on intentional content; paradoxically, the lack of an intentional aspect appears to be a nail in the coffin of his own theory. There are several persuasive arguments against the Causal Theory and they all unanimously show that the idea of chains of reference-borrowing together with the “Don’t change the reference” rule cannot reveal a full picture of the reference of proper names.2 If the chains, which are devoid of intentional content (strictly speaking: possess the minimum possible intentional content), were everything that constitutes our use of proper names, it is quite obvious that a lot can go wrong within such chains. In other words, causal chains by themselves are a rather poor mechanism of reference, and if it were the case that they are fully responsible for the institution of using proper names, proper names would probably have disappeared long ago as a “weak link” of the evolution of language. Thus, as we have seen, neither Descriptivism nor the Causal Theory seems to be correct. However, neither of them appears to be completely wrong either. What can serve as an antidote for this awkward situation might be – and it is my firm belief that it should be – the Hybrid Theory of Reference for Proper Names. Hybridists agree with descriptivists that the intentional content associated with a name serves a significant function in determining the reference of the name, yet disagree with the descriptivist statement that it plays the decisive role, and thus do not agree with the claim that it is descriptions (expressing the content) that entirely determine the reference.
On the other hand,

2. In my opinion the three most powerful arguments against Kripke’s theory are: 1) the argument from the change of reference (the famous “Madagascar” case) – see Evans (1973); 2) the argument from the lack of competence – see Evans (1973) and Putnam (1973); 3) the argument from the lack of causal link – see Searle (1983).

a hybridist follows the Kripkean theory’s adherent in saying that there are some communication chains of reference-exchange. However, the paths of causal theorists and hybrid theorists diverge when it comes to considering intentionality; while the former state that the “Don’t change the reference” rule is enough, the latter say something exactly opposite – namely that some intentional aspect beyond the above rule is necessary for building up a complete and correct picture of how proper names work. The basic groundwork for the Hybrid Theory I would like to stand up for has been laid by Evans in his influential book (1982). I think that the conceptual framework established by Evans is by and large correct; however, it requires numerous additions in various places. Now I would like to briefly present the key notions of the theory, as formulated by Evans, and also offer some further developments of the Hybrid Theory.

1. Practice

Evans very aptly remarked that one of the most distinctive features of proper names was that they were always used within some practice of using a given name as referring to a given object.3 Moreover, proper names are the only expressions that require such a specific practice. Imagine I utter the sentence “The tallest man who took part in the Round Table Agreement prefers tea to coffee”. About the description “the tallest man who took part in the RTA” we can reasonably assume that this is the first time anyone has used it – however, this inflicts no harm on the correctness and comprehensibility of my utterance. Although I use the description for the first time in history, it still refers to the tallest man who took part in the RTA (if there was such a person).4 Now imagine that I utter: “Mr. Burlesque prefers coffee to tea”. If no one has ever been named


3. The notion of practice is intentionally left without a definition. However, it is reasonable to consider – in a sense – the whole Hybrid Theory as a lengthy contextual definition of that notion.
4. Furthermore, in my utterance I speak truly or falsely of the tallest man who took part in the RTA that he prefers tea to coffee.

with the word “burlesque”, i.e. there is no established practice of using that word as referring to some particular person, my utterance cannot count as a correct use of language at all, because it is not defined to whom “Mr. Burlesque” refers. In such a case, the utterance should rather be qualified as some pseudo-use of language (as Strawson would put it). Thus, in the situation involving the description, although there is no special practice of using the phrase “the tallest man...”, we still deal with a correct and comprehensible use of language; on the contrary, we did not use the word “burlesque” as if it were a proper name. Broadly speaking, if a particular use of a proper name is to be successful, it must be preceded by a specified range of preliminary uses, i.e. a practice of using a word as a proper name referring to some particular object must be established. The uses (from among the preliminary uses) that are essential for establishing a practice are those involved in defining some word as a proper name of some object. They can take very diverse shapes; we can say “I name this ship Bertrand” (as uttered together with a pointing gesture) as well as “I name the heaviest ship in the port of Rotterdam Bertrand” or even “Bertrand is the heaviest ship in the port of Rotterdam.” All of those I call – after Devitt (1981) – Naming Sentences (“NS” for short).5 NSs are not only those sentences that occur in situations of naming (or baptizing). Every sentence including a new proper name, uttered before the practice in question may be considered established, should also be qualified as an NS. It is impossible to pinpoint the exact moment when a practice becomes established – but I think this is not a serious disadvantage of the Hybrid Theory.
I believe it is enough to say what Evans has said on this issue, namely that a practice of referring to some object with the name ‘N’ is established when members of the practice regularly use ‘N’ when they want to refer to that object; in other words – when the object is known as ‘N’ among them (Evans 1982: 376-377). The most important thing to say about NSs is that they always include a

5. Naming itself has not been exhaustively analysed by Evans (nor by other theorists of names). I have carried out such an in-depth analysis of various types of naming, yet because of limited space here, its presentation must be put off to another paper.

phrase fixing the reference to the object which is the bearer of the name. A definite description (e.g. “the heaviest ship in the port of Rotterdam”) or a pronoun (e.g. “this” accompanied by a gesture) can play such a role. Regardless of the shape an NS actually takes, it is always used by a producer of a practice. The distinction between producers and consumers of a practice is another key notion of the Hybrid Theory.

2. Producers, consumers and reference borrowing

There is some intuitive and indubitable difference between my use of the name of my wife (whom I perceive every day) and the use of that name by my friend who has never seen my wife and knows her only from the stories I tell him by phone. On the other hand, some other doubts may appear as a result of comparing my uses of the name of my wife with my uses of names like ‘Aristotle’ or ‘Shakespeare’ (i.e. names of people to whom I have never borne any direct epistemic relation, see Tałasiewicz 2009). I think that Evans drew the distinction between the two different roles played by members of a practice – i.e. the producer’s role and the consumer’s role – in order to avoid the above-mentioned doubts emerging from the different kinds of epistemic relation that may occur between the people using a proper name and the object that is the bearer of that name. Let us see now how a new practice takes off. Imagine a group of people who want to talk about (and thus to refer to) a certain object X. They are acquainted with X, i.e. they perceive it with their senses. The content of some of their mental states emerges from perceiving that object. If the speakers wanted to express their beliefs about X, they could use definite descriptions or pronouns, but if they really cared about exchanging thoughts about X, they would probably try to introduce a proper name for it. Thus, they start to provide a new practice of using a name, let us say ‘N’, as referring to X.
Such members of the practice are its producers.6 As I mentioned earlier, all sentences that producers use to constitute the new practice belong to the class of NSs. It is now worthwhile

6. As I said in the previous footnote, I intentionally do not expand the considerations concerning naming here.

to emphasize again that every NS must include as its component an expression correctly fixing the reference to the object being named. By saying ‘correctly’, I mean that all the producers want to name the same object and that they use words which indeed refer to that very object. Thus, producers are those members of a practice who start the practice and develop it to a “grown-up” level. What is distinctive about producers is that they inject into the practice some new information concerning the bearer of the given name. The issue of the kind of information delivered by producers has not been discussed by Evans, and in my opinion the character of that information should be specified. I propose to define the kind of information specific to producers as data. Data is the information about an object gained in virtue of acquaintance with that thing.7 If someone was perceiving a ship made of wood, that verity (that the ship is wooden) would be a datum for him; however, when he passed that verity on to somebody else (who was not acquainted with the ship), for the latter person it would become information. Only producers know directly the object that is about to be named; therefore only they can introduce some new data about it into the new practice of calling it ‘N’. However, of course, they can also inject into the practice some non-data information. Persons who are not acquainted with the bearer of a given name, and ipso facto do not inject new data, are called consumers. At the early stage of a practice, consumers are introduced by producers, while as the practice grows, new consumers will more and more often be introduced by other consumers. There are different ways of introducing; the most common is via sentences of the form “N is the φ” (as uttered by the introducer), where “the φ” is a definite description uniquely identifying the individual (X).
Evans accurately remarks that:

When someone hears the claim “N is the φ,” and takes ‘N’ to be an ordinary proper name, he supposes that there is (or was) a person going about the world known as N; and that the claim embodies not only information that there is

7. The class of data is a subset of the class of information.

something that is uniquely φ, but also an identification of that object as the object known as N. (Evans 1982: 378)

The recipient of the introducing sentence can deduce from it, inter alia, that there is some object X that is known as N, and thus that there is a practice of dubbing X ‘N’. It might be said that the hearer is an eye-witness (or rather an ear-witness) to the existence of such a practice. Evans’ claim is important, because it aptly accounts for how we in fact use proper names.8

3. How to distinguish practices?

A natural question to ask now is how to identify which practice a speaker takes part in when he uses a name ‘N’. In other words, how do we distinguish various practices as separate? Doubtlessly, identifying the occurring name is not enough here. Words themselves can be identified syntactically by assigning them to the relevant types. However, it is an extremely common case that tokens of the same name-type occur as referring to different objects, and thus they surely belong to separate practices. For instance, the name ‘Filip’ refers to me, yet it also refers to countless other men, and we certainly would not be willing to count every use of a token belonging to the type ‘Filip’ as part of the same single practice. Similarly, identifying the bearer of a given name is not enough. I am the bearer of the name ‘Filip’, but I can also be the bearer of many various nicknames; e.g. someone may dub me ‘Mr. Proper Name’. We definitely would not agree to judging a use of ‘Filip’ and a use of ‘Mr. Proper Name’ – both referring to the same object, me – as belonging to one practice. What may seem less obvious, identifying name and bearer together will not solve our puzzle either. Consider the following double-life scenario. There is a practice of calling me ‘Filip’. People who take part in this

8. It also shows how vicious the phrases are that I describe as “mock names” later in this paper.

practice possess some information about me, e.g. that I am interested in philosophy of language, or that I am thin, I have dark hair, and so on. By day I lead the peaceful life of a thin and dark-haired philosopher; however, by night I become a spy in some secret service. As I work there, I put pillows under my clothes and wear a blonde wig to change my look. It is widely known that spies have nicknames, and because I try to be very clever and cunning, I have decided to take my original name as a nickname. In effect, the people with whom I work as a spy know me as Filip, who is stocky and blonde. Thus, the people who know me by day call me ‘Filip’ and my collaborators in the secret service also call me ‘Filip’. However, even though both by day and by night we deal with tokens of the same name and with exactly the same person as the reference of the name, I think it is perfectly reasonable to speak of two distinct practices. One practice is run by day, and the other – by night. What, then, can enable us to distinguish the practice occurring by day from the one led by night? The name and the person in question are the same. Yet what changes from the former context to the latter is the set of information concerning the name and the bearer circulating among the practices’ members. The set of information circulating by day includes that I am thin and dark-haired, while the night one – that I am stocky and blonde. Two different sets of information define two separate practices. Thus, identifying sets of information is the key to identifying distinct practices.

4. Two main theses concerning reference

Every theory of reference for proper names has to explain how the reference mechanism works, i.e. what determines that a given use of a name refers to this – and not to some other – particular object. I believe that the Hybrid Theory has found the golden mean between the orthodoxies of Descriptivism and the Causal Theory.
Undoubtedly, users of a name do have some beliefs that they associate with the name as well as with its bearer. However, this information does not have to be decisive for determining which object is the reference of the name. There is no need to

know a single fact uniquely identifying Feynman to be able to refer to him with the name ‘Feynman’.9 On the other hand, acquiring the ability to refer is definitely not as facile as Kripke claimed it to be, and the information about the bearer of the name is not redundant in the presence of causal communication chains. Now I would like to present the two main theses of the Hybrid Theory concerning the way in which the mechanism of proper names’ reference works. The first thesis was offered by Evans and the second one is mine.

I. The information possessed by a speaker about the bearer of a name ‘N’ does not determine which object the speaker refers to when using the name; however, it determines which practice the speaker’s use of the name belongs to (see Evans 1982: 384).

II. Which object a speaker refers to when using a name ‘N’ is determined by which object was named with ‘N’ during the naming that initiated the practice of using ‘N’ to which the speaker’s use belongs.

I cannot go into details here, but the Hybrid Theory based on the two above theses is able to solve some traditional problems concerning proper names, like Frege’s puzzle, the empty names issue or the difficulty with sentences about existence (with a proper name as subject). On the other hand, the Hybrid Theory stays completely insensitive to the arguments advanced against both Descriptivism and the Causal Theory (listed in the Introduction).

5. Mock proper names

In this section, I would like to present one of the additions to the Hybrid Theory that I believe is quite an interesting and significant extension of the theory. In the above-cited passage, Evans (1982) uses the phrase “ordinary

9. I refer here, of course, to Kripke’s famous Feynman case from the second lecture of his (1972).

proper names”, but he does not explain in the book what it means for a proper name to be an ordinary one and what differentiates ordinary proper names from non-ordinary ones. I expand Evans’ theory to say that the great majority of proper names are ordinary proper names. ‘Filip’, ‘Aristotle’, ‘Shakespeare’, ‘Warsaw’, etc. – these are all ordinary proper names. However, I claim that there are some words that look like (or behave like) proper names, but there is something vicious about them that leads me to define them as mock proper names. Mock proper names are those for which the reference mechanism can be fully explained by Descriptivism, since they are nothing more than abbreviations of descriptions. To clarify my claim we need to move back to the naming act. The crux of naming is fixing the reference to the object which is about to be named. Thus, it is very important for NSs to include some phrase that correctly fixes the reference. There are several ways of fixing reference, and one of them is to use a definite description uniquely identifying the object in question. As we have known since Donnellan’s works (see his 1966 and 1968), definite descriptions can occur in either attributive or referential use. Very briefly speaking, a description is used attributively when its descriptive content plays the decisive role in determining which object is the reference of that description. When we use a description in the attributive way, we do not want to refer to some particular object but to whichever object possesses the property mentioned in the description. When we see Smith’s corpse and say “Smith’s murderer is insane”, we do not have the intention to refer to some particular person, but to whoever in fact murdered Smith (see Donnellan 1966: 285-286). It might be said that in the attributive use of a description it is the property mentioned in the description that is important, while the object referred to is considered only for the sake of having that property.
On the contrary, in the referential use it is the object that is most significant, whereas the content concerning some property stays peripheral. Imagine I see a man at a party and utter the sentence “The man drinking whisky wears an awful tie”; in such a case it does not matter whether that man drinks whisky or iced tea – I refer to this particular man

and speak truly or falsely of him that he wears an awful tie (for expanded considerations concerning attributive and referential uses of descriptions see my paper 2007).10 My claim is that every time we deal with naming via some ordinary proper name, if a description plays the role of the element fixing the reference to the object being named, the description is used referentially. On the other hand, if a description used attributively is used to fix the reference, then we deal with a mock proper name. Suppose I am a researcher of aquatic fauna and I am especially interested in stating something about the heaviest fish in the Black Sea; I can naturally use the description “the heaviest fish in the Black Sea” to do that. I believe that judging such a use of the description as attributive does not arouse any controversy – I do not want to talk about some particular fish, but about whichever animal possesses the property of being the heaviest fish in the Black Sea; so I may utter, for instance: “The heaviest fish in the Black Sea weighs less than the lightest elephant in Africa”. However, if using the description for some reason appears inconvenient to me, I may try to create a shortcut for the description and say “Let’s call the heaviest fish in the Black Sea Oscar”.11 Then, in my opinion, I do not introduce a new ordinary proper name, but a mock name. No actual act of naming took place; what I did in fact was create a definition: “Oscar =df the object possessing the property of being the heaviest fish in the Black Sea (whichever it is).” As a result, ‘Oscar’ looks like an ordinary proper name, but it is just a mock name – an abbreviation for a description used attributively. If somewhere in the depths of the Black Sea some rather slight but very predatory fish devoured Oscar whole, we would not, I suppose, have any problem claiming that from this moment on the name ‘Oscar’ refers to that predatory fish.
Both before and after the change of reference of the description “the heaviest fish in the Black Sea” we could use the sentence “The heaviest fish in the Black Sea weighs less than the lightest elephant in Africa” and in both cases we would express exactly


10. Thus, it does not matter whether the description “the man drinking whisky” is satisfied by the person I want to talk about.
11. This is a modified version of an example from Devitt (1981: 40-41).

the same proposition, namely that the relation of weighing less than occurs between some object as having some property (being the heaviest fish in the Black Sea) and some other object as having another property (being the lightest elephant in Africa). It seems entirely reasonable to ask whether mock names are proper names at all. They do not serve the function distinctive of proper names, namely that with a proper name we always refer to the same particular object. As we have seen, mock names do not refer to a particular object at every time, but rather to the object which satisfies the relevant description at the time of using the mock name. However, independently of whether we would like to qualify mock names as proper names or not, mock names exist in our language (although they are far less common than ordinary proper names). Words like ‘Zeus’, ‘Jack the Ripper’, and probably also ‘Homer’ are examples of mock names. They refer to whichever object has some particular feature; for instance, ‘Zeus’ refers to any object that is the Greek king of the gods, is the ruler of Mount Olympus, and so on.

References
Devitt, Michael 1981. Designation. New York: Columbia University Press.
Donnellan, Keith 1966. Reference and Definite Descriptions. The Philosophical Review 75: 3, 281-304.
Donnellan, Keith 1968. Putting Humpty Dumpty Together Again. The Philosophical Review 77: 2, 203-215.
Evans, Gareth 1973. The Causal Theory of Names. Aristotelian Society Supplementary Volume 47, 187-208.
Evans, Gareth 1982. The Varieties of Reference. New York: Oxford University Press.
Kawczyński, Filip 2007. O atrybutywnych i referencyjnych użyciach deskrypcji określonych. Filozofia Nauki 60: 4, 15-35.
Kripke, Saul 1972. Naming and Necessity. Oxford: Blackwell Publishers.
Putnam, Hilary 1973. Explanation and Reference. In: Patrick Maynard (ed.), Conceptual Change. Dordrecht: Reidel, 199-221.

Searle, John 1983. Intentionality. An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
Tałasiewicz, Mieszko 2009. Nazwy własne a użycia imienne. Filozofia Nauki (forthcoming).

Agnieszka Kułacka
King’s College London
[email protected]

On the Nature of Statistical Language Laws

Abstract: This article discusses the nature of language laws, with particular focus on statistical language laws. We discuss the notion of a law of science and describe the types of laws with regard to language laws. We also study the case of the Menzerath-Altmann law to show the contemporary methods of investigating language laws.

0. Introduction
Mauro Dorato, in The Software of the Universe. An Introduction to the History and Philosophy of Laws of Nature, says: “Although the discovery of laws is commonly regarded as the most important goal of the scientific enterprise, as well as being the engine of the technological revolutions that continue to transform our lives, the role the law of nature plays in our knowledge has not been understood, and is still at the centre of lively discussions among both scientists and philosophers.” (Dorato 2005: IX) Is the nature of the law and its role in linguistics fully and well understood by researchers? The state of the art lacks an up-to-date description of statistical language laws. In this article I will draw an outline of the definition of a law of science and its types as applied to linguistics. I will briefly discuss the Neogrammarians’ approach to studying language laws. My main focus will be on the contemporary state of the art in statistical linguistics with regard to the discovery and verification of statistical language laws.

1. Definition of law of science
The definition of the law of science provided below is based on Ajdukiewicz (1974), Armstrong (1983), Cackowski (1987), Dorato

(2005), Kemeny (1967) and Krajewski (1998). The law of science is perceived as a relation between a class of entities, a class of natural phenomena, a class of events, a class of things and/or a class of their characteristics. The relation, F, is defined on a class {x_1, x_2, ..., x_n} described by the law in the following manner:

(1)  F(x_1, x_2, ..., x_n) = const.

Each law of science is formulated as a logical proposition, either analytic or synthetic. Analytic propositions are those that are true simply by virtue of their meaning and, as a priori propositions, are to be proved. Synthetic propositions, as a posteriori propositions, can be falsified or verified in an experiment. They contain information about the real world, gathered by the analysis of experimental data. Thus a law of nature is a synthetic proposition, while any mathematical theorem or formula is an analytic proposition, derived either from previously proved theorems or directly from the established axioms of the theory. It is assumed that a law of science which is strictly general has an infinite range in terms of place and time: all phenomena that it describes are bound by it regardless of the time and place in which they exist. For general laws this requirement is relaxed by introducing initial or boundary conditions. Formula (1) can now be refined by incorporating the conditions C_1, C_2, ..., C_w and the domain D of only those phenomena {x_1, x_2, ..., x_n} which abide by the law:

(2)  (∀x_1, x_2, ..., x_n ∈ D)(C_1 ∧ C_2 ∧ ... ∧ C_w → F(x_1, x_2, ..., x_n) = const.)

One should notice that n can be an infinite number. It is important to see the place of laws of science in the system of knowledge. Science is an open system, requiring continuous improvement and the closing of various gaps. At the foundation of knowledge lie individual facts, which need to be systemised into classes possessing certain characteristics. Having established that, a scientist may discover a variety of relations between them which, when verified, can constitute a law of science. These laws, together with primitive notions, axioms,

syntax and, if they all observe the rule of logical coherence, can constitute a theory of science.

2. Types of laws
Let us take a closer look at the types of laws as they exist in science. Since we are mainly concerned with linguistics, the examples given will be taken from this research area.

2.1. Qualitative and quantitative laws
The type of relation F as described by formula (2) divides laws into two types: qualitative and quantitative. An example of the former is the Watson law, stating that the third person singular plays a key role in the evolution of the Indo-European languages: it blocks the changes in the remaining verb forms (cf. Collinge 1985: 239-240). The relation between the entities is non-numerical and shows the influence of one upon the other. The Sherman law can serve as an example of a quantitative law. It concerns the distribution of the lengths of sentences and clauses in a text. It says that these lengths are neither chaotic nor deterministic, nor are they governed by any rules; rather, the distribution depends on some forces acting during the speech act (cf. Altmann 1992).

2.2. Diachronic and synchronic laws
The distinction drawn here is with regard to time: diachronic events follow one another on a timeline, while synchronic phenomena can coexist at the same point in time. Kuryłowicz and Mańczak discovered a series of six diachronic laws, one of which is the principle of analogy. For each morphological derivation resulting in two distinguished forms, the derived form will acquire the primary status, and the original form – the secondary status. This may explain the opposition brethren/brothers. The original plural of the noun brother was brethren, but English speakers started to observe the rule of adding ‘s’ to make plurals, which resulted

in the original plural taking on the secondary function with its modern meaning (cf. Collinge 1985: 249-252). The Krylov law is an example of a synchronic law, as it describes the distribution of the number of meanings in a monolingual dictionary. It says that the number of meanings, x, of lexemes in a dictionary is inversely proportional to the frequency, y, of the lexemes with x meanings (cf. Hammerl and Sambor 1993).

2.3. Deterministic and statistical laws
The underlying force contributing to the division between deterministic and statistical laws is determinism. If the same conditions occur, a deterministic law must hold. Statistical laws, by contrast, either hold with a certain probability or the relation occurs only for sufficiently large samples. An example of a deterministic law is de Saussure’s law concerning the Lithuanian language. It says that the stress of the rising syllable of a Nominative singular noun is moved to the falling syllable of its Dative plural (Collinge 1985: 149-151). George Zipf formulated a series of statistical laws. One of them says that the most frequently used words in texts are the shortest, regardless of the unit of measurement (Sambor 1972).

2.4. Empirical and theoretical laws
The researchers’ access to entities underlies the distinction between empirical and theoretical laws. Empirical laws can be verified and falsified in an experiment, while theoretical laws concern entities outside the range of possible observation or research. The difference between the two is blurry, as the status of a given law can change once the technology to perform an experiment to prove or disprove it has been developed. Also, having gathered some data, a scientist makes the first attempt to formulate a law governing the entities.

3. Characteristics of the Neogrammarians’ approach
The phonetic laws discovered in the 19th century are the prototypes of statistical language laws; however, the methods employed by the Neogrammarians were distinctively different. The Neogrammarians approached the research into language laws following the newly developed historical-comparative method. The laws they dealt with were qualitative, diachronic and deterministic. Moreover, they were convinced that all language laws have no exceptions and that they observe the principle of analogy: if there are exceptions, some as yet unknown forces preventing some entities from abiding by the law have simply not been discovered. A classical example of this phenomenon is the Grimm law, which needed to be modified to include all phonemes and is now known as the Verner law. Nevertheless, the major achievement of the Neogrammarians was to fully formulate the notion of a language law. The approach of considering a language as an organism observing certain rules of nature was borrowed from the natural scientists of this era. August Schleicher was the first linguist to put forward the proposition of regarding glottology as a science. In his opinion a language is an element of nature, which is born, develops and dies like every other living organism; therefore, he argues, one has to employ the methods of science in linguistic research. This opinion was modified by R. Riedl, who claims that an individual, i.e. a person speaking the language, plays a key role in the process of language development. Later, Wilhelm von Humboldt made a detailed comparison of the characteristics of the law of science and the language law and made an attempt to discuss the notion of a phonetic law (Kovacs 1977).

4. Characteristics of the modern approach
The modern approach to studying language laws is the immediate descendant of the Neogrammarians’ approach. The continuous search for language universals is one path that is followed.
In this paper we will focus mainly on statistical linguistics and the view represented by researchers in this area. These linguists are interested in quantitative and statistical laws, and they consider both diachronic and synchronic laws.

Our case study, the Menzerath-Altmann law, exemplifies a synchronic law and will be discussed in due course. The Piotrowski law is an example of a diachronic statistical law; it states that the changes in a word form are the resultant of the interaction between the old and the new form. A differential equation capturing the law, making use of relative frequencies, has been formulated.
What is a statistical law in general? First of all, it is a law, so one can apply formula (2) when giving a mathematical description of it. One should notice that the possibility of verifying any law is very limited: a researcher can only reach a representative sample, given time and space constraints. In this view any law could be perceived as statistical. It is not so, because it is not the method that determines the nature of a law. How, then, can one distinguish a law that is statistical from one that is not? There are two major types of statistical laws: in the narrower sense and in the broader sense. A law which holds with a certain probability less than 1 is one subtype of the former; another subtype is a relation between a variable and the probability of its having a given value, called a probability distribution. The Krylov law, mentioned above, is an example of this subtype. One deals with a statistical law in the broader sense when a certain relation holds only for a sufficiently large sample, as in our case study.

5. Case Study – the Menzerath-Altmann law
The Menzerath-Altmann law (henceforth the MA law) states that the longer the language construct, the shorter its constituents. I verified the law in Polish and English syntax. On the syntactic level, the law can be interpreted as: statistically, the longer the sentence, measured in the number of clauses, the shorter the average length of its clauses, measured in words. Gabriel Altmann, in Altmann (1980), proposed a differential equation to describe the law.
The decrease dy of the average length of a clause, relative to the change dx in the length of a sentence, is directly proportional to the length of a clause, y, and inversely proportional to the length of a sentence, x:

(3)  dy ∝ −(y/x) dx

which can be expressed as the differential equation:

(4)  dy/y = −b · dx/x

where b is a coefficient.
What is the modern approach to the research into the law? At first the procedure has to be well designed. Then, in the course of verifying the law, some initial conditions can be discovered. Based on theoretical reasoning, a formula to capture the relation can be formulated. As there might be some coefficients in the formula, one can make an attempt to interpret them in the linguistic context. Finally, the necessity of the law needs to be discussed.

5.1. The procedure of verifying the law
For each law the procedure is different and depends on the nature of that particular law. For the MA law I designed the following procedure (cf. Kułacka 2008; 2009b): one needs to assign ranks to the average lengths of clauses; the rank of the length of a sentence is the number of its clauses. In Table 1 below I gathered the data from The da Vinci Code by Dan Brown. In the table the notation is as follows: x – the number of clauses, y – the average length of a clause, m – the number of sentences analysed in the experiment, r – the rank of the length of a clause, d_i – the difference between x and r in the i-th row. One has to notice that only 98% of the sentences were taken into account, for it is a statistical law and may not work on all data.

x    y        m      r    d_i
1    6.7584    745   1    0
2    6.1677    465   3    1
3    6.1676    275   2    1
4    5.7442     86   4    0
5    5.5928     28   5    0
6    5.3627     17   -    -
7    5.8571      5   -    -
8    6.4583      3   -    -
9    6.4444      2   -    -
     total     1626

Table 1. The data from The da Vinci Code by Dan Brown.
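The d_i column above is all that the next step of the procedure needs. A minimal Python sketch of that computation (assuming the standard Spearman formula r_S = 1 − 6∑d_i²/(n(n² − 1)) over the ranked rows; not code from the paper):

```python
def spearman_from_rank_diffs(d):
    """Spearman rank correlation computed from the rank differences d_i."""
    n = len(d)
    return 1 - 6 * sum(di ** 2 for di in d) / (n * (n ** 2 - 1))

# d_i column for the five ranked rows of Table 1 (The da Vinci Code):
r_s = spearman_from_rank_diffs([0, 1, 1, 0, 0])
print(round(r_s, 4))  # 0.9
```

The value 0.9 agrees with the one derived in the text below.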

Now we need to find the value of the Spearman rank correlation coefficient.[1] We use the formula:

(5)  r_S = 1 − 6∑d_i² / (n(n² − 1))

where n is the number of data points. In our case n = 5 and thus we obtain r_S = 0.9. The last stage of verifying the law is to check with statistical tables whether r_S is large enough. If so, the hypothesis that the longer the sentence, the shorter its clauses, is confirmed. For a one-sided test at the 5% significance level the value of r_S calculated for the data in Table 1 is sufficiently large. The MA law has been verified.

5.2. Initial and boundary conditions of the law
In my research I discovered the following initial and boundary conditions: (1) one has to apply the corollary of the Sherman law: the research should be performed on full chapters; (2) the text segment

[1] This coefficient is used to measure a non-linear correlation between two sets of data.

under research must be sufficiently long; (3) the average length of a simple sentence must be greater than a certain value.
To show the importance of the first initial condition, let us consider the data in Table 2. Even though a large amount of data was analysed, the law fails to hold. This is due to the fact that Cujo by Stephen King is not split into chapters (for more examples and a detailed analysis, see Kułacka 2008). The notation is as for Table 1 in Section 5.1.

x     y        m      r
1     7.3290   1160   1
2     6.8215    750   3
3     6.7576    308   5
4     7.0869    149   2
5     6.8136     59   4
6     6.8938     26   -
7     7.1688     11   -
8     8.6250      1   -
9     7.2222      1   -
10    -           0   -
11    5.9099      1   -
12    -           0   -
14    5.1421      1   -
18    8.6667      1   -
      total    2472

Table 2. The data from Cujo by Stephen King

In Table 3 I gathered the data from the first two chapters and the prologue of The da Vinci Code. From the discussion in Section 5.1 we know that the law holds for this text, but only when a sufficiently long text segment is analysed. For the data in Table 3, though we deal with the same text, the law does not hold.

x    y        m     r
1    6.1538   156   1
2    5.6467    92   3
3    6.0000    43   2
4    4.5962    13   4
5    5.2667     3   -
6    6.2500     2   -
7    5.3571     2   -
8    4.7500     1   -
     total    312

Table 3. The data from chapters P-2 of The da Vinci Code by Dan Brown.

During the research, I noticed that even though the initial conditions (1) and (2) were met for some texts, the law would not hold, which led me to the discovery of a boundary condition. I analysed the average length of a simple sentence (on average, simple sentences were the longest clauses in the texts) and discovered that the failure was due to the fact that they were not sufficiently long.

title                                Polish version   English version
Hobbit                                5.8004           6.6142
Semantics 2                          14.8514          15.8654
The Outline of Mathematical Logic    10.0218          11.8708
To kill a mocking bird                5.5487           5.7181
Darkly dreaming Dexter                4.8042           5.6301

Table 4. Average lengths of simple sentences

In Table 4 I gathered the average lengths of simple sentences for various texts in Polish and English versions. To kill a mocking bird and Darkly dreaming Dexter were the texts for which the MA law did not hold. As we can see in the table, the average length of a simple sentence in these texts is shorter than in those for which the law holds. I assume that there is a lower boundary on the average length of a simple sentence below which the MA law will not hold.

5.3. A discrete formula
The formula that had been used in the research on the MA law was for continuous data. The differential equation (4) was integrated and the following formula derived:

(6)  y = ax^b

where y is the average length of the constituent (the clause), x is the length of the construct (the sentence), and a, b are coefficients. In Kułacka and Mačutek (2007) it has been proved that one has to apply the discrete formula (7) for a more adequate model of the discrete data:

(7)  y(n) = y(1) · ∏_{i=2}^{n} (b/(i − 1) + 1)

where b is a coefficient, y(n) is the average clause length in an n-clause sentence and y(1) the average length of a simple sentence in the text.

5.4. Hypotheses related to MA law
After performing preliminary research, I formulated several hypotheses.

Hypothesis 1. The MA law holds (or fails to hold) in a systematic way.

To verify Hypothesis 1, let us consider the values of the Spearman rank correlation coefficient (r_S) for the analysed texts. They are shown in Table 5. The underlined values of the coefficient are the ones which are sufficiently large (cf. Section 5.1) and the MA law holds for these text segments.
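Before turning to the tables, note that the discrete formula (7) is easy to evaluate directly. A minimal Python sketch (the y(1) and b values below are taken from Table 8, English Hobbit, chapters 1-5, purely for illustration):

```python
def avg_clause_length(n, y1, b):
    """Average clause length of an n-clause sentence, per the discrete formula (7)."""
    y = y1
    for i in range(2, n + 1):
        y *= b / (i - 1) + 1  # multiply in the i-th factor of the product
    return y

# y(1) and b for Hobbit (English, chapters 1-5), from Table 8:
for n in range(1, 5):
    print(n, round(avg_clause_length(n, 6.6142, -0.0754), 4))
```

For negative b the predicted average clause length decreases as the number of clauses n grows, which is exactly the MA tendency.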

title                    chapters     rS        n   m (sentences)
Hobbit                   1            0.6786    7    472
                         1-2          0.6071    7    802
                         1-3          0.7500    7    973
                         1-4          0.8571    7   1182
                         1-5          0.7143    7   1688
Semantics 2              10.1         0.8571    7     88
                         10.1-10.2    0.9833    9    184
                         10.1-10.3    0.9762    8    317
                         10.1-10.4    1         7    496
The Outline of           1.0-1.1      0.3       5     93
Mathematical Logic       1.0-1.2     -0.4       6    131
                         1.0-1.3      1         5    314
                         1.0-1.4      1         5    399
                         1.0-1.5      0.9       5    568
To kill a mocking bird   1            0.5429    6    253
                         1-2          0.5717    6    437
                         1-3          0.3714    6    709
                         1-4          0.1429    6    953
                         1-5          0.0857    6   1196
                         1-6          0.0857    6   1454
                         1-7          0.0857    6   1635
                         1-8         -0.0286    6   1977
Darkly dreaming Dexter   1            0.6       5    331
                         1-2          0.6       5    595
                         1-3          0.6       5    905
                         1-4          0.6       5   1187
                         1-5          0.7       5   1426
                         1-6          0.7       5   1717
                         1-7          0.4       5   1994

Table 5. The values of Spearman rank correlation coefficients.
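Reading Table 5 amounts to comparing each r_S against the critical value for its n. A sketch of that check (the one-sided 5% critical values below come from standard Spearman tables and are quoted here as assumptions, not from the paper):

```python
# One-sided 5% critical values for Spearman's r_S at small n (standard tables; assumed).
CRITICAL = {5: 0.900, 6: 0.829, 7: 0.714, 8: 0.643, 9: 0.600}

def ma_law_holds(r_s, n):
    """True when r_S reaches the critical value for n ranked pairs."""
    return r_s >= CRITICAL[n]

# Hobbit (English), chapters 1 through 1-5, from Table 5 (n = 7 throughout):
for r_s in [0.6786, 0.6071, 0.7500, 0.8571, 0.7143]:
    print(ma_law_holds(r_s, 7))  # False, False, True, True, True
```

With these thresholds the English Hobbit first reaches significance at chapters 1-3 (m = 973), which matches the minimal segment length reported in Table 7.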

For the first three texts the values of the Spearman rank correlation coefficient increase until they reach the critical value. We can also see that as we add new data the law continues to hold. For the remaining two texts the law did not hold, and the values of the coefficient neither reached

the indicated value nor increased enough to predict that they might reach it at some point. The coefficient seems to stabilise its value.

Hypothesis 2. The minimal length of a text segment for which the MA law holds depends on the syntactic structure of the text.

By syntactic structure I mean the complexity of a text in terms of the number of clauses of 98% of its sentences and their length in terms of the number of words. In Table 6 I gathered the values of the Spearman rank correlation coefficient calculated for the three texts for which the MA law holds.

title           chapters   rS(Polish)  n(Polish)  m(Polish)   rS(English)  n(English)  m(English)
Hobbit          1          0.6786      7           487        0.6786       7            472
                1-2        0.3214      7           829        0.6071       7            802
                1-3        0.4643      7           982        0.7500       7            973
                1-4        0.7857      7          1213        0.8571       7           1182
                1-5        0.7857      7          1712        0.7143       7           1688
Semantics 2     10.1       0.8         4           104        0.8571       7             88
                10.1-2     0.8         4           212        0.9833       9            184
                10.1-3     1           4           336        0.9762       8            317
                10.1-4     1           4           537        1            7            496
The Outline …   1.0-1.1    0           5            86        0.3          5             93
                1.0-1.2    0.1         5           116        -0.4         6            131
                1.0-1.3    0.7         5           263        1            5            314
                1.0-1.4    0.9         5           383        1            5            399
                1.0-1.5    0.9         5           541        0.9          5            568

Table 6. Spearman rank correlation for Polish and English versions.

On average, the syntactic representation of an English text having the same semantic representation as a Polish text is longer. If we compare columns 3 and 6 in the table, we can see that for English texts the law starts to hold on a shorter text segment than for their Polish versions. If we look at the values of the Spearman rank correlation coefficient for Semantics 2, we can see that there are great discrepancies in terms of the number of clauses. The MA law starts to hold for a shorter text segment for

the more complex version: 98% of sentences in the English version have 7 or fewer clauses, while in the Polish version they have 4 or fewer clauses. The complexity of a clause in terms of the number of words it comprises leads us to Hypothesis 3, which is a corollary of Hypothesis 2.

Hypothesis 3. The minimal length of a text segment for which the MA law holds is, for scholarly texts, less than the corresponding length for literary texts.

In Table 7 we gathered the minimal lengths of a text segment for which the MA law holds in the Polish and English versions.

title                                m(Polish)   m(English)
Hobbit                               1213        973
Semantics 2                           336         88
The Outline of Mathematical Logic     383        314

Table 7. The minimal length of text segments

From the data in the table it can be clearly seen that for Semantics 2 and The Outline of Mathematical Logic, which represent scholarly texts, the minimal length of a text segment is shorter than for Hobbit, a representative of literary texts. If we compare these data with the lengths of simple sentences of the texts as presented in Table 4, we can see that scholarly texts are more complex in terms of the number of words, which contributes to Hypothesis 2 being true: for more complex texts the minimal length of a text segment for which the MA law holds is shorter. Let us turn now to the coefficients b and y(1) occurring in formula (7).

Hypothesis 4. The value of the coefficient y(1) for a literary text is less than 10 and for a scholarly text greater than 10. The value of the coefficient b for a literary text is greater than -0.11 and for a scholarly text less than -0.11.

In Table 8 I gathered the values of the coefficients b and y(1), calculated in the way shown in Kułacka and Mačutek (2007).

title           chapters   y(1) Polish   y(1) English   b Polish   b English
Hobbit          1-4         6.1987        6.9929        -0.0931    -0.0813
                1-5         5.8004        6.6142        -0.0660    -0.0754
Semantics 2     10.1-3     14.6395       16.3731        -0.3942    -0.2714
                10.1-4     14.8514       15.8654        -0.3137    -0.2619
The Outline …   1.0-4      10.0722       12.0686        -0.1870    -0.2504
                1.0-5      10.0218       11.8708        -0.1769    -0.2498

Table 8. The values of coefficients b and y(1).

If we look at the data in columns 3 and 4 of the table, we can see that for both versions the values of the coefficient y(1) are greater than 10 for the scholarly texts, represented by Semantics 2 and The Outline of Mathematical Logic, and less than 10 for Hobbit, representing literary texts. Also, if we look at the numbers in columns 5 and 6, which are the values of the coefficient b, we can notice that for a literary text they are greater than -0.11 and for scholarly texts they are less than -0.11. Hypothesis 5 is linked to Hypothesis 4 in the sense that it discusses the values of the coefficients for different languages.

Hypothesis 5. The value of the coefficient y(1) is greater for the English version than for the Polish version. The value of the coefficient b is approximately equal in both versions, regardless of the language.

To verify the hypothesis, I will use the data gathered in Table 8. If we compare columns 3 and 4, it is clear that for equivalent texts the values of the coefficient y(1) for the English versions are greater. However, the data in columns 5 and 6 suggest that the coefficient b is not susceptible to the language of choice. It may be responsible for the syntactic structure of a text.
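Hypotheses 4 and 5 amount to simple threshold tests on the fitted coefficients. A minimal sketch (the thresholds 10 and -0.11 are those stated in Hypothesis 4; the sample values come from Table 8, English versions):

```python
def text_type(y1, b):
    """Classify a text by Hypothesis 4: y(1) > 10 and b < -0.11 indicate a scholarly text."""
    return "scholarly" if y1 > 10 and b < -0.11 else "literary"

# English versions, chapters as in Table 8:
print(text_type(6.6142, -0.0754))   # Hobbit -> literary
print(text_type(15.8654, -0.2619))  # Semantics 2 -> scholarly
print(text_type(11.8708, -0.2498))  # The Outline ... -> scholarly
```

This is only an illustrative restatement of the hypothesis as a decision rule, not a method proposed in the paper.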

5.5. The necessity of the law
For a better understanding of the necessity of the MA law (cf. Kułacka 2009a), let us consider a sentence. The elements of information, which are parts of the sense expressed by the sentence, are the number of words, the complexity of their pronunciation, their morphological and semantic complexity, the complexity of their syntactic structure, and the complexity of the syntactic structure of the clauses. For each of them there exists a certain activation level. The number of all activations taking place simultaneously cannot exceed an upper boundary, i.e. the capacity of working memory. If, then, more working-memory capacity has been reserved for the activation of the syntactic structure of a sentence, less can be allocated to the complexity of the other elements of the information, e.g. to the complexity of the syntactic structure of the words or their number in clauses. This, in turn, explains the necessity of the MA law: the more complex the sentence, the less complex are its clauses.
In the research that I conducted on the MA law in syntax, the complexity of a sentence was measured by the number of clauses and the complexity of a clause by the number of words used. It is understood that this is a simplified way of researching the MA law; however, it is the only one possible in the present state of the art. It is conceivable that the law is considered statistical for the very reason of the research technique, since measuring the real activation level of each of the elements would be too complicated at this stage. Moreover, following the capacity-constrained comprehension theory, the processes of activating the elements of information do not take place sequentially, as assumed by Köhler (1984), but simultaneously, which causes additional difficulty in measuring the real activation level of individual elements. The only existing tests examine the capacity of the working memory as a whole.

6. Conclusion
Let us look back and compare the two approaches that we discussed in the paper: the Neogrammarians’ approach and the modern one. Both groups of

researchers were interested in different types of laws. The Neogrammarians dealt with qualitative, diachronic and deterministic laws, while statistical linguists focus on quantitative and statistical laws, both diachronic and synchronic. What also cannot be overlooked is the type of methods that were put in place in the two research traditions. The Neogrammarians employed the historical-comparative method, while statistical linguists apply the hypothetical-deductive method. The former took into account initial and boundary conditions in some cases, and they made attempts to explain all the laws they discovered by one theory, e.g. a wave theory. Statistical linguists gear the procedure of verifying a law to the particular law they conduct the research on. Like the Neogrammarians they establish initial and boundary conditions if necessary, but unlike their predecessors they search for an adequate model and, once it is found, show by tests that it fits the empirical data. They also set up hypotheses related to the law and establish links between them. If a mathematical model has been found, the linguists interpret the coefficients occurring in the formula in the linguistic context. Finally, they explain the necessity of each particular law by the use of the Principle of Unification and the Principle of Diversification. However, I should mention here that the research done by statistical linguists may look scattered, and this paper makes a first attempt to bring the steps of statistical-linguistic research together. The case study shows the stages in practice.

References
Ajdukiewicz, Kazimierz 1974. Logika pragmatyczna [Pragmatic Logic]. Warszawa: Państwowe Wydawnictwo Naukowe.
Altmann, Gabriel 1980. Prolegomena to Menzerath’s Law. Glottometrika 2, 1-10.
Altmann, Gabriel 1992. Sherman’s Laws of Sentence Length Distribution. In: Pauli Saukkonen (ed.), What is language synergetics? Oulu: University of Oulu Printing Centre, 38-39.
Armstrong, David M. 1983. What is a Law of Nature? Cambridge: Cambridge University Press.

Cackowski, Zdzisław (ed.) 1987. Filozofia a nauka. Zarys encyklopedyczny [Philosophy and Science. An Encyclopedic Outline]. Wrocław: Zakład Narodowy im. Ossolińskich.
Collinge, Neville E. 1985. The Laws of Indo-European. Amsterdam: John Benjamins Publishing Company.
Dorato, Mauro 2005. The Software of the Universe. An Introduction to the History and Philosophy of Laws of Nature. Hants: Ashgate Publishing Limited.
Hammerl, Rolf 1987. Prawa językowe we współczesnej kwantytatywnej lingwistyce modelowej (Na przykładzie tzw. prawa Martina) [Language Laws in Contemporary Quantitative Model Linguistics (The Case Study: the Martin Law)]. Poradnik językowy 6, 414-428.
Hammerl, Rolf 1989. Cztery etapy rozwoju lingwistyki kwantytatywnej [Four Stages of Development of Quantitative Linguistics]. In: Władysław Lubaś (ed.), Wokół współczesnego języka polskiego II. Studia Leksykograficzne 3. Wrocław: Ossolineum, 115-126.
Hammerl, Rolf and Jadwiga Sambor 1993. O statystycznych prawach językowych [The Statistical Language Laws]. Warszawa: Zakład Semiotyki Logicznej Uniwersytetu Wrocławskiego.
Kemeny, John G. 1967. Nauka w oczach filozofa [A Philosopher Looks at Science]. Warszawa: Wydawnictwo Naukowe PWN.
Köhler, Reinhard 1984. Zur Interpretation des Menzerathschen Gesetzes [The Interpretation of the Menzerath Law]. Glottometrika 6, 177-183.
Kovacs, Ferenc 1977. Struktury i prawa językowe [Linguistic Structures and Linguistic Laws]. Wrocław: Ossolineum.
Krajewski, Władysław 1998. Prawa nauki. Przegląd zagadnień metodologicznych i filozoficznych [The Laws of Science. A Review of Methodological and Philosophical Issues]. Warszawa: Książka i Wiedza.
Kułacka, Agnieszka 2008. Badania nad prawem Menzeratha-Altmanna [Research on the Menzerath-Altmann Law]. LingVaria 2:6, 167-174.
Kułacka, Agnieszka 2009a. The Necessity of the Menzerath-Altmann Law. In: Anna Michońska-Stadnik (ed.), Anglica Wratislaviensia XLVII. In print.
Kułacka, Agnieszka 2009b. Warunki zachodzenia prawa Menzeratha-Altmanna [The Conditions for the Menzerath-Altmann Law to Hold]. LingVaria 1:7, 17-28.
Kułacka, Agnieszka and Ján Mačutek 2007. A Discrete Formula for the Menzerath-Altmann Law. Journal of Quantitative Linguistics 14, 23-32.
Sambor, Jadwiga 1972. Słowa i liczby. Zagadnienia językoznawstwa statystycznego [Words and Numbers. Some Issues of Statistical Linguistics]. Wrocław: Zakład Narodowy im. Ossolińskich.

Joanna Odrowąż-Sypniewska
University of Warsaw
[email protected]

Vagueness and Contextualism

Abstract: One of the most characteristic features of vague expressions is that they seem to be tolerant: if two objects differ only marginally in the relevant respect, then if one is in the extension of the given vague predicate, the other should be as well. This feature makes vague expressions susceptible to sorites paradoxes (such as the Bald Man paradox). Recently, a new – contextualist – account of vagueness has been proposed that is supposed to solve the paradox. In my paper, I will try to assess two contextualist theories of vagueness, Fara’s and Shapiro’s, and show their deficiencies. I will also suggest that subvaluation is the most adequate logic for the contextualist account proposed by Shapiro.

0. Introduction – the sorites paradox

The main problem with vague expressions (such as “tall”, “rich”, “bald”, etc.) is that they give rise to the sorites paradox. The most popular version of the paradox is the following:

A pile of 100 000 grains is a heap.
For any n, if a pile of n grains is a heap, then a pile of n–1 grains is a heap.
-------------------------------------------------
1 grain is a heap.

It is usually argued that the feature that is responsible for the paradox is the supposed tolerance of vague predicates. Tolerance amounts to the fact that vague predicates are insensitive to marginal changes: adding one hair does not change a bald man into a non-bald one, removing one grain of sand does not change a heap into something that is not a heap, etc. So we may formulate the following tolerance principle:

(TP) Suppose a predicate P is tolerant, and that two objects a, a’ in the field of P differ only marginally in the relevant respects (on which P is tolerant). Then if a has P, then a’ has P as well (cf. Shapiro 2003: 42).

It is the second premise of the sorites reasoning – the universal premise – that reflects the tolerance of vague terms, and it is this premise that is usually considered the culprit. Hence, the majority of proposed solutions attempt to deny it. The trick, however, is to reject it without thereby asserting its negation: there is an n such that a pile of n grains is a heap but a pile of n–1 grains is not a heap, or more generally: ∃n (F(n) ∧ ¬F(n–1)).
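The bind can be made vivid by brute force: in classical two-valued terms, keeping the first premise while denying the conclusion just is asserting a sharp cut-off somewhere in the series. The following sketch checks this over a short series; the ten-step length is an illustrative assumption, not anything from the text.

```python
# Brute-force check: any classical assignment of truth-values along a
# sorites series that makes the top a heap and the bottom not a heap
# must contain a sharp cut-off, i.e. an adjacent pair verifying
# ∃n (F(n) ∧ ¬F(n–1)).
from itertools import product

def has_cutoff(values):
    """values[i] is the truth of F at step i, ordered from big pile to small."""
    return any(a and not b for a, b in zip(values, values[1:]))

for values in product([True, False], repeat=10):
    if values[0] and not values[-1]:   # heap at the top, none at the bottom
        assert has_cutoff(values)      # ...so a boundary pair must exist
```

Since no assignment escapes the cut-off, anyone who rejects the universal premise without accepting its classical negation must revise the underlying logic, which is exactly the choice point the solutions below address.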

This last formula states in fact that there is a sharp boundary between bald and non-bald and between heaps and non-heaps. The existence of such a sharp cut-off point clearly does not agree with our practice of using vague predicates. It may be considered a datum that there is no cut-off point in the sorites series concerning a given vague term (i.e. a series that starts with, e.g., a 100 000-grain heap, has 99 999 intermediate steps differing by a single grain only, and ends with a 1-grain ‘heap’) such that all competent users of that term agree that it is indeed a cut-off point. Tolerance gives rise to the existence of borderline cases, i.e. cases of which it is doubtful whether the expression applies to them or not. It is often argued that it is permissible to include the borderline cases either in the extension or in the anti-extension of the given vague predicate. Because there is no fact of the matter whether a vague word applies to penumbral cases, people may choose freely how they want to treat those cases. The nature of vagueness is such that it allows ‘permissible disagreement at the margins’ (Wright 1994: 138).1 Any adequate solution to the sorites should tell us whether the inductive premise is true or false, and if it is false, what truth-value its negation has. If the negation is true, then the solution should explain how its truth is compatible with the existence of borderline cases. If it is not true, then the solution has to say what revision of classical logic must be made to accommodate this fact. Furthermore, since denying the inductive

1 Shapiro (2003: 43) calls this the open-texture thesis.

premise at first sight amounts to the denial of tolerance, any solution that rejects that premise should explain why vague predicates seem tolerant to us (see Fara 2000: 50n).

1. Sorites and Contextualism

Roughly speaking, contextualists argue that vague expressions are context-dependent and express various properties on different occasions. In this respect vague terms are similar to indexical expressions. Just as “you” may refer to different people in different contexts of use, “heap” and “tall” in one context may mean something different than in another context. Contextualists usually consider the so-called forced march version of the sorites argument. In this version we arrange men in a line, starting with a man who has no hair on his head and ending with a man who has a head full of hair. In any given pair of adjacent men the men differ by just one hair. Now we recruit a group of volunteers and tell them to proceed along the line and proclaim of each man whether he is bald or not. The volunteers are compelled to give a verdict each time. If there is a difference of opinion between them, they are supposed to reach the verdict by voting. We assume in addition that all our volunteers are competent speakers of English who will not want to call a person whose head is full of hair “bald”. So they start at the beginning of the line. The beginning is easy: they stand in front of the man with 0 hairs on his head and are asked: “Is he bald?” They answer “Yes”. Then they face a man with 1 hair on his head and the procedure is repeated. And so on. However, the further along the line they go, the more hesitant they will become. It will become harder for them to reach a verdict. The volunteers seem to be pulled in two different directions: on the one hand, on pain of being charged with incompetence they will have to make a switch at some point and start answering “No” to the questions posed to them.
On the other hand, changing the answer at any point seems completely arbitrary and moreover suggests that the point chosen is significantly different from all the neighboring points, which is something that the volunteers do not want to imply.

1.1. Shapiro’s conversational solution

Shapiro suggests that to solve the paradox we should restrict the alleged tolerance of vague predicates and assume a weaker principle of tolerance:2

(WTP) Suppose a predicate P is tolerant, and that two objects a, a’ in the field of P differ only marginally in the relevant respects (on which P is tolerant). If one competently judges a to have P, then she cannot judge a’ to not have P. (Shapiro 2003: 42)

So the principle does not concern the issue of whether a and a’ have P anymore: it is now a rule concerning competent judgment. It does not preclude the situation in which one of two marginally different objects has P and the other does not have P. It says merely that if we competently judge one to have P, then we cannot judge the other not to have P. The principle would not be violated if our competent speaker judges a to have P but does not consider the state of a’. If we then ask her to consider a’, she has two options: either she may judge a’ to have P, or she may judge a’ not to have P, but then she would have to withdraw her previous statement about a having P. Shapiro’s contextual solution makes use of David Lewis’s notion of conversational score. Such a score is a “local version of common knowledge” (Shapiro 2003: 45). Lewis (1983: 233) argues that

Apart from presuppositions on the score, there are assumptions and other statements to which the speakers (implicitly or explicitly) agreed during the conversation. For each type of element on the score there are relevant accommodation rules which govern the “kinematics of the conversational score” (Lewis 1983: 240). In general, thanks to such rules the conversational score adapts to the changing conditions of the 2

Shapiro stresses how much his account owes to that of Raffman (1994; 1996).

173 conversation: (…) conversational score does tend to evolve in such a way as is required in order to make whatever occurs count as correct play. (Lewis 1983: 240)

Let us now consider the forced march version of the paradox and see how the notions of conversational score and rules of accommodation may help solving it. The volunteers proceed along the line and proclaim each man bald up to, say, man 975. Thus, the statement “Man 974 is bald” has recently been added to the conversational score. However, presently they decide that man 975 is not bald. In declaring man 975 not bald – in observance to the weaker principle of tolerance – they implicitly deny that man 974 is bald, so “Man 974 is bald” is removed from the conversational record. Just as “Man 974 is bald” comes off the score, so does “Man 973 is bald” and quite a few recent judgments. A ‘jump’ in judgment does not violate the principle of tolerance, thanks to the fact that it involves a retraction of previous items from the conversational record. This phenomenon is called “backward spread” – how far the spread reaches is itself a vague matter. Of course the volunteers cannot withdraw all their previous judgments. At some point they will have to make another ‘jump’ and change their verdict. Each such ‘jump’ will result in another removal of recent pronouncements from the conversational score. The solution to the paradox is the following. The universal premise is false, but there is no counterinstance, because there is never a number n such that sentences “Man n is bald” and “Man n+1 is not bald” are both on the conversational record (at the same time). If “Man n is bald” is on the record and “Man n+1 is not bald” is to be added, “Man n is bald” is immediately removed. Change of score represents change of conversational contexts. It is to be expected that the extension of any vague predicate will vary with different conversational records among competent speakers, for there are many reasonable ways of drawing the boundaries between things that have P and things that do not have P. Shapiro (2003: 43) finds the supervaluational framework “natural and helpful here”, but

notices immediately that it has to be improved, for it does not do justice to the notion of truth. According to supervaluationism there are many admissible ways of making the boundaries of a vague predicate’s extension precise. If a sentence “a has P” is true in all such precisifications then it is super-true; if it is false in all such precisifications then it is super-false. Supervaluationists equate truth with super-truth and falsity with super-falsity. Therefore they treat borderline sentences, which are true in some admissible delineations and false in some admissible delineations, as devoid of truth value. Shapiro, following Lewis, argues that there is more to truth than super-truth and wants to regard borderline statements as true enough. According to Lewis (1983: 244), if a sentence is true in all delineations it is true simpliciter,

[b]ut also we treat a sentence more or less as if it is simply true, if it is true over a large enough part of the range of delineations of its vagueness. (For short: if it is true enough.)
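The score dynamics described above, jumps in judgment with backward spread, can be sketched as a toy simulation. The jump point and the spread width below are purely illustrative assumptions (the account itself stresses that how far the spread reaches is a vague matter), not values from Shapiro.

```python
# Toy model of the forced-march conversational record: a 'jump' to
# "not bald" retracts a few recent positive judgments (backward spread),
# so "man n is bald" and "man n+1 is not bald" never sit on the record
# together. jump_at and spread are illustrative assumptions.

def forced_march(n_men=20, jump_at=12, spread=3):
    """Walk the line once and return the final conversational record."""
    record = {}
    prev = "bald"
    for man in range(n_men):
        verdict = "bald" if man < jump_at else "not bald"
        if verdict == "not bald" and prev == "bald":
            # The jump: retract recent positive judgments from the record.
            for recent in range(max(0, man - spread), man):
                record.pop(recent, None)
        record[man] = verdict
        prev = verdict
    return record

record = forced_march()
# No n has "man n is bald" and "man n+1 is not bald" on the record
# at once: the retraction removes the offending conjunct first.
assert all(not (record.get(n) == "bald" and record.get(n + 1) == "not bald")
           for n in range(20))
```

On these settings men 9 to 11 simply drop off the record, which is the sense in which the universal premise is false on the score without any counterinstance being recorded.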

1.2. Fara’s interest-relativity solution

Graff Fara (2000) argues that vague predicates are governed by various constraints, among which is the similarity constraint, which replaces the tolerance principle:3

Whatever standard is in use for a vague expression, anything that is saliently similar, in the relevant respect, to something that meets the standard itself meets the standard; anything saliently similar to something that fails to meet the standard itself fails to meet the standard. (Fara 2000: 57)

Fara stresses that we use vague expressions with different standards on different occasions. Sometimes the variation of standards in use for a vague term is attributable to implicit comparison classes (e.g. tall for a jockey, tall for a three-year-old). However, different norms may operate for one comparison class. “Old for a dog” may mean either “has

3 She mentions clear-case, relational and coordinate constraints. Those constraints strongly resemble the penumbral connections mentioned by Fine (1975).

significantly more age than it is the norm for a dog to attain” (when said of a 20-year-old dog) or “has significantly more age than the peak age of good health for dogs” (when said of a 14-year-old dog).4 If two things are saliently similar, then one cannot be in the extension (or anti-extension) of a vague predicate, and the other not. However, if two things are similar, but not saliently so, then it may be that one is in the extension (or anti-extension) of a vague predicate, while the other is not. When we evaluate any given adjacent pair of objects in a sorites sequence, the very act of our evaluation raises the similarity of the pair to salience, rendering the proposition true for the pair we are considering. This is the reason why we cannot point to the boundary between those objects in a sorites series that have P and those that do not have P. Since the mere looking at any adjacent pair makes their similarity salient, “…the boundary can never be where we are looking. It shifts around” (Fara 2000: 59). For any n that we focus upon, the conditional “If a pile of n grains is a heap, then a pile of n–1 grains is a heap” will be true (at the moment at which we are considering it). However, once we stop considering that very pair, their similarity ceases to be salient and the boundary may well be located between them. The requirement that similarity is to be salient makes it possible to deny that the first and the last element of the sorites series are similar (in the relevant respect). Objects in adjacent pairs are saliently similar only when we consider them. So there is no problem with saying that somewhere in the sorites series there is a pair such that one object in that pair has P and the other does not (provided that pair is not the one we are currently focusing upon). This is Fara’s bare bones solution to the sorites. However, she adds a second layer to her solution – the layer that introduces interest-relativity.
For Fara argues that similarity constraint is not purely semantic, but it is in part a consequence of the vagueness of our interests. She claims (2000) that

4

See Fara (2000: 67).

similarity constraints are empirical truths, made true, at least in part, because we have the kinds of interests that we do.

According to her, the sentence “That car is expensive” should be analyzed as meaning “That car costs a lot”, which in turn means “That car costs significantly more than is typical”. Similarly, “John is tall” is to be analyzed as meaning that John has significantly more height than is typical. Significantly different from is a context-dependent relation (significantly to whom?) and moreover it depends on our interests. Thus, being tall and being expensive are relational properties the possession of which depends on the difference between height and cost on the one hand and some norm on the other. Furthermore, the said difference has to be a significant one. Whether a difference is significant or not depends in turn on our interests. Norms may be of different kinds, so “a lot” may mean “significantly more than is typical”, “significantly more than is wanted or needed”, “significantly more than is expected”, and so on. Fara claims further that two things that are (known to be) qualitatively different in some respect can be the same for present purposes:

According to her the sentence “That car is expensive” should be analyzed as meaning “That car costs a lot” which in turn means “That car costs significantly more than is typical.” Similarly “John is tall” is to be analyzed as meaning that John has significantly more height than is typical. Significantly different from is a context-dependent relation (significantly to whom?) and moreover it depends on our interests. Thus, being tall and being expensive are relational properties the possession of which depends on the difference between height and cost on the one hand and some norm on the other. Furthermore, the said difference has to be a significant one. Whether a difference is significant or not depends in turn on our interests. Norms may be of different kinds, so “a lot” may mean “significantly more than is typical”, “significantly more than is wanted or needed”, “significantly more than is expected”, and so on. Fara claims further that two things that are (known to be) qualitatively different in some respect can be the same for present purposes: [T]wo things are the same (in a certain respect) for present purposes when the cost of discriminating between them (in that respect) outweighs the benefits. (Fara 2000: 69)

Those two things are in fact different, but we decide to ignore the difference, because discriminating between them would incur a certain cost. If that cost is greater than the benefits which one gains from making the distinction, it is better to treat the things as the same for present purposes. Now, the notion of being saliently similar may be defined in terms of being the same for present purposes:

Two things are saliently similar when they are in fact the same for present purposes (i.e. the cost of discriminating between them would outweigh the benefits).
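Fara's bare bones picture, a boundary that exists at every moment but is never inside the pair under scrutiny, can be rendered as a toy model. The resting cut-off of 50 grains and the shift rule below are illustrative assumptions of the sketch, not values or mechanisms from Fara's paper.

```python
# Toy model of the 'shifting boundary': salient similarity forces any
# adjacent pair under consideration to receive the same verdict, so
# the cut-off is never where we are looking, yet it always exists.

def cut_off(focus_pair, base_cut=50):
    """Where the boundary sits while we attend to focus_pair (assumed rule)."""
    lo, hi = focus_pair
    if lo < base_cut <= hi:   # the cut would split the salient pair...
        return hi + 1         # ...so it shifts just past the pair
    return base_cut

def is_heap(n, focus_pair):
    return n >= cut_off(focus_pair)

# Every adjacent pair we inspect gets a uniform verdict:
assert all(is_heap(n, (n - 1, n)) == is_heap(n - 1, (n - 1, n))
           for n in range(1, 101))
# Yet while we look elsewhere, a sharp boundary is in place:
assert is_heap(99, (0, 1)) and not is_heap(1, (0, 1))
```

The first assertion is the model's analogue of the true tolerance conditional for any focused pair; the second is the analogue of the claim that somewhere in the series, away from our attention, one member of a pair has P and the other does not.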

Both Shapiro and Fara solve the sorites by rejecting the universal premise. Both argue that the rejection of that premise does not result in acceptance of its classical negation. Shapiro argues that the claim

“∃n (F(n) ∧ ¬F(n–1))” is not true, for there is never the case that both conjuncts are on the conversational record at the same time. Fara argues that we can never assert “∃n (F(n) ∧ ¬F(n–1))” because when we look at any n and n–1 their similarity becomes salient to us, and in consequence we treat them as the same for present purposes and judge them in the same way.

2. Critique

2.1. Objection against Fara’s approach

Fara says that two things are the same for present purposes when the cost of discriminating between them outweighs the benefits.5 It should be noticed, however, that such a notion of being the same for present purposes applies to non-vague as well as vague terms. Let us assume that I bake a cake and I need 1 kg of flour. I know that I have only 950 grams in my flour-tin. I decide to treat the amount as 1 kg. The cost of treating them differently is too big, for I would have to go shopping. Nevertheless, treating them as the same (in respect of weight) for present purposes does not make me blind to the fact that they are not the same. I know perfectly well all the time that they are different; I merely decide to ignore that difference. Analogously, if I see that two men are marginally different in respect of their heights, I may decide not to differentiate between them (what is the point?), but this does not mean that I do not see the difference. And if I see the difference, I could draw the border between them. 1 kg of flour and 950 grams of flour are similar in respect of weight and – we may assume – saliently so. But their being saliently similar does not preclude me from saying that one is 1 kg and the other is not. Even if I consider those two amounts and decide to treat them as the same, I am perfectly capable of telling them apart. So treating two things as the same for present purposes does not give us a similarity constraint and therefore it cannot justify the claim that the boundary “can never be where we are looking”.

5 See Stanley (2003) for a different critique of Fara’s account and Fara (2008) for her rejoinder.

The vagueness of our interests and the resulting phenomenon of treating slightly different things as the same for present purposes, because the cost of differentiating them outweighs the benefits, applies to vague and non-vague predicates (such as “weighs 1 kg”) alike. Therefore it cannot explain the fact that vague terms are sorites-prone (for non-vague terms are not). Neither can it provide a solution to the paradox: treating objects as the same for present purposes does not have to result in the apparent nonexistence of a boundary. Thus, it appears to me that while Fara’s bare bones solution is valuable, we should dispense with the second, interest-relative layer. The bare bones solution stresses an important – and usually neglected – fact that it is salient similarity, and not similarity as such, that causes trouble.6 By constructing a sorites series we make it seem as if all the similarities in the series were equally salient, but in fact it is not so.

2.2. Objection against Shapiro’s approach

In her 2003 paper Keefe argues that there will be cases in which there is a ‘jump’ in judgment without a change in the conversational score. If we assume that for a change in the score the consensus of the judges is needed, then it is quite clear that in many situations no change will be effected. In the version of the forced march sorites paradox I have described above we have an idealized situation in which all judges are required to agree upon the verdict. In normal conversational situations judges may not agree with each other and may (and typically will) ‘jump’ at different points. We may ask, therefore, whether two competent judges can judge differently according to Shapiro’s account. What happens to the conversational score then? Lewis (1983: 245) considers Austin’s example of a situation in which someone says “France is hexagonal”. Is this acceptable? Well, it will depend on the standards of precision that are included in the conversational score.
Under low standards of precision that sentence is true enough; under high standards it is false. If you raise the standards in

6 The importance of salience has been noticed many times by Lewis (see e.g. 1983).

a conversation, “France is hexagonal” may lose its acceptability. Let’s imagine that A says “France is hexagonal”, but B contradicts him: “France is not hexagonal. Its shape is similar to a hexagon, but in fact it is not hexagonal”. A may now reply along the following lines: “OK. It depends on how precise you want to be. If you want to be very precise, then I agree: France is not hexagonal.” Rules of accommodation operate here and standards of precision are changed accordingly. B’s utterance changes the context and raises the standards of precision. We may even assume that A’s first utterance was true in the original context (e.g. we may assume that C spoke just before B and said that Italy was boot-shaped and nobody objected; see Lewis 1983: 245). B changes the context and A’s original utterance is no longer acceptable. Moreover, A is well aware of this. Now, “hexagonal” – together with “flat” – belongs to a group of adjectives that have a loose and an absolute sense. If we take “hexagonal” in its absolute sense, nothing but actual hexagons will be hexagonal (just as pavements, desks, shelves etc. are not flat in the absolute sense of “flat”). The majority of vague predicates are not like that, however. “Tall”, “rich”, “intelligent” do not have an absolute sense. There may be various looser and stricter standards in conversations in which vague terms occur, but the appeal to a change in standards will not solve the issue of borderline cases. Let us take Harry, who is a borderline case of “bald”. In a conversation A says: “Harry is bald”. B – who uses stricter standards – replies: “Harry is not bald”. What is A to do now? He does not have to concede that Harry is not bald after all. He may persist in claiming that Harry is bald, just as B may insist that Harry is not bald. As we have already seen, one of the characteristic features of vague expressions is that both A and B may be right.
As long as the speaker gets positive and negative extensions right, it does not matter how he classifies borderline cases. One does not expect the competent users of language to agree on the borderline cases. On the contrary, one expects them to hesitate or even to disagree. Thus, it seems that in such a situation nothing should come off the record. In fact it might be regarded as a distinctive feature of vagueness that contradictory claims may both stay on the conversational score without making the conversation inconsistent, provided that they are made by different conversationalists.

3. Subvaluation as a logic for contextualists

The feature mentioned in the last paragraph suggests that contextualists should choose subvaluation rather than supervaluation as their logic. In 1994, Hyde suggested that subvaluation might provide a satisfactory account of vagueness and the sorites. In my 1999 paper, I criticized his solution, arguing that it depends heavily on treating vagueness as a species of ambiguity. However, it seems to me now that with the insights provided by contextualism subvaluationism fares much better.

Subvaluation is a dual of supervaluation. As we have seen, supervaluationists argue that borderline statements are devoid of truth value. In contrast, subvaluationists claim that statements concerning borderline cases are both true and false. Hence, the logic underlying the subvaluation theory is a paraconsistent logic. The subvaluationists’ truth is truth in some admissible precisification. Since there are both precisifications in which borderline statements are true and precisifications in which they are false, all borderline statements are regarded as both true and false. In addition, the statements which are true in all admissible precisifications are considered determinately true, and the statements which are false in all admissible precisifications are determinately false. The logic underlying the theory of subvaluation is inconsistent (for some statements A, both A and ¬A belong to the theory) but it is not trivial (the spread-principle A, ¬A |=SbV B is not valid). The principle of adjunction fails: A, B |≠SbV A & B, which explains the validity of the Law of Non-contradiction: A, ¬A |≠SbV A & ¬A. In the subvaluation theory, “validity” is defined as preservation of truth: an argument is SbV-valid if and only if whenever the premises are true in some admissible precisification, the conclusion is true in some admissible precisification.
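The contrast between super-truth and sub-truth can be computed directly over a toy family of precisifications. The cut-off range 40 to 60 is an illustrative assumption of the sketch, not a value from the text.

```python
# Super-truth vs sub-truth over admissible precisifications, each a
# cut-off for "a pile of n grains is a heap". Range is illustrative.

precisifications = range(40, 61)

def heap(n, cut):
    return n >= cut

def super_true(n):   # supervaluationist truth: true in ALL precisifications
    return all(heap(n, c) for c in precisifications)

def sub_true(n):     # subvaluationist truth: true in SOME precisification
    return any(heap(n, c) for c in precisifications)

def sub_false(n):    # false in SOME precisification
    return any(not heap(n, c) for c in precisifications)

assert super_true(100)                   # clear case: determinately true
assert not super_true(50)                # borderline: not super-true...
assert sub_true(50) and sub_false(50)    # ...but a sub-truth-value glut
# Adjunction fails: each conjunct is sub-true, yet the conjunction
# 'heap(50) and not heap(50)' is true in no single precisification.
assert not any(heap(50, c) and not heap(50, c) for c in precisifications)
```

The last assertion is the model's version of why the theory is inconsistent without being trivial: gluts never combine into an explicit contradiction within any one precisification, so the spread to arbitrary conclusions is blocked.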

Hyde argues that the sorites reasoning is invalid because it equivocates between (slightly) different meanings of a vague term. His argument goes as follows (1997: 648): the sentence ‘A pile of n grains is a heap’, where a pile of n grains counts as a borderline case for ‘heap’, is both true and false, so it is true. Since it is also false, the material conditional ‘If a pile of n grains is a heap, then a pile of n–1 grains is a heap’ is true by virtue of the falsity of its antecedent. Nonetheless, a pile of n–1 grains might be determinately not a heap, thus making the sentence ‘A pile of n–1 grains is a heap’ false.

This argument works very smoothly in the contextualist framework. The sentence “A pile of n–1 grains is a heap” is true on one occasion of use and false on another. Because there are occasions on which it is true and occasions on which it is false, we may say that it is true enough and false enough. Nevertheless, its being true enough does not suffice to make the paradoxical conclusion follow. Subvaluation seems to be a perfect logic for contextualism. The controversial subvaluationist claim that vagueness is a kind of ambiguity should be abandoned and the thesis that vague terms are context-dependent should be adopted instead. The claim that borderline statements are both true and false agrees very nicely with contextualists’ intuitions. Since context-dependent vague terms express different properties on different occasions of use, the subvaluationists’ solution to the sorites is still good: the reasoning is fallacious because it commits a fallacy of equivocation. The offending borderline statement is true on one occasion and false on another occasion of use. Therefore, the paradoxical conclusion does not follow. It should also be noted that subvaluation is far better for contextualism than supervaluation. In the subvaluation framework, the main role is played by the notion of truth in a delineation, whereas in supervaluation super-truth is of prime importance. Contextualists pay special attention to the fact that vague expressions may refer to different properties on different occasions, and each such occasion is equally important. If a certain statement is true on one (‘legitimate’) occasion, it is true enough. Hence, the subvaluationists’ approach to truth seems much better suited to contextualists’ needs. In addition, the subvaluationists’ claim that borderline statements have both

truth values is much more in the spirit of contextualism than the supervaluationists’ thesis according to which borderline statements are devoid of truth value. It seems to me that of the two approaches to vagueness described above, it is Shapiro’s that is the more promising one. However, he should make space in his approach for competent judges who have different opinions on borderline cases. Taking into account the possibility of divergence among competent speakers requires the restriction of the backward spread phenomenon. Backward spread is needed when one speaker is making judgments, but it does not apply when more speakers are engaged in a conversation. It might even be argued that the possibility of the occurrence of conflicting opinions regarding borderline cases on the conversational score at the same time is a distinguishing feature of discourses in which vague expressions occur. Moreover, as I have argued, it is subvaluation – and not supervaluation – which is best fitted to be the conversationalists’ logic of vagueness.

References

Fine, Kit 1975. Vagueness, Truth and Logic. Synthese 30, 265-300.
Graff Fara, Delia 2000. Shifting Sands: An Interest-Relative Theory of Vagueness. Philosophical Topics 28, 45-81.
Graff Fara, Delia 2008. Profiling Interest Relativity. Analysis 68, 326-335.
Hyde, Dominic 1997. From Heaps and Gaps to Heaps of Gluts. Mind 106, 641-660.
Keefe, Rosanna 2003. Context, Vagueness, and the Sorites. In: J.C. Beall (ed.) Liars and Heaps. New Essays on Paradox. Oxford: Clarendon Press, 73-83.
Lewis, David 1983. Scorekeeping in a Language Game. In: David Lewis, Philosophical Papers. Volume 1. Oxford: Oxford University Press, 233-249.
Raffman, Diana 1994. Vagueness Without Paradox. Philosophical Review 103, 41-74.
Raffman, Diana 1996. Vagueness and Context-Relativity. Philosophical Studies 81, 175-192.
Shapiro, Stewart 2003. Vagueness and Conversation. In: J.C. Beall (ed.) Liars and Heaps. New Essays on Paradox. Oxford: Clarendon Press, 39-72.
Stanley, Jason 2003. Context, Interest Relativity and the Sorites. Analysis 63, 269-280.
Wright, Crispin 1994. The Epistemic Conception of Vagueness. In: Terry Horgan (ed.) Vagueness. The Southern Journal of Philosophy, 133-160.

Jaroslav Peregrin
Academy of Sciences of the Czech Republic & Charles University, Prague
www: http://jarda.peregrin.cz

The Myth of Semantic Structure*

Abstract: That behind the overt, syntactic structure of an expression there lurks a covert, semantic one, aka logical form, and that anyone interested in what the expression truly means should ignore the former and excavate the latter, has become common wisdom. It is this wisdom I want to challenge in this paper; I will claim that it is a result of a mere confusion, that the usual notion of semantic structure, or logical form, is actually the result of certain properties of our tools of linguistic analysis being unwarrantedly projected into what we analyze.

1. Structure in language

The term structure has become one of the ultimate key words of the modern theory of language. More complex expressions of language are constituted from simpler ones, and ultimately from words; and structures are the ways of such composition. In his path-breaking Syntactic Structures, Chomsky (1957) presented a classification of languages from the viewpoint of their structural complexity and indicated the relationship between the ensuing hierarchy and the hierarchy of automata, thus setting the agenda for the study of the syntax of both natural and formal languages for many years ahead. Let me note in passing that saying that language has a structure is a wholly uncontroversial observation – the utterances speakers of any human language make can be observed to share various parts, so that we straightforwardly come to construe them as concatenations of what we call words. Further, we may decide to see words as more abstract entities, which occur within utterances in various forms (thus bringing morphology into the picture), and to see the concatenations of the forms as instances of rules operating on the words, thus reaching what is

* Work on this paper has been supported by the grant No. 401/07/0904 of the Czech Science Foundation.

184 standardly called syntax. This brings us to some distance from what we can literally observe; but to say that (“surface”) syntactic structures are simply perceptible is still not an oversimplification that would be too dangerous. But the concept of structure has come to be considered crucial not only in the context of syntactic studies, but also from the viewpoint of semantics. And many linguists and philosophers seem to take for granted that we can study expressions not only on the level of syntax, but that we can also descend ‘into’ or ‘under’ them and study their meanings on the level of semantics, where we should be able to discover, behind their syntactic structures, also semantic ones. This idea has been reinforced by the doctrine of logical form stemming from the writings of Bertrand Russell and his followers (which has become an integral part of the subconscious background of a great deal of approaches to language in the twentieth century): to truly understand what an expression says, Russell urged, we must not look at its surface, syntactic structure, we must use logical analysis to reveal its logical form, which shows what the expression is really about. Thus a sentence that looks like a subject-predicate statement, hence as a statement ascribing a property to an object, may, according to Russell (1905), turn out to be a statement of a much more delicate semantic structure, talking not about an object denoted by its subject term, but, say, about some constellations of properties. This observation of Russell has got mingled with Russell’s fondness of facts. At a certain period of his career (see esp. Russell 1914), Russell tried to account for the language-world relationship, and consequently for semantics, without using any other ingredient than ‘tangible’ parts of the world, i.e. avoiding any ‘supernatural’ entities like the senses of Frege (1892). 
He ended up with sentences (and their parts) on the one hand, and facts (and their parts, namely objects, properties and relations) on the other. What was important was that Russell considered facts to be simply certain complex objects among the other objects of the world – a fact such as that there is a tree ahead of me is, according to Russell, something I can bump into (it is enough to continue walking forwards). Facts of this kind have structures wholly independent of language, and when Wittgenstein and others realized that there is no way of doing

reasonable semantics with merely facts, and that the minimum we have to take on board in addition is something like potential facts, i.e. propositions, this was straightforwardly carried over to propositions. Hence propositions came to be seen as potential conglomerates of objects (conglomerates that may be actualized within the minds of the speakers), structured in a way that has nothing to do with language.1

The picture emerging from such considerations is straightforward: the syntactic structure conceals a more deeply buried, but also more important structure – the semantic one. This way of looking at language was reinforced by the turn Chomsky made soon after his beginnings: the turn from understanding the mathematical structures he employed to describe the complexities of syntax as abstract descriptive devices to understanding them as real parts of the human language faculty. Here the picture was that of these structures working unobservably within the depths of the human mind, having to disguise themselves as different kinds of structures capable of surfacing from the mind into the open.

I think that by now it is time to take stock of these views. And I am convinced that if we "weigh them in the balances", i.e. check them against the evidence we have (and not the evidence we are told we have), they will be "found wanting" – they will turn out to be something that we once accepted as interesting conjectures, but then forgot to dispose of when we came to know more about language.

2. Russell plus Chomsky: an unbeatable team?
In a recent paper, Collins (2007: 807) summarizes the reasons that have led philosophers and linguists to the conclusion that beyond the surface, syntactic structure of our expressions there looms a hidden, semantic one, in the following way:

It could be said that modern philosophy of language was born in the realization that the structure of the proposition is essentially logical in some sense rather than linguistic, for natural language syntax appears to be 'systematically misleading' as to the meanings sentences express. Cutting a long and complex story short, the leading contemporary diagnosis of this traditional thought is that it laboured under a conception of syntax that was too much in the thrall of how sentences appear to be structured. By positing various 'hidden' levels of structure, generative linguistics can be understood to have at least established the possibility that meaning is indeed linguistically structured. In other words, the traditional error – the original, albeit very fruitful, sin – was to think that there is no more to syntax than the 'surface' organization of words.

1 Wittgenstein (1922) talks about Sachverhalte, 'states of affairs' (TLP 2.11; 4.1).

Clearly, it is correct that a great deal of the philosophy of language of the first half of the twentieth century (and a smaller, but still substantive part of it in the second half) was animated by the thought that to find out what our pronouncement is about, we must go beyond the misleading surface structure to the hidden "logical form". It is also correct that Chomsky and other generative linguists were driven to postulating various kinds of structure beyond the overt one, slowly singling out one of them as a "logical form". However, I think it is essentially misleading to take these two ideas as complementary; and indeed I think it is wrong to take either of them at face value. I think that at least since the writings of the later Wittgenstein and Quine it has become ever clearer that the Russellian notion of logical form leads us into a blind alley; and I think that the term "logical form" in the mouths of Chomsky and his followers is simply a misnomer.

Before we go on to the analysis of the Russellian and Chomskian notions of logical form, let me point out that at least since the middle of the twentieth century there has been a growing tendency, within philosophy of language, towards an alternative construal of the talk of logical forms, a tendency that, I suggest, is on the right track. One of those who became utterly skeptical about the Russellian concept of logical form was the later Wittgenstein (once himself a champion of the Russellian approach). It is instructive to look at the story of Wittgenstein's 'awakening from the dogmatic slumber', thanks to the intervention of his friend Sraffa, as presented by Monk (1990: 59):

One day (they were riding, I think, on a train) when Wittgenstein was insisting that a proposition and that which it describes must have the same 'logical form', the same 'logical multiplicity', Sraffa made a gesture, familiar to Neapolitans as meaning something like disgust or contempt, of brushing the underneath of his chin with an outward sweep of the finger-tips of one hand. And he asked: 'What is the logical form of that?' Sraffa's example produced in Wittgenstein the feeling that there was an absurdity in the insistence that a proposition and what it describes must have the same 'form'. This broke the hold on him of the conception that a proposition must literally be a 'picture' of the reality it describes.

I think that, be this story literally true or not, the fact is that at that time Wittgenstein became prone to see logical analysis not as a matter of digging into the depths of an expression to bring something buried there to light, but rather as something like erecting a watchtower over a vast unknown landscape in order to "command a clear view of it" (1953: §122).

Quine was equally suspicious of the Russellian approach. His verdict is that what we call logical form is in fact something very different from what Russell held it to be. He claims (1980: 21):

What we call logical form is what grammatical form becomes when grammar is revised so as to make for efficient general methods of exploring the interdependence of sentences in respect of their truth values.

Hence, according to him, logical form is nothing that can be found within language; it is merely an expedient we use when we want to account for language. Davidson's (1970: 140) view is very similar:

To give the logical form of a sentence is to give its logical location in the totality of sentences, to describe it in a way that explicitly determines what sentences it entails and what sentences it is entailed by.

All of this relegates logical forms from being parts of the subject matter of theories of language to the toolboxes of some of those theories. We use them if we want to make certain properties of expressions more palpable; but they are not something that we discover and report.

3. Russell

Russell's analyses were inescapably weighed down by the enormous syntactic parsimony of the logic Russell employed to capture the alleged logical forms; as a result, there was simply no way for the forms to coincide with the surface ones. Things would be very different if he had allowed himself a richer logical language, of the kind commonly used by semanticists today. In his celebrated 'On Denoting' (1905), Russell strove to show that the logical form of a statement such as

(1) The king of France is bald

has little to do with the syntactic/surface form of the sentence and, instead, amounts to

(1') ∃x (S(x) ∧ ∀y (S(y)→(x=y)) ∧ P(x))

Why? Because only a formula of this kind can capture the correct truth conditions of (1). No formula of (what Russell took to be) the syntactic structure of (1), i.e. no formula of the shape P(a), could suffice.2 However, if we equip ourselves with a more powerful logical language than the first-order logic employed, in effect, by Russell, it is easy to replace the formula with an equivalent formula that does have the subject-predicate structure. The point is that with, say, the apparatus of the λ-calculus at hand, we can define

P* ≡Def. λp.p(P)
S* ≡Def. λq.(λp.(∃x (p(x) ∧ ∀y (p(y)→(x=y)) ∧ q(x))))(S)

and consequently we can rewrite (1') equivalently as3

P*(S*)
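That P*(S*) is indeed equivalent to (1') can be checked by β-reduction, using the two definitions just given (a routine verification, spelled out here for convenience):

    P*(S*) = (λp.p(P))(S*)
           = S*(P)
           = (λp.(∃x (p(x) ∧ ∀y (p(y)→(x=y)) ∧ P(x))))(S)
           = ∃x (S(x) ∧ ∀y (S(y)→(x=y)) ∧ P(x)),

which is exactly (1'): the second step applies P* to its argument, the third substitutes P for q in the body of S*, and the last substitutes S for p.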


2 Most syntacticians would now deem Russell's syntactic analysis unsatisfactory. But this is unimportant in the present context; the point is independent of the specific nature of the syntactic structure.
3 See Peregrin (2001: §10.3) for more details.

This indicates that what Russell calls logical form is merely what becomes of a sentence when it is squeezed into the Procrustean bed of a simple logical language. (It should be stressed that for Russell the simplicity of the logical language was paramount, since he considered it essential to the whole enterprise of logical analysis that the logical building blocks needed to analyze natural language be minimal, even at the cost of making the resulting analysis complicated.)

Of course, none of this means that the Russellian concept of logical form is totally senseless – it surely does make sense, but only in a situation in which we, doing logical analysis, purposefully restrict ourselves to a simple language. (Nor do we claim that such restrictions have no point; they surely do: sparseness fosters perspicuity and smooth tractability.) The ensuing logical form, then, is nothing absolute, but rather only something like 'the simplest analysis of the given expression by means of the given formal language'. This means that the only nontrivial usage of the term "logical form" is one which is de facto technical and which tells us nothing about expressions as such, but only about the consequences of our choice of the means of analysis.

4. Chomsky

It is very difficult to find any explicit articulation of what exactly a logical form is supposed to be in Chomsky's writings. He never says anything more informative than that it is one of the levels constituting the language faculty, which interacts, in certain ways, with other levels (see Chomsky 1986, 2005, etc.). But what are the "levels"? Are they supposed to be palpable (though not yet clearly localized) entities or locations within the brain? Or are they simply abstract entities whose relationship to the structures of the brain is left unspecified? And why should we think that there are such levels in the mind/brain in the first place?
As Chomsky keeps stressing that his approach to language is utterly scientific, the answer to the last question must be that their existence is implied by empirical data. Hence either there is direct empirical evidence of their presence in the mind/brain, or there is indirect

evidence. I think we can exclude the first possibility: not only does Chomsky not claim anything of the kind, but due to the unclarity regarding the nature of the "levels" it is not even clear what such direct evidence could amount to. Hence the evidence is presumably supposed to be indirect, and indeed it seems that it is this kind of evidence that is cited in the papers of Chomsky and his followers. The most frequently cited data are judgments about the grammaticality of various kinds of expressions (and non-expressions). From this viewpoint, the "broadly Chomskian" approach to language is neatly characterized by Laurence (1996: 282, 284):

I count a view as Chomskian if it treats the linguistic properties of utterances as inherited from features of the language processor. Chomsky himself explicitly says that he does not think that linguistics directly provides a theory of language processing, and he has had a somewhat skeptical outlook on developments in psycholinguistics. Still, Chomsky insisted that linguistic competence – what he takes linguistic theory to be a theory of directly – is a central and essential component of our language processor. I therefore take accounts of the nature of linguistic properties which link them essentially to features of the language processor to be broadly Chomskian in spirit. ... On this version of the Chomskian view, the semantic properties of utterances would be thought of as being "inherited" from the semantic properties of the representations at this level, and, in general, the linguistic properties of utterances would be inherited from the associated representations at each of the various levels of processing. The model I have in mind here is actually very straightforward. Given the empirical claim that language processing consists in recovering a series of representations at various linguistic levels, the view is simply that it is in virtue of being associated, in language processing, with these representations that an utterance has the linguistic properties it has. So, just as an utterance has a certain syntactic structure in virtue of being associated with a representation which has that structure, so it has a certain content or meaning in virtue of being associated with a representation which has that content or meaning.

Hence in general we may say that the claim that the language faculty contains the various levels, including logical form, is the result of studying the "linguistic properties of utterances" and of considerations

of what kind of mechanism could produce such utterances. It is thus the well-known 'black box' kind of reasoning: we see inputs and outputs and conclude what is happening in between, despite the fact that we cannot see it. Of course, such 'black box' reasoning, if it is to lead to the conclusion that this or that is in the box, must involve not only showing that the conclusion explains the observed data, but also that there is no different, equally adequate explanation available. Only thus can we be warranted in claiming that it is this very thing that is in the box (though the warrant, of course, is still of a different kind than one based on an observational report). I do not see anything like this in Chomsky's writings. But perhaps the tacit idea is that, due to the complexity of the language faculty, one good conjecture is more than enough.

What seems to me more troubling, however, is the nature of the data considered as the inputs and outputs of the black box. As we have already pointed out, they concern mostly the grammaticality of expressions. Yet this does not seem to me to be the most interesting aspect of language. If we are to see language in terms of inputs and outputs of organisms, then its most wonderful aspect seems to me to be that we can use expressions to achieve unbelievably complex effects. By emitting a sound I can make somebody get under my car and help me fix my engine; or I can make her go to the zoo, buy a banana and give it to a particular monkey there. And these are, clearly, empirical data. What is it that grants expressions these almost miraculous abilities? Could it be some structures involved in their production?
It seems to me that the structures that could be usefully invoked to explain these semantic features of expressions would have to be social ones.4 (To be sure, there is a sense in which everything social is somehow anchored within the brains of the members of the society; but just as it would be clearly preposterous to replace studying the rules of football with studying the brains or legs of football players, it is preposterous to replace studying the rules of language – qua interpersonal institutions – with studying the brains of the speakers.)

4 See Peregrin (2008) for a more detailed discussion of this claim.

There is nothing 'unscientific' in admitting that the interactions of people bring about complicated patterns which, though surely existing merely thanks to the brains of the persons involved, constitute facts that deserve attention in their own right.

5. What do we see when we see a language?

I think that Chomsky's rhetoric has fostered the illusion that the existence of logical forms is an empirical fact – that getting hold of the logical form of an expression is akin to, say, revealing the inner organs of an insect. To me, this view is badly misleading: although in certain contexts disregarding the gap between a model and reality may be acceptable and helpful, doing so when the nature of meaning and the nature of language are at stake is preposterous.

When reading the claim that the existence of logical form is a 'scientific fact', we should keep in mind the nature of the situation. Wittgenstein once claimed that "when we look into ourselves as we do philosophy, we often get to see just (...) a picture. A full-blown pictorial representation of our grammar. Not facts; but as it were illustrated turns of speech" (1953, §295). I want to add that when we look at our language, at our "turns of speech", we likewise often do not see the facts, but again "just a picture" – a picture we were educated to see.

From this viewpoint it seems to me important to try to isolate what we truly see when we see a language. What I think we see, and hence what should figure as our ultimate empirical basis when studying language, are the facts concerning people emitting certain sounds (or producing certain kinds of inscriptions), and using specific types of such sounds in specific ways with specific effects. The survey of which types of sounds, i.e. which expressions, they use constitutes the field of syntax.
Here is where we encounter the structure of language: the expressions of any natural language form an open class of compounds based on a finite stock of primitive building blocks, words (or perhaps, in some cases, smaller units, like morphemes). Studying the specific roles of individual expressions within our 'language games', then, constitutes what has traditionally been called

pragmatics; but as we have no other data (and, in particular, no data bearing directly on what has traditionally been called semantics – no detectable fibers connecting expressions with things), semantics must be extracted from this basis too. (And of course this may make us doubt the very existence of any clear boundary between semantics and pragmatics; or, more radically, the very existence of semantics as something separate from pragmatics.)

The syntactic structure remains crucial: the semantic properties of expressions must be conceived of as compatible with the openness of the class of expressions, i.e. as somehow 'compositionally' projectable from simple to more complex expressions. However, there is no obvious new kind of structure, independent of the syntactic one, for semantics to reveal. (True, not all aspects and elements of the syntactic structure are equally important from the viewpoint of semantics, so it is often helpful to work with simplified, purified or adjusted versions of the syntactic structure – but these, far from being independent of the basic syntactic structures, are merely their derivatives.) Thus, an autonomous semantic structure is – in the best case – a convenient fiction or a working conjecture, or – in the worst – a myth stemming from our uncritical acceptance of received wisdom. In the latter case we should be wary of it, for it creates a dangerous illusion of explanation.

Hence I think that the argument from the authority of (some of5) the founding fathers of the logical analysis of language fails; they simply did not vindicate the reality of logical forms. Given this, the question of what makes us think that there is such a thing as semantic structure becomes pressing. And it is hard to avoid the suspicion that the main reason is that some theoreticians of language are flummoxed when it comes to semantics, and hasten to adopt the short-circuit conclusion that semantics is a more deeply buried kind of syntax (which allows them to deal with it by means of the battery of methods that have turned out so profitable for the investigation of syntax).

5 Frege's views, for that matter, were much more cautious than those of Russell. If, for example, we look at his Begriffsschrift (1879), we see that the translation of natural language into his concept notation, viz. logical analysis, is for him nothing more than divesting natural language statements of the parts irrelevant from the viewpoint of proving and inferring. His later writings may contain pronouncements slightly more resembling Russell's stance, but I do not think that he qualifies as an exponent of the straightforward Russellian dualism of surface vs. logical form.

I see no reason for assuming that there is a concept of semantic structure beyond the Quinean concept of syntactic structure revised so as to make for efficient general methods of exploring the interdependence of sentences in respect of their truth values. Linguistic expressions are instruments we use for certain purposes (and this claim should not be read as contradicting the claim that our brains are wired up in such a way that we are largely predisposed to employ just instruments of these kinds, which may make it appropriate to talk, as Pinker (1994) does, about our linguistic capacity as an instinct),6 and an expression's semantics is a matter of what specific purpose that expression is usable for. Syntax is a matter of the fact that words are instruments not like hammers or cars, but more akin to toothwheels or valves – they do not serve self-standing purposes one by one; they must function conjointly with many other words. (And needless to say, by this conjoint functioning they can achieve wonderful effects.) And syntax, we can say, is the study precisely of the ways they can be joined, just as engineering is the study of how real toothwheels, valves etc. can be combined to produce usable machines.7 And would it not be preposterous to decree that engineers should study, besides how the toothwheels, valves etc. are, or should be, combined, also another kind or level of combination, this time related not to the toothwheels and valves themselves, but rather to their usabilities or the individual contributions they make to the usabilities of the ultimate constructs?

6 It is important to distinguish between a word as such, the sound/inscription type (say "dog"), and the slot within our language faculty (if there is such a thing) into which it fits. Though the latter may be inborn and largely predetermine what we will do with a word which fills the slot, the word as such is an instrument in a sense freely (= arbitrarily) chosen to fulfill this task.
7 And without doubt, the empirical study of how this composition works in real time has elicited a significant body of results – making up the concept of syntactic structure that Collins discusses in his paper.

As it is not possible to place all the conceivable constructs in front of our eyes, we must deal with them as potentialities, and we have to see the individual toothwheels and valves as making specific contributions to the usability of the ultimate constructs of which they may become parts; in the same way we come to take words as having their peculiar meanings, and to see the composing of sentences (and perhaps of supersentential wholes) as paralleled by the composing of their meanings. However, this view makes sense only insofar as there is only one structure in play.

It is, of course, important to realize that what has now come to be called syntax by most linguists and some philosophers (largely due to Chomsky's influence) is not quite what corresponds, on our engineering picture, to how the constructs are composed of their co-operating parts, but rather to the technologies their producers use to put them together. This is, I think, a normal and respectable case of a paradigm shift within a scientific discipline; but we should keep in mind that, given this, there is no longer a reason to assume that all parts of what is now called syntactic structure should be relevant for semantics. (Unless we were to picture semantics as a matter of propositions put together on an assembly line parallel to the sentence-producing one within the great assembly hall of a language faculty – but my point here is that such a view should not be a matter of course.)

6. Conclusion

The idea that the task of semantics can be solved by associating expressions with 'semantic structures' or 'logical forms' independent of their syntactic structures is a myth – such an association does not solve what semantics is to solve. Semantics is to explain what grants the sound/inscription types that constitute our languages the peculiar powers that they have and that make them so usable for us. (Traditionally, explaining these powers was seen as tantamount to

explaining the nature of peculiar entities attached to them, viz. meanings; but we need not presuppose that this is the inevitable way.8)

I am aware that the preceding section especially may appear to be the expression of a specific philosophical standpoint (a pragmatist one, for that matter), which may make the reader think: "I wash my hands; I am not a pragmatist, so this lament is of no concern to me". But I would like to stress that even if you do not share this very standpoint, the basic question remains in force: what makes us think that there is such a thing as a semantic structure independent of the syntactic one? Collins (2007: 807) talks, in connection with Frege and Russell, about "the empirical mismatch" and the consequent "need to explain how meanings are paired with structures". But as I argued above, the claims of the classics of logical analysis to the effect that there is a mismatch between surface structure and logical form can in no way be seen as reports of empirical findings – they tell us nothing about natural language as such; they only report the fact that if we want to translate it into a logical language of a very simple structure, discrepancies are bound to arise. Turning this fact into a fact about language is tantamount to changing the train of empirical linguistics for that of speculative metaphysics.

References
Chomsky, Noam 1957. Syntactic Structures. The Hague: Mouton.
Chomsky, Noam 1986. Knowledge of Language. Westport: Praeger.
Chomsky, Noam 2005. Language and Mind. Cambridge: Cambridge University Press.
Collins, John 2007. Syntax, More or Less. Mind 116, 805-850.
Davidson, Donald 1970. Action and Reaction. Inquiry 13; reprinted in and quoted from Davidson: Essays on Actions and Events. Oxford: Clarendon Press, 1980, 137-148.
Frege, Gottlob 1879. Begriffsschrift. Halle: Nebert.
Frege, Gottlob 1892. Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische Kritik 100, 25-50.

8 See Peregrin (2009) for a more thorough discussion of the role of meanings in semantics.

Laurence, Stephen 1996. A Chomskian Alternative to Convention-Based Semantics. Mind 105, 269-301.
Monk, Ray 1990. Ludwig Wittgenstein: The Duty of Genius. London: Cape.
Peregrin, Jaroslav 2001. Meaning and Structure. Aldershot: Ashgate.
Peregrin, Jaroslav 2008. Inferentialist Approach to Semantics. Philosophy Compass 3, 1208-1223.
Peregrin, Jaroslav 2009. Semantics without Meaning? In: R. Schantz (ed.), Prospects of Meaning. Berlin: de Gruyter, to appear.
Pinker, Steven 1994. The Language Instinct. New York: Morrow.
Quine, Willard Van Orman 1980. Grammar, Truth and Logic. In: Stig Kanger and Sven Öhman (eds.), Philosophy and Grammar. Dordrecht: Reidel, 17-28.
Russell, Bertrand 1905. On Denoting. Mind 14, 479-493.
Russell, Bertrand 1914. Our Knowledge of the External World. London: Allen and Unwin.
Wittgenstein, Ludwig 1922. Tractatus Logico-Philosophicus. London: Routledge.
Wittgenstein, Ludwig 1953. Philosophische Untersuchungen. Oxford: Blackwell.

Salvatore Pistoia Reda
Philosophy and Social Sciences Department, University of Siena
[email protected]

Scalar Implicatures, Communication, and Language Evolution

Abstract: This text deals with Scalar Implicatures (SIs). According to the main tenets of Grice, the derivation of SIs is a pragmatic phenomenon that occurs at the root of the sentence, in a global fashion. This is so-called "globalism". Theorists like Chierchia argue that, in order to get specific readings otherwise unavailable, SIs can (and in fact must) occur in embedded contexts. According to Chierchia, SIs get computed in parallel with the computation of the semantic value of the sentence. This is so-called "localism". But there is also a third approach, supported by Recanati, which shares the account of embeddability with localism, and the pragmatic interpretation of implicatures with globalism. From an evolutionary point of view, I argue against Recanati's approach. Recent linguistic literature deals with language evolution. According to Hauser, Chomsky and Fitch, the assertion that language is an adaptation for communication is far too vague to be addressable, and moreover it fails to recognize the distinction between questions of computation and questions of evolution. I argue that Recanati fails to recognize the same distinction, or something very close to it. Therefore, his mixed approach to SIs needs to be modified. Finally, I present some evidence in order to reject the game-theoretical approach to language evolution.

1. The phenomenon of SIs

Even though he never talked about "scalar" implicatures, Paul Grice discussed and described the phenomenon in his writings. SIs are a distinctive type of generalized conversational implicature, triggered by linguistic items that are part of an informational scale whose components are bound by an entailment relation. The following are scales:

direction of entailment: ←

scale a. [few ← some ← all]
scale b. [1 ← 2 ← 3 ← 4 ← 5 ← n]
scale c. [sometimes ← often ← always]
scale d. [or ← and]

According to Grice and neo-Griceans, SIs arise with the presumption that the speaker is keeping to the first maxim of quantity, namely: make your contribution as informative as is required for the current purposes of the exchange. If I utter a sentence containing a scalar item of a certain level, I thereby negate the sentence containing the next higher item in the scale. In correspondence with the term "some" in (1), the "not all" implicature gets activated:

(1) Some friends of mine had voted for Berlusconi
(2) Not all friends of mine had voted for Berlusconi.
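The entailment relation that orders such scales can be illustrated with a toy model check on scale b.: over any finite model, the stronger claim "at least n+1" entails the weaker "at least n". The following sketch is illustrative only; the names and the domain are invented for the example.

```python
# Illustrative check of the entailment ordering scale b.:
# whenever "at least n+1 friends voted" holds, "at least n" holds too.

def at_least(n, friends, voted):
    """True iff at least n members of 'friends' are in 'voted'."""
    return len(friends & voted) >= n

friends = {"Ada", "Ben", "Carla", "Dan"}
voted = {"Ada", "Ben", "Carla"}      # a model in which three friends voted

assert at_least(3, friends, voted)   # the stronger claim holds ...
assert at_least(2, friends, voted)   # ... and so does the weaker one
assert not at_least(4, friends, voted)
```

The non-numeric scales work the same way: in any model where "all" (or "and", or "always") holds, "some" (or "or", or "sometimes") holds as well.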

Note that the logical form of (1),

(1lf) ∃x (friend of mine (x) ∧ had voted for Berlusconi (x)),

is compatible with the logical form of (3), i.e. the opposite of (2):

(3) All friends of mine had voted for Berlusconi.
(3lf) ∀x (friend of mine (x) → had voted for Berlusconi (x)),

while the logical form of (2) is the following:

(2lf) ¬∀x (friend of mine (x) → had voted for Berlusconi (x)).

This explains why we can utter (4) without logical contradiction:

(4) Some friends of mine had voted for Berlusconi. In fact, all of them did.
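That (1lf) is jointly satisfiable with (3lf), while (2lf) is what excludes such models, can be checked on a toy model (the domain and the individuals’ names are mine, chosen only for illustration):

```python
# Toy model: three friends, all of whom voted for Berlusconi.
friends = {"anna", "bruno", "carla"}
voted   = {"anna", "bruno", "carla"}

lf1 = any(x in voted for x in friends)  # (1lf): Ex (friend(x) & voted(x))
lf3 = all(x in voted for x in friends)  # (3lf): Ax (friend(x) -> voted(x))
lf2 = not lf3                           # (2lf): ~Ax (friend(x) -> voted(x))

# (1lf) and (3lf) are jointly true in this model, so (4) involves no
# contradiction; (2lf), the implicature, is false here, which is why the
# implicature (not the literal content) rules such a model out.
print(lf1, lf3, lf2)  # True True False
```

The cancellation in (4) thus amounts to asserting (1lf) while withdrawing the defeasible (2lf).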

In order for the hearer to get the right implicature, and so to rule out the logically compatible sentence, he must be able to reason like this (the following is a classical instance of the Gricean reasoning schema):

I. The speaker uttered (1), which contains the scalar item “some”, part of scale a.;
II. There exists an item stronger than “some”, i.e. “all”, and the two items are not significantly different in complexity;
III. According to the conversational maxim, if he had been able to use “all” he would have used it;
IV. For some reason he is not able to say that (3) is valid;
V. The speaker is well informed;
VI. Thus, it is not the case that (3).
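One way to make the schema concrete, particularly the delicate step from IV to VI, is to model the speaker’s information state as a set of epistemically possible worlds, and to distinguish knowing that not-all voted from merely not knowing whether all voted (the encoding below is a toy of my own, not from the text):

```python
# A world records whether q ("all friends of mine voted") holds in it.
def K(prop, worlds):
    """The speaker knows prop iff prop holds in every world
    compatible with what the speaker knows."""
    return all(prop(w) for w in worlds)

q = lambda w: w["all_voted"]
not_q = lambda w: not w["all_voted"]

# Strong reading: the speaker knows that not all of them voted.
strong_state = [{"all_voted": False}]
# Weak reading: for all the speaker knows, all may or may not have voted.
weak_state = [{"all_voted": True}, {"all_voted": False}]

assert K(not_q, strong_state)       # strong: K not-q holds
assert not K(q, weak_state)         # weak: not-K q holds...
assert not K(not_q, weak_state)     # ...but K not-q fails
```

The gap between the two readings is exactly the gap between step IV (the speaker cannot assert (3)) and step VI (not-(3) holds): closing it requires the extra competence assumption in step V.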

For now, notice that the step from IV to VI is not trivial at all, and one could well be uncomfortable with it. Echoing Levinson, we may distinguish between a strong and a weak interpretation. According to the first, the speaker did not say that all of his friends had voted for Berlusconi because he knows that some of them did not [K¬q]. According to the second, he did not say so simply because he does not know whether or not all of them did; he only knows that some of them did [¬Kq]. As the different position of the negation shows, these different readings might well be used to introduce the problem of embedded implicatures.

There are two viable approaches to SIs. The first, “globalist” approach dates back to Grice. According to this approach, SIs are computed by a pragmatic device that receives as input the semantic value of the sentence and then strengthens it, returning the implicature as output. The mechanism just described is post-propositional: it affects the whole sentence. To quote:

In the tradition stemming from Grice (1989), implicatures are considered a wholly pragmatic phenomenon and SIs are often used as paramount examples. Within such a tradition, semantics is taken to deal with the compositional construction of sentence meaning [...] while pragmatics deals with how sentence meaning is actually put to use [...]. Simply put, on this view pragmatics takes place at the level of complete utterances and pragmatic enrichments are root phenomenon (something that happens globally to sentences) rather than a compositional one. (Chierchia, Fox and Spector to appear: 1)

Theorists supporting the second, “local” or “grammatical” approach to SIs argue that SIs are computed in tandem with the computation of the sentence’s truth conditions. This constitutes a remarkable departure from Grice:

[…] if SIs can be systematically generated in embedded contexts something in this view [that is, the Grice-inspired view] has got to go. Minimally, one is forced to conclude that SIs are computed compositionally on a par with other aspects of sentence meaning. (Chierchia, Fox and Spector to appear: 1)

Localists argue that only a local approach can account for the embedded generation of implicatures, which is necessary in order to get some otherwise unavailable readings. As Recanati tells us,1 one of the first assaults on Grice’s “Conversational Hypothesis”2 was related to the fact that it seemed unable to account for embedded implicatures. In a sentence like (5),

(5) John has either 3 or 4 sons,

the implicature seems to occur locally, within the scope of the connective “either … or”, and that seems to be incompatible with the pragmatic/global nature of conversational implicatures. The globalist/localist debate may well be considered a debate between a semantic (grammatical) and a pragmatic (inferential) approach to implicatures. One may think that the logical space is exhausted by these two positions, and in fact this is the idea of the

1 See Recanati (2003).
2 See Cohen (1971) and (1977).

majority of theorists of both positions. However, Recanati argued the case for a third (mixed) approach, which he calls the “modulation approach”. According to this approach, it is in fact possible to account for the phenomenon of embedded implicatures without ending up with a full-blooded semantic proposal. The logical possibility of such an approach is based on the distinction between two kinds of pragmatic processes: primary pragmatic processes and secondary pragmatic processes. Primary pragmatic processes (e.g. free enrichment) can affect what is said, while secondary pragmatic processes are typical Grice-like post-propositional inferences (I refer to Recanati’s works for further details).3 Recanati’s claim is that embedded implicatures are a case of free enrichment in which the standard meaning is contextually strengthened. These implicatures fall within the scope of operators, and this tells us that they are not classical conversational implicatures à la Grice, but pragmatic constituents of what is said (according to Recanati, in the computation of what is said we can activate the non-literal as well as the literal candidate). Moreover, he holds that between the localists’ approach and his own there is no incompatibility at all:

We have seen that there are two viable approaches to embedded implicatures: a semantic approach in terms of default implicatures, and a pragmatic approach in terms of free enrichment. Which one is to be preferred? Well, I’m not sure that we really have to choose. (Recanati 2003: 320)

2. Evolution and mechanisms

For the time being, we do not need to opt for any of the hypotheses discussed so far. I would now like to direct your attention to a facet rarely discussed in depth in the literature, especially the philosophical literature on implicatures. To my knowledge, one explicit hint is the following, once again by Recanati:

The default generation-and-removal of scalar implicatures therefore mimics, within grammar, the Gricean search for maximal informativeness. We may perhaps think of Gricean post-propositional mechanism as being the

3 See Recanati (1995, 2001 and 2004).

evolutionary source of the grammatical mechanism which Chierchia describes. It is as if pragmatic mechanism had been incorporated into the design of grammar to make it more efficient. (Recanati 2003: 308)

What I am talking about is language evolution. To be sure, language is a topic we can deal with from a biological perspective. So, I argue, it might be interesting to adopt an evolutionary standpoint on the debate between those who think that a specific phenomenon is ruled by purely linguistic principles, according to the constraints of UG (Universal Grammar), and those who stress the critical role of pragmatics in the derivation. We will see whether or not there is any chance of providing argumentative evidence for one of the three approaches.

Let me start with a more detailed analysis of Recanati’s passage. Even if he does not explicitly adopt the view that the implicature-computing mechanism is the result of an evolutionary process, his point is quite clear: that mechanism might have evolved to make the design of grammar more efficient. One may be tempted to ask: what does “more efficient” mean? What is the aim relative to which the design of grammar can be said to be more or less efficient? In fact, I do not take Recanati to be referring to something like the connection of different interfaces: that would count as some sort of best solution, coherent with Chomsky’s ideal of Optimal Design.4 The position he is presenting, rather, states that pragmatic mechanisms have been incorporated into syntax to let speakers improve their communicative abilities. Later in the text, again without explicitly adopting the theory he sketched, Recanati writes:

Even though it presumably evolved from a pragmatic mechanism involving the Gricean maxim of quantity the default generation of scalar implicatures is not itself a pragmatic mechanism in the full-blooded sense [...]. (Recanati 2003: 308)

To put it in evolutionary terms, improvement in human communicative abilities could have resulted in an increase in Darwinian fitness for

4 See Chomsky (2001).

agents provided with that communicative system. Many works in the literature support the prominent role of communication in language evolution. Linguists like Pinker and Jackendoff, for instance, have argued that language is in fact an adaptation for communication. To be sure, one may consider this position stronger than the one sketched by Recanati, but I do not think it is. One may argue that Recanati did not say that language is an adaptation for communication, but only that some linguistic mechanisms of derivation could have evolved to provide agents with an efficient system of communication. Thus, even if the question does not concern language as a whole, those mechanisms could be some kind of adaptation for communication. But that would entail that language is a “collection of special-purpose mechanisms, each shaped by evolution to perform a particular function” (Spelke and Kinzler 2007: 89), and I think no one would accept that view. Moreover, we should be presented with a precise, but still unavailable, description of these functional distinctions. In conclusion, the hypothesis presented by Recanati is not weaker than Pinker’s. In practice, it is conceptually equivalent, but it carries an empirically stronger assumption.

We should now look more closely at language evolution. To deal with a question like this (is language an adaptation for communication?), one has to address distinct, though essentially related, topics in turn. Roughly speaking, we have to ask ourselves:

1. what is language (as a uniquely human feature)?
2. can language be an adaptation for something?
3. what is this “something”?

I will try to deal with these topics in that order.

2.1 Evolution and communication

According to Pinker and Jackendoff, language is to be considered a highly articulated system that “evolved in the human lineage for the communication of complex propositions” (Pinker and Jackendoff 2005: 204).
Besides, language is made out of different special-to-language elements like conceptual structure, speech production, speech perception, phonology, word learning and syntax. To be sure, you can find homologs in nonhuman communication: for example, birds have

a great ability to repeat birdsongs and to produce formants (much greater than primates’). However, Pinker and Jackendoff see an essential difference in complexity:

[...] birds and primates produce formants [...] in their vocalization by manipulating the supralaryngeal vocal tract, a talent formerly thought to be uniquely human. Nonetheless, by all accounts such manipulations represent a minuscule fraction of the intricate gesture of lips, velum, larynx, and tip, body and root of the tongue executed by speakers of all human languages. (Pinker and Jackendoff 2005: 208)

At the same time, human traits like words are to be considered essentially linguistic features. Granted, it is possible to put the word faculty to use in other fields of human knowledge, but even in those cases that use would be nothing but a tiny fraction of its role. That is what language is. As a consequence, according to Pinker and Jackendoff it is a truism to say that language is an adaptation for communication:

[...] the design of language – a mapping between meaning and sound – is precisely what one would expect in a system that evolved for the communication of propositions. We cannot convey recipes, hunting techniques, gossip or reciprocal promises by “manner of walking or style of clothes or hair,” because these forms of behavior lack grammatical devices that allow propositions to be encoded in a recoverable way in details of the behavior. (Pinker and Jackendoff 2005: 224)

2.2 Narrow faculty of language

Indeed, there is another way to look at these topics. Hauser, Chomsky and Fitch (2002), for instance, argued for a distinction between two understandings of the faculty of language: the “narrow faculty of language” and the “broad faculty of language”. According to them, the broad faculty of language contains the narrow faculty of language, i.e. a computational system, plus what they call a “sensory-motor” system and a “conceptual-intentional” system. Although the broad faculty of language has some specific properties which allow humans to “readily master any human language without explicit instruction”

(Hauser, Chomsky and Fitch 2002: 1571), it does not include certain features that are necessary but not sufficient for language (e.g., memory, respiration etc.). Concerning the narrow faculty of language, they state:

We assume, putting aside the precise mechanisms, that a key component of FLN [narrow faculty of language] is a computational system (narrow syntax) that generates internal representations and maps them into the sensory-motor interface by the phonological system, and into the conceptual-intentional interface by the (formal) semantic system. (Hauser, Chomsky and Fitch 2002: 1571)

According to them, only the narrow faculty of language is uniquely human (as evidence, there are no data concerning recursion in nonhumans), while the broad faculty of language is shared with nonhumans. Recall the case of words. Hauser, Chomsky and Fitch consider words a uniquely human feature, but not a uniquely linguistic one; as a consequence, they do not take words to be part of the narrow faculty of language. The evidence to the contrary provided by Pinker and Jackendoff is too weak to be endorsed, and one may in fact say that:

Words have qualities unique to language, just as chess moves have qualities unique to chess, and theorem-proving has qualities unique to mathematics. (Fitch, Hauser and Chomsky 2005: 202)

Other features, like speech production, are in fact involved in linguistic communication but cannot be part of the narrow faculty of language, because they are not uniquely human. Consider the case of the lowered larynx, once considered a special-to-language human feature. Fitch himself and his colleague David Reby recently discovered that male red deer have a permanently lowered larynx, which they pull down even further during roaring.5 Chomsky, Hauser and Fitch of course admit the importance of this anatomical feature for language, but they consider it a pre-adaptation rather than an adaptation. Other examples are related to vocal learning. It is in fact possible to find complex instances of that

5 Check the Fitch homepage at St Andrews for nice examples: http://www.st-andrews.ac.uk/~wtsf/.

feature in many nonhuman species (recent research involves pinnipeds). Finally, from a purely methodological point of view, Fitch, Hauser and Chomsky consider the distinction between the broad and the narrow faculty of language a useful one. If you see language as a single, uniquely human system, then you deprive yourself of important comparative data; as is well known, theorists can count on neither comparative nor paleontological data when analyzing uniquely human behavioral traits. The narrow faculty of language is the abstract computational system I mentioned, namely recursion. Chomsky, Hauser and Fitch consider all the arguments supporting the view that language is an adaptation to be wide of the mark. They write:

If FLN is indeed this restricted, this hypothesis has the interesting effect of nullifying the argument from design, and thus rendering the status of FLN as an adaptation open to question. Proponents of the idea that FLN is an adaptation would thus need to supply additional data or arguments to support this viewpoint. (Hauser, Chomsky and Fitch 2002: 1573)

The hypothesis that language is an adaptation for communication is far too vague to be of any use. They therefore propose a refinement of the concept of adaptation, introducing a distinction between current utility and functional origin. Consider the first. The question “what is language for?” would then be equivalent to “in which domains of human knowledge is language useful?”. But the latter is clearly senseless. It seems to me that we would have lots of possible answers, none of them asserting that communication is the only (or even the main) domain of language. On the one hand, if you take language to be something close to the broad faculty of language, then you should consider its utility in inner speech. On the other hand, if you take language to be something close to the narrow faculty of language, you should consider its relevant function in problem-solving. For theorists who favour communication, things get even worse if you consider functional origin. Take the broad faculty of language: it is quite difficult even to understand what one would mean by asking for the “origin” of a multicomponent system. And although the question of the functional origin of the narrow faculty of language does seem addressable, there is no reason to

opt for communication and exclude other options. Language could have evolved for reasons related to thought and the organization of knowledge. Recent experimental work by Elizabeth Spelke6 underlines the linking function of language with respect to what she calls human (though not uniquely human) “core knowledge”.

2.2.1 Evolutionary game theory

At this point, let me briefly refer to evolutionary game theory. Evolutionary game theory is a reinterpretation of standard game theory whose purpose is to describe the biological evolution of living populations. In the last 10-15 years scholars in this field have become interested in language evolution. For instance, Martin Nowak and his colleagues argue that a proto-language evolved in order to overcome the bottleneck due to humans’ limited capacity of expression. According to them, the process of “word formation” and the development of “basic grammatical rules” can be explained by the presence of an error limit in the production and perception of speech. Due to constraints on the metric space and on the number of consistently distinguishable sounds, the increase in expressiveness gained by introducing new sounds is balanced by the loss of accuracy due to the closeness of sounds within the metric space. Thus word formation, that is, assigning meaning to strings of sounds rather than to atomic sounds, represents the best solution to overcome these physical constraints. This could be the origin of language. As they state:

The origin of life has been described as a passage from limited to unlimited hereditary replicators, whereas the origin of language as a transition from limited to unlimited semantic representation. (Nowak and Krakauer 1999: 8030)
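The error-limit argument can be illustrated with a toy calculation in the spirit of Nowak’s model (the numbers and the confusability measure are mine, not Nowak’s): packing more atomic sounds into a fixed acoustic interval makes neighbouring sounds more confusable, while strings built from a few well-spaced sounds multiply distinct words without paying that cost.

```python
def min_gap(n):
    """Distance between adjacent sounds when n distinct sounds are spread
    evenly over a fixed acoustic interval [0, 1]."""
    return 1.0 / (n - 1)

# Packing more atomic signals into the same interval shrinks the gaps,
# so neighbouring sounds become harder to tell apart (the "error limit").
assert min_gap(5) > min_gap(20)

# Word formation: length-3 strings over 5 well-spaced sounds yield
# 5 ** 3 = 125 distinct words, far more than 20 atomic signals provide,
# while keeping the larger per-sound gap of the small inventory.
assert 5 ** 3 == 125
assert 5 ** 3 > 20
```

The combinatorial gain (`k ** L` words from `k` sounds) against a fixed perceptual cost is the formal core of the proposal.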

I really disagree with that approach, but not because of technical deficiencies. It seems to me that, from a very general point of view, the conclusions it presents are highly out of touch with the evidence it

6 See Hespos and Spelke (2004), Hauser and Spelke (2004) and Feigenson, Dehaene and Spelke (2004).

effectively gives. It could be fully acceptable to assert that the processes they describe amount to some sort of structural adjustment of articulatory features to physical constraints. But this would in fact count as a theory of the articulatory and phonatory system, not of language. That language is made up of speech production and perception features is another theory, namely Pinker’s theory of language evolution, and I have already given some arguments for finding it unconvincing.

3. Values and Processes

Before going ahead, let me briefly recap. In this paper I first presented the phenomenon of SIs and the standard approaches to it, namely the grammatical one à la Chierchia and the inferential one à la Grice. I said that only the grammatical approach seems able to account for embedded implicatures. I then presented the modulation approach as a pragmatic approach that can account for embeddability. According to Recanati, some pragmatic processes are involved in the very determination of what is said; SIs may then be considered a special case of one of those processes, i.e. free enrichment. Later in his work on embedded implicatures, Recanati mentions the possibility of considering those linguistic mechanisms as the result of an evolutionary process. I discussed the view that language is an adaptation for communication, and I provided some arguments for rejecting it. It is worth repeating that I presented this view even though Recanati never explicitly adopted it or argued for it. For that reason, in the rest of this paper, I will try to justify my “narrative” choice.

Generally speaking, I take my digression to be a useful one because it advances an important distinction between questions of computation and questions of evolution. As Fitch, Hauser and Chomsky write: “Crucially, questions of mechanism are distinct from and orthogonal to questions of adaptive function” (Fitch, Hauser and Chomsky 2005: 184).
An application of that distinction might lead us to a better understanding of different approaches to pragmatics. As some think,7 Relevance theorists’ objections to Grice are based on a partial misunderstanding of

7 See for instance Saul (2002).

his goals. Grice’s aim, Saul says, is not to provide a theory of the cognitive processes behind an audience’s interpretation of implicatures and communication, but to provide a theory of saying and implicating. I take the distinction mentioned in the quotation above to play a crucial explanatory function here. In addition, these remarks might help us to reject the game-theoretical analysis of language evolution. The first applications of game theory to the study of human communication were in fact developed in a Gricean environment and concerned the reconstruction of reasoning and related mechanisms (e.g. disambiguation).8 Perhaps evolutionary game theory is just the result of an oversimplified theoretical transfer. As I said, the computer models provided by Nowak and colleagues could be extremely useful for describing physical processes related to the articulatory system.

But, getting more specific, one may ask: can the debate on language evolution cast some light on the question we started with? Can we choose which approach to adopt, or rather which approach to exclude? I think we can safely answer yes to both questions. We know from reading Chomsky, Hauser and Fitch that the computational system is something we should consider separately both from physical features subject to empirical constraints and from features related to lexical knowledge, contextual knowledge, and argumentative and communicative practices. To put it this way: communication may well be the most adequate justification for some linguistic mechanisms, but we cannot consider it something that gives shape to the computational system. It works like an algorithm applied to a pre-existent structure. As a consequence, we should be suspicious of mixed approaches à la Recanati. In fact, I think he overemphasizes the possibility of integration between grammar and pragmatics. In Recanati (2003) he proposes to consider two different topics.
The first concerns pragmatic default values. Recanati understands the computation of such values as part of the computational system. To be sure, Chierchia himself says that strengthened values can be processed by default, that is, according to computational constraints.

8 See Lewis (1969), Rabin (1990), Parikh (2001), Stalnaker (2005) and Jaeger (2008).

The second topic concerns primary pragmatic processes. Recanati, like other supporters of Truth-Conditional Pragmatics, considers those processes genuinely primary, that is, as processes that work locally. Both globalists and localists would disagree. Now, I see the postulation of primary pragmatic processes as precisely the theoretical move that the debate on language evolution does not allow us to make. Again, communication can provide adequate justification for some mechanisms, given its search for maximal informativeness; but it cannot give shape to the computational structure. In this respect, the modulation approach is unsustainable.

References

Atlas, Jay and Stephen Levinson 1981. It-Clefts, Informativeness, and Logical Form: Radical Pragmatics (revised standard version). In: Peter Cole (ed.), Radical Pragmatics, New York: Academic Press.
Chierchia, Gennaro 2004. Scalar Implicatures, Polarity Phenomena and the Syntax/Pragmatics Interface. In: Adriana Belletti (ed.), Structures and Beyond, Oxford: Oxford University Press.
Chierchia, Gennaro, Danny Fox and Benjamin Spector to appear. The Grammatical View of Scalar Implicatures and the Relationship between Semantics and Pragmatics. In: Paul Portner, Claudia Maienborn and Klaus von Heusinger (eds.), Handbook of Semantics, Berlin: Mouton de Gruyter.
Chomsky, Noam 2001. Su natura e linguaggio. Siena: Edizioni dell’Università degli Studi di Siena. (Repr. as: Chomsky, Noam 2002. On Nature and Language. Cambridge: Cambridge University Press).
Cohen, Jonathan 1971. Some Remarks on Grice’s Views About the Logical Particles of Natural Language. In: Yehoshua Bar-Hillel (ed.), Pragmatics of Natural Languages, Dordrecht: Reidel.
Cohen, Jonathan 1977. Can the Conversationalist Hypothesis Be Defended? Philosophical Studies 31, 81-90.
Feigenson, Lisa, Stanislas Dehaene and Elizabeth Spelke 2004. Core systems of number. Trends in Cognitive Sciences 8:7, 307-314.
Fitch, Tecumseh, Marc Hauser and Noam Chomsky 2005. The evolution of the language faculty: Clarifications and implications. Cognition 97, 179-210.
Grice, Paul 1989. Studies in the Way of Words. Cambridge, MA: Harvard University Press.

Hauser, Marc, Noam Chomsky and Tecumseh Fitch 2002. The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? Science 298, 1569-1579.
Hauser, Marc and Elizabeth Spelke 2004. Evolutionary and developmental foundations of human knowledge. In: Michael Gazzaniga (ed.), The Cognitive Neurosciences, Cambridge, MA: MIT Press.
Hespos, Susan and Elizabeth Spelke 2004. Conceptual precursor to language. Nature 430, 453-455.
Horn, Laurence 1989. A Natural History of Negation. Chicago: University of Chicago Press.
Jackendoff, Ray [typescript]. Your Theory of Language Evolution Depends on Your Theory of Language.
Jaeger, Gerhard 2008. Application of Game Theory in Linguistics. Language and Linguistics Compass 2:3, 406-421.
Lewis, David 1969. Convention. Cambridge, MA: Harvard University Press.
Levinson, Stephen 1983. Pragmatics. Cambridge: Cambridge University Press.
Levinson, Stephen 2000. Presumptive Meanings: The Theory of Generalized Conversational Implicature. Cambridge, MA: MIT Press.
Maynard Smith, John and George Price 1973. The logic of animal conflict. Nature 246, 15-18.
Nowak, Martin and David Krakauer 1999. The evolution of language. Proc. Natl. Acad. Sci. USA 96, 8028-8033.
Parikh, Prashant 2001. The Use of Language. Stanford: CSLI Publications.
Pinker, Steven 1994. The Language Instinct: How the Mind Creates Language. New York: HarperCollins.
Pinker, Steven and Ray Jackendoff 2005. The Faculty of Language: What’s Special about it? Cognition 95, 201-236.
Rabin, Matthew 1990. Communication between rational agents. Journal of Economic Theory 51, 144-170.
Recanati, François 1995. The Alleged Priority of Literal Interpretation. Cognitive Science 19:2, 207-232.
Recanati, François 2001. What is said. Synthèse 128, 75-91.
Recanati, François 2003. Embedded Implicatures. Philosophical Perspectives 17:1, 299-332.
Recanati, François 2004. Literal Meaning. Cambridge: Cambridge University Press.
Saul, Jennifer 2002. What Is Said and Psychological Reality; Grice’s Project and Relevance Theorists’ Criticisms. Linguistics and Philosophy 25, 347-372.
Sbisà, Marina 2007. Detto e non detto. Le forme della comunicazione implicita. Bari: Laterza.
Spelke, Elizabeth and Katherine Kinzler 2007. Core knowledge. Developmental Science 10:1, 89-96.

Stalnaker, Robert 2005. Saying and meaning, cheap talk and credibility. In: Anton Benz, Gerhard Jaeger and Robert van Rooij (eds.), Game Theory and Pragmatics, Palgrave Macmillan.
Trapa, Peter and Martin Nowak 2000. Nash equilibria for an evolutionary language game. Journal of Mathematical Biology 41, 172-188.

Stefano Predelli
University of Nottingham
[email protected]

Semantics and Contextuality: The Case of Pia’s Leaves

Abstract: This essay defends a response to a classic contextualist argument against the traditional paradigm in truth-conditional semantics. According to that argument, traditional semantics fails to take into account certain truth-conditionally relevant forms of contextuality, not reducible to classic forms of either ‘pre-semantic’ or ‘meaning-governed’ contextual dependence. The response put forth here grants the contextualist contention that, in the cases under discussion, ambiguity, ellipsis, indexicality, and classic Gricean manoeuvres are of no relevance. However, it counters that the intuitions put forth by the contextualists are naturally assimilable to standard forms of pre-semantic contextuality, and are thus not problematic from the traditional semantic standpoint.

0. Introduction

This essay aims at responding to a long-standing challenge to a certain understanding of the relationships between meaning and truth, dominant in the tradition of so-called natural language semantics. In a sense that will hopefully become clearer in what follows, it is the trademark of what I call the traditional paradigm that meaning determines truth-conditions.1 An influential criticism of the traditional paradigm, originating in the work of John Searle and Charles Travis and echoed with renewed vigour in the so-called ‘truth-conditional pragmatics’ movement, insists that meaning is, at least in some important cases, insufficient for the establishment of the intuitively correct truth-conditions (see for instance Searle 1980 and Travis 1985). What must be taken into account, so it is alleged, are also contextual

1 For an introduction to some prototypical examples within the ‘traditional paradigm’ see Kaplan (1977) and Dowty et al. (1981).

factors of a type not recognized by the traditional approach to contextuality. The relative clause at the end of the foregoing paragraph is of crucial importance. For it is an uncontested truism that the traditional paradigm is well equipped to deal with certain forms of contextual dependence, in particular with so-called pre-semantic contextuality, and with the sort of meaning-governed contextuality exemplified by the phenomenon of indexicality. The anti-traditionalist challenge thus aims at unveiling novel forms of contextual dependence, allegedly not reducible to classic examples of indexicality and pre-semantic regimentation. This challenge is in turn pursued by presenting examples whose intuitive truth-conditions may apparently not be derived solely on the basis of linguistic meaning, even once the traditional sources of contextuality are taken into account. One notorious example will guide me throughout this essay, Travis’ tale of Pia’s leaves (Travis 1997), although my considerations can easily be extended to any other scenario the anti-traditionalists have concocted. My response grants Travis’ intuitions pertaining to the truth-values of the utterances involved in that tale, but argues that these intuitions are in fact naturally explainable on the basis of the sort of pre-semantic contextuality recognized by the traditional paradigm.

In Sections 1 and 2 I briefly present the relevant aspects of the traditional approach to meaning, truth-conditions, and contextuality. In Section 3, I summarize Travis’ objection to that approach. In Sections 4 and 5, I explain why that objection is ineffective, and how the forms of contextuality it highlights are harmlessly consistent with the traditional paradigm.

1. The Traditional Paradigm

Take an English sentence S.
According to the traditional paradigm, it is the responsibility of the syntactic analysis of S to provide a suitable representation of S, which will eventually be assigned appropriate semantic values by the semantic interpretive system. Merely for concreteness' sake, I settle here for the relatively widespread understanding of such a representation as an LF (logical form), typically understood as a complex syntactic item encoding the sort of information of semantic relevance. My casual commitment to this model of the syntax/semantics interface is not immediately relevant for my argument, and alternative views may be taken on board without further ado.

It is an undisputed truism that LFs are distinct from surface English sentences, and that the choice of the LF appropriate for the use of a given sentence on a certain occasion is a contextual, non-meaning-governed business. The classic phenomena motivating this stance (though by no means the only ones) are ambiguity and ellipsis. So, for instance, the English sentence

(1) John went to the bank

may not immediately be supplied as input for the process of semantic analysis, and hence may not serve the role of an LF, due to the presence of the lexically ambiguous term 'bank'. What is needed, from the viewpoint of a system devoted to the analysis of the relationships between meaning and truth, is a process of 'disambiguation', that is, a process involving the choice of the appropriate meaning-bearing item – in the traditional notation, a choice between 'bank1', eventually interpreted in terms of the sides of a river, and 'bank2', associated with financial institutions. The reason why 'disambiguation' is, in some logical sense of priority, 'prior to' semantic evaluation should be obvious: any apparatus devoted to the discussion of the role meaning plays with respect to truth-conditions must operate on the assumption that the meaning-bearing items have been selected in an appropriate manner.2

It is an equally undisputed truism that a similar stance is appropriate for instances that involve structural, rather than lexical, ambiguity, as in

(2) Mary is very rich and happy.

2 I hasten to add that this sense of priority has nothing to do with 'cognitive temporal priority': issues in cognitive linguistics are utterly tangential for the aim of this essay.

Indeed, the traditional slogan that meaning determines truth-conditions must be understood so as to allow for structural truth-conditional effects: just as 'John loves Mary' should be evaluated differently from 'Mary loves John', a sentence such as (2) should be disambiguated with respect to the scope of 'very' in order to obtain the results presumably appropriate on this or that occasion: either that Mary is exceedingly rich and normally cheerful, or that her levels of wealth and happiness are both noticeably above average.

It is uncontentious that the criteria appropriate for the choice of this or that LF, say, for the choice of 'bank1' or for the decision that the semantic effect of 'very' extends over 'rich and happy', depend on context. The sense of context at issue here has to do with what is sometimes called 'wide contextuality': it is obviously extra-linguistic competence and common sense that motivate the interpreter's preference for, say, 'bank1' in scenarios having to do with the explanation of why John left with a fishing rod in his hand. For reasons that should by now be sufficiently clear, this type of contextual dependence is thus 'pre-semantic', that is, it is a pre-condition for an application of the semantic apparatus eventually able to yield the desired semantic results.

The traditional paradigm also recognizes an importantly different, meaning-governed form of contextuality, demanded by the presence of indexical expressions such as 'here' or 'now'. So, for instance, the sentence

(3) John is here now

may not be interpreted 'in isolation', due to the presence of expressions which, by virtue of their very meaning, require appropriate contextual parameters in order to yield appropriate truth-conditional contributions. For this reason, what is supplied as input to the system of semantic interpretation whenever indexical languages are at issue are not lone LFs, but LFs accompanied by a repository of the (typically extra-linguistic) items demanded by the indexicals: a location for 'here', a time for 'now', etc. It is customary to refer to such a repository as a 'context'. Since the sense of 'context' relevant here is importantly different from the everyday sense of the term, and in particular from the 'wide' sense of context to which I alluded above, I shall hereinafter refer to it as an s-context (for 'context in the semantically relevant sense of the term').3

One point in this respect is of fundamental importance. On the one hand, the relationship between, say, 'now' and a time is meaning-governed: it is part and parcel of any appropriate account of the meaning of this expression that it recognizes its semantic dependence on an item of that sort. Which time is relevant on a certain occasion of language use is, on the other hand, an unquestionably extra-semantic business. It may of course well be true that, more often than not, such a time is easily identifiable as the time of speaking. Still, such an identification is surely not the responsibility of a system devoted to the study of the meaning of 'now' – as testified by the fact that instances in which 'now' does not intuitively pick out the time of utterance, as in recorded messages or instances involving the so-called 'historical present', do not (or at least do not inevitably) require a revision of the meaning of that expression.

What this indicates is that, in the case of ambiguous, elliptical, indexical languages such as English, pre-semantic considerations must yield a two-fold input suitable for semantic evaluation, formally representable as a pair consisting of an LF and an s-context, on the basis of extra-semantic, context-grounded considerations of plausibility and conversational appropriateness. Questions of ambiguity and ellipsis are not of immediate relevance for the discussion of Travis' example, and have been introduced above merely as a pedagogical example of uncontroversially extra-semantic decisions. What deserves closer scrutiny, on the other hand, is the structure and make-up of the s-contexts appropriate in this or that scenario.
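The pre-semantic step just described can be made concrete with a toy sketch. Everything here (the cue sets, the rendering of 'bank1'/'bank2' as strings, the function name) is my own illustrative scaffolding, not part of the traditional apparatus itself; the point is only that the choice of the meaning-bearing item is driven by wide context, prior to any semantic evaluation.

```python
def disambiguate(surface: str, wide_context: set) -> str:
    """Map a surface sentence to (a string stand-in for) an LF, choosing
    between the meaning-bearing items 'bank1' and 'bank2' on extra-semantic
    grounds: common sense and the topic of conversation."""
    if "bank" not in surface:
        return surface
    # Wide-contextual cues (purely illustrative) favour one reading:
    riverside_cues = {"fishing", "rod", "river", "water"}
    chosen = "bank1" if wide_context & riverside_cues else "bank2"
    return surface.replace("bank", chosen)

# John left with a fishing rod in his hand, so wide context favours 'bank1':
print(disambiguate("John went to the bank", {"fishing", "rod"}))
# John went to the bank1
```

Nothing in the function consults the meanings of 'bank1' or 'bank2'; the selection is settled before the semantic apparatus is applied, which is precisely what makes this contextuality 'pre-semantic'.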
I turn to some additional preliminary considerations in this respect, and to a few details pertaining to the inner workings of the semantic apparatus, in the next section.

3 In Predelli (2005) I referred to s-contexts as 'indexes' (not to be confused with the use of 'index' in, for instance, Lewis 1980).

2. S-contexts and the Traditional Paradigm

As explained in Section 1, an utterance taking place in a particular setting must be represented by means of an LF/s-context pair, taking into consideration the peculiar features of that setting. Once such a pair has been identified, the traditional semantic apparatus proceeds by assigning appropriate meanings to the simple expressions under study, by determining their semantic value with respect to the s-context in question, and by establishing the semantic effects achieved by syntactic structure. So, to return to the example of (3), repeated here for readability's sake,

(3) John is here now

and assuming for simplicity's sake that this superficial sentence suffices for the role of LF, what is needed is, among other things, an assignment of meanings to 'John', 'is' (presumably in the sense of 'is located'), 'here', and 'now'. On any adequate account, these meanings eventually yield, with respect to any s-context c, respectively John, the relation of being located, the location of c, and the time of c. On the basis of obvious compositional regularities, this does in turn yield the apparently desired outcome that (3) is true as long as John is at that place at that time. This conclusion is a biconditional, typically rendered by means of a relational outcome: (3), when interpreted with respect to an s-context c, is true with respect to all and only those circumstances in which John is at the location of c at the time of c. It follows that sentences (or, more precisely, their LFs) are assigned a truth-value with respect to two parameters: an s-context and a circumstance, this latter parameter roughly corresponding to 'different ways things may be' (but see later on this 'correspondence'). This much, of course, is the trademark of the classic double-index approach to indexicality, an approach that remains uncontroversially untouched by the anti-traditionalist arguments.

Still, a more 'direct' result of truth-value is obtainable for LF/s-context pairs (that is, for utterances, at least in the formal sense of the term). Intuitively, my utterance of, say, (3) is interpreted as true tout court iff it is true at the circumstances I inhabit – iff, in other words, the actual world is such that, in it, John is at the place where I am speaking at the time of my speaking. Formally, this intuition is rendered by equipping s-contexts with a 'privileged' circumstance of evaluation: true_c(S) iff the semantic value of S with respect to the s-context c and the possible world of c is the Truth. In this sense, then, s-contexts provide a representation not only of the items required by the meanings of the indexicals, such as a place for 'here' or a time for 'now'. They also reflect the understanding of 'context' in the sense of the speaker's whereabouts, i.e., of the way things happen to be in the world she inhabits.

It is a relatively immediate consequence of this stance, together with the considerations from Section 1, that pre-semantic representational decisions, grounded on common sense and wide context, also affect the choice of the s-context's circumstance – the choice of how things happen to be from any viewpoint relevant for the appraisal of our intuitive assessment of this or that utterance. This point, as we shall see, will be of immediate relevance for the discussion of Travis' example and, more generally, for the rejection of the anti-traditionalist challenge.

3. Pia's Leaves

Pia owns a Japanese maple tree, whose leaves are naturally russet. For reasons that need not concern us here, she paints the leaves green. Here are two scenarios, developed so as to elicit contrasting intuitions of truth-value for utterances of one and the same sentence. In scenario one, Pia is addressing a photographer interested in green subjects. She utters:

(4) The leaves are green.

Pia's utterance, so Travis insists, is intuitively true. Switch to scenario two, where Pia is talking to a botanist interested in determining the natural colour of certain plants. Pointing to her tree, she utters (4) again. This time, so it would seem, her utterance is false: the leaves are merely painted green, but are 'in fact' russet.
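Before turning to Travis' diagnosis, it may help to fix the Section 2 machinery in a toy model. The class and field names below are my own (the literature speaks simply of contexts, indexes, and circumstances); the sketch only makes the two evaluation parameters vivid, along with the 'privileged' circumstance that yields truth simpliciter.

```python
from dataclasses import dataclass, field

@dataclass
class SContext:
    """An s-context: the items demanded by the indexicals, plus a
    privileged circumstance of evaluation (a 'possible world')."""
    location: str                               # parameter for 'here'
    time: str                                   # parameter for 'now'
    world: dict = field(default_factory=dict)   # the privileged circumstance

def eval_sentence(c: SContext, circumstance: dict) -> bool:
    """Relational truth for (3) 'John is here now': true at s-context c and a
    circumstance iff, in that circumstance, John is located at c's location
    at c's time."""
    return ("John", c.location, c.time) in circumstance.get("located", set())

def true_simpliciter(c: SContext) -> bool:
    """true_c(S): truth at the s-context's own privileged world."""
    return eval_sentence(c, c.world)

w = {"located": {("John", "Oslo", "noon")}}
c = SContext(location="Oslo", time="noon", world=w)
print(true_simpliciter(c))                   # True
print(eval_sentence(c, {"located": set()}))  # False at another circumstance
```

The two-parameter signature of `eval_sentence` is the double-index structure; `true_simpliciter` is the 'direct' notion obtained by letting the s-context supply its own circumstance.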

According to Travis, this example indicates the possibility that one and the same sentence, (4), be uttered truly on some occasions, but falsely on others. Since the colour of the leaves has remained unchanged, so the story goes, this provides evidence of contrasting results of truth-value with respect to one and the same state of affairs, i.e., a result of contrasting truth-conditions. This much, however, is allegedly at odds with the resources provided by traditional semantics. Of course, (4) contains indexical items, the verb's tense and possibly, though more controversially, the choice of a domain of discourse able to identify certain leaves on the basis of the meaning of 'the leaves'. Still, none of this is apparently of any relevance from the viewpoint of the examples cited above: what is at issue are Pia's leaves in either scenario, and the temporal gap separating her conversations may safely be regarded as irrelevant. Indexicality, so Travis concludes, is of no help for a traditional explanation of the aforementioned presumed truth-conditional discrepancy.4

4 For a defence of the traditional paradigm related to (but not identical with) appeals to indexicality, see Szabó (2001).

Of course, all of the above would remain utterly irrelevant for the assessment of traditional semantics if (4) were somehow ambiguous or elliptical, and if results of contrasting truth-conditions were obtainable on the basis of different choices of LFs, corresponding to alternative decisions of disambiguation or ellipsis-unpacking. Yet, (4) does not seem to be ambiguous or elliptical in any sense that may independently be invoked by the traditional semanticist. For one thing, the aforementioned utterances of (4) may well occur in 'discourse initial position', and more generally in the absence of any well-motivated syntactic condition for an analysis of that sentence as elliptical for richer constructions. For another, nothing in (4) looks at least prima facie relevantly ambiguous: surely, structural ambiguity must be out of the question, and neither 'the leaves' nor 'is green' seems appropriately analyzable in terms of a plurality of lexical meanings: 'the leaves' speaks of foliage, and 'is green' speaks of a certain colour.

If all of this is taken on board, it appears that the only defence available to the traditional semanticist is to deny that our intuitions pertaining to Pia's utterances deserve to be immediately accepted as constraints for an empirically adequate analysis. In itself, of course, this is not a novel view: speakers may well be confused between the sort of properties of an utterance which deserve to be accounted for at the semantic level, and the kind of aspects that are more appropriately derivable from 'pragmatic' considerations, for instance, on the basis of classic Gricean manoeuvres.5 Still, it is difficult to see how such a defence could plausibly be developed in the case of Pia. This initial difficulty is of course not a proof that Gricean or quasi-Gricean approaches may not yield some benefit in this respect. The point remains that, here as in strategies grounded on appeals to indexicality, ellipsis, or ambiguity, the onus of proof seems to lie squarely on the traditionalist side: in the absence of non-ad-hoc arguments for at least superficially unmotivated manoeuvres, Travis' assessment of Pia's case seems initially convincing.

Travis' conclusion is that his assessment of Pia's utterances spells trouble for the traditional view: one non-elliptical, non-ambiguous, non-indexical sentence may indeed be used truly on some occasions, and falsely on others, in order to describe one and the same state of affairs. What follows, according to Travis, is that context must intervene in the establishment of the correct truth-conditions at a level incompatible with the traditionalist slogan that meaning determines truth-conditions: even after ambiguity or ellipsis have been resolved, and even after all indexicals have been interpreted, the meaning of the expressions in (4) and its syntactic structure fail to establish a fixed truth-conditional outcome in the absence of further contextual elements. In what follows, I explain how Travis' conclusion does not follow from his premises.
I thus grant without further ado the contentions that (i) the sentence uttered by Pia, namely (4), is not relevantly indexical, (ii) it is not elliptical or structurally ambiguous, and does not contain lexically ambiguous expressions, and (iii) Travis' intuitions about the truth-values of Pia's utterances constrain the shape of an empirically adequate semantic account. I then explain how, even after these assumptions are taken on board, the traditional paradigm is sufficiently well equipped to yield a pre-theoretically acceptable analysis of Pia's scenarios.

5 For a related answer to Travis, see for instance Sainsbury (2001).

4. Pia and the Traditionalists

Travis' case is grounded on certain intuitions pertaining to the truth-value of Pia's utterances – for further reference, her utterance Phot as a reply to the photographer's request, and her utterance Bot during the discussion with the botanist. This is indeed as it should be: Travis' point is that traditional semantics is empirically inadequate, that is, that it is ill equipped to reflect certain relatively solid pre-theoretic inclinations on what may be uttered truly on this or that occasion. Yet, the source of our intuitions in Pia's scenarios is worthy of closer consideration. Why think that Phot is true and Bot false, when nothing in Pia's plant has been allowed to change? Though vague, Travis' reply to this query must be on the right track: what matters, among other things, must be considerations of 'wide contextuality', presumably having to do with non-meaning-governed questions of relevance, appropriateness, or informativeness. Although we know (or at least, as far as Pia's example goes, may well know) what it is for a surface to be green, we apparently discriminate between what significantly 'counts as' the colour of certain items on the basis of the topic of conversation or the conversants' interests: from the photographer's viewpoint, but not for the botanist's purposes, what apparently matters is the hue of the outermost layer covering the foliage. As Travis puts it:

The English 'is green' speaks of a certain way for things to be: green. One might say that it speaks of a certain property: (being) green. If we do say that, we must also say this about that property: what sometimes counts as a thing's having it sometimes does not. (Travis 1997: 98)

This much, of course, must necessarily be taken on board by the anti-traditionalists: if, for some reason, considerations of wide contextuality did not 'really' affect what counts as green on this or that occasion, any correct semantic verdict ought to disregard our pre-theoretic intuitions as irrelevantly misled by considerations of contextual salience, thereby depriving the anti-traditionalist of any appeal to divergent truth-values. If, for instance, an argument could be mounted to the effect that the 'true' colour of an object is determined by its outermost layer, our inclination to evaluate Bot as false would naturally be explainable in terms of an error theory: though perhaps misleading as an indication of botanical relevance, our assessment of the leaves as green would remain inevitably true. Thus, if the anti-traditionalist argument is even initially plausible, it must be the case that our intuitive assessments of the leaves' greenness, shaped by considerations of common sense, conversational topic, and the like, must indeed be left untouched. For all semantic purposes, in other words, Pia's leaves are correctly assessed as green from a photographer's viewpoint, but not for any aim of interest from the botanist's perspective.

To put it otherwise: the plant's state, though clearly not irrelevant, is in itself insufficient for the decision to classify its leaves as endowed with the property of being green, that is, for the decision whether that foliage is to be allowed within the extension determined by the English expression 'is green'. But if this is the case, for the very reason adduced by the anti-traditionalists, the circumstances relevant for the assessment of Pia's utterances do after all change, notwithstanding the absence of any 'intrinsic' change in her plant's state: in the scenario for Phot, but not in that for Bot, what is appropriate is an understanding of 'how things happen to be' willing to classify the painted leaves as green objects. In the jargon from Section 2: the semantic representations of Pia's utterances must involve distinct 'privileged' circumstances of evaluation, and hence, in turn, distinct s-contexts.

It follows that, according to the guidelines to which the anti-traditionalists themselves are committed, Pia's scenarios must be represented in terms of distinct LF/s-context pairs: in either case, an LF appropriate for (4), but, in the case of Phot, an s-context containing as its circumstance a possible world cw such that, according to cw, the leaves belong in the extension of 'is green', and, in the case of Bot, an s-context whose circumstance is a distinct possible world cw* such that, in cw*, those very same leaves are members of the complement of that extension. Once these representations are taken into consideration, Travis' intuitions about the utterances in question do not amount to evidence of a discrepancy in truth-conditions, but merely to evidence of the harmless distinction between their truth-values at distinct possible worlds.

It is clear that the idea of a 'possible world' of relevance from the viewpoint of semantic analysis must be distinct from a 'metaphysical', objective idea of 'the way things happen to be' with the leaves. After all, by assumption, the intrinsic state of the leaves remains unchanged as Pia abandons her discussion with a photographer, and turns to her botanist acquaintance. Still, that such an intrinsic state is of little importance from the viewpoint of our intuitive response to Phot and Bot is not an ad hoc epicycle concocted with the sole aim of rescuing the traditional picture from undesirable counterexamples – it is rather part and parcel of the very conditions needed to get the example started.

5. Conclusion

In Section 3 I granted the anti-traditionalist assumptions that, in the case of Pia, none of the following traditional sources of contextuality is of relevance: (i) pre-semantic contextuality of the type relevant for the resolution of ambiguity or the unpacking of ellipsis; (ii) semantic contextuality appropriate for the evaluation of indexical expressions; and (iii) 'post-semantic' contextuality, for instance of the sort involved in classic Gricean manoeuvres for determining merely pragmatically imparted information. It is a consequence of (iii) that the intuitions about the truth-values of Phot and Bot are of semantic interest: any semantically adequate account ought to evaluate the former as true, and the latter as false. It follows from (i) that no proposal concerned with the distinction between the LFs appropriate in either scenario provides an appropriate solution to our puzzle: the semantic representations of both utterances involve one and the same syntactic construct.
Finally, once (ii) is taken for granted, no issue pertaining to the composition of the relevant s-contexts needs to be addressed when it comes to the interpretation of any indexical item occurring in Pia's sentence.

The anti-traditionalist argument is grounded on the conviction that (i)-(iii) exhaust the traditionally recognized forms of contextual dependence, and that, as a consequence, the traditional paradigm is ill equipped for dealing with the intuitive contextual sensitivity of Pia's utterances. The solution outlined in Section 4 challenges this conviction, on the basis of a two-fold claim: (a) traditionally recognized forms of pre-semantic contextuality are not restricted to questions pertaining to the choice of an appropriate LF, but also affect the selection of an adequate s-context; and (b) the selection of an s-context matters not only for questions of indexicality, but also for the indication of a circumstance of evaluation with respect to which results of truth-value are to be obtained. In particular, although it is (or at least may well be) true that the LFs involved in the representations of Phot and Bot are one and the same, and that the accompanying s-contexts do (or at least may well) coincide with respect to the parameters demanded by the indexical items in (4), it is also the case that these s-contexts differ in their indication of circumstance. It follows that Phot and Bot are represented by distinct LF/s-context pairs, on the basis of whatever form of 'wide contextuality' one may wish to appeal to at the pre-semantic level of representation – in particular, with respect to any form of contextuality needed to get the anti-traditionalist's intuition started.

The conclusion is that the anti-traditionalist examples fail to provide evidence of truth-conditional discrepancy in any semantically interesting sense of the term: although Phot is true and Bot is false, this much merely reflects the utterly harmless possibility that distinct inputs for semantic analysis end up being assigned distinct semantic outcomes. At least as far as Pia's leaves go, in other words, the traditional paradigm is perfectly at ease with our intuitive semantic assessments.
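The upshot can be pictured with a small toy model (all names and the frozenset rendering of extensions are mine, purely illustrative): one LF, two privileged circumstances that differ only in whether the painted leaves fall under the extension of 'is green'.

```python
def leaves_are_green(world: dict) -> bool:
    """One and the same LF for (4), evaluated at a world's extension of
    'is green'."""
    return "Pia's leaves" in world["is green"]

# Two privileged circumstances, selected pre-semantically on grounds of wide
# context: for the photographer's purposes the painted leaves count as green;
# for the botanist's purposes they do not.
w_phot = {"is green": frozenset({"Pia's leaves"})}
w_bot = {"is green": frozenset()}

print(leaves_are_green(w_phot))  # True:  Phot
print(leaves_are_green(w_bot))   # False: Bot
```

The divergent verdicts arise entirely from the choice of circumstance, not from any variation in meaning, LF, or indexical parameter, which is the two-fold claim (a)/(b) in miniature.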

References

Dowty, David R., Robert E. Wall, and Stanley Peters 1981. Introduction to Montague Semantics. Dordrecht: Reidel Publishing Company.
Kaplan, David 1977. Demonstratives. In: Joseph Almog, John Perry, and Howard Wettstein (eds.), Themes From Kaplan. Oxford: Oxford University Press, 1989.
Lewis, David 1980. Index, Context, and Content. In: Stig Kanger and Sven Öhman (eds.), Philosophy and Grammar. Dordrecht: Reidel Publishing Company.
Predelli, Stefano 2005. Painted Leaves, Context, and Semantic Analysis. Linguistics and Philosophy 28:3, 351-374.
Sainsbury, Robert 2001. Two Ways to Smoke a Cigarette. Ratio 14:4, 386-406.
Searle, John 1980. The Background of Meaning. In: John Searle, F. Kiefer, and M. Bierwisch (eds.), Speech Act Theory and Pragmatics. Dordrecht: Reidel Publishing Company.
Szabó, Zoltán G. 2001. Adjectives in Context. In: István Kenesei and Robert M. Harnish (eds.), Perspectives on Semantics, Pragmatics, and Discourse: A Festschrift for Ferenc Kiefer. Amsterdam: John Benjamins, 119-146.
Travis, Charles 1985. On What Is Strictly Speaking True. Canadian Journal of Philosophy 15:2, 187-229.
Travis, Charles 1997. Pragmatics. In: Bob Hale and Crispin Wright (eds.), A Companion to the Philosophy of Language. Oxford: Blackwell Publishers.

Jiří Raclavský
Masaryk University
[email protected]

Is Logico-Semantical Analysis of Natural Language Expressions a Translation?

Abstract: It is sometimes assumed that logico-semantical analysis of natural language consists in the translation of natural language expressions into formal language ones. A moment's reflection reveals that this translational thesis has unacceptable consequences. Firstly, to explain the meaning of the formal expression which is a translation of a natural language expression, one has to translate it into another language, and thus an infinite regress of translations arises. Secondly, the translation does not disclose the meaning (it indicates only the sameness of meanings), which is a serious drawback because the semanticist's aim is to explicate meanings. In addition to a criticism of that translational thesis, I offer an alternative explanation of typical findings of semanticists (written juxtapositions of natural and formal expressions) which fits the idea that logico-semantical analysis of natural language should provide ⟨expression, meaning⟩ pairs.

0. Introduction

Logical semantics of natural language construes natural language as a semiotic system, i.e. as a set of signs coding extra-linguistic entities called (linguistic) meanings.1 The theoretical enterprise of logico-semantical analysis of natural language consists in the explication of these meanings, i.e. in their modelling by means of rigorous tools (e.g. logic), explaining also general coding mechanisms of natural language, etc. A typical finding of a theoretician providing logico-semantical analysis of natural language is an analyst's sentence (as I shall call it). It often takes the form:

The meaning of the expression: 'E' is: ϕ.

where 'E' (which is usually unquoted) is an expression of some natural language (the name of which should occur in the sentence) and 'ϕ' a term (or formula) of some formal language which serves for capturing or displaying the meaning of 'E'. The sentence is stated in a natural language which is enriched by the formal language utilized by the semanticist, forming an enhanced natural language. A particular analyst's sentence of this enhanced natural language delivers a certain message about the meaning of 'E' to other analysts.

1 It is assumed that a communication between users of the same language L (ideally) proceeds in the way that user U1, wishing to communicate message-meaning M, displays (by acoustic or graphical means) an expression 'E1' which codes M in L, and U2, when encountering 'E1', thereby grasps M.

Those juxtapositions of 'E' and 'ϕ' (i.e. a piece of a natural language and a piece of a formal language) in the analyst's sentence may tempt somebody to the conviction that logico-semantical analysis of natural language consists in the translation of common natural language expressions into a certain formal language. This also seems to be in accordance with the widely held belief that natural languages "hide" the proper logical forms of their expressions, and that therefore some kind of "translation" into a more perspicuous language is needed.2 On the other hand, it has often been claimed that logico-semantical analysis of natural language should provide ⟨expression, meaning⟩ pairs. Nevertheless, analyst's sentences show only ⟨expression, expression⟩ pairs. Can this conflict be reconciled? Does logico-semantical analysis of natural language really consist in translation of natural language into formal language (within the enhanced natural language)?

2 A natural language such as English can contain (as its proper part) a portion of standard arithmetical notation. Such sublanguages of a natural language are not the semanticist's formal languages used for the investigation of natural language.

Some time ago, two lucid manifestations of the two mutually exclusive opinions related to our topic were published. Pavel Tichý argued against the translational view (cf. Tichý 1992 and two of his posthumously published papers, 1994 and 1994a), stating in fact that:

the task of logico-semantical analysis of natural language is to explicate the meanings of natural language expressions, not to translate them into formal language

i.e. suggesting thus to yield ⟨expression, meaning⟩ pairs. On the other hand, Jaroslav Peregrin criticized Tichý's opinions (cf. Peregrin 1993), thus defending the Translational Thesis (as I shall call it):

the task of logical semantics is to translate natural language expressions into formal language

i.e. suggesting thus to produce ⟨expression, expression⟩ pairs. The present author is indebted to both these theoreticians for their arguments, which are partly incorporated (and sometimes further elaborated) in the next section. The key aim of this paper is to reject the thesis that logico-semantical analysis of natural language consists in translation into formal language.3

3 We will assume the natural view that an expression 'E2' of a language L2 is the translation of 'E1' of L1 iff 'E2' has in L2 the same meaning as 'E1' does in L1.

1. Against the Translational Thesis

Let us begin with an example which is familiar to anybody concerned with the problem of translation between languages. Suppose an Englishman studying Czech would like to know what the expression 'Skot je rohatý' exactly means in Czech. When his teacher tells him that this expression means the same as the expression 'Das Rind ist gehörnt' does in German, something odd happens. The answer is not an appropriate one – for the questioner is eager to know the meaning of the former expression, not its translation into another language. This shows that knowing how to translate an expression 'E1' of L1 into another language L2 by means of an expression 'E2' does not amount to knowing what 'E1' means in L1 (cf. Tichý 1994a: 53). The knowledge of the meaning congruence of 'E1' and 'E2' due to their intertranslatability does not entail the knowledge of a particular meaning. This is also evident from the fact that an additional answer, stating that 'Das Rind ist gehörnt' means the same as a certain expression in French, does not increase the knowledge of the sought meaning.

It is also clear enough that 'the meaning of 'E2' in L2' is a (rigid) description which does not display the meaning we are looking for. When the descriptum is unknown, no identities of the form 'the meaning of 'E2' in L2 = the meaning of 'E1' in L1' are capable of exhibiting the meaning which the logico-semantical analysis of natural language should yield. Thus to say how a particular expression 'E1' can be translated into another language by means of an expression 'E2' is an essentially mistaken answer to the question 'What is the meaning of the expression 'E1'?'. This question is still not answered by a shift to another language. Moreover, it is a shift that leads to an infinite regress of translations (cf. Tichý 1994a: 53).

Our example might perhaps be challenged by means of the following reasoning. Suppose that our questioner receives the answer ''Skot je rohatý' means in Czech the same as 'Bovines are horned' in English'. A bilingual person who has mastered English and German could be analogously satisfied by the explanation ''Skot je rohatý' means in Czech the same as 'Das Rind ist gehörnt' in German'. Nevertheless, both these answers are in fact inappropriate, because each of them fails to display the meaning in question. Realize also that a possible satisfaction on the part of a non-theoretician should not arise in the case of a semanticist, for the aim of logical semantics is to display the meaning in a rigorous theoretical framework. At first sight, it seems that semanticists do that. Yet it is not exactly so. Imagine somebody stating that 'Skot je rohatý' means in Czech the same as the formula 'YZ@' means in her or his favourite formal language Lϕ.
Are you enlightened about the exact meaning of ‘Skot je rohatý’? Surely not. The translation of ‘Skot je rohatý’ into a formal language leaves its meaning unexplained. Indeed, to grasp the meaning of ‘YZ@’, one needs an explanation of what this expression means in the strange language Lϕ. Again, the problem concerning the meaning of ‘Skot je rohatý’ has only been shifted elsewhere, and an infinite regress of translations then arises (cf. Tichý 1994: 9-10).4

Notice also that there seems to be a paradox lurking behind Translation Thesis. If the aim of logical semantics of natural language is, as usually assumed, to explicate meanings, and yet it is impossible to provide meanings because only expressions count, then logical semantics of natural language cannot really explicate meanings – contrary to the assumption.

There is an indirect argument for Translation Thesis (cf. Peregrin 1993: 75-76); it runs as follows. It is claimed that semantics should produce ⟨expression, meaning⟩ pairs. Since the meanings coded by expressions are abstract, language-independent entities, it is clearly impossible to write such pairs down. However, it is not difficult to disarm this argument. The main purpose of language is to direct an audience to certain thoughts (meanings) by means of perceptible items, namely by acoustic or graphical tokens of expressions. Undoubtedly, to direct another semanticist to a certain extralinguistic meaning coded by some natural language expression, one cannot help but use a certain expression (of a formal language). Clearly, one cannot say something without using words; but surely it does not follow from this that we never speak about anything other than words (cf. Tichý 1992: 75-76). Therefore, the pairs of expressions written down by a semanticist are only tools directing the attention of other theoreticians to ⟨expression, meaning⟩ pairs.

Summing up this section: though Translation Thesis − i.e. the thesis that the logico-semantical analysis of a natural language expression ‘E’ consists in its translation into a formal expression ‘ϕ’ − seems to be right, it has quite doubtful consequences. Firstly, when such a translation is offered, one must ask what the meaning of ‘ϕ’ is.
In accordance with Translation Thesis, the analyst should yield some other expression of some (perhaps different) language which has the same meaning, all these expressions thus being intertranslatable. A vicious infinite regress of translations results from this.

4 Of course, unless we endorse a vicious circle by explaining the meaning of ‘E2’ by its intertranslatability with ‘E1’.

Another odd consequence of Translation Thesis is that the analyst is construed as proposing the congruence of two descriptions. The analyst's sentence is thus in fact paraphrased as ‘The meaning of ‘E’ (in the natural language L) is the same as the meaning of ‘ϕ’ (in the formal language Lϕ)’. However, ‘the meaning of ‘E’ (in L)’ does not really present a rigorously modelled meaning of ‘E’ − the meaning of ‘E’ is therefore left unexplicated. Provided that the very project of logico-semantical analysis of natural language is not idle as such, Translation Thesis must be refuted. Hence, when doing logico-semantical analysis of natural language, we do not provide translations.

2. Logico-semantical analysis of the analyst's sentence

Nevertheless, Peregrin suggested another, non-trivial argument supporting Translation Thesis. Though it was directed against Tichý's logical system − transparent intensional logic (henceforth TIL) − used for the hyperintensional analysis of natural language meanings, it can be modified to attack other logical frameworks used for the explication of meaning as well. In its general setting, the argument goes as follows. An expression ‘ϕ’ of a formal language used for the analysis of meaning must be interpreted in order to mean the same (or at least nearly the same) as an expression ‘E’ of the investigated natural language (otherwise the association of ‘ϕ’ with ‘E’ would be quite arbitrary). Here is Peregrin's original version (cf. Peregrin 1993: 75): what makes Tichý's notation, the TIL-language, exactly interpreted is an explicit definition stating that, for instance, ‘λwλt [0Shavewt 0Jane 0Fred]’ signifies the construction of doing so-and-so (I am using ‘signify’ as a neutral semantic term). The strength of the argument can be appreciated when one realizes that Tichý would be the last to deny that TIL-terms signify constructions.
So it seems that a particular juxtaposition of a natural language expression and a TIL-term is justified by the fact that they both signify the same construction, i.e. that they are intertranslatable. In this section I am going to show that the discussed pairs of natural language expressions and TIL-terms are not, despite appearances, intertranslatable, because the two members of these pairs signify a particular construction in entirely different ways. The justification of such juxtapositions of natural language expressions and TIL-terms recording their logico-semantical analyses is rooted in the fact that each of them signifies, though in a different manner, one and the same construction. An essential part of this explanation consists in the logico-semantical analysis of the analyst's sentence; I will employ TIL for this matter.

Tichý's transparent intensional logic was first formulated by him at the very beginning of the 1970s as a typed λ-calculus (differing from that of Montague in various important respects) and was substantially modified in Tichý's late book (see Tichý 1988 for a rigorous exposition of current TIL). The atomic types of TIL include individuals, truth-values, possible worlds, and real numbers (used also for the representation of time-moments). Intensions are (often partial) functions from world-time pairs to objects of a certain type. For instance, propositions are intensions having truth-values as their values; properties of individuals are intensions having classes of individuals as their values. Intensions are the denotata of “empirical” expressions. For instance, typical sentences denote propositions; typical monadic predicates denote properties, etc. On the other hand, numerals denote numbers; proper names denote individuals, etc.

Every entity is constructible by (infinitely) many abstract procedures, called by Tichý constructions. Unlike intensions, constructions are not set-theoretical entities; constructions are close to algorithms (algorithmic computations). Constructions are typically structured (the way functions are not). Since constructions are suggested to be (explications of) meanings of expressions, the semantic scheme has a procedural, hyperintensional level:

an expression ‘E’
  expresses (means): the construction C (which constructs the denotatum)
  denotes (names): the denotatum D (an intension or a non-intension)5

5 It should be added here that some constructions are abortive: they construct nothing (e.g., [0÷ 03 00]).

The value of an intension denoted by an expression in a given possible world W and time-moment T is called the referent of that expression in W, T (to ascertain the referent of such an expression, one has to empirically investigate the state of affairs). Constructions are usually recorded by some kind of λ-terms, because these are capable of faithfully depicting constructions. (I will use a slightly simplified notation of contemporary users of TIL.) It is not inconvenient to view constructions as objectual correlates of these TIL-terms. (Thus realize clearly that constructions are not expressions of any particular formal language; notice also that the TIL-language has a “fixed interpretation”.) We may also say that a construction is that abstract, language-independent entity which combines the functions and non-functions denoted by the sub-expressions of a certain expression into a unit, a complex whole.

Here are several semantical statements refined within the TIL framework. An expression ‘E2’ of a language L2 is the translation of ‘E1’ of L1 iff ‘E2’ expresses in L2 the same construction as ‘E1’ expresses in L1. Two expressions are synonymous (typically in one language L) iff they both express one and the same construction. Two expressions are equivalent (typically in one language L) iff they express (in L) constructions C1 and C2 which construct one and the same object (such two constructions C1 and C2 may be called equivalent constructions).

Before we proceed further, a few words are needed about the constructions which are the objectual correlates of constants, the so-called trivializations. A slightly simplified definition: a construction which picks out an object X and leaves it, without any change, as it is, is called the trivialization of X; it shall be recorded by ‘0X’. If X is, for instance, Fido, the construction 0Fido constructs simply Fido. If, on the other hand, X is a construction C, then 0C constructs this construction C (0C picks out C and leaves it as it is).
Consider a particular example, namely the construction [0+ 02 03], which is the logico-semantical analysis of the expression ‘2+3’. This construction constructs in the following way: it takes the addition function, it takes the pair (or string) of numbers, and it applies the former to the latter. The result of the constructing of [0+ 02 03] is the number 5 (which is the denotatum of ‘2+3’). Note carefully that the trivialization of [0+ 02 03], i.e. 0[0+ 02 03], constructs the construction [0+ 02 03], not the number constructed by [0+ 02 03]. We may say that the constructing of C is “blocked” if C occurs in 0C.

Note that trivializations are indispensable for the correct analysis of expressions such as ‘Xenia computes 2+3’. This sentence describes an agent as being related to some procedure − not to the result of this procedure (the number 5). Thus λwλt [0Computewt 0Xenia [0+ 02 03]] is a wrong analysis; the constructing of the construction [0+ 02 03] must be “stopped” by trivialization, i.e. only λwλt [0Computewt 0Xenia 0[0+ 02 03]] is the correct analysis. Another relevant example is provided by explicit belief sentences. Since the agent is not conscious of all logical consequences of the content of her belief, she is not directed towards a mere proposition, but to a particular construction of that proposition (one needs to “stop” the constructing of that propositional construction by trivialization).

We have said that, according to TIL, the meanings of expressions are constructions − not intensions or other kinds of denotata. A TIL-analyst's sentence is thus of the form: The meaning of (the expression) ‘E’ is (the construction) C. Let us provide its logico-semantical analysis. The expression ‘the meaning of’ which is used in this sentence denotes the relation “(the) meaning of”, which is applied to an expression and a construction. I will assume the standard praxis of representing expressions by Gödelian numbers yielded via a particular Gödelization, i.e. 0g(‘E’) constructs the Gödelian number of ‘E’. Further, since the second relatum of the relation “the meaning of” is a construction C, we have to pick out this construction C as it is. Thus we have to deploy, on the level of meaning, the trivialization of C, namely 0C, for it is the construction C itself that is “conceptually grasped” here, i.e. picked out by the trivial, one-step procedure 0C. The construction expressed by the TIL-analyst's sentence is thus:

λwλt [0TheMeaningOf 0g(‘E’) 0C]
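The “blocking” behaviour of trivialization can be caricatured operationally. The following toy interpreter is my own sketch, not Tichý's formal apparatus: it models only two kinds of constructions (compositions and trivializations) and ignores types, possible worlds, and times entirely:

```python
# A minimal operational sketch (my own toy model, not Tichý's definition):
# a Comp applies what its first constituent constructs to what the others
# construct; a Triv picks out its object "as it is", so an embedded
# construction is NOT executed — its constructing is "blocked".
from dataclasses import dataclass

@dataclass(frozen=True)
class Triv:                 # 0X: one-step procedure returning X unchanged
    obj: object

@dataclass(frozen=True)
class Comp:                 # [F A1 ... An]: application of a function to arguments
    fn: object
    args: tuple

def execute(c):
    if isinstance(c, Triv):
        return c.obj                                   # blocking: obj is left as it is
    if isinstance(c, Comp):
        return execute(c.fn)(*[execute(a) for a in c.args])
    raise TypeError("not a construction in this toy model")

plus = Triv(lambda x, y: x + y)                        # 0+
add23 = Comp(plus, (Triv(2), Triv(3)))                 # [0+ 02 03]
print(execute(add23))                                  # 5 — the denotatum of '2+3'
print(execute(Triv(add23)) is add23)                   # True — 0[0+ 02 03] yields the construction itself
```

Executing [0+ 02 03] yields 5, while executing its trivialization yields the construction itself unchanged — which is why only the trivialized analysis is adequate for ‘Xenia computes 2+3’.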

This construction constructs an analytically true or an analytically false proposition.6

6 Since no expression has its meaning absolutely, the analyst's sentence has to relate ‘E’s having a particular meaning to a particular language L. Let L be a function from (Gödelized) expressions to (first-order) constructions, i.e. the first-order code of English (whereas English is construed as synchronically given). The proper analysis of the analyst's sentence is then λwλt [0TheMeaningOfIn 0g(‘E’) 0C 0L] (this analysis was published by the present author in one of his articles written in Czech in 2005).

Now we are ready to compare the meaning of an expression such as ‘Fido is a dog’ and a TIL-term displaying its logico-semantical analysis (e.g., in a particular TIL-analyst's sentence). In a diagram:

‘Fido is a dog’                          ‘λwλt [0Dogwt 0Fido]’
  expresses: λwλt [0Dogwt 0Fido]           expresses: 0[λwλt [0Dogwt 0Fido]]
  denotes: that proposition                denotes: λwλt [0Dogwt 0Fido]

We have said above that two expressions of two languages are intertranslatable iff they express (mean) in these languages one and the same construction. It is quite clear from the diagram that ‘Fido is a dog’, originally belonging to ordinary English, expresses a construction entirely distinct from that expressed (within the enhanced English) by the TIL-term ‘λwλt [0Dogwt 0Fido]’. The former expression expresses a certain construction C, while the latter expression expresses the trivialization of this construction C, i.e. 0C.7 (Notice also that the justification of the respective pair is possible due to the fact that the TIL-term denotes the construction C − while expressing its trivialization 0C − whereas C is expressed by that natural language expression.) Hence, the two expressions, which are expressive of distinct constructions, cannot be intertranslatable. The aforementioned argument in favour of Translation Thesis thus falls through. Therefore, we have sustained the thesis that the aim of logico-semantical analysis of (natural) language is to provide ⟨expression, meaning⟩ pairs (which are recorded as ⟨expression, TIL-term⟩ pairs).

7 Not only are the two expressions in question not synonymous or equivalent, the constructions C and 0C expressed by them are even categorically different − they belong to different types (for good reasons, constructions are stratified into distinct orders, cf. Tichý 1988: 66). Whereas λwλt [0Dogwt 0Fido] is a first-order construction (belonging thus to the type ∗1), 0[λwλt [0Dogwt 0Fido]] is a second-order construction (belonging to the type ∗2). We may also say that the analyst's sentence belongs to the second-order code (a function from expressions to second-order constructions) of English, which is utilized for commenting on the semantic features of the first-order code of English (remember Tarski's metalanguage/object language distinction on this occasion). For more details please consult the author's (forthcoming) papers providing explications of semantic notions and solutions to semantic paradoxes.

Let us add two comparisons, one with Tichý, the second with the Montagovians. It is interesting to find that Tichý himself rejected the intertranslatability of TIL-terms with the respective natural language expressions: ‘we use the [TIL] formula to name [i.e. to denote] the construction which ... is expressed (not named! [i.e. not denoted]) by the English sentence’, ‘By juxtaposing a formula in that [TIL] notation with an English sentence ... we do not offer the formula as a translation of that sentence’ (Tichý 1980: 352); analogous statements can be found also in Tichý's unpublished monograph Introduction to Intensional Logic, completed in 1976. However, within the simple type-theoretic framework which Tichý used in those times, he could not fully explain the matter. Unlike simple type theory, which cannot classify constructions, only his ramified type-theoretic framework enables us to adequately show the lack of intertranslatability of those expressions (and also to properly analyze the analyst's sentence), due to its explicit treatment of constructions.

Montague (1974) and his followers repeatedly speak about the translation of natural language expressions into the language of Montague's Intensional Logic (IL), suggesting thus that the paired expressions have the same model-theoretic meaning (as is known, IL lacks a hyperintensional level, which is its deficiency). On the other hand, some Montagovians have already conceded that they need not provide real translations: ‘Intensional Logic [serves] as a formalized part of our metalanguage’, whereas our metalanguage contains also ‘ways of referring to object language (e.g., English)’ and ‘Intensional Logic could provide us with names for meanings’ of expressions of the object language − thus Montague's method of translations is not necessary (Dowty, Wall and Peters 1992: 264). It seems that my proposal based on Tichý's ramified type-theoretic framework is in conformity with their portrayal.

3. The conceptual role of formal language used within enhanced natural language

One might perhaps doubt the logico-semantical analysis of the analyst's sentence given above, demanding an independent explanation of why natural language expressions are not intertranslatable with the expressions of the formal language used by a semanticist − that is, an explanation of why there is such an incommensurability of the formal language utilized for the logico-semantical analysis of a natural language (to which the formal language is appended) with the original part of that natural language.

Let me first introduce a simple test showing that the expressions belonging to the formal language used by the semanticist for the explication of meanings play a role principally different from that of the original, usual expressions of the natural language. Within the enhanced natural language, ordinary expressions occur sensibly only in the supposition usualis (as I shall call it). On the other hand, the formal expressions that we have embedded into that language for the explication of the meanings of its original expressions have a conceptual role; thus they occur sensibly in the supposition conceptualis. Usual expressions such as ‘Fido’ or ‘canine’ serve us for talking about ordinary things such as individuals, properties, etc., so we can expand them into their synonymous mates such as ‘(the) individual Fido’, ‘(the) property canine’.
Formal expressions such as ‘λwλt [λx [0Caninewt x]]’ can be expanded into their synonymous mates such as ‘the construction λwλt [λx [0Caninewt x]]’; the prefix ‘construction’ thus indicates the conceptual character (and so the occurrence in the supposition conceptualis) of the subsequent expression.
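The expansion test can be caricatured as a lookup of each expression's role in the enhanced language. The role assignments and prefix table below are my own illustrative labels, not the paper's data:

```python
# Toy version of the supposition test (labels and examples are my assumptions):
# within the enhanced language, an occurrence is sensible only when the prefix
# matches the kind of entity the expression serves to talk about.
ROLE = {
    "canine": "usualis",                        # ordinary predicate of English
    "λwλt [λx [0Caninewt x]]": "conceptualis",  # TIL-term appended for explication
}
PREFIX_FOR = {"usualis": "the property", "conceptualis": "the construction"}

def sensible(prefix, expr):
    """'the property E' / 'the construction E' makes sense iff prefix fits E's role."""
    return PREFIX_FOR.get(ROLE.get(expr)) == prefix

print(sensible("the property", "canine"))                          # True
print(sensible("the construction", "canine"))                      # False — nonsense
print(sensible("the construction", "λwλt [λx [0Caninewt x]]"))     # True
```

The design mirrors the paper's claim that the two kinds of expressions have disjoint semantic properties: no expression is sensible under both prefixes.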

It is then easy to ascertain that the expressions of a formal language (viz. the TIL-language) are meaningful in the enhanced natural language only in the supposition conceptualis (compare ‘the construction λwλt [λx [0Caninewt x]] is a construction of the kind closure’), while being nonsensical in the supposition usualis (‘the property λwλt [λx [0Caninewt x]] is instantiated by Fido’). On the other hand, original expressions of natural language such as ‘canine’ are nonsensical when occurring in the supposition conceptualis (‘the construction canine is a construction of the kind closure’), while being meaningful in the supposition usualis (‘the property canine is instantiated by Fido’). Thus the two kinds of expressions have clearly distinct semantic properties, and one cannot insist, in order to defend Translation Thesis, on the possibility that TIL-terms can denote – in some contexts – ordinary things such as individuals, properties, etc., so that they could be intertranslatable with the expressions (usually) denoting those ordinary things.

One may ask why it is that the formal language used by a semanticist, the language whose coding means were explained to us in English, is not intertranslatable with English when the formal language is used for the explication of the coding means of English.8 To elucidate this matter, recall the situation when you were introduced to (typed) λ-calculus (if you were not, imagine that you were). You were told, for instance, that its expression ‘[+ 2 3]’ means the application of addition to two and three. This explanation was performed in English. On that occasion, you were investigating λ-calculus by means of English, which served for the explication of the meanings of λ-terms. Let us call the language under inspection an investigated language, and the language used for the explication of the semantic features of the investigated language an explicative language.

8 Of course, I am omitting here that the semantics of formal languages is usually given model-theoretically. We may admit that a basic description of the semantics of formal expressions can be given in a natural language, and that the model-theoretical jargon is in a sense a shorthand presentation of matters which could be given in the natural language as well. (Working with a variety of possible interpretations, so frequent within the area of mathematical logic, is irrelevant here; we discuss a formal language with a fixed interpretation. Church's own explanation of the semantics of typed λ-calculus is admittedly the best example of the matter I intend to discuss here.)

Since the notation of λ-calculus fittingly codes procedures-meanings (intended to be communicated by means of that language), it was suggested to use this formal language for the explication of the coding means of natural languages, which are admittedly less transparent. On this occasion, the investigated language is English, not λ-calculus, and the explicative language is λ-calculus, not English; i.e. it is λ-calculus which is utilized to transparently display the meanings (procedures) coded by English expressions. Thus the role of λ-terms within enhanced English is conceptual, because they serve for certain conceptual tasks.9 This explains why the role of formal expressions belonging to the explicative language within the investigated language is not usual (it is conceptual), while the role of original expressions within the investigated language remains usual. In consequence, one should not expect that expressions having such principally distinct roles and semantic features (formal expressions are meaningful only in the supposition conceptualis, while natural expressions are meaningful only in the supposition usualis) would be intertranslatable.

Finally, let us complete the overall picture of the aim of logico-semantical analysis of natural language. The purpose of an explication is to provide rigorous concepts where there were only (often vague, imprecise) pre-theoretical notions before. Users of a natural language surely do understand its expressions. However, they do so only pre-theoretically. A logical semanticist speaking this natural language is somebody who grasps the meanings coded by its expressions as well. Yet she or he goes further and offers their exact, rigorous explicans.
The situation is analogous to that in physics: its aim is to explain, for instance, which physical laws and forces are needed to ride a bike; on the other hand, cyclists, and physicists among them, can ride bikes without rigorous theoretical knowledge of how they do it. A theoretician who investigates English can use TIL for the explication of the meanings of usual English expressions. On that occasion, TIL-terms serve for the illumination (displaying) of natural language meanings. Thus TIL-terms have a conceptual role, so there is no natural reason to expect that their addition to the investigated language (English, in this case) makes them become its proper, usual part.10

9 For the case of a Montague-like approach, the situation is in principle analogous: the explicative language is a language of model theory with a fragment of ordinary English, which is used for the description of the semantics of two investigated languages, viz. the language of Montague's IL and ordinary English.

References

Dowty, David R., Robert E. Wall and Stanley Peters 1992. Introduction to Montague Semantics. Dordrecht: Kluwer.
Montague, Richard 1974. The Proper Treatment of Quantification in Ordinary English. In: Richmond Thomason (ed.), Formal Philosophy. Selected Papers of Richard Montague, 247-270.
Peregrin, Jaroslav 1993. Is Language a Code? From the Logical Point of View 2, 73-79.
Tichý, Pavel 1980. The Logic of Temporal Discourse. Linguistics and Philosophy 3, 343-369.11
Tichý, Pavel 1988. The Foundations of Frege's Logic. Berlin, New York: Walter de Gruyter.
Tichý, Pavel 1992. The Scandal of Linguistics. From the Logical Point of View 1, 70-80.
Tichý, Pavel 1994. Cracking the Natural Language Code. From the Logical Point of View 3, 6-19.
Tichý, Pavel 1994a. The Analysis of Natural Language. From the Logical Point of View 3, 42-80.
Thomason, Richmond H. (ed.) 1974. Formal Philosophy. Selected Papers of Richard Montague. New Haven, CT: Yale University Press.

10 The author of this article is currently supported by the GAČR grant no. 401/07/P280. The first, rather longer, version of this paper was written in January 2007.

11 All Tichý's published papers are reprinted in: Tichý, Pavel 2004. Pavel Tichý's Collected Papers in Logic and Philosophy. Vladimír Svoboda, Bjørn Jespersen and Colin Cheyne (eds.). Dunedin: University of Otago Press; Praha: Filosofia.

Fabien Schang
University of Nancy 2
[email protected]

Beyond the Fregean Myth: The Value of Logical Values

Abstract: One of the most prominent myths in analytic philosophy is the so-called “Fregean Axiom”, according to which the reference of a sentence is a truth value. In contrast to this referential semantics, a use-based formal semantics will be constructed in which the logical value of a sentence is not its putative referent but the information it conveys. Call the corresponding formal semantics “Question-Answer Semantics” (hereafter: QAS): a non-Fregean many-valued logic, where the meaning of any sentence is an ordered n-tuple of yes-no answers to corresponding questions.
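The abstract's “ordered n-tuple of yes-no answers” can be rendered as a simple bit-tuple. The sketch below is my own toy rendering; the three questions are invented placeholders, since the paper leaves the choice of questions to the application:

```python
# A toy rendering of a QAS logical value (the questions are invented placeholders):
# the value of a sentence is the ordered tuple of yes(1)/no(0) answers it returns
# to a fixed list of corresponding questions.
QUESTIONS = ("asserted now?", "true yesterday?", "true tomorrow?")

def logical_value(answers):
    """Package a list of yes/no answers as an ordered n-tuple of 1s and 0s."""
    assert len(answers) == len(QUESTIONS) and set(answers) <= {0, 1}
    return tuple(answers)

p = logical_value([1, 0, 1])
q = logical_value([1, 0, 1])
print(p)          # (1, 0, 1)
print(p == q)     # True — same answers to the same questions, same logical value
```

On this picture two sentences share a logical value exactly when they return the same answers, which is the informational (rather than referential) identity criterion the abstract announces.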

1. The meaning of meaning

Our preliminary task is a return to the Fregean theory of meaning, where sense and reference are endorsed before being modified in their content. The contribution of Polish logic to the following paper will become clear along the way, both in the critical discussion of Frege's logic and in the constructive part proposing an alternative semantics.

1.1. The “Fregean Axiom”

It is well known that, according to Frege (1892: 110), the meaning of a proper name is given by its sense and its reference. By a proper name, Frege means any expression corresponding to individual (a, b, c, …), predicate (F, G, H, …), or sentential (p, q, r, …) constants. According to his principle of compositionality, the reference (or sense) of a sentential constant p is determined by the references (or senses) occurring in p. So far so good: the reference is that which an expression refers to, and the sense is the way by which the expression comes to refer to it. But the peculiarity of Frege's theory lies in the sense and reference of a sentence. On the one hand, the sense of a sentence is associated with a so-called “proposition” (Gedanke), which is not a grammatical proposition but an enigmatic abstract entity. On the other hand, the reference of a sentence could be expected to be a fact, or state of affairs; but Frege (1892: 34) opts for another entity: a truth value.

So werden wir dahin gedrängt, den Wahrheitswert eines Satzes als seine Bedeutung anzuerkennen. Ich verstehe unter dem Wahrheitswerte eines Satzes den Umstand, daß er wahr oder daß er falsch ist. Weitere Wahrheitswerte gibt es nicht. Ich nenne der Kürze halber den einen das Wahre, den anderen das Falsche. Jeder Behauptungssatz, in dem es auf die Bedeutung der Wörter ankommt, ist also als Eigenname aufzufassen, und zwar ist seine Bedeutung, falls sie vorhanden ist, entweder das Wahre oder das Falsche.1

A consequence of this so-called “Fregean Axiom” (dubbed so by Roman Suszko) is that every true sentence refers to one and the same thing: the True, in the sense that “all true (and, similarly, all false) sentences describe the same state of affairs, that is, they have a common referent” (Suszko 1975: 170). It results in a fully extensional logic endorsing the logical replacement theorem, where any component can be freely substituted by another component with the same truth value without changing the meaning of the composed sentence. The Fregean Axiom may seem unnatural if one tends to link the meaning of a sentence to its subject-matter, i.e. its contentual state of affairs. In this respect, Suszko's view that the reference of a sentence is a situation appears more plausible − two sentences are identical if and only if they refer to the same situation. By a situation, Suszko means the Wittgensteinian Sachverhalt, which turns into a state of affairs (Tatsache) when it makes its corresponding sentence true. The present paper supports Suszko's non-Fregean line while departing from his two-valued logic.

1 The subsequent English translations are generally from Frege (1960): “We are therefore driven into accepting the truth value of a sentence as constituting its reference. By the truth value of a sentence I understand the circumstance that it is true or false. There are no further truth values. For brevity I call the one the True, the other the False. Every declarative sentence concerned with the reference of its words is therefore to be regarded as a proper name, and its reference, if it has one, is either the True or the False”.
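A minimal illustration of the replacement (substitutivity) property that such a fully extensional, two-valued logic licenses — the example sentences and the compound are my own toy choices, not Frege's or Suszko's:

```python
# Toy illustration (encoding mine) of the replacement property of a fully
# extensional two-valued logic: swapping a component for any sentence with the
# same truth value never changes the truth value of the compound.
def compound(p, q):               # any truth-functional compound will do
    return p and not q

p, q = True, False                # two sentences with their truth values
r = (2 + 2 == 4)                  # a quite different true sentence: same referent, the True

print(p == r)                     # True — both refer to "the True"
print(compound(p, q) == compound(r, q))   # True — substitution salva veritate
```

It is exactly this collapse of all true sentences into one referent that Suszko's situation semantics, discussed above, finds implausible.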

1.2. Towards “non-Fregean logics”

Suszko was not the sole logician to be somehow surprised by Frege's sentential reference. The pioneer of tense logic, Arthur Norman Prior, equally assailed the Fregean view that sentences refer to truth values: “The theory with which Frege's name is especially associated is one which is apt to strike one at first as rather fantastic, being usually expressed as a theory that sentences are names of truth values” (Prior 1957: 55). A clear-cut difference should be drawn between Prior's and Suszko's stances, however: the former proposed a many-valued logic for tensed sentences in Prior (1957), while the latter always condemned the introduction of many-valued logics. By constructing new logical values like (10011), where each single (0)-(1) value designates the truth value of a sentence at a definite moment of time, Prior urged the use of further logical values beyond the true and the false, while constructing these new values in combined terms of plain truth (1) and falsehood (0). But Suszko maintained his bivalent position, with strong words against Jan Łukasiewicz: “Łukasiewicz is the chief perpetrator of a magnificent conceptual deceit lasting out in mathematical logic to the present day” (Suszko 1977: 377). According to Suszko, every value beyond the true and the false is not a logical but an algebraic value, i.e. the referent of a sentence. Why does Suszko draw a distinction between logical values and any further value qua sentential referent?2 This may have to do with the structural properties of logic in general: truth is the only value that counts in defining logical consequence, so that any further value is merely counted as false or untrue. But it remains to be seen why the non-classical values are ruled out by him from the logical area.

2 “Thus, the logical valuations and algebraic valuations are functions of quite different conceptual nature. The former relate to the truth and falsity and the latter represent the reference assignments.” (Suszko 1977: 378).
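Suszko's reduction can be sketched on Łukasiewicz's three-valued logic: since validity (and consequence) only ever asks whether a value is designated, the three algebraic values collapse into two logical ones. The encoding below uses the standard Ł3 connectives but is otherwise my own sketch:

```python
# A sketch of Suszko's point using Łukasiewicz's three values {0, 1/2, 1}
# (standard Ł3 connectives; the encoding is mine, not Suszko's notation).
# Validity only ever asks whether a value is designated, so each algebraic
# value collapses, logically, into "designated" vs "undesignated".
from fractions import Fraction
from itertools import product

VALS = (Fraction(0), Fraction(1, 2), Fraction(1))
DESIGNATED = {Fraction(1)}

def neg(a):                       # Łukasiewicz negation: 1 - a
    return 1 - a

def disj(a, b):                   # Łukasiewicz disjunction: max
    return max(a, b)

def impl(a, b):                   # Łukasiewicz implication: min(1, 1 - a + b)
    return min(Fraction(1), 1 - a + b)

def valid(formula, arity):
    """A formula is valid iff it takes a designated value under every valuation."""
    return all(formula(*v) in DESIGNATED for v in product(VALS, repeat=arity))

print(valid(lambda p: impl(p, p), 1))        # True: p -> p holds even at 1/2
print(valid(lambda p: disj(p, neg(p)), 1))   # False: excluded middle fails at 1/2

# The bivalent "logical valuation" behind the scenes: designated or not.
print([(str(a), a in DESIGNATED) for a in VALS])   # [('0', False), ('1/2', False), ('1', True)]
```

The last line is the two-valued classification Suszko calls the properly logical valuation; the three fractions are, on his view, mere referents.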

1.3. Between Frege and Suszko: a Question-Answer Semantics

Suszko went on to say that “any multiplication of logical values is a mad idea and, in fact, Łukasiewicz did not actualize it” (Suszko 1977: 378). Such a statement is as surprising as it is right, if one takes a logic to be two-valued whenever the relation of logical consequence is identified by only two logical values: one designated value (the true) and one non-designated value (the untrue). By doing so, Suszko argued that any supplementary value beyond the true could be equated with a non-designated value (including Łukasiewicz's third value of indeterminacy). But we disagree with Suszko in this respect: not only may logical consequence be characterized in more than one model-theoretical way, i.e. not merely in monovalent terms of truth-preservation;3 but we also maintain with Frege that logical values are sentential referents. In a nutshell: the semantics to come appears as a trade-off between Frege's and Suszko's views of reference. For one thing, our insistence upon the question-answer game leads to a semantics that squares with Frege's theory of judgment but departs from his two-valued characterization of any judgeable content (i.e. the sense of a sentence, or proposition). Similarly, it agrees with Suszko's view that there can be more than two referents or semantic correlates for sentences, while arguing against him that these so-called algebraic values are properly logical values. We borrow this semantics from Stanisław Jaśkowski's technique of product systems, where logical values are an n-ordered combination of the classical values 1 and 0 that is reminiscent of Prior's tensed values.4 Let us call the subsequent formal semantics Question-Answer Semantics (hereafter: QAS): questions give the sense of a sentence, and answers convey its reference. Questions and answers essentially contribute to the meaning of a sentence in any scientific inquiry, as argued in Frege (1918: 62-3):


3 The first author to have qualified the Tarskian characterization of logical consequence is Grzegorz Malinowski, according to whom the latter need not be defined in strict terms of truth-preservation. See Malinowski (1990).
4 About the origin of product systems in many-valued logics, see Jaśkowski (1936).


Ein Fortschritt in der Wissenschaft geschieht gewöhnlich so, daß zuerst ein Gedanke gefaßt wird, wie er etwa in einer Satzfrage ausgedrückt werden kann, worauf dann nach angestellten Untersuchungen dieser Gedanke zuletzt als wahr erkannt wird. In der Form des Behauptungssatzes sprechen wir die Anerkennung der Wahrheit aus.5

And just as in Frege’s theory of judgment, a difference is to be made between the sentential content of a judgment and the judgment itself: the thought that is expressed by a declarative sentence is primarily considered and, then, judged to be true or false by a thinking subject. Thus in Frege (1919: 143): Eine Satzfrage enthält die Aufforderung, einen Gedanken entweder als wahr anzuerkennen, oder als falsch zu verwerfen. (…) Die Antwort auf eine Frage ist eine Behauptung, der ein Urteil zu Grunde liegt, und zwar sowohl, wenn die Frage bejaht, als auch wenn sie verneint wird.6

Unlike Frege, we claim that the so-called reference of a sentence, while being a logical value, is not a truth value (between truth and falsehood) but an answer to an initial question (between yes and no). We depart from Frege’s theory of judgment in at least two respects: on the one 5


5 “The scientific procedure usually includes a number of steps. A thought is conceived, first, which may be stated in an interrogative sentence; then, at the end of an inquiry, this thought is recognized as true. The recognition of truth is finally expressed in the form of the affirmative sentence.”
6 “An interrogative sentence contains the request to recognize a thought as true, or to reject it as false. (…) The answer to a question is an assertion based on a judgment, whether the question receives a positive answer or a negative answer.” Hence the Fregean split of a declarative statement into three main steps, in Frege (1918: 62): “Wir unterscheiden demnach: 1. das Fassen des Gedankens - das Denken. 2. die Anerkennung der Wahrheit eines Gedankens - das Urteilen, 3. die Kundgebung dieses Urteils - das Behaupten” (“We thus distinguish between: 1. the conception of the thought - thinking. 2. the recognition of the truth of a thought - judging, 3. the manifestation of this judgment - asserting.”) Frege refers only to complete interrogative sentences without interrogative pronouns (who, where, what, etc.), answered with either yes or no.

hand, not every judgment is an assertion, contrary to what the German logician assumed throughout his works. Frege’s identity (judgment = assertion) is mainly due to the activity of scientific investigation, which purports to attain the truth. A corollary of the preceding difference is that, on the other hand, not every denial is a negative assertion. While Frege took truth to be an ideal object which the scientist strives to reach, nothing prevents one from questioning this Platonist picture and preferring a pragmatist depiction of truth as a common agreement within a scientific community. If so, then truth is not a mythical object but a down-to-earth construction that is expressed through an affirmation and relies upon the speaker’s arguments.

Let AR4 = 〈L, Q, M〉 be a logic of acceptance and rejection for this non-Fregean view of truth. Its structure includes a formal language L of sentential variables Var = {p1, …, pn, q1, …, qn, …}, a proposition-forming operator of question Q upon L, and a matrix M that is an interpretation model of L. The matrix M includes:
- a set of logical constants: n for negation, ∧ for conjunction, ∨ for disjunction, and → for the conditional;
- an interpretation function A from L to A4 that turns any question Q(p) into a statement (or judgment) A(p);
- a set A4 of four logical values, and a subset of two designated values.

Following Frege, each statement about p is an answer to a corresponding question about its sentential content. But unlike Frege, not every judgment is an assertion in AR4. The question Q that any thinker asks about the thought that p is a twofold one: Q(p) = 〈q1;q2〉, where q1 = “do I hold p to be true?” and q2 = “do I hold p to be false?”.7 The ensuing answer A(p) = 〈a1(p);a2(p)〉 includes either an affirmation expressing acceptance, “yes” (a(p) = +), or a denial expressing rejection, “no” (a(p) = −). The four logical values are thus a combination of answers to

7 Another formulation for q2 is “do I hold not-p to be true?”, where the truth of not-p is not equivalent to the untruth of p. About the resulting distinction between inconsistency and incoherence, see Schang (2009).

questions about a sentence. These correspond to a variety of judgments: positive assertion for A(p) = 〈+;−〉 = 1, conjecture for A(p) = 〈+;+〉 = 2/3, doubt for A(p) = 〈−;−〉 = 1/3, and negative assertion for A(p) = 〈−;+〉 = 0. While we agree with Frege that

Eine Satzfrage enthält die Aufforderung, einen Gedanken entweder als wahr anzuerkennen, oder als falsch zu verwerfen,

we find it oversimplifying to add that

Die Antwort auf eine Frage ist eine Behauptung, der ein Urteil zu Grunde liegt, und zwar sowohl, wenn die Frage bejaht, als auch wenn sie verneint wird.8

For the content of a question may be denied without its opposite being thereby affirmed: the third logical value of doubt means that the thinker denies both the truth and the falsehood of a sentential content, but that does not mean that the sentence is neither true nor false per se. The Fregean Axiom led to this objectivist myth of truth values as embodying the referents of sentences; but nothing compels one to accept such a realist view of logical values, and our own interpretation consists in viewing them as mere epistemic attitudes without any ontological commitment. Hence the ensuing distinctions between:
- judgment and assertion: the latter is just one among four possible sorts of judgment A(p);
- assertion and affirmation: assertion is just one sort of affirmation, in addition to the weaker answer of conjecture; Behauptung is read as a synonym of assertion in Frege’s texts (see note 8), whereas in AR4 it is read as a mere yes-answer a = + beyond the particular case of assertion 〈+;−〉;
- denial and negation: the former concept is a no-answer, whereas the latter commonly refers to the sentential content and expresses a thought rather than an answer

8 “An interrogative sentence contains a demand that we should either recognize the truth of a thought, or reject it as false. (…) The answer to a question is an assertion based on a judgment; this is so equally whether the answer is affirmative or negative.”

about it. Following the terminology of Searle and Vanderveken (1985), denial is an illocutionary negation and negation is a locutionary operator; but both equally come from the Latin verb negare, which means “denying” or “saying no”.

2. Meaning in use

The way in which meaning can be redefined is related to the way in which judgments can be used. Let us return to Frege’s arguments for his minimal theory of judgment, before turning to our own account, which pays a good deal of attention to the concept of negation.

2.1. Negation and denial

Needless to say, Frege’s logic is focused on the foundation of mathematics and, consequently, on the use of declarative sentences. Frege repeatedly said that assertion purports to tell the truth by means of such a sentence, so that any other linguistic use is to be ruled out from his mathematical logic. But the very practice of scientific research may lead to a more fine-grained theory of judgment, especially concerning the role of negation. Frege (1919) attempted to answer two intertwined questions. The first one is about the different sorts of judgments:

Gibt es zwei verschiedene Weisen des Urteilens, von denen jene bei der bejahenden, diese bei der verneinenden Antwort auf eine Frage gebraucht wird? Oder ist das Urteilen in beiden Fällen dasselbe?9

The second one refers to the status of negation in a judgment:

Gehört das Verneinen zum Urteilen? Oder ist die Verneinung Teil des Gedankens, der dem Urteil unterliegt?10


9 “Are there two different modes of judgment, the one being employed when the answer is yes and the other when the answer is no? Or is the judgment the same in both cases?”
10 “Does denial belong to the judgment? Or is denial a part of the thought that underlies the judgment?”

As regards the former question, Frege claimed that there is only one sort of judgment: assertion. Given this preliminary answer, he replied to the latter that negation is a property of the sentential content only, thus making irrelevant any distinction between affirmative and negative judgments. The Fregean Begriffsschrift intended to bring out the assertive force of a judgment by means of the vertical stroke |, in addition to the horizontal stroke – for sentential contents. Thus ⊢ p means that the thought (or proposition) that p is asserted by the speaker; besides, the view that there could be only one sort of judgment entails that any negative judgment amounts to a negative assertion: ⊢ ¬p. Turning the strokes into capital letters, let us symbolize by A and R the opposite attitudes of acceptance and rejection, with A(p) for a1(p) = + and R(p) for a1(p) = −.11 Accordingly, Frege claims that R(p) and A(¬p) do not make any difference, since whoever rejects or denies p thereby affirms its negation ¬p. A lexical way to make this point is to argue that every denial is an affirmation like “It is false that p”, where being false for a thought p means that the opposite thought ¬p is true. This so-called Equivalence Thesis has been challenged by Parsons (1984), and we do the same here: every negative assertion is a denial, but the converse need not hold.12



11 Our symbolism makes it clear that denying p need not be the same as affirming its sentential negation ¬p: it merely means a no-answer concerning the truth of p. Any conflation of R(p) and A(¬p) comes from the assumption of bivalence, and the latter is not assumed in our four-valued logic. In symbols: A(¬p) → R(p) is valid in AR4, whereas R(p) → A(¬p) is not; see section 2.2.
12 Vernant (2003) also called this inference “Russell’s law”; Russell himself already argued that not every sentence is either asserted or denied by a speaker. But what prevented Russell from taking a step further was his contemporary struggle against psychologism: “Logically speaking, the notion of denying a proposition p is not relevant; only the truth of non-p concerns logic” (Russell 1904: 41). The following attempts to show that such a distinction between asserting non-p and denying p does not lead to psychologism altogether.

2.2. The variety of negations

A distinction between denial and negation had already been claimed throughout the history of logic: from the four categoricals in Aristotle’s logic to illocutionary forces in Searle and Vanderveken (1985), through Arnauld and Nicole’s theory of judgment, the manifold use of negation was suddenly passed over in silence with the rise of mathematical logic in the late 19th century.13 Actually, the reduction of logical negation to the Stoic sentential negation seems to be counterbalanced by the very use of negation expressions in natural language. Moreover, the same can be said within the practice of science: despite Frege’s view that only sentential negation matters for the scientific language of truth-search, assumptions equally count in addition to axioms in the elaboration of reasoning and should not be presented in an assertive form. If so, why did Frege restrict denial to assertive negation? Parsons (1984) notes that he did so mainly for the sake of notational economy: the simpler a logical symbolism is, the more valuable it is. Frege (1919: 155) clearly argues for this connection between simplicity and efficiency:

Bei der Annahme von zwei verschiedenen Weisen des Urteilens haben wir nötig: 1. die behauptende Kraft im Falle des Bejahens, 2. die behauptende Kraft im Falle des Verneinens, etwa in unlöslicher Verbindung mit dem Worte falsch, 3. ein Verneinungswort wie nicht in Sätzen, die ohne behauptende Kraft ausgesprochen werden. Nehmen wir dagegen nur eine einzige Weise des Urteilens an, so haben wir dafür nötig: 1. die behauptende Kraft, 2. ein Verneinungswort. Eine solche Ersparung zeigt immer eine weitergetriebene Zerlegung an, und

13 Vanderveken claimed that his formal semantics cannot make a crucial use of logical values, because these denotations are irrelevant to the meaning of a speech act. But he says so because of his natural assumption of the Fregean Axiom, according to which logical values cannot be but truth values. Our rejection of the Fregean Axiom does justice to algebraic semantics and logical values.

diese bewirkt eine klarere Einsicht.14

On the contrary, we see in the Fregean truth-valuations a reductive limitation in the expression of judgments. The simplicity of one single judgment (i.e. assertion) does not entail an efficient account of how negation is used in our daily judgments. In order to show the explanatory value of our logical values and to defend the use of a negation ‘ohne behauptende Kraft’, let us exemplify the results of AR4 and its applications in philosophy. A first application is an investigation into the meaning of this peculiar logical constant: negation. Whereas Frege only paid attention to the classical sentential negation, other logical uses of negation may be rendered within the conceptual frame of QAS and our logic of acceptance and rejection. Just as Parsons (1984: 140) argued that Frege (1919) “limits his argument to sentences which have truth values (…) sentences or propositions without truth values are exactly the cases in which [the Equivalence Thesis] is most doubtful”, our answer-values help to interpret the non-classical values (those beyond truth and falsehood) in a more intuitive way that avoids any mention of truth values. Thus, a “gappy” sentence p is taken to be neither true nor false when the answerer denies both q1 and q2, that is: A(p) = 〈−;−〉 = 1/3; and a “glutty” sentence p is taken to be both true and false when the answerer affirms both q1 and q2, that is: A(p) = 〈+;+〉 = 2/3. Correspondingly, a difference between the logical values does not entail any difference in the general features of logical negation: it does not turn the true into the false in AR4, insofar as truth and falsehood no longer appear in the logical values but are contained in the questions, and thus contribute to the sense of a sentence (rather than its reference). In fact, logical negation proceeds by reversing the ordered pair of a logical value:

To the contrary, we see in the Fregean truth-valuations a reductive limitation in the expression of judgments. The simplicity of one single judgment (i.e. assertion) does not entail an efficient account of how negation is used in our daily judgments. In order to show the explanatory value of our logical values and to defend the use of a negation ‘ohne behauptende Kraft’, let us exemplify the results of AR4 and its applications in philosophy. A first application is an investigation into the meaning of this peculiar logical constant: negation. Whereas Frege only paid attention to the classical sentential negation, other logical uses of negation may be rendered within the conceptual frame of QAS and our logic of acceptance and rejection. Just as Parsons (1984: 140) argued that Frege (1919) “limits his argument to sentences which have truth values (…) sentences or propositions without truth values are exactly the cases in which [the Equivalence Thesis] is most doubtful”, our answer-values help to interpret the non-classical values (those beyond truth and falsehood) in a more intuitive way that avoids any mention of truth values. Thus, a “gappy” sentence p is taken to be neither true nor false when the answerer denies both q1 and q2, that is: A(p) = 〈−;−〉 = 1/3; and a “glutty” sentence p is taken to be both true and false when the answerer affirms both q1 and q2, that is: A(p) = 〈+;+〉 = 2/3. Correspondingly, a difference between the logical values does not entail any difference in general features of logical negation: it does not turn the true into the false in AR4, insofar as truth and falsehood do not appear any longer in the logical values but are contained in the questions and then contribute to the sense of a sentence (rather than its reference). In fact, logical negation proceeds by reversing the ordered pair of a logical value: 14

14 “Under the assumption of two ways of judging there must be: 1. the assertive force for affirmation, 2. the assertive force for denial, inextricably related with the word false, 3. a negative word like not in sentences expressed without assertive force. If instead we adopt only one way of judging, then there must be: 1. the assertive force, 2. one negative word. Such an economy is always the sign of a more penetrating analysis, thus yielding a clearer insight.”

For any pair A(p) = 〈a1(p);a2(p)〉, A(¬p) = 〈a2(p);a1(p)〉.

This can explain why intuitionist logic invalidates excluded middle or double negation, or why paraconsistent logic invalidates Duns Scotus’s law or disjunctive syllogism: their divergent view of truth is such that their subset of logical values does not result in a designated value under every interpretation of the classical validities.15

2.3. Change in meaning

A second efficient application of our logical values is the theory of opposition, recalling at the same time the four categorical sentences of Aristotelian traditional logic. On the one hand, an objection can be made to our preceding logical values and, especially, to the values of conjecture and doubt: there hardly seems to be any difference between affirming or denying both the truth and falsehood of a sentence, in the sense that they commonly lead to a similar state of indecision. This requires a change in the characterization of the logical values in QAS. On the other hand, this can be done if we change the content of the questions Q. Instead of the two preceding questions, another ordered set of three questions can be suggested: q1(p) = “is there no evidence for p?”, q2(p) = “is there some (but not all) evidence for p?”, and q3(p) = “is there all evidence for p?”. This results in an enlarged set of eight logical values A(p) = 〈a1(p);a2(p);a3(p)〉. This device allows us to make a preliminary distinction between conjecture, A(p) = 〈−;+;−〉, and doubt, A(p) = 〈−;−;−〉; more interestingly, it also helps to give a recursive specification of logical oppositions. Following the seminal works of Piaget and Gottschalk,16 our logical values can be used to express oppositions by means of various negations.


15 Each of these non-classical logics is characterized by a restricted range of logical values for their sentences. Thus intuitionist (gappy) logic has a restricted domain of valuation A3 = {0, 1/3, 1}, with designated subset {1}, while paraconsistent (glutty) logic has another restricted subset A3 = {0, 2/3, 1}, with designated subset {1, 2/3}. But logical negation proceeds in a uniform way throughout these logical systems. See Schang (2009).
16 See Piaget (1949) and Gottschalk (1953).
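Since the value set is finite, the behaviour of this pair-reversal negation can be checked mechanically. The following Python sketch is our own illustration (not part of the paper): it encodes the four answer pairs, verifies that negation swaps the two assertions while leaving the gap and the glut fixed, that it is involutive, and that the restricted "gappy" and "glutty" domains mentioned in note 15 are closed under it.

```python
# Illustrative sketch (not the authors' implementation): the four AR4
# values as ordered pairs of answers, with negation as pair reversal.
PLUS, MINUS = "+", "-"

# A(p) = <a1(p); a2(p)>: answers to "do I hold p true?" / "do I hold p false?"
VALUES = {
    (PLUS, MINUS): "1",     # positive assertion
    (PLUS, PLUS): "2/3",    # conjecture (glut)
    (MINUS, MINUS): "1/3",  # doubt (gap)
    (MINUS, PLUS): "0",     # negative assertion
}

def negate(a):
    """Logical negation reverses the ordered pair: A(not-p) = <a2(p); a1(p)>."""
    a1, a2 = a
    return (a2, a1)

# Negation swaps positive and negative assertion...
assert VALUES[negate((PLUS, MINUS))] == "0"
assert VALUES[negate((MINUS, PLUS))] == "1"
# ...leaves the gap and the glut fixed...
assert negate((MINUS, MINUS)) == (MINUS, MINUS)
assert negate((PLUS, PLUS)) == (PLUS, PLUS)
# ...and is involutive on every value.
assert all(negate(negate(a)) == a for a in VALUES)

# The restricted domains of note 15 are closed under this uniform negation:
gappy = {(MINUS, PLUS), (MINUS, MINUS), (PLUS, MINUS)}   # {0, 1/3, 1}
glutty = {(MINUS, PLUS), (PLUS, PLUS), (PLUS, MINUS)}    # {0, 2/3, 1}
assert {negate(a) for a in gappy} == gappy
assert {negate(a) for a in glutty} == glutty
```

The checks confirm that one uniform operation covers classical, gappy, and glutty negation, as the footnote claims.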

Let A(p) = 〈−;−;+〉, meaning that there is all evidence for p. Assuming that this logical value is an appropriate counterpart of necessary truth, we can reconstruct the Aristotelian square of modalities in this way and compare it with Aristotle’s four categorical (non-modal) sentences. The contrary opposite of necessary truth is necessary falsehood, or impossibility, according to which there is no evidence for p: A(p) = 〈+;−;−〉. Its contradictory opposite is negative possibility, according to which there is either no or some (but not all) evidence for p: A(p) = 〈+;+;−〉. And its subaltern opposite is positive possibility, according to which there is either all or some (but not all) evidence for p: A(p) = 〈−;+;+〉.

We thus obtain a group of four opposite-forming operators OX, where X designates a type of transformation within the theory of opposition. Given that the answers of affirmation (a = +) and denial (a = −) proceed by involution, the denial of a denial is an affirmation17 and gives rise to the following group of oppositional transformations.

17 The entrenched rules of bivalence and involution are thus preserved in a certain sense within our many-valued logic of acceptance and rejection: every answer is either an affirmation or a denial, tertium non datur; and the denial of a denial is an affirmation. It must be noticed that these properties are properties not of logical negation (which is a sentential operator n) but of denial (the component a = − of a logical value).

Let 〈x;y;z〉 be the general form of an answer A, and let x′ be the denial of a given answer x. Thus we have:
- (Sub)contrariety: O(S)CT(〈x;y;z〉) = 〈z;y;x〉
- Contradiction: OCD(〈x;y;z〉) = 〈x′;y′;z′〉
- Subalternation: OSB(〈x;y;z〉) = 〈z′;y′;x′〉

Applying these dynamic operations helps to overcome one of Frege’s objections to an allegedly unclear distinction between affirmative and negative judgments. Against his view that nothing settles whether “Christ is immortal” or “Christ is not mortal” is an affirmative or a negative judgment, our formal framework replies that a judgment is negative whenever a word like “not” is attached to the main verb of the sentence. Consequently, the judgment “Christ is not mortal” is a no-answer to the question whether Christ is mortal; and conversely, the judgment “Christ is immortal” is a yes-answer to the question whether that Christ is not mortal is true (or, equivalently, that Christ is mortal is false). This difference between negative verbs and negative prefixes is exemplified by the role of privatives in natural grammar; it has been equally brought out by the four categorical sentences in Aristotle’s traditional logic, where the contrary E of the affirmation A is a contraffirmation and not a denial (which is the contradictory O).18 Moreover, our logical values give a value-functional definition of logical oppositions. Such a recursive definition had been impossible in modern logic thus far, given the overwhelming use of the Fregean Axiom and the resulting limitation in the use of logical tools.19 Indeed, the contradictory of a true sentence is determinately given as false,
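These transformations operate on finitely many answer triples, so their behaviour can be verified directly. The Python sketch below is our own illustration (the operator names O_SCT, O_CD, O_SB are ours): it checks the modal square computed in the text and confirms that each operator is an involution and that composing reversal with componentwise denial yields the third transformation.

```python
# Illustrative sketch: the opposite-forming operators on answer
# triples <x; y; z>, with deny() playing the role of the prime (').
from itertools import product

PLUS, MINUS = "+", "-"

def deny(x):
    """Answers are involutive: the denial of a denial is an affirmation."""
    return MINUS if x == PLUS else PLUS

def O_SCT(t):
    """(Sub)contrariety: reverse the triple."""
    x, y, z = t
    return (z, y, x)

def O_CD(t):
    """Contradiction: deny each answer."""
    return tuple(deny(a) for a in t)

def O_SB(t):
    """Subalternation: reverse the triple and deny each answer."""
    x, y, z = t
    return (deny(z), deny(y), deny(x))

necessity = (MINUS, MINUS, PLUS)   # all evidence for p

# The modal square described in the text:
assert O_SCT(necessity) == (PLUS, MINUS, MINUS)   # contrary: impossibility
assert O_CD(necessity) == (PLUS, PLUS, MINUS)     # contradictory: negative possibility
assert O_SB(necessity) == (MINUS, PLUS, PLUS)     # subaltern: positive possibility

# Each operator is an involution, and composing any two yields the third
# (together with the identity, a Klein four-group).
for t in product((PLUS, MINUS), repeat=3):
    for O in (O_SCT, O_CD, O_SB):
        assert O(O(t)) == t
    assert O_CD(O_SCT(t)) == O_SB(t)
    assert O_SCT(O_CD(t)) == O_SB(t)
```

The final loop makes the "group of oppositional transformations" claim concrete: the three operators plus the identity are closed under composition.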


18 The relevance of traditional logic has been supported by Fred Sommers in his term logic, where sentential negation is not the only sort of logical negation. See Englebretsen (1981), where the logical form of contraffirmation is ‘S is not-P’, as opposed to the mere denial ‘S is not P’. To be, or not to be: such is the lexical distinction between affirmative and negative judgments, which means that negative assertions are affirmative judgments with a negative sentential content.
19 An exception is McCall (1967), who suggested a logical characterization of contrariety as a unary modal operator. But he did so still in the Fregean fashion, with single truth values in an infinite-valued matrix.

whereas no definite value could be assigned to the contrary of a false sentence. With the help of our ordered combinations of logical values, such a distinction is now easily given, provided that these are not single truth values. Finally, this new oppositional calculus may lead to a semantic calculus for changing beliefs, since our valuations correspond to belief attitudes. Furthermore, they give a dynamic interpretation of the theory of opposition by turning a belief state into one of its logical opposites: a speaker contradicts another by turning an assertion into a negative conjecture, for instance.

3. Conclusion: the explanatory value of logical values

Our examination of Frege’s theory of sense and reference attempted to establish two main results:
- firstly, to show that the identification of the reference of a sentence with a truth value can be rightly challenged by means of a non-Fregean logic;
- secondly, to construct an algebraic semantics that departs from Frege’s truth-valuations while making use of logical values as the referents of sentences.

The distinction between referential semantics and use-based semantics is therefore tighter than it might appear: the so-called referents of sentences are equally determined by the implicit use of a question-answer game. Such a game has been clearly advocated by Frege as the basis of any scientific practice, but his objectivist myth of truth values prevented him from going beyond the bivalent one. Finally, this paper has opposed two sorts of values for the logical values: an economical value in Frege’s single theory of judgment, where denial is equated with negative assertion; an efficient value in our logic of acceptance and rejection, where the technique of product systems gave rise to a more fine-grained analysis of denial and accounted for the plurality of logical negations.

References

Englebretsen, George 1981. Logical Negation. Assen: Van Gorcum.
Frege, Gottlob 1960. The Philosophical Writings of Gottlob Frege. Max Black and Peter Geach (eds.), Oxford: Blackwell.
Frege, Gottlob 1892. Über Sinn und Bedeutung. French translation in Ecrits logiques et philosophiques (transl. by C. Imbert), Editions du Seuil, 1971, 102-126.
Frege, Gottlob 1918. Der Gedanke. French translation in Ecrits logiques et philosophiques (transl. by C. Imbert), Editions du Seuil, 1971, 170-195.
Frege, Gottlob 1919. Die Verneinung. French translation in Ecrits logiques et philosophiques (transl. by C. Imbert), Editions du Seuil, 1971, 195-213.
Gottschalk, Wilhelm H. 1953. The Theory of Quaternality. Journal of Symbolic Logic 18, 193-196.
Jaśkowski, Stanisław 1936. Investigations into the System of Intuitionist Logic. In: Storrs McCall (ed.), Polish Logic 1920-1939, Oxford: Oxford University Press, 1967, 259-263.
Malinowski, Grzegorz 1990. Q-consequence operation. Reports on Mathematical Logic 24, 49-59.
McCall, Storrs 1967. Contrariety. Notre Dame Journal of Formal Logic 8, 121-132.
Parsons, Terence 1984. Assertion, denial, and the liar paradox. Journal of Philosophical Logic 13, 137-152.
Piaget, Jean 1949. Traité de Logique Opératoire (2nd ed., 1972).
Prior, Arthur Norman 1957. Time and Modality. Oxford: Clarendon Press.
Russell, Bertrand 1904. Meinong’s Theory of Complexes and Assumptions (I, II & III). Mind 13; quoted from Essays in Analysis, ed. D. Lackey, London: G. Allen & Unwin, 1973.
Schang, Fabien 2009. Inconsistent logics! Incoherent logics? The Reasoner 3, 8-9.
Searle, John and Daniel Vanderveken 1985. Foundations of Illocutionary Logic. New York: Cambridge University Press.
Suszko, Roman 1975. Abolition of the Fregean axiom. In: Rohit Parikh (ed.), Logic Colloquium. Symposium on logic held at Boston, 1972-73. Lecture Notes in Mathematics 453, Berlin: Springer-Verlag, 169-239.
Suszko, Roman 1977. The Fregean axiom and Polish mathematical logic in the 1920’s. Studia Logica 36, 377-380.
Vernant, Denis 2003. Pour une logique dialogique de la dénégation. In: Françoise Armengaud, Marie-Dominique Popelard and Denis Vernant (eds.), Du dialogue au texte. Autour de Francis Jacques, Paris: Kimé; available at: http://web.upmf-grenoble.fr/SH/PersoPhilo/DenisVernant/Denegation.pdf

Andrew Schumann
Belarusian State University, Minsk
[email protected]

Modal Calculus of Illocutionary Logic

Abstract: The aim of illocutionary logic is to explain how context can affect the meaning of certain special kinds of performative utterances. Recall that performative utterances are understood as follows: a speaker performs an illocutionary act (e.g. an act of assertion, of conjecture, of promise) with the illocutionary force (resp. assertion, conjecture, promise) named by an appropriate performative verb, in the way of representing himself as performing that act. In this paper I propose a many-valued interpretation of illocutionary forces understood as modal operators. As a result, I build up a non-Archimedean valued logic for formalizing illocutionary acts. A formal many-valued approach to illocutionary logic is offered here for the first time.

1. A non-Fregean formal analysis of illocutionary acts

Conventional logics, including most non-classical logics such as fuzzy logics, paraconsistent logics, etc., satisfy the so-called Fregean approach, according to which the meaning of a well-formed expression (for instance, the meaning of a propositional formula) should depend on the meanings of its components (respectively, on the meanings of its propositional variables); i.e. the meaning of a composite expression should be a function defined inductively on the meanings of its atoms. This feature distinguishes most formal languages from natural language. In living speech practice there are many examples in which the meaning of a composite speech act is not a function defined inductively on the meanings of the elementary speech acts included in that composite expression. This means that a logic of speech acts cannot satisfy the Fregean approach in general. Now let us consider the structure of speech acts and their compositions. We try to show how compositions in natural language (i.e. in speech acts) differ from compositions defined inductively within conventional formal logic. Speech acts, from the logical point of view, are called illocutionary acts. We know that

whenever a speaker utters a sentence in an appropriate context with certain intentions, he performs one or more illocutionary acts. We will denote a simple illocutionary act by F(Φ). This notation means that each simple illocutionary act can be regarded as consisting of an illocutionary force F and a propositional content Φ. For example, the utterance “I promise you (F) to come (Φ)” has such a structure. Usually, the illocutionary force is expressed by performative sentences which consist of a performative verb used in the first or third person present tense of the indicative mood with an appropriate complement clause. In our case we used the example of an illocutionary act with the performative verb “promise.” However, the illocutionary force is not totally reduced to an appropriate performative verb. It also indicates moods of performance (e.g., moods of order are distinguished in the following two sentences: “Will you leave the room?”, “If only you would leave the room”). Notice that the illocutionary force of a performative verb can also be expressed nonverbally, e.g. by means of intonations or gestures. Traditionally, the illocutionary force is classified into five groups that are called illocutionary points: assertives, commissives, directives, declaratives, expressives.1 The illocutionary points are used in the setting of simple illocutionary acts. When a simple illocutionary act is successfully and non-defectively performed, there will always be effects produced in the hearer: the effect of understanding the utterance and an appropriate perlocutionary effect (for instance, a further effect on the feelings, attitudes, and subsequent behavior of the hearer). Thus, an illocutionary act must be both successful and non-defective. Recall that in classical logic a well-formed proposition is evaluated as either true or false, and in conventional logics as a degree of truth (the latter can, however, be very different: e.g. it could range over the unit interval [0, 1] as in fuzzy logics, trees of some data as in spatial logics and behavior logics, sets of truth values as in higher-order fuzzy logics and some paraconsistent logics, etc.). But as we see, a

1 In short, they are distinguished as follows: “One can say how things are (assertives), one can try to get other people to do things (directives), one can commit oneself to doing things (commissives), one can bring about changes in the world through one’s utterances (declarations), and one can express one’s feelings and attitudes (expressives)” (Searle and Vanderveken 1984: 52).

non-defective simple illocutionary act is evaluated as either successful or unsuccessful in the given context of utterance (notice that, on a more detailed consideration, and in the same way as in non-classical logics, the meaning of a non-defective simple illocutionary act could be evaluated as a degree of success). Regarding compositions in speeches, i.e. appropriate composite illocutionary acts built up from simple ones, we can distinguish two cases: first, compositions that satisfy the Fregean approach and, second, compositions that break it. The criterion is as follows. If a logical superposition of simple illocutionary acts can itself be evaluated as either successful or unsuccessful, it is said to be a complex illocutionary act; in this case we have no inductively defined composition. If it can be evaluated as either true or false, then this logical superposition is said to be an illocutionary sentence; in this case we can follow the Fregean approach. The logical connectives (∨, ∧, ⇒, ¬) which we use in building complex acts or sentences will be called illocutionary connectives. They differ from the usual ones. As an example, the illocutionary disjunction in the utterance “I order you to leave the room or I order you not to leave the room” differs from the usual propositional disjunction, because here it does not express the law of excluded middle and a hearer can reject it as an unsuccessful illocutionary act. Sometimes a logical superposition of simple illocutionary acts with propositions also yields a complex illocutionary act. Some examples are as follows: “If it rains, I promise you I’ll take my umbrella” (the illocutionary implication of the form Ψ ⇒ F(Φ)), “It rains and I assert that I’ll take my umbrella” (the illocutionary conjunction of the form Ψ ∧ F(Φ)). But we can consider cases when a logical superposition of simple illocutionary acts with propositions does not yield a complex illocutionary act.
For example, “If I think so, then really it is so” (the illocutionary implication of the form F(Φ) ⇒ Φ): this is a true illocutionary sentence. Thus, I distinguish illocutionary sentences from illocutionary acts. Just as illocutionary acts express appropriate performances, illocutionary sentences state logical properties of illocutionary acts. The existence of illocutionary sentences (i.e. of some inductive compositions in composite speeches) allows us to set up a special logic, called illocutionary logic. It studies logical and semantic properties of illocutionary acts and illocutionary sentences. Therefore, “just as propositional logic studies the properties of all truth functions …, so illocutionary logic studies the properties of illocutionary forces without worrying about the various ways that these are realized in the syntax of English” (Searle and Vanderveken 1984). Illocutionary logic plays an important role in modern analytical philosophy of language and in logical models of speech acts: its aim is to explain how context can affect the meaning of certain special kinds of illocutionary acts. The first formalization of illocutionary logic was created by J.R. Searle and D. Vanderveken (1984). In that work, a semantic-phenomenological approach was proposed, and within this framework all the conditions of success and non-defectiveness of illocutionary acts were investigated in detail. In this paper I propose a logical-syntactic approach to illocutionary logic, according to which illocutionary forces are treated as modal operators with a many-valued interpretation. This approach can supplement that of J.R. Searle and D. Vanderveken.

2. A many-valued illocutionary logic with a single performative verb “think”

In order to show how we can set up compositions in illocutionary logic and combine Fregean and non-Fregean compositions within a logical system, we first define a propositional logic with a single performative verb, “think.” Let us consider a propositional language L that is built in the standard way, with an additional unary operator F. It is called the illocutionary force of the performative verb “think.” We will say that the illocutionary act F(Φ) is a performance of the proposition Φ.
From the point of view of social constructivism (Berger and Luckmann 1971), the content of social acts and the content of performances of any propositions are not physical facts. Therefore performances cannot be evaluated as either true or false.

The performance of Φ that we obtain by using the performative verb “think” can be either successful or unsuccessful. It is successful if F(Φ) represents a true propositional content of Φ (i.e., if Φ is true) and it is unsuccessful if F(Φ) represents a false propositional content of Φ (i.e., if Φ is false). The success and unsuccess of a performance will be denoted by 1/2 and −1/2 respectively. Further, let us suppose that atomic propositions, i.e. propositional variables (they belong to the set VarL := {p, p1, p2, …}), can have only one of the following two truth values: 1 (“true”) and 0 (“false”). Let our language L be associated with the following matrix M = ⟨{1, 1/2, 0, −1/2}, {1}, ¬, F, ⇒, ∨, ∧⟩, where

• {1, 1/2, 0, −1/2} is the set of truth values, ordered as 1 > 1/2 > 0 > −1/2,
• {1} is the singleton of designated truth values,
• ¬, F are unary operations for negation and illocutionary force respectively; they are defined as follows:

¬x = 1 − x for x ∈ {0, 1} and ¬x = −x for x ∈ {1/2, −1/2};

F(x) = 1/2 for x ∈ {1, 1/2} and F(x) = −1/2 for x ∈ {0, −1/2};

• ⇒, ∨, ∧ are binary operations for implication, disjunction and conjunction respectively; they are defined as follows:

x ⇒ y = 1 − sup(x, y) + y;

x ∨ y = sup(x, y) for x, y ∈ {0, 1} and x ∨ y = inf(x, y) for x, y ∈ {1/2, −1/2};

x ∧ y = inf(x, y) for x, y ∈ {0, 1} and x ∧ y = sup(x, y) for x, y ∈ {1/2, −1/2}.
It can easily be proved that the unary operator F satisfies the following conditions: (1) ∀a ∈ M. a ≥ F(a), (2) ∀a ∈ M. ¬a ≥ ¬F(a), (3) ∀a, b ∈ M. (F(a) ∧ F(b)) ≥ F(a ∧ b), (4) ∀a, b ∈ M. (F(a) ∨ F(b)) ≤ F(a ∨ b), (5) ∀a, b ∈ M. (F(a) ⇒ F(b)) ≥ F(a ⇒ b), (6) ∀a ∈ M. F(F(a)) = F(a), (7) ∀a ∈ M. ¬F(a) = F(¬a).

Let e be a valuation of atomic propositions, i.e. e: VarL → {0, 1}. We can extend e to a valuation Ve: L → {1, 0, 1/2, −1/2} by using the operations of M assigned to the corresponding logical connectives. The valuation Ve is called an illocutionary valuation. Let Φ ∈ L. The performance of Φ, i.e. F(Φ), is called an unsuccessful performance for e if Ve(F(Φ)) = −1/2, i.e. Ve(Φ) ∈ {0, −1/2}. The formula F(Φ) is called a successful performance for e if Ve(F(Φ)) = 1/2, i.e. Ve(Φ) ∈ {1, 1/2}. Further, the formula Φ is called a true sentence for e if Ve(Φ) = 1 and a false sentence for e if Ve(Φ) = 0. Notice that the element a ∧ ¬a is not minimal in M, because a ∧ ¬a ≥ F(a ∧ ¬a) and (F(a) ∧ ¬F(a)) ≥ F(a ∧ ¬a). Consequently, the minimal element of M (which is called ‘illocutionary contradiction’ or ‘unsuccess of performance’) is assigned to a sentence of the form F(Φ ∧ ¬Φ), i.e. when somebody thinks a propositional contradiction.

Let us show that the illocutionary valuation defined above agrees with the informal understanding of the concept of an ‘illocutionary act.’ First of all, we should notice that the illocutionary force of “think” warps the logical space of propositional relations (by analogy with the gravitational force, which warps physical space). For instance, suppose that there are two successful illocutionary acts: “I think that if it is harmful for my health, then I will not do it” and “I think that smoking is harmful for my health.” Together they do not imply that the illocutionary act “I think that I will not smoke” is successful. However, we could draw this conclusion by modus ponens if the illocutionary force of the verb “think” were removed from both expressions. So, we can claim that illocutionary force may be considered in terms of the warpage of logical space, in the same way as the gravitational force has recently been regarded in terms of the warpage of Euclidean space. This feature is illustrated by inequalities (1) – (7), which could be converted to equalities if we removed the illocutionary force F. Namely, inequality (1) means that the implication F(Φ) ⇒ Φ is a true sentence of illocutionary logic. For example, “If I think that he is God, then he is God” (but not vice versa) is an example of an illocutionary tautology. Formula (2) means that the implication ¬F(Φ) ⇒ ¬Φ is also a true illocutionary sentence. Indeed, something exists if I think so, and something doesn’t exist if I don’t think so. Inequality (3) means that the implication F(Φ ∧ Ψ) ⇒ (F(Φ) ∧ F(Ψ)) is a tautology of illocutionary logic. For instance, the following illocutionary sentence is true: “If she thinks that the weather is good and the world is fine, then she thinks that the weather is good and she thinks that the world is fine” (but not vice versa).
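These tautologies and the properties of F can be checked mechanically. The following is a minimal Python sketch of the matrix M, under assumptions implied by the text: F sends {1, 1/2} to 1/2 and {0, −1/2} to −1/2, negation is classical (1 − x) on {0, 1} and a sign flip on the performance values, implication uses the stated closed form x ⇒ y = 1 − sup(x, y) + y, and the four values are ordered 1 > 1/2 > 0 > −1/2. The variable and function names are ours, not the paper’s.

```python
from fractions import Fraction as Q

# The four truth values of the matrix M:
# 1 (true), 1/2 (success), 0 (false), -1/2 (unsuccess).
M = [Q(1), Q(1, 2), Q(0), Q(-1, 2)]

def neg(x):
    # Negation: classical 1 - x on {0, 1}, a sign flip on the performance values.
    return 1 - x if x in (0, 1) else -x

def F(x):
    # Illocutionary force of "think": success iff the content is 1 or 1/2.
    return Q(1, 2) if x in (Q(1), Q(1, 2)) else Q(-1, 2)

def imp(x, y):
    # The implication given in the text: x => y = 1 - sup(x, y) + y.
    return 1 - max(x, y) + y

# Inequality (1) as a tautology: F(Phi) => Phi takes the designated value 1 ...
assert all(imp(F(x), x) == 1 for x in M)
# ... while the converse Phi => F(Phi) does not ("but not vice versa").
assert any(imp(x, F(x)) != 1 for x in M)
# Inequality (2) as a tautology: not-F(Phi) => not-Phi is designated as well.
assert all(imp(neg(F(x)), neg(x)) == 1 for x in M)
# (6) F is idempotent, and (7) F commutes with negation.
assert all(F(F(x)) == F(x) for x in M)
assert all(neg(F(x)) == F(neg(x)) for x in M)
# (1)-(2) as order facts, under the assumed order 1 > 1/2 > 0 > -1/2.
assert all(x >= F(x) and neg(x) >= neg(F(x)) for x in M)
print("matrix M checks passed")
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the comparisons of 1/2 and −1/2 are exact rather than floating-point.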
Continuing in the same way, we can see that formulas (4) – (6) have an intuitive meaning, too. Notice, however, that within formula (7) the verb “think” differs markedly from other performative verbs. The illocutionary act “I think that it is not white” is equivalent to “I don’t think that it is white.” By contrast, for other performative verbs an illocutionary act with the negation of an illocutionary force can differ from the same illocutionary act with a positive illocutionary force but a negative propositional content. An example of illocutionary negation: “I do not promise to come”; an example of an illocutionary act with a negative propositional content: “I promise not to come.” As we see, these acts are different.

I have just exemplified how we can combine Fregean and non-Fregean compositions of simple illocutionary acts within a logical system. I have done this by distinguishing composite expressions evaluated as 0 and 1 from composite expressions evaluated as 1/2 and −1/2. The first kind of expression corresponds to the Fregean approach (the meaning of the whole is reduced to its components); the second evidently does not (because such expressions are ordered by the dual relation, where conjunction is understood as supremum and disjunction as infimum; as a result, the meaning of the composite expression changes the meanings of its components: a conjunction of components comes to be interpreted as a disjunction on the level of the whole expression, and vice versa). Taking into account that we had postulated the existence of only one performative verb, we defined the illocutionary logical operations by induction. In this sense, we built up a many-valued illocutionary logic in Frege’s style and did our best to present both Fregean and non-Fregean compositions within a conventional logic. However, this was just an illustration of the simplest case, with a single performative verb. It allowed us to clarify that non-Fregean compositions in speech acts can also be treated logically, by a kind of dualization in which meanings come to be ordered by the dual relation.

3. An ordering relation on the set of all illocutionary forces

Let us now set up the more complicated problem of constructing a many-valued illocutionary logic closed under the class of all performative verbs. We can be sure that in this case our logic will not be in Frege’s style.
To begin with, we should define a partial ordering relation ≤ on the infinite class of all illocutionary forces (expressed by all performative verbs). This allows us to define the basic logical connectives (∨, ∧, ⇒, ¬) on the set of simple illocutionary acts. Intuitively, the relation Ve(F1(Φ)) ≤ Ve(F2(Ψ)), where Ve is supposed to be an evaluation function running over well-formed formulas (although it has not been defined yet), holds if and only if the illocutionary act F1(Φ) logically entails the illocutionary act F2(Ψ). In the latter case it is not possible to perform the first act without thereby performing the second (i.e. for all contexts of utterance i ∈ I, if F1(Φ) is performed at i, then F2(Ψ) is performed at i, too). The new ordering relation also provides us with the possibility of considering so-called disjoint or opposite illocutionary acts F1(Φ), F2(Φ). They are disjoint/opposite when F1(Φ) ∧ F2(Φ) is not simultaneously performable, more precisely when the successful performance of F1(Φ) is relatively inconsistent (1) with the achievement of the illocutionary point of F2 on Φ, or (2) with the degree of strength of the illocutionary point of F2, or (3) with the satisfaction by Φ of the propositional content conditions of F2, or (4) with the presupposition of the preparatory conditions of F2(Φ), or finally (5) with a commitment to the psychological state of F2(Φ). In the next section, we examine the general properties of opposite illocutionary acts.

4. A square of opposition for illocutionary acts

First, assume that two illocutionary forces F1 and F2 are in semantic opposition. This means that they cannot both be successful at the same time, though both may be unsuccessful. In other words, the illocutionary act F1(Φ) is semantically equivalent to the illocutionary act F2(¬Φ) (clearly, in this case F2(Φ) is semantically equivalent to F1(¬Φ)), i.e. Ve(F1(Φ)) = Ve(F2(¬Φ)) and Ve(F1(¬Φ)) = Ve(F2(Φ)). For example, the performative verbs “order” and “forbid” stand in such an opposition. The square of opposition (or logical square) for illocutionary acts is a natural way of classifying illocutionary forces relevant to a given opposition. Starting from two illocutionary forces F1 and F2 that are contraries, the square of opposition entails the existence of two other illocutionary forces, namely ¬F1 and ¬F2.
As a result, we obtain the following four distinct kinds of opposition between pairs of illocutionary acts:

[Figure: square of opposition with the contrary pair F1(Φ), F2(Φ) as the upper corners and the subcontrary pair ¬F2(Φ), ¬F1(Φ) as the lower corners; the diagonal pairs (F1(Φ), ¬F1(Φ)) and (F2(Φ), ¬F2(Φ)) are contradictory.]

Figure 1. The square of oppositions for illocutionary forces F1, F2.²

• Firstly, the illocutionary forces F1 and F2 are contrary. This means that they cannot be successful together in the corresponding illocutionary acts F1(Φ) and F2(Φ) with the same propositional content, although both may be unsuccessful: “I insist that he takes part” and “I insist that he doesn’t take part.” A speaker cannot successfully say simultaneously: “I order you to leave the room” and “I forbid you to leave the room.”
• Secondly, the illocutionary forces F1 and ¬F1 (resp. F2 and ¬F2) are contradictory, i.e. the success of one implies the unsuccess of the other, and conversely. For instance, the success of the illocutionary act “I order you to do this” implies the unsuccess of the corresponding act “I request that you do not do it,” and conversely.
• Next, the illocutionary forces ¬F1 and ¬F2 are subcontrary, i.e. it is impossible for both to be unsuccessful in the corresponding illocutionary acts ¬F1(Φ) and ¬F2(Φ) with the same propositional content, although it is possible for both to be successful.
• Lastly, the illocutionary forces F1 and ¬F2 (resp. F2 and ¬F1) are said to stand in subalternation, i.e. the success of the first (“the superaltern”) implies the success of the second (“the subaltern”), but not conversely. As an example, the success of the illocutionary act “he orders” implies the success of the illocutionary act “he requests”, but not vice versa. Consequently, the success of an illocutionary act F1(Φ) or F2(Φ) implies the success of the corresponding illocutionary act ¬F2(Φ) or ¬F1(Φ), respectively; and the unsuccess of an illocutionary act ¬F2(Φ) or ¬F1(Φ) implies the unsuccess of the corresponding illocutionary act F1(Φ) or F2(Φ), respectively.

² For instance, F1 is “bless” and F2 is “damn.” There can be an infinite number of such squares for different illocutionary forces. At least, there exists an appropriate square of opposition for each illocutionary force. In the latter case F1 is “bless to do” and F2 is “bless not to do.”

Thus, pairs of illocutionary acts are called contradictories (contradictoriae) when they cannot at the same time both be successful or both be unsuccessful, contraries (contrariae) when they cannot at the same time both be successful, subcontraries (subcontrariae) when they cannot at the same time both be unsuccessful, and subalternates (subalternae) when the success of the one act implies the success of the other, but not conversely. There is a very easy way to detect whether the square of opposition holds for given illocutionary acts. The criterion is as follows: if F(¬Φ) ⇒ ¬F(Φ) is a true illocutionary sentence, i.e. Ve(F(¬Φ)) ≤ Ve(¬F(Φ)), then the square of opposition for illocutionary acts holds. As a corollary, the following expressions are to be regarded as true illocutionary sentences:

(8) ¬F(¬Φ) ∨ ¬F(Φ) – tertium non datur,

(9) ¬(F(¬Φ) ∧ F(Φ)) – the law of contrary.

5. A non-Archimedean interpretation of illocutionary forces

We now try to develop a formal approach to the evaluation of illocutionary forces along the lines of the non-Archimedean semantics proposed in Schumann (2008) and Schumann (2009). Suppose B is a complete Boolean algebra with bottom element 0 and top element 1 such that the cardinality |B| of its domain is infinite. Build up the

set B^B of all functions f : B → B. The set of all complements of finite subsets of B is a filter, called the Fréchet filter; it is denoted by U. Further, define a new relation ≈ on the set B^B by: f ≈ g iff {a ∈ B : f(a) = g(a)} ∈ U. It can easily be proved that the relation ≈ is an equivalence relation. For each f ∈ B^B let [f] denote the equivalence class of f under ≈. The ultrapower B^B/U is then defined to be the set of all equivalence classes [f] as f ranges over B^B. This ultrapower is called a nonstandard (or non-Archimedean) extension of B; for more details see Robinson (1966) and Schumann (2008). It is denoted by *B. There are two groups of members of *B: (1) functions that are constant on a set of U, e.g. f(a) = m ∈ B; such a constant function [f = m] is denoted by *m; (2) functions that are not constant on any set of U. The set of all constant functions of *B is called the standard set and is denoted by °B. The members of °B are called standard. It is readily seen that B and °B are isomorphic. We can extend the usual partial order structure on B to a partial order structure on °B:

• for any members x, y ∈ B we have x ≤ y in B iff *x ≤ *y in °B,
• each member *x ∈ °B (even the bottom element *0 of °B) is greater than any member [f] ∈ *B\°B, i.e. *x > [f] for any x ∈ B, where [f] is not a constant function.

Notice that under these conditions the element *1 ∈ *B (with 1 the top of B) is the top element of *B, but the element *0 ∈ *B (with 0 the bottom of B) is not the bottom element of *B. The ordering conditions mentioned above have the following informal sense: (1) the sets °B and B have isomorphic order structures; (2) the set *B contains actual infinities that are less than any member of °B. These members are called Boolean infinitesimals.
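The effect of the Fréchet filter can be illustrated with a toy model. The sketch below is our own illustration, not the paper’s construction: it represents a function as a pair (base value, finite exception set), so it only covers functions that are constant off a finite set, with an infinite domain standing in for B. For such functions, f ≈ g holds exactly when their base values coincide, since two functions with the same base can disagree only inside their finite exception sets, while functions with different bases disagree on a cofinite set, which the Fréchet filter never contains.

```python
def equivalent(f, g):
    # f ~ g iff {a : f(a) = g(a)} is cofinite, i.e. f and g differ on at
    # most finitely many points of the infinite domain.
    (base_f, _exc_f), (base_g, _exc_g) = f, g
    if base_f != base_g:
        return False  # they differ off the finite exception sets: cofinitely
    return True       # differences are confined to the finite exception sets

# Functions encoded as (base value, {point: deviating value}).
f = (1, {0: 0, 5: 0})   # constantly 1, except f(0) = f(5) = 0
g = (1, {7: 0})         # constantly 1, except g(7) = 0
h = (0, {})             # the constant function 0

assert equivalent(f, g)      # same class: finite perturbations are invisible
assert not equivalent(f, h)  # distinct classes [f] and [h]
```

Note that in this restricted toy every class collapses onto a standard element *m; the genuinely non-constant classes, the Boolean infinitesimals, arise only from functions that differ from every constant function on infinitely many points, which is exactly what a finite-exception representation cannot express.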

Introduce three operations ‘sup’, ‘inf’, ‘¬’ into the partial order structure of *B: inf([f], [g]) = [inf(f, g)]; sup([f], [g]) = [sup(f, g)]; ¬[f] = [¬f]. This means that the nonstandard extension *B of a Boolean algebra B preserves the least upper bound ‘sup’, the greatest lower bound ‘inf’, and the complement ‘¬’ of B. Consider the member [h] of *B such that {a ∈ B : h(a) = f(¬a)} ∈ U. Denote [h] by [f¬]. Then we see that inf([f], [f¬]) ≥ *0 and sup([f], [f¬]) ≤ *1. Indeed, there are three cases:

• Case 1. The members ¬[f] and [f¬] are incomparable. Then inf([f], [f¬]) ≥ *0 and sup([f], [f¬]) ≤ *1.
• Case 2. Suppose ¬[f] ≥ [f¬]. In this case inf([f], [f¬]) = *0 and sup([f], [f¬]) ≤ *1.
• Case 3. Suppose ¬[f] ≤ [f¬]. In this case inf([f], [f¬]) ≥ *0 and sup([f], [f¬]) = *1.

Now define the hyperrational valued matrix logic MB as the ordered system ⟨*B, {*1}, ¬, ⇒, ∨, ∧⟩, where

• *B is the set of truth values,
• {*1} is the set of designated truth values,
• for all [x] ∈ *B, ¬[x] = *1 − [x],
• for all [x], [y] ∈ *B, [x] ⇒ [y] = *1 − sup([x], [y]) + [y],
• for all *x, *y ∈ °B, *x ∨ *y = sup(*x, *y),
• for all *x, *y ∈ °B, *x ∧ *y = inf(*x, *y),
• for all [x], [y] ∈ *B\°B, [x] ∨ [y] = inf([x], [y]),
• for all [x], [y] ∈ *B\°B, [x] ∧ [y] = sup([x], [y]).
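The three cases for inf([f], [f¬]) and sup([f], [f¬]) can be sanity-checked in a small finite Boolean algebra, setting aside the nonstandard subtlety that *0 is not the bottom of *B. In the sketch below (our illustration; the paper’s construction lives in the infinite ultrapower) B is the powerset of {0, 1, 2}, with inf as intersection, sup as union and set complement as ¬, and an arbitrary element g plays the role of [f¬].

```python
from itertools import combinations, product

# The 8-element Boolean algebra: the powerset of {0, 1, 2}.
FULL = frozenset({0, 1, 2})
algebra = [frozenset(c) for r in range(4) for c in combinations(sorted(FULL), r)]

def comp(x):
    # Boolean complement relative to the top element.
    return FULL - x

for f, g in product(algebra, repeat=2):
    meet, join = f & g, f | g        # inf and sup in the powerset algebra
    if g <= comp(f):                 # Case 2: g below the complement of f
        assert meet == frozenset()   # then inf(f, g) is the bottom element
    if comp(f) <= g:                 # Case 3: g above the complement of f
        assert join == FULL          # then sup(f, g) is the top element
    # In every case (Case 1 included) the bounds hold:
    assert frozenset() <= meet and join <= FULL

print("cases verified on all", len(algebra) ** 2, "pairs")
```

The check is exhaustive over all 64 pairs, which is exactly why a small finite algebra is a convenient place to test order-theoretic claims before trusting them in the infinite setting.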

[Figure: square of opposition over *B, with the contrary pair [f], [f¬] as the upper corners and the subcontrary pair ¬[f¬], ¬[f] as the lower corners; the diagonal pairs are contradictory.]

Figure 2. In case [f¬] ≤ ¬[f], the square of oppositions holds true for the members [f], [f¬], ¬[f¬], ¬[f] of *B, i.e. [f], [f¬] are contrary; [f], ¬[f] (resp. [f¬], ¬[f¬]) are contradictory; ¬[f¬], ¬[f] are subcontrary; [f], ¬[f¬] (resp. [f¬], ¬[f]) are said to stand in subalternation.

Let us consider a propositional language LB that is built in the standard way, with an additional set of unary operators F1, F2, F3, … We suppose that this set contains an appropriate sign for each performative verb, so the set should be uncountably infinite. We associate the language LB with the matrix MB in accordance with an illocutionary evaluation Ve satisfying the following properties:

• Ve(F(Ψ)) = [f] ∈ *B\°B if Ψ does not contain the unary operators F1, F2, F3, …;
• Ve(Fi(Fj(Ψ))) = ([f]i ⇒ Ve(Fj(Ψ))), where [f]i ∈ *B\°B and ⇒ is an operation of MB;
• Ve(Ψ) = *x ∈ °B if Ψ does not contain the unary operators F1, F2, F3, …;
• Ve(Φ ∧ Ψ) = (Ve(Φ) ∧ Ve(Ψ)), where on the right-hand side ∧ is an operation of MB;
• Ve(Φ ∨ Ψ) = (Ve(Φ) ∨ Ve(Ψ)), where on the right-hand side ∨ is an operation of MB;
• Ve(Φ ⇒ Ψ) = (Ve(Φ) ⇒ Ve(Ψ)), where on the right-hand side ⇒ is an operation of MB.

It can easily be proved that the following formulas are illocutionary tautologies according to the matrix MB, in the same measure as they were in the matrix M (see Section 2):

(10) F(Φ) ⇒ Φ,

(11) ¬F(Φ) ⇒ ¬Φ,

(12) F(Φ ∧ Ψ) ⇒ (F(Φ) ∧ F(Ψ)),

(13) (F(Φ) ∨ F(Ψ)) ⇒ F(Φ ∨ Ψ),

(14) F(Φ ⇒ Ψ) ⇒ (F(Φ) ⇒ F(Ψ)).
Instead of formula (7), the matrix MB satisfies the relations of the square of opposition for illocutionary acts. Formula (6) does not hold in general, because each application of an illocutionary force has its own effect. For instance, the cyclic self-denying illocutionary act “I promise I will not keep this promise”, F(¬F(F(¬F(…)))), is evaluated in the following complex form: Ve(F(¬F(F(¬F(…))))) = ([f] ⇒ (¬[f] ⇒ ([f] ⇒ (¬[f] ⇒ (…))))). As we see, a cyclic self-denying illocutionary act does not correspond to any member of *B. The problem is that we have an infinite sequence ([f] ⇒ (¬[f] ⇒ ([f] ⇒ (¬[f] ⇒ (…))))) whose truth-value depends on the truth-value of the rightmost atomic formula; but the sequence is infinite, and there is no rightmost atomic formula. The novel logic is not Fregean, though it is evidently formal, and furthermore it can be shown that some restrictions of this logic may be complete (Schumann 2008). The point is that we obtain an uncountably infinite set of well-formed formulas for which there is no induction in general (this depends on the non-Archimedean structure of the set).

6. Conclusion

In this paper I have proposed a many-valued calculus in which illocutionary forces and performances are treated as modal operators of a special kind. As a result, I have constructed a simpler formalization of illocutionary act theory than the usual ones. This formalization could be applied in model-theoretic semantics of natural language and in natural language programming.

References

Berger, Peter and Thomas Luckmann 1971. The Social Construction of Reality. Garden City (N.Y.): Anchor Books.
Robinson, Abraham 1966. Non-Standard Analysis. Studies in Logic and the Foundations of Mathematics. Amsterdam: North-Holland.
Schumann, Andrew 2008. Non-Archimedean Fuzzy and Probability Logic. Journal of Applied Non-Classical Logics 18(1), 29–48.
Schumann, Andrew 2009. A Non-Archimedean Valued Extension of Logic LΠ∀ and a p-Adic Valued Extension of Logic BL∀. Journal of Uncertain Systems (to appear).
Searle, John R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Searle, John R. 1979. Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press.
Searle, John R. and Daniel Vanderveken 1984. Foundations of Illocutionary Logic. Cambridge: Cambridge University Press.

Barbara Sonnenhauser Ludwig-Maximilians-Universitaet Muenchen [email protected]

‘Subjectivity’ in Philosophy and Linguistics

Abstract: This paper is concerned with the notion of ‘subjectivity’ from a primarily linguistic perspective. In dealing with subjectivity, linguistics is confronted with basically the same problems as philosophy. These problems are mainly based on the underlying dualistic thinking rooted in classical Aristotelian logic. The present paper proposes a triadic redefinition of subjectivity in terms of Gotthard Günther’s transclassical logic and Charles S. Peirce’s triadic sign conception. This redefinition of subjectivity and its application to language is exemplified with the category of ‘indirectivity’ in Bulgarian.

0. Introduction

Since Benveniste (1974a) and Lyons (1982), the notion of ‘subjectivity’ has played an increasingly important role in linguistics. In dealing with subjectivity, however, linguistics faces much the same problems as philosophy. These problems mainly concern the question of how to objectify ‘the subjective’ without making it an ordinary object and without becoming entangled in reflexive argumentation. The present paper argues that linguists cannot simply ignore the philosophical questions related to the notion of subjectivity. If the philosophical aspects are not taken into account, the notion of subjectivity becomes superfluous for linguistics and should be replaced by something like ‘ego-linguistics’ (cf. Weiss 2009).

1. ‘Subjectivity’ in philosophy

The notion of subjectivity poses considerable problems for philosophical reasoning. These problems centre on the question of how to account for the ‘subject’ of subjectivity without reflexively presupposing it, the question of the referent of the pronoun I and the relation between I and ‘I’, and the question of how to objectify ‘the subjective’ (cf. Frank 1991,

1994). These problems are based on the underlying assumption, anchored in classical logic, of the division of the world into an objective and a subjective domain. Since each domain can be captured only by negating the other, classical thinking actually amounts to a monovalent ontology (cf. Günther 1978). Hence, the major source of the problems with subjectivity is the dualistic thinking of classical Aristotelian logic. This dualism is even more prevalent in linguistic approaches, where the objectivity of language, as opposed to the subjectivity of thinking, does not seem to offer any possibility of escape. The dualism is also evident in the traditional conceptions of the linguistic sign (e.g. Saussure, Jakobson) commonly underlying linguistic analysis. Consisting of a form-side and a content-side, the sign is dyadic in nature and, moreover, offers no possibility of including the sign user, i.e. of regarding him as an integral part of the sign (cf. Sonnenhauser 2008). This conception has far-reaching consequences for linguistic analyses and for the conception of the process of communication. The linguistic equivalent of the philosophical problems sketched here concerns the question of how to objectify subjectivity by means of language. Since linguists tend to implicitly or explicitly ignore philosophical issues (e.g. Lyons 1994), this problem is usually ignored as well. As a result, linguistic conceptions of subjectivity exhibit a number of shortcomings.

2. ‘Subjectivity’ in linguistics

Linguistic approaches conceive of subjectivity – roughly – as the expression of a self or the representation of the speaker’s perspective in discourse (cf., e.g., Finegan 1995, Stein and Wright 1995). They assume the existence of some kind of ‘subject’ (in a non-grammatical sense), typically equated with a speaker or an observer, to which the property of ‘subjectivity’ can be ascribed.

2.1. Approaches

Benveniste (1974a) considers the expression of subjectivity one of the most characteristic properties of language. He regards subjectivity as the ability of a speaker to present himself as subject by means of the pronoun I and related nominal and verbal categories expressing, or referring to, the notion of ‘person’. As examples of the pervasiveness of subjectivity in language, i.e. of linguistic exponents of subjectivity, he cites (1974a: 292) not only personal pronouns, but also ‘indicators of deixis’ such as demonstratives, adverbs and adjectives organising the spatial and temporal relations around the subject as origo. Taking into account the linguistic expression of time, this domain of subjectivity can be extended even further. However, Benveniste is not always explicit about the distinction between ‘person’ and ‘subject’, which carry different philosophical implications (cf. Sturma 2008). In an earlier, less well-known paper, Benveniste (1974b) clarifies the relation of I and not-I, with the latter embracing You and It. These three notions are organised in terms of two oppositions (1974b: 263f): the opposition of subjectivity, comprising I as opposed to You, and the opposition of personality, comprising I and You as opposed to It.¹ Most linguistic approaches to subjectivity referring to Benveniste ignore this important distinction – probably because Benveniste himself gave it up in his later articles, among them the often cited subjectivity paper (1974a).

Lyons (1982: 102) defines ‘subjectivity’ as “the way in which natural languages provide for the locutionary agent’s expression of himself and his own attitudes and beliefs”. He draws two important distinctions: that between the subjective (perceiving) and the objective (observing) self, cf. (1a) vs. (1b), and that between the subjectivity of uttering (2a) and inherent subjectivity (2b):

(1) a. I remember switching off the light.
    b. I remember myself switching off the light.

¹ Here, Benveniste indeed distinguishes between ‘personality’ and ‘subjectivity’.

(2) a. You must not smoke here.
    b. Do not smoke here.

Lyons’ reasoning is situated within his critical engagement with Anglo-American linguistics, philosophy and logic, which he regards as being “dominated by the intellectualist prejudice that language is, essentially, if not solely, an instrument for the expression of propositional thought” (Lyons 1982: 103), i.e. of objective facts. Modifying Descartes, Lyons proposes loquor, ergo sum as his starting point and regards his approach as ‘locutionary subjectivism’, as opposed to traditional ‘locutionary’ or ‘Cartesian objectivism’. Despite this obvious philosophical commitment, he holds that “linguists need not be involved in the philosophical […] issue of the ontological status of the self” (Lyons 1994: 17). On closer inspection, however, locutionary subjectivism and Cartesian objectivism share the same dyadic basis, i.e. the ontological distinction of a subjective and an objective domain. They differ only in which domain is regarded as primary: the subjective in the former case, the objective in the latter. This illustrates what Günther has in mind when he considers idealism and realism to be identity theories (1978: 109), logically equivalent (1978: 111) and unable to disprove each other (1978: 255), hence interchangeable.²

Traugott (1989, 1995) pursues a diachronic approach to what she calls ‘subjectification’, i.e. the increasing anchoring of meanings and interpretations “in the speaker’s subjective belief state/attitude toward the proposition” (Traugott 1989: 35). This is illustrated in (3) with the diachronic development of evidently from a manner adverb into a strongly subjective epistemic sentential adverb (Traugott 1989: 46f):

(3) a. Yif thay finde euidently that i haue doon extorcion
       ‘If they find from evidence that I have performed extortions’
    b. No Idea, therefore, can be undistinguishable from another … for from all other, it is evidently different
       ‘evident to all’

² This assumption can also be found in Baxtin (2000) and Peirce (EP I, EP II), who are both skeptical of dualistic thinking.

    c. He is evidently right
       ‘I conclude that he is right’

‘Subjectivity’ as the result of subjectification is thus based on the opposition ‘speaker-related’ (subjective) vs. ‘non-speaker-related’ (objective), which Traugott derives from an underlying speaker-hearer dyad.

Dealing with the subjective and objective construal of an entity, Langacker (1985, 1990, 1995) elaborates a relational approach to subjectivity, based on the assumption of an asymmetric relationship between the subject and the object of observation. He regards an entity as subjective to the extent that it functions asymmetrically as the observer in a viewing situation, losing all awareness of Self as it observes an Other, and as objective to the extent that it achieves prominence as a well-articulated object of observation, distinguished from both background and observer (Langacker 1985: 121f). This is illustrated in (4): me in (4a) displays an objective construal of the speaker, whereas in (4b) the speaker is construed subjectively, since there is no overt expression referring to him (1985: 137f):

(4) a. There is snow all around me.
    b. There is snow all around ∅.

Within Langacker’s approach, however, there is no way to distinguish between the objectified subject / self / observer and any other object in the world. Moreover, there is no way to linguistically express subjectivity, since linguistic expression goes hand in hand with putting the respective entity onstage, thereby objectifying it. These have been merely a few out of many approaches dealing with the subjectivity of language and the expression of subjectivity in and by means of language. They suffice, however, to illustrate the main difficulties that linguistic approaches to subjectivity are confronted with.

2.2. Problems

The above-mentioned linguistic approaches face a number of problems. On a more superficial level, one observes different conceptions of subjectivity, which is especially evident with Traugott’s and Langacker’s approaches. The utterance in (5), for instance, is analysed as ‘subjective’ within Traugott’s approach, since it makes explicit reference to the speaker, but as ‘objective’ in Langacker’s, since the subject I is overtly expressed and hence put onstage as an object of observation:

(5) I promise to do it.

This disagreement has to do with the different relations assumed to be basic for the emergence of subjectivity: for Traugott, it is the opposition speaker vs. hearer (I vs. You) that is crucial; for Langacker, it is the difference between the observing subjects within a communicative setting, i.e. speaker and hearer, and the observed objects (I/You vs. It). More serious problems, however, arise from the underlying assumptions shared by the approaches mentioned. If language is assumed – as Benveniste assumes – to be fundamentally subjective, the postulation of specific subjective elements is tautological. Moreover, since every choice of linguistic elements in some sense reflects a speaker’s choice, the definition of subjectivity in terms of reference (of meaning or interpretation) to a speaking subject is circular. Furthermore, linguistic approaches treat subjectivity as a primitive notion which is not defined any further and hence taken as given. Another problem consists in the model of communication implicitly or explicitly underlying these approaches. Within this model the speaker is regarded as an agentive subject, language as a ready-made object, and the hearer as a passive recipient with the task of decoding (Jakobson 1971: 130). This technical model of communication hardly fits natural language, but might seem quite natural if one assumes a dyadic and static conception of the linguistic sign, as these approaches do.
The main problems for subjectivity in linguistics and the linguistic expression of subjectivity can be summarised under the heading of ‘dualism’. In order to account for subjectivity in language, it has to be objectified, and in objectifying subjectivity there does not seem to be a way to reflect upon it without having the Self, the Subject, the I, etc. collapse with the object of observation and with ‘ordinary’ objects in the world. This problem is shared by philosophical approaches to subjectivity and relates in both cases to classical logic with its laws of the excluded middle, of identity and of non-contradiction. If there were a way to overcome the restrictions imposed by classical logic, there might also be a way to account for subjectivity without facing the problems sketched here.

3. A triadic conception of subjectivity

As will be shown in this section, Gotthard Günther’s proposal of a transclassical, multivalued logic and Charles Sanders Peirce’s triadic and processual sign conception provide the tools necessary for a redefinition of subjectivity which is also applicable to natural language(s).

3.1. Günther

Günther (1978) criticises classical logic for its assumption of a symmetric relationship between subject and object, since this actually leads to a collapse of both into one ontological notion. On this approach, any third possibility between the positivity of the object of thinking and the negativity of thinking is excluded (tertium non datur), i.e. there is no way to reflect upon the process of thinking. This is evident, for instance, in the classical assumption that double negation is equivalent to the positive expression, i.e. ‘p = n(np)’. The transclassical, non-Aristotelian logic elaborated by Günther, however, also takes the process of negating to be logically relevant. Hence, the classical equivalence does not hold: ‘p ≠ n(np)’. Whereas with the classical dyadic negation it is not possible to say anything more than what is said by the positive proposition, transclassical negation has the capacity for ‘accretion’, i.e. for an increase of information. As a consequence,

the original ‘p’ does not remain identical to itself, but undergoes specific changes during the process of negation (Günther 2000). Interestingly, this transclassical character of negation manifests itself in natural language as well, where the double negation of contrary – and even contradictory – items does not necessarily equal the respective positive item: not unhappy is not equivalent to happy, and neither is not impossible equivalent to possible. This is usually taken as an indication of the illogical character of language, or of the non-applicability of logic to language (e.g., Leinfellner-Ruppertsberger 1992, Jespersen 1992),3 but rarely as a shortcoming of classical logic itself.

Accepting the transclassical notion of negation implies giving up the tertium non datur, which in turn has consequences for the conception of subjectivity: the emergence of a subject cannot be accounted for as the result of negating the objective, i.e. subjectivity is more than just ‘not objective’ or ‘not positive’. Instead of being defined as ‘non-objective’, the subject emerges due to a self-referential distinction from its environment (Günther 1980: 81), i.e. by not only negating the objective, but also reflecting upon this distinction: “ich bin nur insofern Subjekt, als ich mich von etwas, das Objekt ist, unterscheide und mir überdies dieser Unterscheidung bewußt bin“ (‘I am subject only insofar as I differ from something which is object, and, moreover, I am aware of this distinction’; Günther 1980: 82).4

3 Levinson (2000) and Horn (2001) offer pragmatic explanations.
4 Evidence from neurophilosophy (e.g. Metzinger 2004: 55), where the Self is regarded as the content of some permanent, dynamic process of transparent self-modeling, supports this assumption.

As has been pointed out above, not only p and np are relevant for logic, but also the process of negating p. This, in turn, necessitates the assumption of three components of reality, which Günther (1978) calls ‘I’, ‘You’ and ‘It’. With three components of reality, subjectivity and objectivity cannot be complementary to each other. Rather, they are distributed over these three components, which enter into two kinds of relations: an exchange relation between I and You, and an ordering relation between I and It, and You and It. That is, there are not only two kinds of subjectivity (I- and You-subjectivity), but also two kinds of objectivity: ‘being’ as given object (objective objectivity) and ‘reflection’ as created object (subjective objectivity). Subjectivity and objectivity appear in two varieties each: subjective subjectivity, objective subjectivity (the subjective object) and objectivity (the objective object). Subjective subjectivity emerges as the reflection on the relation between the subjective and the objective object, i.e. as ‘reflection-of-reflection’. The relevant relations and reflections are summarised in Figure 1:

[Figure 1: Distributed subjectivity and objectivity – I, You and It; subjectivity / reflection (the subjective object, objectified subjectivity); objectivity; reflection-of-reflection (subjectivity)]
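Günther’s rejection of the classical double-negation law (‘p ≠ n(np)’) can be made concrete in a many-valued setting. The following is an illustrative sketch only – it uses Post’s cyclic negation in a three-valued logic, not Günther’s own transclassical system – and shows a negation operator for which two applications never restore the original value:

```python
# Illustrative sketch: Post's cyclic negation in a three-valued logic
# with assumed truth values 0, 1, 2 (e.g. false, indeterminate, true).
VALUES = (0, 1, 2)

def post_neg(v):
    """Cyclic negation: map each value to the next one, mod 3."""
    return (v + 1) % 3

# The classical equivalence 'p = n(np)' fails for every value:
for p in VALUES:
    assert post_neg(post_neg(p)) != p
```

Here double negation is informative rather than redundant: applying the negation twice yields a value distinct from the original, which is the formal room that Günther’s notion of ‘accretion’ requires.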

Within this triadic conception, subjectivity is uncoupled from egological anchoring; it rather emerges as a process of reflection. Subjectivity in this sense, i.e. as a process, constitutes indeterminacy before determination, negativity withdrawing into further levels of reflection as soon as it gets objectified. Natural language as a positive (i.e. objective) medium therefore cannot express subjectivity per se, but it may provide specific means to trigger processes of reflection leading to the inference of subjectivity. Triadicity and processuality, which are characteristic of transclassical logic, are also essential notions within Peirce’s semiotics,5 which allows a triadic conception also of linguistic signs.

5 Peirce regards his semiotic as logic. For a reflection-logic reconstruction of his semiotic cf. Ort (2007).

3.2. Peirce

Peirce defines a sign as a genuine triadic relation that cannot be reduced to dyadic relations (EP II: 272f):

“A Sign, or Representamen, is a First which stands in such a genuine triadic relation to a Second, called its Object, as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same object. The triadic relation is genuine, that is, its three members are bound together by it in a way that does not consist in any complexus of dyadic relations.”

Peirce conceives the sign not only as a triadic relation, but also as processual, since the interpretant must itself be able to determine an interpretant, i.e. it ‘degenerates’ to a representamen. Therefore, the sign process does not come to an end:

“The Third must, indeed, stand in such a relation, and thus must be capable of determining a Third of its own; but besides that, it must have a second triadic relation in which the Representamen, or rather the relation thereof to its Object, shall be its own (the Third’s) Object, and must be capable of determining a Third to this relation.” (EP II: 273)

This definition covers two kinds of sign processes: deduction, as described in the first part of this definition, and abduction, as described in its second part.6 Deduction is illustrated in Figure 2:

[Figure 2: Deductive semiosis – R relates to the object O and determines the interpretant I; each interpretant degenerates to the next representamen (I = R2, I2 = R3, I3 = R4, I4 = R5), all relating to the same O]

6 Induction as a third kind of sign process is not relevant for the argumentation here.

With deduction, the sign process leaves the object identical, except for some precisifications. This is different for abduction, where the object undergoes a process of development, as illustrated in Figure 3:

[Figure 3: Abductive semiosis – R relates to O; the interpretant I (= R2) takes the relation of R and O as a new object O2, which determines I2 (= R3), and so on]

With abductive semiosis, the object of R2 as degenerated I is the relation of R and O, and it is this relational object O2 that determines the interpretant I2. The object of I is thus ‘more’ than the object of R on the prior step of semiosis. Accordingly, the object and interpretant get distanced on the one hand, and enriched on the other. The concept of abduction captures the basic insights of transclassical logic (cf. Ort 2007: 293) and can be related to Günther’s notion of subjectivity: O constitutes the objective object, O2 the subjective object (i.e. reflection as objectified subjectivity), I subjective subjectivity (i.e. reflection-of-reflection). In the course of this sign process, I degenerates to R2, i.e. subjectivity gets objectified. Whether this relation of the sign triad is conceived as R, i.e. the subjective object, or as I, i.e. subjective subjectivity, is a matter of perspective and not fixed in advance.

Since signs may also be linguistic signs, Peirce’s conception can be applied to natural language(s) as well. This will be illustrated in the following section with a specific verbal form in Bulgarian, traditionally called preizkazno vreme (‘reported tense’) or preizkazno naklonenie (‘reported mood’), and sometimes associated with subjectivity (e.g. Siméonov 1982). In order to avoid any theoretical commitment as regards the semantics or categorical status of this form – which is not my concern here – I regard it as an expression of ‘indirectivity’ (cf. also Fielder 1995).

4. Indirectivity in Bulgarian

For Bulgarian, traditional grammars propose a category, usually called ‘reported tense’ or ‘reported mood’ (here: ‘indirectivity’). This form is built with the l-participle and the auxiliary săm (‘to be’) in all persons except for the third. Table 1 illustrates the perfective7 paradigms of the ‘definite past’ (aorist), the ‘indefinite past’ (perfect), and the ‘indirect form’ with the verb da napiša (‘to write’).8 As can be seen, the indefinite past and the indirect form differ formally only in the third person singular and plural, where the latter lacks the auxiliary:

        perfective      l-participle
        aorist          perfect (+ auxiliary)   indirect form (- auxiliary)
1 Sg    napisach        napisal săm             napisal săm
2 Sg    napisa          napisal si              napisal si
3 Sg    napisa          napisal e               napisal ø
        ‘he wrote’      ‘he has written’        ‘he has (supposedly) written’
1 Pl    napisaxme       napisali sme            napisali sme
2 Pl    napisaxte       napisali ste            napisali ste
3 Pl    napisaxa        napisali sa             napisali ø
        ‘they wrote’    ‘they have written’     ‘they have (supposedly) written’

Table 1: Paradigms for the Bulgarian perfective aorist, perfect and the indirect form

The forms without the auxiliary are discussed quite controversially, as regards both their categorical status (tense, mood, separate category, mere discourse phenomenon) and their semantics (expression of evidentiality, non-confirmativity, doubt, indirect witness, etc.). Chvany (1988) accounts for these forms in terms of the feature [±Distance] with respect to the discourse situation (cf. also Fielder 1995). Distancing from the discourse situation allows the speaker “to step behind a mask, to speak as a distant person […] without specifying the reasons” (Chvany 1988: 83), which Chvany regards as the basis for the different interpretations discussed in the literature. The feature of ‘distancing’ is also found in Siméonov (1982), who associates the distanced forms with ‘subjectivity’. Thus, (6a) is an ordinary present-tense utterance, whereas (6b) is accounted for as ‘distanced’:9

(6) a. V planinata vali snjag. [Siméonov 1982: 142]
       in the.mountains snow:PRS.3SG snow
       ‘It’s snowing in the mountains.’
    b. V planinata valjalo snjag.
       in the.mountains snow:l-PTC snow
       ‘It’s snowing in the mountains, they say.’

Siméonov (1982) argues that distanced forms exhibit a greater degree of ‘subjectivity’ than non-distanced forms. He does not, however, associate subjectivity with the relatedness of the respective forms to the speaker or the speaker’s evaluation of the state of affairs expressed. This would be redundant and circular, since, as has been argued above, and as Chvany (1988: 75) emphasises, “any choice reflects, and is pragmatically attributable to, the speaker’s putative evaluation of En [narrated event, B.S.] or DS [discourse situation, B.S.]”. Siméonov (1982: 143) accounts for the difference in subjectivity as follows: with the indicative in (6a), the referent corresponds to objective reality or some fact within an imaginary universe of discourse. The point of reference for the utterance in (6b) is constituted by the messages uttered in (6a) and hence is more distanced from reality than (6a). To put it differently, “le mode de l’énonciation indirecte sert à subjectiver la fonction référentielle du mode indicatif“ (Siméonov 1982: 142). Due to this subjectification, “l’esprit du locuteur” (ibid.) is located within a domain which is negatively determined by factors such as ‘non-experience’, ‘lack of direct evidence’, ‘surprise’, etc. This at the same time establishes a distance toward a domain positively determined by ‘direct information’, ‘experience’, ‘confidence in the source of information’, etc. Because of this distance, Siméonov (1982: 143) associates the indirect form with the domain of ‘before’ and the indicative with the domain of ‘after’. These domains, however, are not temporal notions relating to the time axis, but relate to the notions of ‘not (yet) knowing’ and ‘knowing’, i.e. the difference between possibility and experience, thinking and thought, negativity and positivity. Based on this characterisation of the indirect form as ‘not yet knowing’, Siméonov (1982: 143) derives further possible interpretations, such as those illustrated in (7):

(7) a. Toj imal pari.
       he have:l-PTC money
       ‘He pretends to have money.’
    b. Toj imal pari!
       ‘[It turns out that] he does have money!’
    c. Toj imal pari!
       ‘He, having money?’ (ironic)

The difference between the definite past and the indirect form in terms of distancing and hence subjectivity can be captured by the triadic conception of subjectivity proposed in this paper. Figure 4 illustrates the semiotics of non-distanced forms: the sign representamen R1 denotes a snowing-situation O1 and at the same time yields an interpretant I1 which stands in the same relation to O1. This I1, which can be regarded as the interpretation of vali snjag, degenerates to R2, which in turn yields an interpretant I2 relating to the same object O1, and so forth.

7 Bulgarian has a grammatical perfective-imperfective aspect opposition.
8 These are the notions used by Fielder (1995).
9 Siméonov also postulates ‘doubly distanced’ forms such as V planinata bilo valjalo snjag, with two l-participles (bilo and valjalo). These forms, which are sometimes regarded as ‘dubitative’, are not discussed here.

[Figure 4: Non-distanced forms (vali snjag) – the representamen R1 denotes the situation O1; the interpretation I1 (= R2) and the further interpretant I2 relate to the same O1]

The non-distanced definite past gives rise to a deductive process of interpretation which does not alter the object, i.e. the denoted situation. This is different for the distanced indirect form, which triggers an abductive process of interpretation, as illustrated in Figure 5.

[Figure 5: Distanced forms (valjalo snjag) – R1 relates to the denoted situation O1; the interpretation I1 (= R2) takes the relation R1–O1 as its object O2, which determines I2]

The object of valjalo snjag is not the denoted situation (O1) itself, but the relation between valjalo snjag as representamen R1 and the denoted situation O1. Since this O2 is a relation, it is not determinate – hence the interpretation is not determinate either. This corresponds to the state of ‘not yet knowing’ Siméonov speaks of. And it is exactly this relational, i.e. indeterminate, object, reflected upon in the interpretant I1 and linguistically manifest in the interpretant’s degenerated form R2, that is responsible for the variety of distanced interpretations (in I2) associated with the Bulgarian indirect form. Hence, Siméonov’s characterisation of the indirect form as ‘distanced’ and ‘subjective’ does not presuppose any ego-logical anchoring, but fits the triadic and processual conception proposed here.

5. Conclusion

This paper has argued that traditional linguistic approaches to subjectivity and most philosophical accounts share the problem of an underlying dualism. It has been proposed that a triadic conception of subjectivity – based on Günther’s transclassical logic and Peirce’s triadic sign conception – can avoid the problems associated with dualistic thinking and is also more adequate to account for linguistic data. This has been illustrated with the expression of indirectivity in Bulgarian.

The conception of subjectivity in language emerging from the account proposed here is rather different from traditional approaches: subjectivity is not to be located in meaning components; it does not consist in the relatedness of a specific element’s meaning or interpretation to some subject, however defined. Subjectivity rather arises out of the sign process itself – more specifically, out of linguistic elements triggering a process of reflection, of abductive reasoning. Linguistic phenomena such as parentheticals, delocutive particles in Russian (e.g., mol and deskat’) and the tripartite definite article in Macedonian (cf. Sonnenhauser 2010) can be adduced as further examples of subjective – in the sense defined here – elements in language.

References

Baxtin, Michail M. 2000. Avtor i geroj v ėstetičeskoj dejatel’nosti. In: Sergej Bočarov and Vadim Kožinov (eds.), Baxtin, M. M. Avtor i geroj. K filosofskim osnovam gumanitarnych nauk. Sankt-Peterburg: Azbuka, 9-226.
Benveniste, Émile 1974a. Über die Subjektivität in der Sprache. In: Émile Benveniste, Probleme der allgemeinen Sprachwissenschaft. München: List, 287-297.
Benveniste, Émile 1974b. Die Struktur der Personenbeziehungen im Verb. In: Émile Benveniste, Probleme der allgemeinen Sprachwissenschaft. München: List, 251-264.
Chvany, Catherine V. 1988. Distance, Deixis and Discreteness in Bulgarian and English Verb Morphology. In: Alexander M. Schenker (ed.), American Contributions to the Tenth International Congress of Slavists. Columbus, Ohio: Slavica, 69-90.
Dahl, Östen 2000. Egophoricity in Discourse and Syntax. Functions of Language 7/1, 37-77.
EP I: Houser, Nathan and Christian Kloesel (eds.) 1992. The Essential Peirce. Selected Philosophical Writings. Volume 1 (1867-1893). Bloomington: Indiana University Press.
EP II: Houser, Nathan et al. (eds.) 1998. The Essential Peirce. Selected Philosophical Writings. Volume 2 (1893-1913). Bloomington: Indiana University Press.
Fielder, Grace 1995. Narrative Perspective and the Bulgarian l-Participle. The Slavic and East European Journal 39:5, 585-600.
Finegan, Edward 1995. Subjectivity and Subjectivisation: An Introduction. In: Dieter Stein and Susan Wright (eds.), Subjectivity and Subjectivisation. Cambridge: Cambridge University Press, 1-15.
Frank, Manfred 1991. Selbstbewußtsein und Selbsterkenntnis. Stuttgart: Reclam.
Frank, Manfred (ed.) 1994. Analytische Theorien des Selbstbewusstseins. Essays zur analytischen Philosophie der Subjektivität. Frankfurt/Main: Suhrkamp.
Günther, Gotthard 1978. Idee und Grundriß einer nicht-Aristotelischen Logik. Die Idee und ihre philosophischen Voraussetzungen. Hamburg: Felix Meiner.
Günther, Gotthard 1980. Das Problem einer trans-klassischen Logik. In: Beiträge zur Grundlegung einer operationsfähigen Dialektik, Band 3. Hamburg: Felix Meiner, 73-94.
Günther, Gotthard 2000. Identität, Gegenidentität und Negativsprache. http://vordenker.de/ggphilosophy/gunther_identitaet.pdf (11.2.2009).
Horn, Laurence 2001. A Natural History of Negation. Chicago & London: University of Chicago Press.
Jakobson, Roman 1971. Linguistics and Communication Theory. In: Selected Writings, Vol. II. The Hague: Mouton, 570-579.
Jespersen, Otto 1992. The Philosophy of Grammar. With a New Introduction and Index by James D. McCawley. Chicago: University of Chicago Press.
Langacker, Ronald 1985. Observations and Speculations on Subjectivity. In: John Haiman (ed.), Iconicity in Syntax. Amsterdam & Philadelphia: Benjamins, 109-150.
Langacker, Ronald 1990. Subjectification. Cognitive Linguistics 1:1, 5-38.
Langacker, Ronald 1995. Raising and Transparency. Language 71:1, 1-62.
Leinfellner-Rupertsberger, Elisabeth 1991. Die Negation im monologischen Text: Textzusammenhang und „Foregrounding“. Folia Linguistica 25, 111-142.
Levinson, Stephen 2000. Presumptive Meanings. Cambridge, MA: MIT Press.
Lyons, John 1982. Deixis and Subjectivity: Loquor, ergo sum? In: Robert Jarvella and Wolfgang Klein (eds.), Speech, Place, and Action. Studies in Deixis and Related Topics. Chichester: John Wiley, 101-124.
Lyons, John 1994. Subjecthood and Subjectivity. In: Marina Yaguello (ed.), Subjecthood and Subjectivity. The Status of the Subject in Linguistic Theory. Paris: Ophrys, 9-17.
Metzinger, Thomas 2004. The Subjectivity of Subjective Experience: A Representationalist Analysis of the First-Person Perspective. Networks 3-4, 33-64.
Ort, Nina 2007. Reflexionslogische Semiotik. Zu einer nicht-klassischen und reflexionslogisch erweiterten Semiotik im Ausgang von Gotthard Günther und Charles S. Peirce. Weilerswist: Velbrück.
Siméonov, Yosif 1982. Quelques problèmes de la grammaire contrastive dans l’optique du rapport représentation/expression. Săpostavitelno ezikoznanie 7:1-2, 135-144.
Sonnenhauser, Barbara 2008. On the Linguistic Expression of Subjectivity: Towards a Sign-Centered Approach. Semiotica 172:1-4, 323-337.
Sonnenhauser, Barbara 2010. Die Diskursfunktionen des ‘dreifachen Artikel’ im Makedonischen: Perspektivität und Polyphonie. Die Welt der Slaven 55/2.
Stein, Dieter and Susan Wright (eds.) 1995. Subjectivity and Subjectivisation. Cambridge: Cambridge University Press.
Sturma, Dieter 2008. Grundzüge der Philosophie der Person. In: Alexander Haardt and Nikolaj Plotnikov (eds.), Diskurse der Personalität. Die Begriffsgeschichte der ‘Person’ aus deutscher und russischer Perspektive. München: Fink, 27-45.
Traugott, Elizabeth 1989. On the Rise of Epistemic Meanings in English: An Example of Subjectification in Semantic Change. Language 65:1, 31-55.
Traugott, Elizabeth 1995. Subjectification in Grammaticalisation. In: Dieter Stein and Susan Wright (eds.), Subjectivity and Subjectivisation. Linguistic Perspectives. Cambridge: Cambridge University Press, 31-54.
Weiss, Daniel 2009. Fundamentals of Ego-Linguistics. In: Sandra Birzer et al. (eds.), Proceedings of the Second International Perspectives on Slavistics Conference (Regensburg 2006). München: Sagner, 149-163.

Piotr Stalmaszczyk
University of Łódź
[email protected]

Gottlob Frege, Philosophy of Language, and Predication

Abstract: This paper discusses selected problems in the philosophy of language and linguistics as exemplified by Frege’s approach to predication. It also investigates the relevance of this approach for contemporary linguistics, in particular generative grammar. Though Fregean semantics is not concerned with natural language categories, his line of reasoning (especially the distinction between saturated and unsaturated functions) may be applied to analyzing predication as a grammatical relation. The paper also offers a preliminary classification of predication types into thematic, structural and propositional predication.

0. Introduction

The major aim of this paper is to discuss the possible inspirations from philosophy of language and logic for contemporary linguistics. The paper concentrates on Frege’s views concerning the relation between logic and language, with special focus on Fregean predication, compared with appropriate developments in contemporary generative grammar. I will also try to find out to what extent Frege’s dissatisfaction with ‘ordinary language’, resulting in important formal developments, can be directed at improving linguistic analyses. The claim of the present paper is that some of the ‘Fregean tools’1 may prove useful in analyzing such traditional linguistic notions as predication. I focus on one theoretical approach, namely generative grammar, and only one notion, that of predication. Predication is considered here as both a semantic relation and an appropriate structural configuration enabling this relation to occur.2 Section 1 introduces this notion, sections 2 and 3 are devoted to Frege’s conceptual notation and his ideas on functions and arguments, section 4 provides a preliminary classification of predication types, and section 5 briefly discusses the copula and propositional predication.

1 In the sense of Pietroski (2004: 29-30).
2 For a background discussion on the notion of predication, and its importance for linguistics and philosophy of language, see Rothstein (1985), Lenci (1998) and Stalmaszczyk (1999). This paper develops the ideas presented in Stalmaszczyk (2006).

1. Predication

Research in philosophy of language, logic, and linguistics makes ample use, both explicit and implicit, of the concept of predication. In traditional grammar, predication is the relation between the subject and the predicate. In logic, predication is the attributing of characteristics to a subject to produce a meaningful statement combining verbal and nominal elements. This understanding stems from Aristotelian logic, where the term (though not explicitly used by the philosopher) might be defined as “saying something about something that there is”.3 In more recent logical inquiries, this classical definition is echoed by the ‘thing-property relation’ (e.g. in Reichenbach 1947). Quine (1960) treats predication as the basic combination in which general and singular terms find their contrasting roles; he also considers it to be one of the mechanisms which join occasion sentences. This idea is close to Lorenzen’s (1968) ‘basic statements’ (Grundaussagen), the simplest structures of a language, composed of a subject and a predicate. Strawson (1971) stresses that predication is an assessment for truth-value of the predicate with respect to the topic, and according to Link (1998) it is the basic tool for making judgments about the world. Similarly, Krifka (1998) claims that predication establishes a relation of a specified type between a number of parameters, or semantic arguments.

3 This definition may be inferred from Aristotle’s concept of a proposition, understood as a “statement, with meaning as to the presence of something in a subject, or its absence, in the present, past, or future, according to the division of time” (On Interpretation, 17a23).

For example, sentences with intransitive verbs establish a relation that holds of the subject for some event, and sentences with transitive verbs establish a relation that holds between the subject, the object, and some event. In Davidson’s approach to verb semantics, predication can be specified as a relation between a verb and one of its semantic entailments (Davidson 1967), or as a combinatory relation which makes it possible to join a property and an argument of the appropriate semantic type to form a formula whose truth or falsity is established according to whether the property holds of the entity denoted by the argument or not.

The traditional grammatical approach to predication was continued by, among others, Hockett (1958). On the other hand, Jespersen (1937) abandoned the term altogether, and instead introduced the concept of nexus, a relation joining two ideas. Additionally, some recent cross-linguistic studies point to the necessity of redefining the concept of predication as a sentence-constituting device serving to unite elements of a proposition in order to make it utterable (Sasse 1991).

Early generative grammar, in the Chomskyan tradition, paid very limited attention to the notion; later studies, e.g. Williams (1980) and Rothstein (1985, 1992), viewed predication as a primitive syntactic relation. Whereas Williams argued for an indexing approach and almost equated predication with semantic role assignment, Rothstein claimed that the semantic and syntactic concepts of predication are distinct, and that the relation which holds between predicates and subjects at S-structure can be defined in purely syntactic terms. Higginbotham (1987) understands predication as a formal binary relation on points of phrase markers, and Bowers (1993) postulates the existence of a functional category responsible for implementing the relation. Cinque (1992) observes that in the generative approach to linguistic analysis an abstract predication relation underlies all sentences, even those lacking a genuine semantic predication, and presentational sentences.4

4 For the background discussion on different approaches to predication in traditional grammar, contemporary linguistics and philosophy of language, see Stalmaszczyk (1999).
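The truth-functional view sketched above – a formula is true iff the property holds of the entity denoted by the argument – can be rendered as plain function application. The following is an illustrative sketch only, with invented toy entities and predicates (not examples from the paper):

```python
# Toy illustration of predication as function application.
# The entities and the predicates 'run' and 'kill' are invented examples.
entities = {"Cato", "Brutus"}

def run(x):
    """One-place predicate: holds of its subject."""
    return x == "Brutus"

def kill(x, y):
    """Two-place predicate: relates subject and object."""
    return (x, y) == ("Brutus", "Cato")

# A formula is true iff the property holds of the denoted entity:
assert run("Brutus") and not run("Cato")
assert kill("Brutus", "Cato") and not kill("Cato", "Brutus")
```

Extending each predicate with a further event parameter would correspond to the Davidsonian analysis mentioned above, where intransitive and transitive verbs additionally relate their arguments to some event.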

Before discussing Fregean inspirations and implications for the theory of predication, it is necessary to briefly introduce his achievements important for philosophy of language and linguistics, especially his work on conceptual notation, and on functions and arguments.

2. Conceptual notation

Frege was one of the first modern logicians to see the need for a formalized conceptual notation, and devoted to this problem his first major work, Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (‘Conceptual Notation. A Formula Language of Pure Thought, modeled on Arithmetic’, 1879, henceforth CN). In the Preface he claimed that:5

If it is a task of philosophy to break the power of words over the human mind, by uncovering illusions that through the use of language often almost unavoidably arise concerning the relations of concepts, by freeing thought from the taint of ordinary linguistic means of expression, then my Begriffsschrift, further developed for these purposes, can become a useful tool for philosophers. (CN, 50-51)

It is clear from the above quotation that Frege did not consider spoken language a sufficiently precise instrument for logic. He pointed to the need for creating a language made up of signs, clear of any double meaning; he also claimed that “[t]he main task of the logician is to free himself from language and to simplify it. Logic should be the judge of languages” (Letters to Husserl, 1906, 303). Elsewhere Frege stated that “[i]nstead of following grammar blindly, the logician ought rather to see his task as that of freeing us from the fetters of language” (Logic, 244).6 As observed by Carl (1994: 54), this “struggle against language and grammar” directs the logician’s concern to the issue of the thought expressed by a sentence. Consequently, Frege distinguishes between “judgement” (Urteil) and “judgeable content” (beurteilbarer Inhalt – the possible content of judgement, the thought expressed by a sentence, or the ‘complex of ideas’). In the conceptual notation, the former is represented by “|––A”, whereas the latter by “––A”. Frege explains the convention in the following way:7

The horizontal stroke, from which the symbol |–– is formed, binds the symbols that follow it into a whole, and assertion, which is expressed by means of the vertical stroke at the left end of the horizontal, relates to this whole. The horizontal stroke may be called the content stroke, the vertical the judgement stroke. The content stroke serves generally to relate any symbol to the whole formed by the symbols that follow the stroke. What follows the content stroke must always have a judgeable content. (CN §2, 53)

5 Unless otherwise noted, all page references following the abbreviated (or full) titles are to the English translations of Frege’s texts collected in Beaney, ed. (1997) and listed in the References. The German term Begriffsschrift has also been translated as ‘concept-script’ or ‘ideography’.
6 For a critical discussion of Frege’s views on the ‘defects of language’, see Hanfling (2000: 153-163).

In other words, the content stroke serves to distinguish the formation of a judgeable content from the act of judging. Frege is concerned in CN with splitting up the content of a judgement, which he accomplishes by introducing the distinction between argument and function. In CN §9, a constant component which represents the totality of the relations is called a function, and the symbol which is regarded as replaceable by others and which denotes the object which stands in these relations is the function’s argument. Furthermore, Frege observes that the distinction between function and argument “has nothing to do with the conceptual content, but only with our way of grasping it” (CN §9, 66) and “for us, the different ways in which the same conceptual content can be taken as a function of this or that argument has no importance so long as function and argument are fully determinate” (CN §9, 68). In other words: “the way we distinguish between argument and function is not fixed by the conceptual content of a sentence” (Carl 1994: 62). I return to the consequences of this claim below, here it needs to be added that in 7

7. Frege’s notation has met with criticism from other logicians and philosophers; already Wittgenstein commented that “Frege’s ‘judgement stroke’ ‘|––’ is logically quite meaningless” (TLP 4.442). On the other hand, it is possible to consider this sign as a metalogical symbol. For a recent re-evaluation of Frege’s judgement stroke, see Smith (2000) and Green (2002). See also the chapter on assertion in Dummett (1981).

Frege’s system the grammatical categories of subject and predicate have no significance: “I believe that the replacement of the concepts subject and predicate by argument and function will prove itself in the long run” (CN, Preface, 51) and “[a] distinction between subject and predicate finds no place in my representation of judgement” (CN, §3, 53). This is a major issue which recurs throughout Frege’s writings; e.g. in one of the letters to Husserl he stresses that “[w]e should either tidy up logic by throwing out subject and predicate or else restrict these concepts to the relation of one object’s falling under a concept (subsumption)” (Letters to Husserl, 1906, 303), and in Logic he concludes that “from all this we can see that the grammatical categories of subject and predicate can have no significance for logic” (Logic, 242). Very interestingly, some calls for abandoning the traditional notions can be found in Otto Jespersen’s Analytic Syntax (1937), one of the first modern (linguistic) attempts at formalizing the grammar of natural language. Jespersen’s motivation, however, is very different from Frege’s; in a sense, it reverses the logician’s argumentation:

It would probably be best in linguistics to avoid the word predication altogether on account of its traditional connexion with logical theories. In grammar, we should not of course forget our logic, but steer clear of everything that may hamper our comprehension of language as it is actually used; this is why I have coined the new term nexus with its exclusive application to grammar. (Jespersen 1937: 120)

Jespersen’s bipartite and relational approach to predication shows affinities with traditional grammar and classical semantics; however, it is also an interesting antecedent of the more recent generative inquiries.

3. Functions and arguments

The notion of a function is further investigated and developed in Funktion und Begriff (‘Function and Concept’, 1891, henceforth FC), and Grundgesetze der Arithmetik, Volume I (1893, henceforth GGA), where Frege introduced a strict distinction between functional expressions and functions. These publications show the fundamental shift in Fregean semantics: from the unary semantics of conceptual

content (as developed in Begriffsschrift) to the two-tiered semantics, where the notion of conceptual content splits into sense and reference.8 This article focuses on one issue only – the treatment of grammatical predicates, an issue of rather marginal importance for the main line of Frege’s inquiry, but of considerable interest for linguistic research. In Fregean semantics, indicative sentences are analyzed similarly to analytic expressions and mathematical formulae, and therefore a grammatical predicate (i.e. a subject-predicate expression) is treated as a type of function expression denoting a function, and as such has certain properties common to all functions. The function splits into two parts: the sign of the argument and the expression of the function. The latter contains an empty place; it is incomplete or unsaturated: “the argument does not belong with a function, but goes together with the function to make up a complete whole; for a function by itself must be called incomplete, in need of supplementation, or unsaturated” (FC, 133). Elsewhere, Frege explains that “The expression of a function is in need of completion, unsaturated” (GGA, 211). Saturation is achieved through insertion of an argument into the empty place. The argument “only serves to complete the function that in itself is unsaturated” (GGA, 212). Importantly, the same requirements hold for grammatical predicates; consider the following examples:

(1) Caesar conquered Gaul.
(2) Berlin is a capital city.

In (1) the predicate conquered Gaul is incomplete and takes a name (argument) – Caesar – to saturate it; similarly, in (2) the predicate is a capital city requires a completing argument – Berlin. The argument (object-name) is complete in itself. The expression … conquered Gaul is a functional expression, which designates a concept, i.e. conqueror of Gaul. Only after saturating the functional expression with a “proper

8. More precisely, conceptual content splits into three different notions: sense, reference and extension; this tripartite distinction, however, applies only to predicates, whereas in names and sentences Frege identifies reference with extension, cf. the discussion in Penco (2003).

name, or an expression that replaces a proper name, does a complete sense appear” (FC, 139). In Fregean logic, “concepts are functions”,9 and therefore they take arguments and have truth-values, where the value of a function for an argument is “the result of completing the function with the argument” (FC, 134). The following examples illustrate this issue:

(3) a. Caesar conquered Gaul.  (conqueror of Gaul = Caesar)
    b. Frege conquered Gaul.   (conqueror of Gaul = Frege)
    c. 2 conquered Gaul.       (conqueror of Gaul = 2)
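Frege’s talk of saturation maps naturally onto ordinary function application, and this can be made concrete in a short sketch (an illustration only, not part of Frege’s or this paper’s formal apparatus; the sets GAUL_CONQUERORS and CAPITALS are invented stand-ins for the facts): a concept is modelled as a function from objects to truth-values, and a doubly unsaturated relation as a curried function, each application removing one empty place.

```python
# A Fregean concept modelled as a function from objects to truth-values.
# Saturation = ordinary function application.

GAUL_CONQUERORS = {"Caesar"}  # invented stand-in for the historical facts

def conquered_gaul(x):
    """One-place concept: '... conquered Gaul'."""
    return x in GAUL_CONQUERORS

# Any object whatsoever may serve as argument:
print(conquered_gaul("Caesar"))  # True
print(conquered_gaul("Frege"))   # False
print(conquered_gaul(2))         # False

# A doubly unsaturated relation, completed in two steps (curried):
def is_capital_of(y):
    """Two-place concept: '... is the capital of ...'."""
    CAPITALS = {("Berlin", "the German Empire")}  # invented stand-in
    def with_second_argument(x):
        # one completion yields a one-place concept, still unsaturated
        return (x, y) in CAPITALS
    return with_second_argument

print(is_capital_of("the German Empire")("Berlin"))  # True
```

Applying is_capital_of to a single argument yields a one-place concept, mirroring Frege’s remark (GGA, 214) that one completion of a function with two arguments “effects” a function with one argument.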

As observed by Thiel (1968: 46), all examples in (3) are senseful since Frege’s expansion of the concept of function allows any object to serve as argument, but only for (3a) is the truth-function true. A grammatical predicate may be polyadic and take more than one argument, as in (4):10

(4) Berlin is the capital of the German Empire.

Here, the expression is the capital of is a two-place predicate, or relation (i.e. a function of two arguments), requiring two arguments (names) for saturation. According to Frege, functions with two arguments “are doubly in need of completion in that a function with one argument is effected. Only by a further completion do we reach an object, and this is then called the value of the function for the two arguments” (GGA, 214). Frege was primarily concerned with deriving arithmetic from logic; however, the idea of function saturation applies also to grammatical predicates. Adopting the convention used by Higginbotham (1990), the open places in the predicate are marked with numerals:11


9. Cf. the following definition: “a concept is a function whose value is always a truth-value” (FC, 139). A concept, just like a function, is unsaturated “in that it requires something to fall under it; hence it cannot exist on its own” (Letter to Marty, 1882, 81). On the analogy between concepts and functions, and concepts and relations, see Dummett (1981: 255-257).
10. The same is true for sentence (1), with conquer being a dyadic predicate.
11. This convention of representing argument places with numerals was already used by Quine and Davidson; see also Higginbotham (1985).

(5) Berlin [is the capital of (1,2)] the German Empire.

In the following examples, from Higginbotham (1990), the expressions headed by one of the major syntactic categories of N(oun), V(erb), A(djective), and P(reposition) are understood as n-place predicates. The open places associated with the predicates are marked with numerals, their appropriate arguments are indexed, and the underlined constituents belong to the categories noted:

(6) a. Mary considers them1 [N fools (1)]
    b. Mary1 [V persuaded (1,2,3)] me2 of something3
    c. John1 left the room [AP proud of Mary (1)]
    d. John1 is [P in (1,2)] the garden2
    e. The1 [N’ conviction that the earth is in danger (1)] is widespread.

A sentence can contain no open positions, since no true or false statement can be made with an open sentence. As noted by Higginbotham (1990), the open positions in the various words and phrases that make up a sentence must be eliminated, or discharged, as defined by the rules of compositional semantics.

4. Types of predication

Frege has been commonly credited with proposing a bipartite analysis of expressions into a functor and its argument(s). His approach to predication, understood here as a primitive logical relation, may thus be termed functional: a function has to be saturated by an argument.12 The Fregean approach contrasts with the concatenative approach to predication, rooted in the Aristotelian tradition. Importantly, both these


12. Fregean predication should be analyzed in connection with Frege’s theory of truth and semantic relations. However, this paper takes a simplified approach and focuses only on aspects relevant for a linguistic theory of predication. The mutual relations between the notions of truth, existence, identity and predication are discussed in Klement (2002) and Mendelsohn (2005).

approaches find application in modern theories of grammar.13 In Aristotelian semantics, predication is the relation constituted by two elements, the subject with the predicate, with tense specification being a third constitutive (concatenating) element, cf. (7):14

(7) Aristotelian predication: Proposition ⇒ Subject∩Tense∩Predicate

In Fregean semantics the application of a function to an argument is not a mere juxtaposition of the two elements. The function combines with the argument into a self-contained whole due to the fact that it contains a logical gap (the place-holder, or argument-place) which needs filling. As concluded by Frege in Über Begriff und Gegenstand (‘On Concept and Object’, 1892, henceforth CO): “not all the parts of a thought can be complete; at least one must be unsaturated or predicative; otherwise they would not hold together” (CO, 193). Tichý (1988: 27) comments that “the function latches on its argument, sticking to it as if through a suction effect”. Therefore, in Fregean semantics, predication is a relation in which an argument saturates an open position in the function, cf. the simplified formula (8):15




13. For a re-analysis of the Fregean approach in contemporary generative grammar, see Rothstein (1985), Eide and Afarli (1999). A formal approach to Fregean semantics is presented by Chierchia (1985) and Bowers (1993). Some implications of Fregean semantics for categorial grammar are discussed by Wiggins (1994). For a discussion of functionality and predication, see Klement (2002: 28-32).
14. Cf. the Aristotelian definition of simple proposition quoted in note 3, above. This is not to claim, however, that Aristotelian predication is limited to structural configurations; on the contrary, it has deep ontological grounding. As observed by Moravcsik (1967: 82), Aristotle “takes predication to be showing the ontological dependence of the entity denoted by the predicate on the entity denoted by the subject”.
15. It needs to be stressed at this point that Fregean semantics is not concerned with natural language predicates. His line of reasoning, however, may be applied to analyzing predication as a grammatical relation.

(8) Fregean predication: Proposition ⇒ [Function (1, …, n)]∩Argument(1, …, n)

Formula (8) aims at capturing Tichý’s observation that “the function latches on its argument”; furthermore, the “suction effect” is attributed to the presence of the open position(s) – ‘(1, …, n)’. I use the term “Proposition” in (8) as a generalized term for Frege’s “act of judgement” and “assertion”.16 As observed by Stevens:17

Rather than analysing the proposition into a series of elements (subject, predicate, copula), Frege construes the predicative part of the proposition as a function which is essentially incomplete or ‘unsaturated’. (Stevens 2003: 224)

The logical approach to functions might be insightful for the grammatical analysis of predicates. In this paper, I am predominantly concerned with the nature of the relation holding between two types of “linguistic devices”: those which have an identifying function (cf. Frege’s arguments), and those which have a predicating function (Frege’s functions). I firmly believe that the same line of argumentation applies to the structural organization of syntactic arguments and predicates. Sentence (4), repeated below, was analyzed as involving a two-place predicate (9b):

(9) a. Berlin is the capital of the German Empire.           (= (4))
    b. Berlin [is the capital of (1,2)] the German Empire.   (= (5))

16. Cf. the distinction introduced by Frege in Thought (329): (1) The grasp of a thought – thinking; (2) The acknowledgement of the truth of a thought – the act of judgement; (3) The manifestation of this judgement – assertion.
17. In other words, there is a need for a predicative constituent in a proposition to bind the propositional content into a ‘fully-fledged unit’, cf. Stevens (2003: 230-231).

At the same time, however, in this sentence the entire expression is the capital of the German Empire is a one-place predicate saturated by the sentential subject:18

(10) Berlin [is the capital of the German Empire (1)]

Comparison of (9b) with (10) points to the necessity of distinguishing between two types of predication: polyadic (i.e. ‘grammatical’ in Frege’s terminology) and monadic, e.g.:

(11) a. Berlinα [is the capital of] the German Empireβ  (polyadic predication)
     b. Berlinα [is the capital of the German Empire]  (monadic predication)

I will assume here that polyadic predication is a relation which occurs between a predicate and its argument(s), and is a consequence of the predicate’s semantic properties. In modern generative grammar (e.g. Chomsky 1981, 1982), this type of predication is associated with semantic interpretation, and as such falls under the scope of Theta Theory, one of the modules in the Government and Binding model of generative grammar. For this reason, it may be referred to as thematic predication. The appropriate context, together with a brief description, is provided below:

(12) Thematic predication:
     i. Argument1 [Predicate (1,2,…)] Argument2 …
     ii. Thematic predication deals with semantic interpretation of arguments and thematic role assignment.

18. Frege himself introduces this two-step approach, cf. the following comment, from his Notes for Ludwig Darmstaedter: “The sentence ‘The capital of Sweden is situated at the mouth of Lake Mälar’ can be split up into a part in need of completion and the saturated part ‘the capital of Sweden’. This can further be split up into the part ‘the capital of’, which stands in need of completion, and the saturated part ‘Sweden’” (Notes for Ludwig Darmstaedter, 364).
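The numbered-open-position notation behind (12) can be given a toy computational reading (a hypothetical sketch, not the Government and Binding formalism; the Predicate class and the theta-role labels are invented for the example): a predicate carries numbered open positions, saturation fills them one by one, and a well-formed sentence is one in which no position remains open.

```python
# Toy model of a predicate with numbered open positions, cf. (12i).
# The class and the theta-role labels are invented for illustration.

class Predicate:
    def __init__(self, form, roles):
        self.form = form
        self.roles = roles      # open positions 1..n, each with a theta role
        self.filled = {}        # positions saturated so far

    def saturate(self, position, argument):
        self.filled[position] = argument
        return self

    def open_positions(self):
        """Positions not yet discharged by an argument."""
        return [i for i in self.roles if i not in self.filled]

# 'Mary persuaded me of something': a three-place predicate.
persuade = Predicate("persuaded", {1: "Agent", 2: "Patient", 3: "Theme"})
persuade.saturate(1, "Mary").saturate(2, "me")

print(persuade.open_positions())   # [3] -- one position still open
persuade.saturate(3, "of something")
print(persuade.open_positions())   # []  -- fully saturated
```

An empty result from open_positions corresponds to the requirement noted earlier that a sentence can contain no undischarged open positions.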

Monadic predication, on the other hand, involves a one-place predicate and a referring term functioning as its unique argument (e.g. structural subject), and can thus be termed structural predication:19

(13) Structural predication:
     i. Argument [Predicate (1)]
     ii. Structural predication deals with configurational relations between nodes described/defined on phrase markers.

In the Government and Binding model of generative grammar, the obligatory presence of the closing argument (subject) in a subject-predicate structure follows from the second clause of the Extended Projection Principle (EPP).20 Later, Chomsky (1986) suggested that this part of the EPP might be derived from the theory of predication, as developed in Williams (1980) and Rothstein (1985), and observed that the EPP “is a particular way of expressing the general principle that all functions must be saturated” (Chomsky 1986: 116). Chomsky explicitly referred to Frege, and observed that a maximal projection (e.g. VP or AP) may be regarded as a syntactic function that is “unsaturated if not provided with a subject of which it is predicated” (Chomsky 1986: 116). There is one more crucial property of structural predication, not captured by the above descriptions. As observed already by Aristotle, predication is constituted by three elements: the subject, the predicate, and the tense element, cf. (7) above. Also Frege, though working in a completely different tradition and acknowledged above for the functional approach to predication, referred to the copula as the “verbal sign of predication” (see below), possibly implying that predication is a


19. I understand monadic predication similarly to Rothstein (1992: 153), who defines it as the relation holding between the subject of a sentence and the “remainder”.
20. The Projection Principle formulated in Chomsky (1981: 29) states that the subcategorization properties of each lexical item must be represented categorially at each syntactic level. Chomsky (1982: 10) extends the Principle by adding a second requirement: that clauses have subjects. More recently, Afarli and Eide (2000) propose that the EPP is the effect of a proposition-forming operation of natural language, induced by a predication operator, and Afarli (2005) shows the effects of semantic saturation.

relation involving three, rather than two, elements. It is also possible to regard the ‘content stroke’, introduced in CN §2, as a third element involved in predication.21 In accordance with the above observations, it is assumed here that structural predication involves two terms – the subject argument and the predicate function – and additionally an operator of predication, which means that the schema and definition (13) are inadequate, and require the following reformulation:22

(14) Structural predication:
     i. Argument [Operator [Predicate (1)]]
     ii. Structural predication deals with configurational relations between nodes described/defined on phrase markers triggered by the operator of predication.

In other words, I make here two basic claims about structural predication:23

(15) I. Structural predication is monadic, in the sense that the predicate takes only one argument (predication subject);
     II. Structural predication is tripartite, in the sense that an appropriate operator is required to trigger predication.

5. The copula and propositional predication

In On Concept and Object Frege invokes the concept of unsaturatedness, or incompleteness, in order to elucidate the distinction between singular




21. A similar observation has been made by Afarli and Eide (2000: 47, n. 5). See also Smith (2000) on the meaning and function of the judgement stroke in Frege’s logic.
22. In the semantics developed by Chierchia (1985) and Bowers (1993), the predication operator is a function that takes the property element to form a propositional function, which in turn takes an entity to form a proposition; for further details, see Eide and Afarli (1999) and Afarli (2005).
23. For a discussion of these claims, see Stalmaszczyk (1999).

terms and general terms.24 In Fregean semantics, a singular term is a name of an object or a definite description. It is a saturated expression that can be used in the subject position of a subject-predicate sentence. A general term is a name of a concept. It is an unsaturated expression, realized by a common noun, adjective or verb, which may be used in the predicate place of a subject-predicate sentence, e.g.:

(16) a. Socrates is a man.
     b. Socrates is wise.
     c. The Greek philosopher sleeps.

The distinction between singular and general terms has interesting consequences for the status of the copula. In sentences (16a) and (16b) the singular term is linked to the general term by the copula is. Frege very carefully distinguishes here between two different uses of the word is.25 Consider the following sentences, based on Frege’s examples:

(17) a. He is Alexander the Great.
     b. It is the number four.
     c. It is the planet Venus.
     d. It is green.
     e. It is a mammal.

In the first three instances, is is the “is of identity”, used “like the ‘equals’ sign in arithmetic, to express an equation” (CO, 183). In the last two examples, it serves as a copula, “as a mere verbal sign of predication” (CO, 182). In other words, “something falls under a concept, and the grammatical predicate stands for this concept” (CO, 183). One more example helps to clarify the distinction:

24. See the discussion in Heintz (1973).
25. Obviously, this distinction has an ancient tradition, cf. Aristotelian essential vs. accidental predication; for a detailed discussion, see Lewis (1991). Frege’s originality, however, lies in showing the different underlying patterns of names and predicates.

(18) a. The Morning Star is Venus.
     b. The Morning Star is a planet.

In (18a) we have two proper names, “the Morning Star” and “Venus”, for the same object. In this sentence, the word is forms an essential part of the predicate; it carries full predicative force. The predicate is formed by is together with the name. This is the relation of equation, which involves two arguments. In (18b), on the other hand, we have one proper name, “the Morning Star”, and one predicate, the concept-word “planet”. Here the word is is just the copula, the “verbal sign of predication”. In this instance, we have the relation of an object’s falling under a concept (i.e. subsumption). Note that Frege distinguishes here between two different relations: that of one object falling under a concept (subsumption), and that of one concept being subordinated to another (subordination). Only in the first case can we talk about the subject-predicate relation. In contrast to equation, the relation of subsumption is irreversible. We may add labeled brackets to (18), in order to explicitly illustrate the distinction made by Frege:

(19) a. [Name The Morning Star] [Predicate [is] [Name Venus] ]
     b. [Name The Morning Star] [Copula is] [Predicate a planet]

The distinction between singular terms and general terms features explicitly in yet another definition of predication. According to Quine (1960), predication is the basic combination in which general and singular terms find their contrasting roles:

Predication joins a general term and a singular term to form a sentence that is true or false according as the general term is true or false of the object, if any, to which the singular term refers. (Quine 1960: 96)

Since in Quine’s approach to predication the focus is on proposition formation, this instance of the relation may be termed propositional predication.
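Quine’s formulation lends itself to a small computational paraphrase (purely illustrative; the REFERENCE and EXTENSION tables are invented toy data): the sentence is true or false according as the referent of the singular term falls, or fails to fall, within the extension of the general term.

```python
# Propositional predication in Quine's sense: the truth of "S is a G"
# is settled by whether the general term G is true of the referent of S.
# REFERENCE and EXTENSION are invented toy tables for the example.

REFERENCE = {"the Morning Star": "Venus", "Socrates": "Socrates"}
EXTENSION = {"planet": {"Venus", "Mars"}, "man": {"Socrates"}}

def predication(singular_term, general_term):
    referent = REFERENCE.get(singular_term)   # "the object, if any"
    if referent is None:
        return None   # no referent: no truth-value assigned here
    return referent in EXTENSION[general_term]

print(predication("the Morning Star", "planet"))  # True
print(predication("Socrates", "planet"))          # False
```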

6. Conclusion

The aim of this paper was to demonstrate that Frege’s approach to functions has consequences for the treatment of predicates and predication in the theory of grammar. I have tentatively postulated the existence of thematic, structural and propositional predication, every instance of the relation being linked to Frege’s logical and philosophical inquiries.

References

Afarli, Tor and Kristin Eide 2000. Subject Requirement and Predication. Nordic Journal of Linguistics 23, 27-48.
Afarli, Tor 2005. Predication in Syntax. Research in Language 3, 67-89.
Aristotle 1941. De Interpretatione (On Interpretation). Translated by E.M. Edghill. In: The Basic Works of Aristotle. Edited by Richard McKeon. New York: Random House, 38-61.
Beaney, Michael 1997. Introduction. In: M. Beaney (ed.), 1-47.
Beaney, Michael (ed.) 1997. The Frege Reader. Oxford: Blackwell Publishers.
Bowers, John 1993. The Syntax of Predication. Linguistic Inquiry 24, 591-656.
Bright, William (ed.) 1992. International Encyclopedia of Linguistics. New York and Oxford: Oxford University Press.
Burge, Tyler 2005. Truth, Thought, Reason. Essays on Frege. Oxford: Clarendon Press.
Carl, Wolfgang 1994. Frege’s Theory of Sense and Reference. Its Origins and Scope. Cambridge: Cambridge University Press.
Chierchia, Gennaro 1985. Formal Semantics and the Grammar of Predication. Linguistic Inquiry 16, 417-443.
Chomsky, Noam 1981. Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam 1982. Some Concepts and Consequences of the Theory of Government and Binding. Cambridge, Mass.: MIT Press.
Chomsky, Noam 1986. Knowledge of Language. Its Nature, Origin, and Use. New York: Praeger.
Cinque, Guglielmo 1992. Predication. In: William Bright (ed.), vol. 3, 268-269.
Davidson, Donald 1967. The Logical Form of Action Sentences. [Reprinted in: Donald Davidson 1980. Essays on Actions and Events. Oxford: Clarendon Press, 105-122.]
Dummett, Michael 1981. Frege: Philosophy of Language (Second edition). London: Duckworth.
Eide, Kristin and Tor Afarli 1999. The Syntactic Disguises of the Predication Operator. Studia Linguistica 53, 155-181.
Frege, Gottlob [1879] 1997. Begriffsschrift (translated by M. Beaney). In: M. Beaney (ed.), 47-78.
Frege, Gottlob [1882] 1997. Letter to Marty, 29.8.1882 (translated by H. Kaal). In: M. Beaney (ed.), 79-83.
Frege, Gottlob [1891] 1997. Function and Concept (translated by P. Geach). In: M. Beaney (ed.), 130-148.
Frege, Gottlob [1892] 1997. On Concept and Object (translated by P. Geach). In: M. Beaney (ed.), 181-193.
Frege, Gottlob [1893] 1997. Grundgesetze der Arithmetik, Vol. I (translated by M. Beaney). In: M. Beaney (ed.), 194-223.
Frege, Gottlob [1897] 1997. Logic (translated by P. Long and R. White). In: M. Beaney (ed.), 227-250.
Frege, Gottlob [1906] 1997. Letters to Husserl, 1906 (translated by H. Kaal). In: M. Beaney (ed.), 301-307.
Frege, Gottlob [1918] 1997. Thought (translated by P. Geach and R. H. Stoothoff). In: M. Beaney (ed.), 325-345.
Frege, Gottlob [1919] 1997. Notes for Ludwig Darmstaedter (translated by P. Long and R. White). In: M. Beaney (ed.), 362-367.
Green, Mitchell S. 2002. The Inferential Significance of Frege’s Assertion Sign. Facta Philosophica 4, 201-229.
Hanfling, Oswald 2000. Philosophy and Ordinary Language. The Bent and Genius of our Tongue. London and New York: Routledge.
Heintz, John 1973. Subjects and Predicables. A Study in Subject-Predicate Asymmetry. The Hague and Paris: Mouton.
Higginbotham, James 1985. On Semantics. Linguistic Inquiry 16, 547-593.
Higginbotham, James 1987. Indefiniteness and Predication. In: Eric Reuland and Alice ter Meulen (eds.), 43-70.
Higginbotham, James 1990. Frege and Grammar. [Unpublished MS, Oxford: Oxford University.]
Hockett, Charles F. 1958. A Course in Modern Linguistics. New York: The Macmillan Company.
Jespersen, Otto 1937. Analytic Syntax. Helsingor. [Reprinted New York: Holt, Rinehart and Winston, 1969.]
Klement, Kevin C. 2002. Frege and the Logic of Sense and Reference. London and New York: Routledge.
Krifka, Manfred 1998. The Origins of Telicity. In: Susan Rothstein (ed.), 197-235.
Lenci, Alessandro 1998. The Structure of Predication. Synthese 114, 233-276.
Lewis, Frank A. 1991. Substance and Predication in Aristotle. Cambridge: Cambridge University Press.
Link, Godehard 1998. Algebraic Semantics in Language and Philosophy. CSLI Lecture Notes No. 74. Stanford: Center for the Study of Language and Information.
Lorenzen, Paul 1968. Methodisches Denken. Frankfurt am Main: Suhrkamp Verlag.
Mendelsohn, Richard 2005. The Philosophy of Gottlob Frege. Cambridge: Cambridge University Press.
Moravcsik, Julius M. E. 1967. Aristotle on Predication. Philosophical Review 76, 80-96.
Penco, Carlo 2003. Two Theses, Two Senses. History and Philosophy of Logic 24, 87-109.
Pietroski, Paul M. 2004. Events and Semantic Architecture. Oxford: Oxford University Press.
Quine, William V. O. 1960. Word and Object. Cambridge, Mass.: MIT Press.
Reichenbach, Hans 1947. Elements of Symbolic Logic. Berkeley: University of California Press. [Reprinted 1966, New York: The Free Press.]
Reuland, Eric and Alice ter Meulen (eds.) 1987. The Representation of (In)definiteness. Cambridge, Mass.: MIT Press.
Rothstein, Susan D. 1985. The Syntactic Forms of Predication. Bloomington: Indiana University Linguistics Club.
Rothstein, Susan D. 1992. Predication and the Structure of Clauses. Belgian Journal of Linguistics 7, 153-169.
Rothstein, Susan D. (ed.) 1998. Events and Grammar. Dordrecht: Kluwer Academic Publishers.
Sasse, Hans-Jürgen 1991. Predication and Sentence Constitution in Universal Perspective. In: Dietmar Zaefferer (ed.), 75-95.
Smith, Nicholas 2000. Frege’s Judgement Stroke. Australasian Journal of Philosophy 78, 153-178.
Stalmaszczyk, Piotr 1999. Structural Predication in Generative Grammar. Łódź: Wydawnictwo Uniwersytetu Łódzkiego.
Stalmaszczyk, Piotr 2006. Fregean Predication: Between Logic and Linguistics. Research in Language 4, 77-90.
Stevens, Graham 2003. The Truth and Nothing but the Truth, yet Never the Whole Truth: Frege, Russell and the analysis of unities. History and Philosophy of Logic 24, 221-240.
Strawson, Peter F. 1971. Logico-Linguistic Papers. London: Methuen.
Thiel, Christian 1968. Sense and Reference in Frege’s Logic. Dordrecht: D. Reidel Publishing Company.
Tichý, Pavel 1988. The Foundations of Frege’s Logic. Berlin and New York: Walter de Gruyter.
Wiggins, David 1984. The Sense and Reference of Predicates: A Running Repair to Frege’s Doctrine and a Plea for the Copula. The Philosophical Quarterly 34, 311-328.
Williams, Edwin 1980. Predication. Linguistic Inquiry 11, 203-238.
Wittgenstein, Ludwig 1995. Tractatus Logico-Philosophicus (translated by C. K. Ogden). London and New York: Routledge.
Zaefferer, Dietmar (ed.) 1991. Semantic Universals and Universal Semantics. Berlin and New York: Foris Publications.

William J. Sullivan
University of Wrocław
Maria Curie-Skłodowska University, Lublin
[email protected]

Order

Abstract: Partly because of their interests, linguistic theories during the past three centuries have generally failed to notice that the linear order in linguistic output (texts) is a problem that requires explanation. Now that we know how non-linear the neurocognitive store is, the problem is even more pressing. A black box analysis of a problem of anataxis in Russian shows that a relational network approach solves the problem within the linguistic system itself.

0. Introduction

The PhiLang2009 conference had an interesting dual focus: the philosophy underlying various approaches to linguistic analysis and the question of unresolved problems. The dual focus is most appropriate if we consider a particular unsolved problem, the source of linear order in speech and writing, and the different approaches to language description that have been taken over the last three centuries. It is not at once obvious why order should be a problem. The evidence for the human linguistic system is schizophrenic. What is produced by our mouths is a linear chain of syllables. What is produced by our pens is a linear chain of letters and words. Yet if we assume that the oral and written texts we produce are attempts to communicate something we (think we) know, it is clear that the linear order must have a source, simply because our cognitive store is not linear. That is, all parts of it exist simultaneously, and many of its parts are not linguistic but sensory. The logical inference here is that linear order is provided by the linguistic system during the process of encoding a particular message into a text.1 But during the 90’s a colleague who

The only alternative is to assume a linearizer distinct from the linguistic system. That is an unprovable additional complication, so I ignore it.

was then a recent MIT graduate told me that it was still an unsolved problem for generative linguistics, “even in the minimalist era,” and a workshop on linearity is planned for 2010 (cf. Kremers 2009). It is my contention that the two problems, i.e. the philosophy of linguistic theories and the unknown source of linear order, are inextricably intertwined. Only by unraveling them can we solve the problem. In fact, the path has already been outlined, though few have recognized it. I begin, therefore, with a sketch of linguistic philosophies over the past three centuries to show why the problem of linear order has not generally been recognized as a problem, let alone solved. Then, using a problem of anataxis in Russian, I show how a pure relational network (RN) approach imposes linear order on unordered semantic input.

1. Linguistics as a parasite discipline

In every era there are fields of study that lead the way for other disciplines. They steer the course of inquiry as pilots for other fields. Those other fields of study follow their lead, becoming (in general harmlessly) parasitic on some pilot discipline (Koerner 1979: 525-26). Whatever the position of linguistics today (cf. Ducrot 1966, quoted in Koerner 1979: 525), it began its climb to its contemporary status in the 18th century as a parasite discipline seeking status as a science. To begin with, linguistics as then understood had to distinguish itself and its approach to the study of language from more traditional associations with grammar based on Latin, etymology and philology, dictionary compilation, logic, and other related fields of study. It has, since the 18th century, looked to other disciplines as a source of philosophical underpinnings in an attempt to transform itself from an art (philology/etymology) into a science. The pilot discipline of choice in the 18th century was classificational biology as developed by Linnaeus.
Linnaeus classified animals into larger and larger categories and defined the relations between these classes on the basis of physical similarity. After William Jones published his description of Sanskrit, he and others noticed that it would be possible to classify languages into larger and larger families of more and less closely related languages. Schlegel developed the procedure, paralleling Cuvier’s comparative anatomy. His work was based solely on “the investigation of the (grammatical) structure of language” and the comparison of “morphological entities” (Koerner 1979: 527). Linguistic classification, like biological classification, was essentially a static, ahistorical field until the 1820’s.

Biology in the 19th century saw a development of interest from relations between classes of animals to the way these animals had evolved (cf. Wallace and Darwin). Classificational biology slowly developed into evolutionary biology. There was a parallel development in linguistics in the thinking of the Grimms and Verner all the way down to the Neo-Grammarians. Here, too, classification grew into evolution. Saussure’s early work was appreciated more for its historical merit than for its descriptive implications.

In America during the early 20th century the focus shifted from Indo-European languages to Amerindian tongues (Boas 1911/1963). Linguistic research became distinctly anthropological in nature. Bloomfield (1933), originally with anthropological training, modified his approach with input from Watsonian behaviorism. So by the mid-20th century linguistics had shifted from biology to social science. Seeking greater structural precision, Chomsky (1957 and later) redefined linguistics as syntax on a mathematical basis. This is almost where we stand today,2 with one exception. That exception is pure relational network linguistics, to which I return in section 3.

Up to Chomsky, no one seems to have noticed that linear order might be a problem. As the biological classifiers focused on the number and shape of body parts, a study they called morphology, the linguistic classifiers focused on vocabulary and the precise differences in the shape of words, leading to linguistic morphology and phonology.
In both disciplines, small changes in form over time led to theories of evolution, resulting in several sound laws for Indo-European linguistics.3 The linear order of morphemes in words showed closely parallel patterns in Indo-European languages and evidently caused little comment. It was simply a fact, like the shape of a dog’s legs. The anthropological approach in the Americas did notice substantial differences in morphological order and function, but anthropologists generally focus on the physical and are in the habit of finding cultural differences. Different linearizations were welcomed and recorded. Watsonian behaviorism is also a materialist discipline (cf. the dispute between Skinner and Chomsky), and its reaction to different linearizations was the same: they are just facts.

Chomsky recognized that a difference in linearization was a problem for two reasons. First, a major class of transformations, e.g. TPASS and all other movement transformations, involve a change in linear order. Second, his firm belief in the universals of language required an explanation for the differences between, say, SVO and SOV languages. His solution to the problem was typical: a decades-long search for universal underlying word order, a search that continually failed.

During the Chomskyan era there have been many minor breaks with his theoretical approach, all of which he has called notational variants, and one or two major breaks. I define a minor break as one that accepts his primary postulates (Universal Grammar, sentence-based syntax) while disagreeing with secondary assumptions (e.g. various barriers or the adoption of particular nodes). Breaks began appearing almost from the start (e.g. the lexicalist vs. transformationalist approaches to English nominalizations in the 60’s). Some have been fairly tenacious, none more so than G. Lakoff’s insistence that the base of linguistics is semantics, not syntax.4 Langacker (e.g. 1990), like Lakoff in his semantic base to linguistics, has actively incorporated the findings of cognitive psychology into his approach.

2 Chomsky has seriously refined and extended his model many times, each time changing some basic postulates and invalidating much if not all previous generative work in the field, but his approach remains mathematical. I slight the various cognitive schools, which have done good work, because it seems to me that they are continuing to evolve too quickly to characterize at the present moment and because they do not solve the problem of linearization in a general way.

3 In fact, Darwin’s original theory suits languages better than it does biology.
In some cases he seems to translate semantic hierarchy into syntactic order directly. Still, this by itself has not yet provided a general solution to the source of linear order and is unlikely to do so.

The other major break with Chomsky comes with Yngve (1996), though Yngve was never a generative linguist. Yngve’s approach, called hard science or human linguistics,5 is physics based. But Yngve does not seem to be eager to incorporate the findings of cognitive psychology or neurology into his approach, and the linear order that appears in speech and writing is simply a datum for him. It is unlikely that Yngve’s current approach will provide a solution to the problem, either. A reasonable and realistic answer can be found via an RN approach, to which I now turn.

4 Note, however, his continued focus on the sentence.

5 Actually not as contradictory as it seems.

2. Relational Network Theory

Relational network theory, though far from new, is relatively unknown outside of Denmark and a couple of academic islands in Canada and the United States. But it is the only approach that fits all the known facts, recognizes the problem, and provides a solution. Hjelmslev (1943/1953) defined a linguistic system as a totality that consists of relationships, not things. Each thing in such a totality is a relationship defined by the relationships it contracts. Hjelmslev defined these relationships algebraically. I define them logically. But work in neurology over the past generation shows a great deal about how it may be effected in reality in the brain (cf. Lamb 1999 and Paradis 2004 for detail). The model I present herein is logically based but consistent with the neurological findings, and when combined with them, predicts the findings of cognitive psychology.

The overall structure of a linguistic system for which we have evidence in Russian and some other languages is given in Figure 1. There are five strata (hence the name neurocognitive stratificational theory). Each stratum has a tactic pattern wherein the basic elements of that stratum are defined and related to other elements of the same stratum. Each stratum also has realizational relations to the two adjacent strata. The cognitive store surrounds the linguistic system (not shown to scale). During an act of communication, most input from cognition is probably to the semology when a message is encoded linguistically.

But it is likely that there are direct relations between cognition and any stratum.6

Figure 1. Outline of the linguistic system, relative to the cognitive store. (The diagram shows five strata, from top to bottom: Semology, Syntax, Morphology, Phonology, and Hypophonology, leading down to SOUND; the Cognitive Store surrounds the linguistic system.)

To get some idea of how linguistic elements are defined by their relationships, consider Figure 2. The little box in the center of the diagram, usually called a diamond in the literature, represents the stress phoneme in Russian. It is related downward to the acoustic characteristics of stress and upward to those morphemes that are marked for stress. Its domain is the syllable, to which it is related by the line to the left, and its range is the phonological word, where it occurs once, optionally preceded and succeeded by unstressed syllables. This approach works analogously on each stratum, e.g. with cases in the syntax.

6 For example, in the morphological environment of the Russian verb, Jakobson used to say, the phonemic feature [labial] signals first person.

Figure 2. RN definition of the stress phoneme in Russian. (The diamond connects upward to the morphemes marked for stress, leftward to the syllable, rightward to the phonological word (Pword), and downward to the acoustic characteristics of stress: increased volume, increased length, change in pitch, etc.)

3. Semology and Syntax

The two strata at the top of Figure 1 are semology and syntax. The basic elements of semology are called sememes and the basic elements of syntax are called lexemes. Semology structures discourse blocks and defines chunks of information. It relates sememes to each other in groupings, some of which can be called predications, if a term is needed. Functionally, predications are semantically cogent groupings of sememes. The sememes are related to lexemes in the syntax, which linearizes the lexemes. In some cases, more than one linearization is possible.

Consider a case in which more than one syntactic order is possible. A pre-Chomskyan structuralist approach to such a case accepts both orders, identifying differences in meaning. A Chomskyan approach, lacking semology, takes one order as basic and derives the other from it. Traditionally this is called a case of metathesis, by extension from historical linguistics. In stratificational theory, both orders are imposed on an unordered input from semology by the syntax. To distinguish this from historical metathesis, which is a real case of reordering over time, we refer to it as anataxis.

4. Anataxis in Russian Number Phrases

The linear order problem we consider today involves Russian number phrases, in which two linearizations are possible. Pjat’ rublej ‘5 rubles’ is the normal, unmarked order. Rublej pjat’ ‘about 5 rubles’ communicates the same cost but the amount is only approximate. That is, the actual amount needed may be a little more or a little less than five rubles.

Previous descriptions do not explain the different orders. Older structuralist descriptions accepted both orders, each one connected with one meaning. Chomskyan descriptions incorporated a version of the Jakobson-Trubetzkoy concept of markedness: the base generates pjat’ rublej (the unmarked order) and there is an optional approximative transformation (e.g. TPRX) that produces the marked reordering. But none explain the basic source of linear order. It is simply a given.

Again, we know that the cognitive store is hierarchically organized but not linear. That is, what we know and want to communicate we know now. We must linearize it during the process of encoding it into sound. Clearly it must either be linearized in the linguistic system or we must theorize a separate but related linearizing module. The latter choice presents substantial, perhaps insoluble difficulties in addition to being an added theoretical complexity. A better approach lets the linguistic system handle it all. That is, the linguistic system can provide both marked and unmarked linearizations and show the difference in meaning.

5. Encoding Russian Number Phrases

An engineer by training, I approach this as a problem in black box analysis (BBA). The gray box in Figure 3 presents the preliminary BBA. Relations between semantics and semology are on the left, relations between syntax and morphology are on the right. There are three possible sememes: PRX, ruble, and 5. Only two lexemes are involved, rubl’ and pjat’, and in the outputs, both forms and meanings overlap, i.e. they are similar or partly identical. The difference in meaning is communicated by the difference in linearization. The fact that genitive plural is required for rubl’ is ignored, as it involves a substantial amount of morphology, which time and space considerations preclude. But both linearizations must be provided by the network in the black box on the basis of the occurrence or non-occurrence of PRX. Our task is to find such a network.

Figure 3. Preliminary BBA for Russian metathesis. (On the left, between the Cognitive Store and the Semology, the sememe inputs PRX, 5, and ruble arrive simultaneously; on the right, toward the Morphology, the two output lexemes pjat’ and rubl’ leave in one of two orders. Our task: find a network inside the box that does the job.)

There are three distinct input relations from the cognitive store that may participate in semologically well-formed combinations. The inputs are, as indicated in Figure 3, ruble, 5, and PRX. The combinations are ruble & 5 & PRX and ruble & 5. We can combine them algebraically as ruble & 5 & (PRX, Ø) or ruble & 5 & [PRX], i.e. the number and the noun and PRX or nothing. PRX is, in short, optional here. But there are contexts where its occurrence would be contradictory, e.g. in the environment of rovno ‘exactly’. Russian permits rovno pjat’ rublej ‘exactly 5 rubles’, but rovno rublej pjat’ ‘exactly about five rubles’ sounds a little strange. Such structures need not be provided in the semology, leaving us with the two basic combinations: ruble & 5 & [PRX].

The semolexemic relations between semology and syntax are one-to-one in this simple example, so the syntax accepts both inputs (actually either input) from the semology. The syntax of simplified number phrases has two positions. A number phrase is related to a 1-2 sequence. Position 1 is related to both noun and number by an ordered OR node. But the noun is marked, occurring in position 1 if PRX is active. If PRX is active, the number cannot be realized in position 1. If it is not active, then the number is realized in position 1. Position 2 is related to both noun and number via an unordered OR node. What is realized in position 2 is whatever was not realized in position 1.

Before providing a graphic description of the logic underlying the verbal description, I give a summary, in Figure 4, of the four relationships that appear in the description.

Figure 4. Basic logical relationships. (Four node types: AND and OR, each either unordered or ordered; the purpose of ordering is linearization for AND nodes and the marked-unmarked distinction for OR nodes.)
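The two-position logic just described can also be stated procedurally. The following is a minimal sketch for illustration only; the function name and the set representation of the sememes are my own, not part of the RN formalism, which works with networks rather than procedures.

```python
# Minimal sketch (not the RN formalism itself) of the two-position
# linearization described above: an unordered sememe input yields an
# ordered lexeme output, with PRX selecting the marked order.

def linearize(sememes):
    """Map an unordered set of sememes to an ordered pair of lexemes.

    Position 1 acts like an ordered OR node: the noun takes position 1
    only when PRX is active; otherwise the number does. Position 2
    (an unordered OR node) receives whatever position 1 did not realize.
    """
    assert {"5", "ruble"} <= sememes   # both core sememes must be present
    number, noun = "pjat'", "rublej"
    pos1 = noun if "PRX" in sememes else number
    pos2 = number if pos1 == noun else noun
    return [pos1, pos2]

# The input is a Python set, so no order is imposed on the sememes.
print(linearize({"ruble", "5"}))           # unmarked order: pjat' rublej
print(linearize({"PRX", "ruble", "5"}))    # marked order:   rublej pjat'
```

Note that the ordering arises only in the mapping to positions 1 and 2, mirroring the claim that linear order is imposed by the syntax on an unordered semological input.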

A graphic description of the network is given in Figure 5. It is a pure relational network representation of the above verbal description. Again, three sememes participate in two acceptable combinations: ruble & 5 & [PRX]. Note that the order I write them in is not significant. Again, the syntax accepts both combinations, putting out rublej pjat’ if PRX is present and pjat’ rublej if it is not. There are no labels interior to the network, no symbols requiring a homunculus in the brain to read and interpret. Figure 5 accepts an unordered input and puts out the syntactic order appropriate to the context. This order problem is solved. My broader claim is that all linearization problems can be solved in parallel fashion, i.e. unordered on one stratum to ordered on another.

Experimental evidence for a system like that of Figure 1 in some languages is not new. Indications for a stratal boundary between phonology and hypophonology are in Dell and Reich (1977). This evidence involves speech errors during encoding, including anticipation, perseveration, and spoonerisms. Ongoing research by the present author shows parallel error types between morphology and phonology, syntax and morphology, and semology and syntax. The theory of spreading activation, introduced by Reich in his unpublished dissertation and developed in Dell & Reich (1977) and Dell (1986), accounts for the appearance of correct linearizations (if everything works correctly) and for speech errors (if interstratal coordination slips).

Figure 5. Anataxis in Russian number phrases. (The sememes PRX, 5, and ruble enter from the Cognitive Store into the Semology; the Syntax realizes the lexemes pjat’ and rubl’ toward the Morphology, in the order determined by the presence or absence of PRX.)

6. Conclusion

Most non-RN theoreticians did not notice that linear order in linguistic output is a problem and hence, none faced or solved the problem. No linear order can be shown to exist in the cognitive store, so neurocognitive stratificational theory assumes none. In the case of Russian number phrases, two linearizations appear in speech and writing. The two linearizations show partial but not complete overlap in the form and in the meaning. The difference in linear order communicates the difference in meaning. The RN description accounts for all this without false assumptions about underlying or semantic linearization or assumptions of some universal underlying word order. By extension, all linear ordering (of morphemes in a lexeme, phonemes in a morpheme and across syllables, etc.) can be provided within the linguistic system. Additionally, allowing loosely-yoked distributed parallel processing with spreading activation across the strata (cf. Dell 1986) can account for the rate at which we speak and can even predict those performance errors known as slips of the tongue. The potential of a relational network approach has barely been tapped.

References

Bloomfield, Leonard. 1933. Language. New York: Holt, Rinehart & Winston.
Boas, Franz. 1911/1963. Introduction to the Handbook of American Indian languages. (Reprint; original appeared in the first number of the Handbook in 1911, Washington: Government Printing Office.) Washington: Georgetown University Press.
Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton.
Dell, Gary S. 1986. A spreading-activation theory of retrieval in sentence production. Psychological Review 93(3): 283-321.
Dell, Gary S. and Peter A. Reich. 1977. A model of slips of the tongue. LACUS forum III: 438-47.
Hjelmslev, Louis. 1943/1953. Prolegomena to a Theory of Language [Omkring sprogteoriens grundlæggelse], trans. by Francis J. Whitfield. Baltimore: Waverly Press.
Hockett, Charles. 1983. The changing intellectual context of linguistic theory. LACUS forum IX: 9-42.
Koerner, Konrad E. F. 1979. Pilot and parasite disciplines in the development of linguistic science. LACUS forum V: 525-34.
Kremers, Joost. 2009. Linearization workshop. http://user.uni-frankfurt.de/~kremers/DGfS2010-Linearization.html
Lamb, Sydney M. 1999. Pathways of the Brain. Amsterdam: John Benjamins.
Langacker, R. W. 1990. Concept, Image and Symbol: The Cognitive Basis of Grammar. Berlin: Mouton de Gruyter.
Paradis, Michel. 2004. A Neurolinguistic Theory of Bilingualism. Amsterdam: John Benjamins.
Robins, R. H. 1968. A Short History of Linguistics. Bloomington: Indiana University Press.
Yngve, Victor H. 1996. From Grammar to Science: New Foundations for General Linguistics. Amsterdam: John Benjamins.

Mieszko Tałasiewicz
University of Warsaw
[email protected]

Asymmetrical Semantics

Abstract: The paper is an attempt to sketch out a theoretical paradigm which would be able to accommodate a number of competing accounts of various problems belonging to the theory of language. The paradigm, which I shall refer to as “Asymmetrical Semantics”, is not in itself a solution to any of the specific problems. It is rather a way of ordering the existing solutions and defining the scope of their application. A definitive assessment of the usefulness of this paradigm requires that many of the existing approaches be analyzed with respect to their relationship to the principle of classification contained in this paradigm. To this extent the present work is a preliminary draft of a forthcoming research project.

0. Introduction

The starting point is an observation that the correspondence relations between words and reality are based on completely different mechanisms when, while seeing a situation, we attempt to translate it into words than when, on hearing a description, we try to imagine the corresponding situation. It is one thing to name an object which is in front of our eyes and quite another to identify the reference for the name given. Semantics is a theory of correspondence. Different correspondence mechanisms require different explanatory theories. An adequate semantics of a natural language must thus be asymmetrical: it must describe differently the correspondence relations from language to world and from world to language.

In the light of the above observation it is apt to distinguish two kinds of discourse situations:1 A-situations, where the object is given and the use of a word is a reaction to it, and D-situations, where the word is given and the corresponding object has to be identified. A-semantics and D-semantics are, respectively, semantic theories which can be used to describe adequately the correspondence relations in A-situations and D-situations. Thus, it can be expected that the majority of language phenomena will get a different treatment in A- and D-semantics.

1 I do not, at this stage, wish to propound any formal discourse theory; discourse situation can be taken rather loosely here. My understanding of discourse situations is usually similar to that of Barwise and Perry (1983).

The meaning of expressions and their syntax in A-semantics are supplemented by a number of pragmatic considerations and saturated by unverbalized knowledge and countless details which appear before our eyes and which we do not mention, but which influence what we say, how we say it and whether what we say is true. In D-semantics, on the other hand, the meaning and syntax must be self-sufficient, and together they determine the reference and logical value of our utterances. A- and D-semantics are connected with conflicting intuitions: we have one set of intuitions in A-situations and another in D-situations. Each situation admits different constructions and each seems to respond to different theories describing the syntax and semantics of generated expressions.

Such an approach determines the pragmatic relativization of semantics, but does so in a rather special way. It is often the case that pragmatic and semantic aspects are intermingled in the substance of particular theories dealing with language phenomena. Asymmetrical Semantics incorporates pragmatics in the applicability conditions of semantic theory, but not in the theory itself. This allows us, to some extent, to purify semantics, or at least is an excuse for discussing anew the boundaries between semantics and pragmatics.

I take the Russellian distinction between knowledge by acquaintance and knowledge by description (Russell 1910) as a prefiguration of the proposed paradigm, and I draw from it the names of the discourse situations used here. There are other prefigurations, too. The ones I can name with confidence are those in Situations and Attitudes by Barwise and Perry. The authors went to great lengths to examine the mutual relations between the world and the language which we use to talk about this world. Their research produced a distinction between actual and factual situations. Actual situations belong to the real world: they have a causal effect on what we do and make us respond in language. In the

terminology of Asymmetrical Semantics, they are correlates of sentences generated in A-discourse-situations. Factual situations are abstract entities, intended to be interpretations of the sentences we utter in D-discourse-situations.2

The proposed distinction must in turn be distinguished from the classification of speech acts with respect to the speaker and the hearer, which at first glance appears to be somewhat similar in principle. The two are in fact independent classifications, all combinations being possible (speaker in D-situation and hearer in A-situation being perhaps the rarest but funniest of them).3

Asymmetrical Semantics must not be equated, either, with a concept which the literature refers to as two-dimensional semantics. Here, the similarity is in the name rather than the substance. Two-dimensional semantics (cf. e.g. Chalmers 2006) is a formal tool developed within the semantics of possible worlds and is used to explicate the differences between necessity and apriority. It has little to do with the proposed paradigm. If we really desired to find any connection, it might be that two-dimensional semantics rehabilitates the descriptivist elements – banished by Kripke – in the theory of names. As we shall see later, Asymmetrical Semantics, too, allows us to take a more forgiving view of the descriptivist theory of names.

1. Applications

Highlighting the distinction between A- and D-semantics may serve to downplay many unnecessary disputes which have for decades dominated the philosophy of language, with little hope of abating. The protracted nature of these disputes is explained by the fact that the opposing views spring from sound footings and convincing intuitions, which, when taken a certain way, appear to be simple and unquestionable truths. The

2 It would be interesting to study the relations between this distinction and worldly facts, which enter causal relations, and propositions, which enter logical relations, the concepts proposed by Angelika Kratzer (2002). I took some preliminary steps to this end in Tałasiewicz (2008, 2009).

3 Such discourse configuration is sometimes exploited by some television shows, e.g. Blind date.
only way forward in such cases is a separation of rights: different views are correct for different ranges of applications. Asymmetrical Semantics provides a clear criterion for defining such ranges of applications – the same criterion for surprisingly many unrelated controversies. The main aim of the research programme set out in these pages will be to analyze in depth the existing controversies and to attempt a resolution through the medium of the Asymmetrical Semantics criterion. At this initial stage, we look at some of them in broad detail.4

1.1. Dispute over the semantics of proper names

The dispute over how proper names refer to their designates has been going on for decades, and no successful resolution is in sight. The two main planks in this dispute are: Searle’s descriptivist theory (1958/1997) and Kripke’s causal theory (1980), as well as variations thereof. Broadly speaking, the descriptivist theory says that proper names acquire meaning through a collection of descriptions and refer to their designates through just such a collection. According to the causal theory, proper names designate directly; they are conferred on their designates through the original ‘baptism’ and subsequently passed on from speaker to speaker in speech acts which form a causal chain. Exponents of both theories seek succor in clear – it would seem – intuitions.

Kripke, seeking to ground his theory in intuition, uses the proper name of Kurt Gödel, whom he knew personally, and argues that even if all descriptions referring to Gödel (e.g. that he proved a theorem which later came to be known as Gödel’s theorem) turned out to be falsehoods and mystifications, then the proper name ‘Kurt Gödel’ would still refer to the same person, whom Kripke used to meet at the university. The descriptivist claim whereby some description is constitutive for Gödel is not convincing.

4 Among those left outside our focus area, the dispute about the boundary of pragmatics and semantics is worth mentioning (cf. e.g. Bianchi 2004), as well as rising controversies around the phenomenon called ‘semantic underdetermination’. The researchers would distinguish e.g. truth conditions from ‘ways of being true’ – see for instance Sainsbury (2002). Asymmetrical Semantics suggests that the truth-conditional approach may be fruitful in D-discourse, while the ‘ways of being true’ approach – in A-discourse.
In his exposition of descriptivism Searle concentrates on examples such as Aristotle – a person very distant from us in time and space. In this case our intuitions are different. The name ‘Aristotle’ tends to bring to mind certain descriptions: that he was a pupil of Plato, teacher to Alexander the Great, author of Metaphysics, Nicomachean Ethics, Poetics and a number of other works. Should it turn out that all these descriptions could not be true of one individual, then we would be compelled to call Aristotle’s existence into question. We might conclude that there was no one who answered to Aristotle in the old sense. Again, such a claim would be hard to refute. Kripke’s insistence that the name Aristotle has come to us via a causal chain which transmits a relation that, over two millennia ago, assigned this name to a certain infant is less than convincing.

Asymmetrical Semantics helps resolve the dispute: both theories are correct – in their respective ranges of applications. The causal theory does a good job describing the semantic relation between the name and its designate in A-situations: when we ourselves name the object or when the designate is introduced to us by that name. D-situations are well served by the descriptivist theory: when we first register the name, along with some description of its application, but when the existence of its designate can only be assumed.5

It bears pointing out that, ever since the dispute started, some philosophers have felt that both sides have had a claim to the truth. Such a conciliatory position gave rise, for example, to the hybrid theory of Gareth Evans (1973, 1997). It has never become the prevailing view though, and for a reason. Evans points out correctly that there is a grain

5 The solution proposed within the paradigm of Asymmetrical Semantics clears the descriptivist theory of many challenges which were raised in the perspective of A-discourse. It does not clear it of those attacks which are targeted at it irrespective of the type of discourse. The latter include, for example, a claim that certain sentences predicating one of the descriptions determining the meaning of a name about the designate of this name – for example ‘Aristotle was the teacher of Alexander the Great’ – may be synthetic sentences, a fact of which the descriptivist theory cannot give an account. I believe however that a relatively minor modification of the descriptivist theory can help avoid such challenges. It comes down to treating the theory as a diachronic one, the theory of change of meaning.
of truth on both sides of the divide, but does not seek to separate the claims; instead he combines the theories in a rather mechanical fashion. He holds that the semantic relations between names and their designates are always partly causal and partly descriptivist (cf. e.g. Kawczyński 2009). As a result, his conception combines the shortcomings of its predecessors rather than their strengths. Asymmetrical Semantics shows that the relations are either of one kind or another, depending on the discourse situation.

1.2. Dispute over the principle of distinction between attributive and referential use of definite descriptions

The distinction was introduced by Donnellan (1996, 1997). A sentence which contains a definite description used attributively in the subject of the sentence may predicate something truly only of an object which satisfies this description (on condition that there is exactly one such object). A sentence which contains a definite description used referentially in the subject of that sentence may predicate something truly of an object that has actually been picked out by this description in the circumstances accompanying its use (even if, due to the disputants’ mistake, the object does not actually satisfy the description used in its identification).

Much as the distinction itself is not questionable, the principle of how it should be drawn is a subject of dispute. Is it a pragmatic or semantic distinction? Relativisation in favour of use supports the pragmatic nature. This line is followed by Kripke (1977, 1997). He argues that semantics has nothing to do with this distinction and that the distinction is made purely on pragmatic grounds. From the semantic perspective, a description’s meaning is always of the attributive nature; the referential sense may at most be generated as a kind of Grice’s implicature. Kripke holds then that sentences where the description is used referentially and the object identified does not satisfy the description are simply false, and their communicative function is founded on the speaker’s intentions only.

The proposed solution however is seriously flawed. Grice’s theory (cf. Grice 1989) assumes that conversational implicature, generated by exploitation of the so-called conversational maxims, requires that any
violation of a given maxim be explicit, if not played up deliberately; the speaker’s intention alone is of little use. Meanwhile, the majority of referential uses identify objects which do satisfy the given description (the difference being that their doing so is not essential for identifying the object). Moreover, treating sentences which contain descriptions used referentially of objects which do not satisfy them as false is highly counter-intuitive. Rather, it is held that if an object is identified unambiguously and if it satisfies the predicate asserted of it in the sentence, then, even if it fails to satisfy the subject description, that alone makes the sentence true. Intuition seems to suggest, then, that sentences containing descriptions used referentially have a different semantics from the same sentences where the description is used attributively, and a similar semantics to analogous sentences containing demonstratives (see, for example, Kaplan 1978/1997). The paradigm of Asymmetrical Semantics affords a very simple treatment: based on the pragmatic criterion – the circumstances accompanying use – a decision is made as to which of the semantic theories in play gives a correct account of the correspondence constituted by that use. Descriptions used referentially are characteristic of A-situations – captured correctly by the semantics for demonstratives. Descriptions used attributively are typical of D-situations – accommodated fully by traditional semantics of the Russellian type.

1.3. Dispute over identity criteria of situations

This dispute is focused on a completely different problem area than the previous two – that of situation semantics. Situation semantics, broadly speaking, is a research paradigm which does not admit Frege’s claim that sentences take truth-values for their designates. Sentences denote situations instead. As to what situations actually are, the differences of opinion are quite pronounced.
I hold most closely with Barwise and Perry, who claim that at least some situations are concrete entities. This view (and not this view alone), however, runs up against the difficulty of formulating an appropriate identity criterion for situations. We would want to accept that a situation, being a sentence correlate, is composed of objects designated by the names appearing in the sentence in

accordance with the pattern determined by the syntax of this sentence. From the perspective of categorial grammar, a situation is the value of a function denoted by the main functor of the sentence – let us call this function ‘Ajdukiewicz’s function’6 – whose arguments are objects denoted by the arguments of that functor. In that case a situation denoted by sentence φ would be the same situation as that denoted by sentence ψ if and only if the values of the respective functions agreed. The difficulty is that the characterization of these functions requires a prior characterization of their value set, and thus reference to situations. Our criterion then leads to a vicious circle. Asymmetrical Semantics overcomes this problem: the presented criterion is used only in D-discourse-situations. In A-discourse-situations the problem of an identity criterion for situation-correlates is not an issue at all. Situations are given in advance, while the criterion of their identity, from the semantic perspective, is trivial indeed: two expressions refer to the same thing when they refer to the same thing.7 Meanwhile, all acquisition procedures, including the definition of Ajdukiewicz’s functions, properly belong to A-discourse. The interdependence of situations and functions is thus not of a circular but of a helical nature: in an extended process of language acquisition we often switch from A- to D-discourse and back. We can then ‘compute’ certain situations from sentences by extrapolating from Ajdukiewicz’s functions previously acquired in A-discourse (functions whose typical values were available to our senses). A variation of the dispute over identity criteria is a dispute over the ‘being a part’ relation between situations. What is the relation between situation s1 (Brutus killed Caesar) and situation s2 (Brutus killed Caesar with a knife)?
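The question can be made vivid with a small sketch (an illustrative Python model with invented encodings – not the author’s or Barwise and Perry’s own formalism): representing the two situations once as sets of instruction-like tuples and once as sets of possible worlds shows that the two construals discussed below order them in opposite directions.

```python
from itertools import product

# Situation s1: "Brutus killed Caesar"; s2: "Brutus killed Caesar with a knife".
# Instruction-style representation (Barwise-Perry flavour): a situation is
# a set of basic "infons"; the tuple encoding here is invented for illustration.
s1 = {("kills", "Brutus", "Caesar", 1)}
s2 = {("kills", "Brutus", "Caesar", 1),
      ("with-knife", "Brutus", 1)}

# On this representation s1's instruction set is a proper subset of s2's:
assert s1 < s2

# Possible-worlds representation: a world settles each atomic issue;
# a situation is identified with the set of worlds verifying its description.
atoms = ["kills(B,C)", "with-knife(B)"]
worlds = [dict(zip(atoms, vals)) for vals in product([0, 1], repeat=2)]

w_s1 = [w for w in worlds if w["kills(B,C)"]]
w_s2 = [w for w in worlds if w["kills(B,C)"] and w["with-knife(B)"]]

# Here the inclusion reverses: the s2-worlds form a proper subset of the
# s1-worlds, although the entailment (s2 entails s1) is the same either way.
assert all(w in w_s1 for w in w_s2) and len(w_s2) < len(w_s1)
```

The demo only dramatizes the point made in the text: the direction of ‘being a part of’ depends on the representation chosen, while the direction of entailment does not.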
The most common answer is that the two situations are not identical – one of them is a proper part of the other – because the sentences in brackets are not equivalent (the first follows from the second, but not the other way round). Such an explanation, however, borders on petitio principii: we discern an inclusion in the situations only because we discern the direction of entailment between the sentences, and then we explain the entailment by reference to inclusion. It is noteworthy that different conceptions of situations establish opposite directions of the ‘being a part’ relation (although the direction of entailment remains unchanged). In the semantics of possible worlds, situations are identified with sets of worlds where objects enter into certain relations with one another. On this understanding, s2 is part of s1 because the set of possible worlds in which Brutus kills Caesar with a knife is contained in the set of possible worlds in which Brutus kills Caesar somehow. In the Barwise and Perry type of semantics, where situations are, in a sense, sets of information units (or instructions), the reverse relation obtains. The set of instructions {[at l] ⟨kills, Brutus, Caesar; 1⟩, …} which represents situation s1 is a subset of the set of instructions {[at l] ⟨kills, Brutus, Caesar; 1⟩, ⟨with-knife, Brutus; 1⟩, …} which represents situation s2. This suggests that the relation of ‘being a proper part of’ is irrelevant for entailment. The situation of killing Caesar is, in fact, the situation of his being stabbed, if we look at the facts, not the descriptions. Thus just which way we ‘look’ at things turns out yet again to be crucial for capturing the correct intuitions. Our understanding of ‘Brutus killed Caesar’ is different from that of Cassius.

6. The proposed terminology is explained at greater length in my book on categorial grammar (Tałasiewicz 2009).
7. From the metaphysical point of view this criterion is far from trivial. The question about the conditions of identity may be a profound one if we interpret it as a question about the nature of objects or situations. I will argue, however, that the answer to this question falls within the purview not of situation semantics (or of any semantics, for that matter) but of metaphysics and epistemology.

1.4. Dispute over compositionality versus contextuality

The principle of compositionality says that the meaning of a complex expression is a function (determined by the syntax of this expression) of the meanings of its constituents. Compositionality is contrasted with the contextuality principle, according to which the meaning of a simple

expression is a function of the complex context in which the expression appears (Janssen 1997). It is well worth noting, however, that there is nothing contradictory in saying that the meaning of a complex expression is a function of the meanings of its constituents and that the meaning of the constituents is a function of, among other things, the context. This means only that the function in question is reversible. From the purely logical point of view, then, it is not impossible for language to be both contextual and compositional. The driving force of this dispute, which is after all one of the most heated disputes in the theory of language, must then be something else, something different from the logic of functions. Indeed, the question that puts the fizz into the dispute is what comes first in the order of cognition: words or complex sentence contexts? When faced with the task of interpreting a particular expression we ask: what is given first – the meaning of the constituents of the expression, from which we calculate the meaning of the whole expression, or the meaning of the whole, from which we interpret the meaning of the constituent words? Logic takes a synchronic approach (or is atemporal altogether). Under the logical approach, the whole may be a function of a part, and a part – a function of the whole or of other parts. In an act of speech, which is diachronic, something must come first: either a part or a whole. The question is: is natural language, in a specific instance of its use, compositional or contextual? Asymmetrical Semantics appears to acknowledge, again, both sides of the argument and satisfy conflicting intuitions. It accommodates both principles, depending on the level and communicative function of the language. Natural technical language – the language of textbooks, academic and official publications, thus the language used in D-situations – is compositional. Words are used in their common, standard and precise meanings.
Indexical meanings and pragmatic relativization of all kinds are avoided. Things look different in everyday language, and in situations when we are learning the language (especially our first language). One of the main sources of information about meanings is, in this case, the ostensive procedure, typical of A-discourse-situations. Situational context often

allows us to capture the meaning of the whole utterance (which the mind matches with the situation) without necessarily knowing all the constituents of this utterance, or even without knowing the details of how the syntactic structure works. The constituents, and the syntax, can then be recovered from the meaning of the whole by means of appropriate analytical procedures. A-discourse is thus contextual to a large extent.

1.5. Dispute over the deflationary conception of truth

This dispute, once contained, is now being fanned anew by Paul Horwich, who deftly and energetically defends the deflationary conception of truth (Horwich 1998). He and his followers question the classic correspondence conception, whereby the meaning of a sentence determines its truth conditions, the sentence being true if and only if it satisfies those conditions. In contrast, deflationists hold that the truth predicate does not have any profound meaning and that in fact it could be dispensed with: it is merely a shortcut for infinite conjunctions or disjunctions. Prima facie it would appear that the deflationary conception is in a losing position. Its critics correctly point out that it is too weak to give an account of many important generalizations involving the concept of truth (cf. e.g. Cieśliński 2007). Despite that, the basic idea which Horwich puts forward is, for many, backed up by intuition. It says that there is no natural relation which combines predicates with their extensions in a way which would allow us to accept that a sentence which asserts a given predicate of an object belonging to its extension is true if and only if the given relation obtains. For deflationists the formula which defines truth is as follows:

s means that p → (s is true iff p)

and what attaches meaning to expression s is a tendency in a language community to use the expression in particular circumstances (that is, if p).
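The deflationist schema, together with the ‘shortcut’ reading of the truth predicate, can be displayed as follows (a standard textbook-style rendering, not the author’s own notation):

```latex
% Use-theoretic meaning postulate and the induced truth schema:
\[
  s \text{ means that } p \;\longrightarrow\; \bigl(\, s \text{ is true} \leftrightarrow p \,\bigr)
\]
% The truth predicate as a finite shortcut for an infinite disjunction,
% where each sentence s_i means that p_i:
\[
  \forall x \,\Bigl( x \text{ is true} \;\leftrightarrow\;
    \bigl[\, (x = s_1 \wedge p_1) \vee (x = s_2 \wedge p_2) \vee \cdots \,\bigr] \Bigr)
\]
```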
The question of truth cannot then be separated from a meaning-giving activity, and since each instance of its use becomes, as it were,

a partial definition of the term used, asking about the truth of such a use does not make much sense.8 Asymmetrical Semantics settles the dispute by conciliation. Deflationary intuitions are allowed in A-discourse situations. When what we are talking about is directly available to the senses, any utterance may be considered analytically true, as being an ostensive definition of the expressions appearing therein. In D-discourse meanings are fixed, one way or another, and cannot be redefined ostensively. Here the deflationary conception does not seem to have much sway and gives ground to the correspondence theory.

2. Problems

The division of semantics into A-semantics and D-semantics is made, as we have seen, on pragmatic-epistemic grounds: the criterion of this division is the kind of cognitive situation which surrounds the participant in the speech act – whether the object in question is experienced directly or not. Formally speaking, our criterion is very clear, in principle dichotomous. In practice, however, it may be difficult to apply. The immediateness of cognition, despite the etymology of the word ‘immediateness’, comes in degrees. A paradigmatic example of immediate cognition is looking at something. A paradigm like this belongs in fact to so-called folk psychology, but it should not be underestimated: it is this folk psychology, as aptly noted by Barwise and Perry (1983), that is the basis for our predictions about other people’s intentional states, for the systematization of our own observations and for speech planning. It determines the applicability of our criterion.9 A somewhat less informal reflection on cognition shows that even a visual perception of something as an object owes much to indirectness: thanks to both the anatomy and physiology of the eye (including the corresponding areas of the brain) and an array of ideas, memories, habits and preconceptions.

8. See Horwich (2008).
9. A philosophical mirage of ‘truly’ immediate cognition, of some pure sense data, may lead to skepticism or solipsism but most definitely not to an accurate description of language.

This array, at least in some parts, incorporates meanings passed on to us in the process of socialization via verbal definitions, not ostensively. For this reason, even the purest examples of A-discourse-situations are contaminated by residual D-discourse, which is difficult if not impossible to isolate and neutralize. This residual descriptivism, occurring in every A-discourse, may long remain latent, only to emerge suddenly and emphatically during a chance utterance and bear on the semantic intuitions which accompany it. The difficulty is compounded when we consider that the immediateness of cognition, imperfect as we have seen it to be, declines gradually as time passes. When the object is still in our field of vision, conscious perception takes place in our mind some time after the moment optical waves are reflected off its surface – the time needed for light to cover the distance between us and the object, the time needed to generate and transmit the electric impulse from the retina to the brain, and finally the time needed to synthesize (with the engagement of our ideas, memories and preconceptions) the image perceived and commit it to short-term memory, where it can undergo further conscious processing. A methodologically sophisticated consciousness, which admits optical illusions or hallucinations, is quick to attach to the notion ‘the object of this instance of perception’ the proviso ‘provided it exists’. When the object disappears from our field of vision, no radical change takes place in terms of this conscious processing: we continue to hold in our short-term memory the perceived image of the object, which our brain can utilize. The most that can happen is that the image ceases to be refreshed with new visual impulses, which makes it lose its status of immediate perception and makes the mind even more inclined to reach for the ‘provided it exists’ proviso.
After a while, when the fine details have faded, our mental representation of the object perceived, relegated by now to long-term memory, begins to approximate the status of a notion generated on first hearing its description somewhere.10 This status, too, varies according to how credible our information is. We know more

10. The problem of demonstrative expressions used with objects which were the focus of perception but which disappeared from the field of vision before being pointed at preoccupied Kaplan (1978/1997), among others.

immediately events related to us directly by a credible eyewitness than those which are passed down to us as legends from generation to generation.11 Equally fluid changes take place in the semantics of the expressions we have managed to generate during this time concerning our perception, which recedes in time (and with every new intermediary): the initially dominant A-aspect gets slowly but surely displaced by the D-aspect. It is often difficult to decide which type of discourse we are dealing with. Thus, a semantic description of an expression requires, to some extent, that we make an arbitrary decision. There may be as many decisions as interpreters (whether they are semanticists analyzing the discourse or the participants thereof, who must also, if only implicitly, adopt some kind of interpretative key to what has been communicated).12 It is also common knowledge that certain aspects of semantic (or syntactic) characterization can only be identified for suprasentential communication units (e.g. discourses in the technical sense of Discourse Representation Theory – cf. e.g. Kamp and Reyle 1993). Such units consist of a number of utterances. It is possible for the units to be heterogeneous with regard to the proposed Asymmetrical Semantics criterion (i.e. some of their utterances belong to A-discourse, others to D-discourse) – due to a cognitive shift occurring in time (or due to a change of the subject of conversation within such a unit). Heterogeneity may appear even in one complex expression: for example, distinguishing between the referential and attributive use of a definite description (see above) may be difficult in the case of complex descriptions (the book on the table), which suggests that such descriptions are partially referential and partially attributive.


11. The fine details of these mechanisms are not relevant here. What is important is that current knowledge about cognitive processes practically rules out the existence of any sharp division between immediate and indirect cognition.
12. It could be expected that intuitional difficulties of this sort will arise in connection with proper names of objects half-removed in time. Aristotle is a typical D-discourse object, while Gödel – from Kripke’s point of view – is an A-discourse one. As for Peano, the nineteenth-century mathematician referred to by Kripke, it is hard to say. Our intuitions are not clear.

Heterogeneous discourses are not exceptional. Even in scientific discourse, which is a paradigmatic example of D-discourse, A-discourse interference is not uncommon: in discussions about experimental set-ups or in the form of unverbalized areas of science, such as models or exhibits of all kinds (e.g. hominid bones in paleoanthropology). Thus, one type of discourse should not be given prominence over the other. They are too entangled. In highlighting the distinction, Asymmetrical Semantics highlights the entanglement, too. Explaining the relation between A-discourse and D-discourse is in itself a difficulty which Asymmetrical Semantics need not come to terms with – on current knowledge, no contemporary semantic theory throws enough light on the problem. But Asymmetrical Semantics definitely brings it to the fore. In any developed language, taken synchronically, we observe this reciprocal relationship: what we are speaking about depends on the meaning of what we say, and the meaning of what we say depends in turn on what we are speaking about:

In a sense what one says depends on the sentences one utters and their interpretation […]. But what a sentence means also depends on what people in a linguistic community use it to say. (Barwise and Perry 1983: 280)

At first glance, the more problematic of the two is the dependence of D-discourse on A-discourse, in particular when we analyze it from the perspective of first language acquisition. How can we learn D-semantics if the first instances of usage belong to A-discourse, the purest of all possible? To answer the question we must turn to interdisciplinary studies, of which psycholinguistics is the first candidate (see e.g. Bartsch 1998, Clark 2003). Switching from D-discourse to A-discourse is not easy, either. So much is obvious to any scientist who has ever tried to verify a general theory empirically. And to everyone else: in everyday life the gulf between immediate and indirect cognition can be depressing, too. That is why story-tellers, who could paint pictures in words of times gone by, bring out the clamour of a hunt, the thundering of hooves, the smell of the

prairie at sunset, have been held in high esteem by the rest of us. Thanks to them we can relive situations known only from descriptions.

References

Bartsch, Renate 1998. Dynamic Conceptual Semantics. Stanford: CSLI Publications, Stanford University.
Barwise, Jon and John Perry 1983. Situations and Attitudes. Cambridge MA: The MIT Press.
Bianchi, Claudia (ed.) 2004. The Semantics/Pragmatics Distinction. Stanford: CSLI Publications, Stanford University.
Chalmers, David 2006. Two-Dimensional Semantics. In: Barry Smith and Ernest Lepore (eds.), The Oxford Handbook of the Philosophy of Language. Oxford: Clarendon Press.
Cieśliński, Cezary 2007. Deflationism, conservativeness and maximality. Journal of Philosophical Logic 36:6, 695-705.
Clark, Eve V. 2003. First Language Acquisition. Cambridge: Cambridge University Press.
Donnellan, Keith S. 1966/1997. Reference and Definite Descriptions. In: Peter Ludlow (ed.), 361-381.
Evans, Gareth 1973/1997. The Causal Theory of Names. In: Peter Ludlow (ed.), 635-655.
Grice, H. Paul 1989. Studies in the Way of Words. Cambridge MA: Harvard University Press.
Horwich, Paul 1998. Truth. Oxford: Blackwell.
Horwich, Paul 2008. Kripke’s Paradox of Meaning. A paper given at the 8th Congress of Polish Philosophy, Warsaw, 15-20th September 2008.
Janssen, Theo M. V. 1997. Compositionality. In: Johan van Benthem and Alice ter Meulen (eds.), Handbook of Logic and Language. Amsterdam: Elsevier, 417-473.
Kamp, Hans and Uwe Reyle 1993. From Discourse to Logic. Dordrecht: Kluwer.
Kaplan, David 1978/1997. Dthat. In: Peter Ludlow (ed.), 669-692.
Kawczyński, Filip 2009. The Hybrid Theory of Reference for Proper Names. (This volume.)
Kratzer, Angelika 2002. Facts: particulars or information units? Linguistics and Philosophy 25, 655-670.
Kripke, Saul A. 1977/1997. Speaker’s Reference and Semantic Reference. In: Peter Ludlow (ed.), 383-414.

Kripke, Saul 1980. Naming and Necessity. Cambridge MA: Harvard University Press.
Ludlow, Peter (ed.) 1997. Readings in the Philosophy of Language. Cambridge MA: The MIT Press.
Russell, Bertrand 1910. Knowledge by Acquaintance and Knowledge by Description. Proceedings of the Aristotelian Society 11, 108-128.
Sainsbury, R. Mark 2002. Two ways to smoke a cigarette. In: Emma Borg (ed.), Meaning and Representation. Oxford: Blackwell, 94-114.
Searle, John 1958/1997. Proper Names. In: Peter Ludlow (ed.), 586-592.
Tałasiewicz, Mieszko 2008. Some intuitions about situations. In: Anna Brożek (ed.), Logic, Methodology and Philosophy of Science at Warsaw University vol. 3. Warszawa: Wydawnictwo Naukowe Semper, 104-121.
Tałasiewicz, Mieszko 2009. Philosophy of Syntax. Foundational Topics. Berlin/New York: Springer.

Luca Tranchini
University of Tübingen and University of Siena
[email protected]

Truth: An Anti-realist Adequacy Condition

Abstract: The aim of the paper is to show that the notion of truth naturally finds its place in the anti-realist, proof-theoretic approach to semantics. We start from Dummett’s (1973) analysis of the so-called paradox of deduction, namely the tension between two crucial features of inference: validity and usefulness (or epistemic fruitfulness). We reconsider the principle according to which an inference is valid if and only if it preserves truth from the premises to the conclusion. In the light of the independent account of the notion of validity offered by anti-realism, the principle can be taken as the milestone of an explication of the notion of truth in the proof-theoretic framework as well.

0. Introduction

To understand the role that truth plays in the architecture of an anti-realist theory of meaning, a good starting point is the relationship that is generally acknowledged between truth and deduction:

(1) A deductive inference is valid if it preserves truth from the premises to the conclusion.

Realists take this as a definition of inference validity by means of truth; anti-realists of the proof-theoretic tradition reject (1) as a definition of validity and try to give an account of validity independently of truth. In this paper we try to show that, under a certain reading, (1) can be accepted by anti-realists as well. In particular, it can be taken as an adequacy condition to be imposed on the anti-realist notion of truth.
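On its realist reading, principle (1) can be illustrated by brute-force truth-table checking (an illustrative sketch, not part of the paper’s argument): a propositional inference is classically valid iff no valuation makes all the premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Classical validity: truth is preserved from premises to conclusion
    under every valuation of the atomic sentences."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # a counter-valuation: truth is not preserved
    return True

# Modus ponens: from A and A -> B, infer B.
A = lambda v: v["A"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
B = lambda v: v["B"]

assert valid([A, A_implies_B], B, ["A", "B"])      # truth preserved: valid
assert not valid([A_implies_B, B], A, ["A", "B"])  # affirming the consequent: invalid
```

The check captures the realist direction of (1) only; the paper’s point is precisely that the anti-realist reads the biconditional the other way round.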

1. The paradox of deduction

According to Dummett,

The existence of deductive inference is problematic because of the tension between what seems necessary to account for its legitimacy and what seems necessary to account for its usefulness. For it to be legitimate, the process of recognizing the premises as true must already have accomplished whatever is needed for the recognition of the truth of the conclusion; for it to be useful, a recognition of its truth need not actually have been accorded to the conclusion when it was accorded to the premises. (Dummett 1973: 297)

The legitimacy Dummett speaks of is nothing but validity. If we accept (1), then Dummett’s problem with deduction can be rephrased as follows:

(2) There is a tension between the validity of an inference and its usefulness, that is, between the fact that truth is transmitted from the premises to the conclusion and the fact that the recognition of the truth of the conclusion has not yet been achieved when the truth of the premises is recognized.

According to their reading of (1), realists take truth as an independently defined notion to which inference validity is to be reduced. Their problem is that of giving a sound account of what truth recognition is. For anti-realists the solution is more complicated, as they refuse to define validity in terms of truth. By way of anticipation, we will show that the anti-realists can be seen as proposing, on the contrary, to define truth in terms of validity. In this view, the meaning of sentences is fully determined by the use made of them in linguistic practice. As far as logically complex sentences are concerned, the typical context in which they figure is deduction. So it is natural to take the deductive inferences in which logically complex sentences figure as fixing the meaning of the logical operators.

2. Inferences as definitions

A natural way of clarifying what is actually meant by saying that inferences fix the meaning of the logical operators is the following:

(3) Whenever a competent speaker accepts the premises as true, he will also accept the conclusion as true, when presented with it.

We can think of this situation as the proof-theoretic counterpart of what happens in Tarski-Carnap style semantics when the meaning of a given expression is given by means of a definition. Consider the case of ‘bachelor’ being defined as ‘not married’: if a speaker knows the meaning of ‘bachelor’ then it is not possible that he assents to the sentence ‘Luca is not married’ but not to ‘Luca is a bachelor’. That is, it is not possible that a competent speaker recognizes the truth of the first sentence without recognizing the truth of the second one, when presented with it. Clearly, the definition can be taken as warranting inferences from sentences of the first kind to sentences of the second kind. Hence, the inferences that are taken to fix the meaning of the logical operators will be acknowledged as valid by definition. For, if speakers’ understanding of the meaning of a logical operator consists in the mastery of some deductive inferences, it is not possible that speakers know the meaning of the operator without accepting these inferences as valid.1 So, for this kind of inference we have that a speaker cannot recognize the truth of the premise without recognizing the truth of the conclusion.

1. We take, as Dummett does, the development of a theory of meaning to be a highly theoretical enterprise. Of course, this does not mean that the specification of meanings is an arbitrary choice, as any specification has to satisfy several constraints such as articulation, molecularity, compositionality, manifestability and so on. Nonetheless, it is possible in principle that different theories of meaning satisfy all such requirements. Hence the claim that a given inference is valid by definition is not an empirical statement that can be proved or rejected by, say, asking speakers. Rather, it is a theoretical statement: it is possible that an inference is valid by definition relative to a given theory of meaning, but not relative to another one.
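Clause (3) and the ‘bachelor’ example can be given a toy rendering (a hypothetical sketch with invented sentence strings, not the paper’s formalism): a definition is treated as a rule that propagates a competent speaker’s assent.

```python
# Meaning-fixing rules: accepting the premises commits a competent speaker
# to the conclusion. The two rules encode 'bachelor' =def 'not married'.
rules = {
    ("Luca is not married",): "Luca is a bachelor",
    ("Luca is a bachelor",): "Luca is not married",
}

def closure(accepted):
    """Close a set of accepted sentences under the meaning-fixing rules."""
    accepted = set(accepted)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if all(p in accepted for p in premises) and conclusion not in accepted:
                accepted.add(conclusion)
                changed = True
    return accepted

# Assent to the definiens brings assent to the definiendum "for free":
assert "Luca is a bachelor" in closure({"Luca is not married"})
```

Dummett’s worry, taken up in the next section, is that if every inference behaved like these definitional rules, no inference could extend what a speaker already accepts.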

But at this point we face the problem stressed by Dummett: if all inferences were needed to fix the meaning of the logical operators, then all inferences would be such that whenever the truth of the premises was recognized, so would be the truth of the conclusion. In other words, no inference would be useful. This seems in fact to be the traditional way of accounting for the validity of deductive inference, but at the price of treating it as a petitio principii. Hence, in order to warrant the usefulness of deductive inference, we have to allow for inferences which are valid even though they do not fix the meaning of the sentences.

3. A Wittgensteinian perspective

But this seems to be no easy task. If in justifying validity one risks repudiating the aspect of deductive inference that makes it fruitful, it is easy to fall into the opposite error: having warranted usefulness, to be incapable of accounting for inference validity. According to Dummett, Wittgenstein comes close to this when he holds that in accepting a new proof of a statement we are modifying its meaning. To clarify the point, one can consider the proof that a cylinder intersects a plane in an ellipse. The proof is an example of what is meant by the fruitfulness of deduction: it provides a new criterion for recognizing something as an ellipse. Now we turn to the other aspect of deduction and ask on what basis the proof is to be accepted as valid. According to Wittgenstein, there is no further jury entitled to settle the matter beyond the linguistic (and in this case mathematical) community. That is, the decision to accept a given proof as valid or not is a matter of agreement in the community, and there is no basis on which social practice can be criticized. The idea of feeling the correctness of a proof to be imposed on us is accordingly a misconceived illusion. In particular, it is not on the basis of meaning specifications that we acknowledge some inferential procedures as valid.
On the contrary, it is the acceptance of a given set of inferential procedures that gives meaning to sentences. And as we accept new inferences and start using them, meanings change. But, Dummett contests, are we sure that accepting new proofs is

always a modification of meanings? Considering our example, are we sure that the adoption of the new criterion for its application modifies the meaning that we attach to the predicate ‘ellipse’[?]

To speak of our accepting something new as a ground for applying a predicate as a modification of its meaning would not be, in itself, to go beyond what is banal, save in the use of the word ‘meaning’: to give substance to the thesis, we have to construe the modification as consisting, not merely in our acceptance of the new criterion, but in the possibility of its yielding a different extension for the predicate from that yielded by the old criteria. (Dummett 1973: 300-301)

Dummett’s remark is crucial for grasping the significance of Wittgenstein’s position and at the same time for seeing where an alternative solution can be found to preserve both aspects of deduction. In fact, the following situation is envisaged. Suppose some means to establish sentences are given. Then, when we face a new inference, two possibilities are open. Either the new inference allows us to establish sentences also in cases in which, using only the inferences previously available, it was not possible. In this case, the community’s acceptance of the new inference would constitute a modification of the meaning of the sentences. Or the new inference allows us to formulate new criteria for establishing sentences which are equivalent (or possibly simply faithful) to the previous ones. In this case it seems more natural to claim that no meaning modification takes place in accepting the inference. As we will see, Dummett construes his own position as grounded on the idea that it is the very recognition of this fact (the faithfulness to the previously established practices) that prompts the community to smoothly accept the new inference. In a sense, Wittgenstein’s position (as Dummett reconstructs it) amounts to the claim that in general the behavior of the linguistic community, when it comes to deciding on the acceptance or not of some new inference forms, is so motley as to make it senseless to ask for some general criteria (like the one suggested) to which it should conform. In Dummett’s words:

We speak as we want to speak, and our practice, in respect to the whole of our language, determines the meaning of each sentence belonging to it. [. . . ] It is not, therefore, that there is something which must hold good of deductive inference, if it has to be justified, but which, because we should thereby be trapped in a vicious circle, we are unable to demonstrate, but must simply assume: rather, there is no condition whatever which a form of inference can be required to satisfy, and therefore nothing to be shown. (Dummett 1973: 304)

4. Dummett’s molecular conception

According to Dummett, it is only when this is rejected that the possibility of a new account, in which both aspects can be properly accounted for, appears. This emerges as soon as we reconsider the analysis Dummett gives of the geometrical theorem we quoted above. As Dummett remarks, it is not always the case that by accepting a new inference (in this case the proof of the theorem) we modify the meaning of the expressions of our language: no modification occurs when the new inference provides criteria for establishing sentences which are faithful to those already accepted. That is, whenever a sentence established by means of the new inference could have already been established without it, or in other words when the set of inferences obtained by accepting the new one forms a conservative extension of the previous set. For Dummett this amounts to the possibility of a molecular conception of language under which each sentence possesses an individual content which may be grasped without a knowledge of the entire language.

Such a conception requires that we can imagine each sentence as retaining its content, as being used in exactly the same way as we now use it, even when belonging to some extremely fragmentary language, containing only the expressions which occur in it and others, of the same or of lower complexity, whose understanding is necessary to the understanding of these expressions: in such a fragmentary language, sentences of greater logical complexity than the given one would not occur. Our language would then be a conservative extension of the fragmentary language: we could not establish, by its use, any sentence of the fragmentary language which could not already be established in that fragmentary language. The rules of inference which are

applied in our language are, on such molecular view, justified precisely by this fact, the fact, namely, that they remain faithful to the individual contents of the sentences which occur in any deduction carried out in accordance with them. (Dummett 1973: 302-303)

This suggests the idea that not all inferences are actually needed to specify the meaning of sentences, but only a subset of them; these inferences will plausibly be claimed to be valid by definition. Other inferences will be said to be valid in virtue of their being sound with respect to them. On the basis of the possibility of finding inferences of this latter kind, deduction can be said to be fruitful: the enrichment of the set of inferential practices accepted by a linguistic community gives rise to new criteria for accepting sentences as true. In fact, Dummett is ready to accept that sometimes the acceptance of a new inferential procedure constitutes a modification of the meanings. But this does not always happen, he says. Most of the time speakers come to accept new inferences exactly because their introduction yields an extension of the practices which is sound with respect to the previously accepted ones. Sometimes, though, the acceptance of new inferences will yield non-conservative extensions of the existing practices. And in cases such as these, a modification in the meaning of sentences will be acknowledged. In particular, the inferences in question will be constitutive of the “new” meanings of the sentences that could not have been previously established. Nonetheless, during the time in which no such inferences are admitted, meanings are stable, and the possibility opens up of accounting for both the validity and the fruitfulness of deductive inference. In fact, if speakers accept a new inference only when it yields a conservative extension of the practices, it is natural to claim that meaning does not change with the acceptance of the new inference. Furthermore, the new inference is accepted exactly because it is sound with respect to the meaning of the expressions as previously established: it is in this sense that the inference can be said to be valid.
Finally, the possibility of coming to accept new inferences as valid in this way constitutes the fruitfulness of deduction: the crucial difference from Wittgenstein is that Dummett is allowing for a notion of fruitfulness different from the one based on simple revision of the existing practices.
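The notion at work here — that an extension of one's inferential practice is acceptable when it is conservative — can be illustrated with a toy model. The sketch below is our own construction, not Dummett's formalism: rules are represented as functions from sets of established sentences to sets of newly derivable ones, and an added rule counts as conservative when closing under it proves nothing that was not already provable.

```python
def closure(facts, rules, rounds=3):
    """Close a set of sentences under a set of rules (bounded fixpoint)."""
    derived = set(facts)
    for _ in range(rounds):
        new = set()
        for rule in rules:
            new |= rule(derived)
        if new <= derived:
            break
        derived |= new
    return derived

# A meaning-fixing rule: conjunction introduction over atomic sentences.
ATOMS = {"A", "B"}
def conj_intro(s):
    return {f"({x}&{y})" for x in s & ATOMS for y in s & ATOMS}

# A derived rule: conjunction elimination. Adding it is conservative here,
# since everything it proves was already provable.
def conj_elim(s):
    out = set()
    for f in s:
        if f.startswith("(") and "&" in f:
            left, right = f[1:-1].split("&", 1)
            out |= {left, right}
    return out

base = {"A"}
old = closure(base, [conj_intro])
new = closure(base, [conj_intro, conj_elim])
print(new == old)  # True: a conservative extension

# A rule licensing "B" outright from "A" is non-conservative over this base:
def a_to_b(s):
    return {"B"} if "A" in s else set()

extended = closure(base, [conj_intro, a_to_b])
print("B" in extended and "B" not in old)  # True: a genuinely new theorem
```

On the molecular picture sketched in the text, accepting the first kind of rule leaves meanings stable, while accepting the second would count as a modification of meaning.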

So, molecularism can be seen as a third way between the two views presented. We take the ideas presented in these sections as a natural way of developing Dummett’s account of the paradox of deduction. In the rest of the paper we try to beat a path from these ideas toward an adequacy condition to be imposed on the anti-realist notion of truth.

5. Intuitionism: Constructions and methods

As a starting point, it may be useful to consider the notion of construction in intuitionism. Historically, it is here that we find the idea that the meaning of a sentence is given by making reference to the means of establishing it. In particular, the anti-realist account of logical constants can be seen as a refinement of the intuitionistic one. In intuitionism the conditions for establishing logically complex sentences are specified by means of the so-called Brouwer-Heyting-Kolmogorov informal semantics. This is actually an inductive specification of what counts as a construction for logically complex sentences:2

• a construction for a conjunction is a pair of constructions, one for each conjunct;
• a construction for a disjunction is a construction for either of the disjuncts.

In saying that the BHK semantics can be taken as a specification of the meaning of the logical constants, we stick to the ideas presented in the previous sections: consider a speaker presented with two constructions for two sentences A and B: he cannot refuse to assent to the conjunction

Note that there is no base clause of the induction. It is in this sense that we speak of an informal semantics rather than of a proper semantics. The clause for implication runs as follows:

• a construction for an implication is a method that transforms constructions for the antecedent into constructions for the consequent.

The clause makes the notions of construction and method interact. Consequences of this, thoroughly investigated by Usberti (1995, Ch. 3.5), are shown in footnote 4. For reasons of space, we leave a full discussion of these issues out of the main body of this paper.

A and B, or he will be said not to understand the meaning of the connective. Analogous reasoning applies to disjunction. Just as in the case of the definition of ‘Bachelor’ given in section 2, it is quite easy to convert these meaning specifications into inference rules: in the case of conjunction we infer from two sentences their conjunction, and in the case of disjunction we infer from a sentence its disjunction with another one. Here too, these inferences are acknowledged by definition; that is, a competent speaker cannot but accept these inferences as valid. As a consequence, these inferences are not useful, in the sense that their application does not give rise to epistemic gain. In Dummett’s terms, the recognition of its truth is accorded to the conclusion whenever it is accorded to the premises. It is in this sense that constructions are said to be epistemically transparent. That is, one cannot be in possession of a construction for a given sentence without thereby recognizing the truth of the sentence. This happens exactly because the meaning of sentences is specified in terms of what counts as a construction for them. So, in full analogy with the inferences that fix the meaning of sentences, if one does not accept a sentence as true when presented with a construction for it, this means that he does not understand the meaning of the sentence. Together with that of construction, the notion of method plays a crucial role in intuitionism. We present it with an example. Consider the following definition of a natural number n: if 2^12 < 3^7 then n = 4; otherwise n = 5. Although just by reading the definition we cannot say whether n is 4 or 5, it is clear that, once the two powers are calculated, it will be possible to decide the value of n.
Intuitionists would put it like this: although we do not actually possess a construction for the sentence ‘n = 4 or n = 5’ (as such a construction would consist in a construction for either of the disjuncts), we are in a position to obtain it. The definition itself is (or can be seen as an instruction for) a method for obtaining the required construction. There is a crucial distinction between constructions and methods.
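A minimal sketch of the distinction (our own encoding, not the paper's; the names are ours): following the BHK clauses, a construction for a disjunction can be represented as a tagged construction for one disjunct, and the definition of n as a method that must be run before we possess such a construction.

```python
from dataclasses import dataclass

@dataclass
class Disj:
    side: str        # "left" or "right": which disjunct the construction is for
    witness: object  # the construction for that disjunct

# The definition of n is a method, not yet a construction: only by
# calculating the two powers do we obtain a construction for
# 'n = 4 or n = 5'.
def method():
    if 2**12 < 3**7:            # 4096 < 2187 is false...
        return Disj("left", 4)
    return Disj("right", 5)     # ...so we obtain a construction for 'n = 5'

construction = method()
print(construction.side, construction.witness)  # right 5
```

Possessing `method` is not yet recognizing the truth of either disjunct; applying it is what yields the epistemically transparent construction.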

When one is in possession of a construction for a sentence, he thereby recognizes the truth of the sentence (the construction being constitutive of the recognition); when one is in possession of a method of obtaining a construction for a sentence, he first has to apply the method in order to recognize the sentence as true. We will refer to this by saying that while constructions are epistemically transparent entities, methods are not.

6. Direct and indirect evidence

If we want to generalize the notions of construction and method beyond the mathematical domain, we can speak of direct and indirect evidence. The adjective “direct” refers to the epistemic process through which we acquire evidence for the sentence. In our example, a construction for one of the disjuncts is the most direct way to establish the disjunctive sentence ‘n = 4 or n = 5’, since it is structured according to the meaning of the sentence’s main operator. Once we know the meaning of ‘or’ we immediately accept a construction for one of the disjuncts as evidence for the sentence. In general, the direct means of verifying the statement is that which corresponds, step by step, with the internal structure of the statements, in accordance with that model of meaning for the statements and its constituent expressions which is being employed. (Dummett 1973: 312)

At least for some sentences, the natural model of meaning does involve inferences. According to Dummett, it is this insight which is one of the great contributions of Quine’s celebrated essay “Two Dogmas of Empiricism”, and it is there expressed by means of the image of language as an articulated structure of interconnected sentences, upon which experience impinges only at the periphery. The impact of experience may have the eventual effect of inducing us to assign (new) truth values to sentences in the interior of the structure: but this impact will be mediated by truth-value assignments to other sentences which lie upon a path from the periphery, where the impact is initially felt, to the more centrally located sentences. This metaphor presumably represents the entirely correct conception that, save for the peripheral sentences, the process of establishing a statement as

true does not consist in a sequence of bare sense-perceptions, as on the logical-positivist model of the process of verification, but in the drawing of inferences (which need not, of course, all be strictly deductive) whose ultimate premises will be based on observation. It is inherent in the meaning of such sentences as ‘The Earth goes round the Sun’ or ‘Plague is transmitted by rats’ that they cannot be used as direct reports of observation (and thus are not, in Quine’s image, located at the periphery of the linguistic structure), but can be established only on the basis of reasoning which takes its departure from what can be directly observed. In extreme cases, for instance a numerical equation or a statement of the validity of a schema of first-order predicate logic, it is intrinsic to the meaning of the statement that it is to be established by purely linguistic operations, without appeal to observation at all (save the minimum necessary for the manipulation of the symbols themselves). (Dummett 1973: 298)

From this image we get the idea that direct evidence is evidence that proceeds from the periphery toward the interior, in accordance with the meaning of sentences, being determined by the links between it and other statements adjacent to it in the direction of the periphery, and their meanings in turn by the links that connect them with further sentences yet closer to the periphery, and so on until we reach the observation statements which lie at the periphery itself. (Dummett 1973: 299)

In extreme cases, direct evidence will be constituted by computation. In this sense the notion of direct evidence is epistemically transparent: since each step constituting it is in accord with the very meaning of the sentences involved, each deductive step will be constituted by a meaning-fixing inference. As we saw in section 2, for these inferences we have that by recognizing the truth of the premises one recognizes the truth of the conclusion as well. So, just as with constructions, if a speaker is presented with direct evidence for a sentence he cannot but accept the sentence as established. On the other hand, it at least appears that chains of deductive reasoning occur which involve, either as premises or as steps in the proof, statements which lie deeper in the interior than does the conclusion of the argument; even that the conclusion may, on occasion, be a peripheral statement. In any such case, the conclusion of the deductive argument is being established indirectly, that is by a process

our understanding of which is not immediately involved in our grasp of the meaning of the statement. (Dummett 1973: 299)

Considering the definition of n, we have that, to obtain a construction for the sentence, we have to master the notion of the power of a number, which even intuitively is not required to understand the meaning of the disjunctive sentence. In general, indirect evidence need not be structured according to the meaning of the sentence; in particular, it will involve inferences which are not necessarily meaning-fixing.3 Hence, indirect evidence lacks the immediacy of direct evidence. Nonetheless it gives us clear instructions on how to get the direct evidence for the sentence. It is on the basis of the possibility of extracting direct evidence from it that we accept it as (indirect) evidence. This characterization of the relationship between direct and indirect evidence, modeled on the intuitionistic pair constructions-methods, is a way of embodying the idea, presented in section 4, of the way in which the validity of non-meaning-fixing inferences is to be understood. There we said that, whenever a sentence was established by indirect means, it could have been established directly (that is, indirect inferential procedures must be a conservative extension of the meaning-fixing ones). It is on the basis of this possibility that the indirect procedures are actually accepted as valid. In the light of intuitionism, this notion of possibility is characterized in procedural terms: to say that the sentence could have been established by direct means is interpreted as the possession of a procedure, effective in principle, which will

In full analogy with what we said in footnote 1, it should be clear that whether a given portion of evidence counts as direct or indirect evidence for a sentence depends on how the meaning of the sentence has been specified. In this case, to claim that some evidence is direct or not will be relative to the theory of meaning in the background. In this paper, we stick to the canonical neo-verificationist presentation of the matter. Hence, concerning logical constants, we accept the proof-theoretic counterpart of the informal specification of meaning provided by the BHK semantics for intuitionistic logic. Even though we are strongly sympathetic to this view, we do not deny the possibility of eventually moving away from it.

transform the indirect evidence for the sentence into the direct one. In a sense, indirect evidence is itself (or very naturally suggests) the method, as the example of the definition of n shows.

7. Truth and its recognition

As Dummett remarks, for there to have been an epistemic advance, it is essential that the recognition of the truth of the premise did not involve an explicit recognition of that of the conclusion. (Dummett 1973: 313)

It is exactly because the recognition of the truth of their premises does involve the recognition of the truth of their conclusion that meaning-fixing inferences do not yield epistemic advance. We can contrast the role played by truth-recognition in meaning-fixing inferences with the feature by means of which valid inferences are usually characterized: [to say that] the rules of inference we ordinarily employ are in fact valid [is to say] that they are justified in the sense that truth is preserved as we pass from the premises to conclusion. (Dummett 1973: 311)

At this point, it should at least seem reasonable to characterize the two features of deduction by means of two different notions: truth and truth-recognition. On the one hand, meaning-fixing inferences are those that “preserve” truth-recognition in passing from premises to conclusion. That is, one cannot accord the recognition of truth to the premises and not to the conclusion. Valid inferences, on the other hand, are those that preserve truth in passing from premises to conclusion. This is to be understood as follows. Whenever one is in possession of (direct or indirect) evidence for the premises, he is in possession of a method to obtain direct evidence for the conclusion, i.e. of indirect evidence for it: if the inferential step is not a meaning-fixing one, then one must claim that the conclusion has not been established by direct means. Meaning-fixing inferences are a

special case: in fact, to be in possession of direct evidence for the premises of a meaning-fixing inference is already to be in possession of direct evidence for the conclusion, the meaning of the conclusion being specified exactly by means of the inference. Obviously, since indirect evidence is defined as a method to obtain direct evidence, direct evidence can itself be seen as a very special kind of indirect evidence: the method to recover direct evidence from it is very simple, just doing nothing. And hence meaning-fixing inferences also preserve truth. We have the following situation: a non-meaning-fixing inference does not preserve truth-recognition, because when one is in possession of direct evidence for the premises, he only has indirect evidence for the conclusion. On the other hand, meaning-fixing inferences are such that whenever one is in possession of direct evidence for the premises, he is also in possession of direct evidence for the conclusion (the inference being constitutive of the meaning), so they do preserve truth-recognition. Valid inferences preserve truth, in the sense that whenever one is in possession of evidence (of any kind) for the premises, he is in possession of evidence for the conclusion, even though not necessarily of direct kind. Clearly, both meaning-fixing and non-meaning-fixing valid inferences preserve truth. So, the difference between the two kinds of inferences is whether they preserve truth-recognition or not. This is actually in line with the intuition that meaning-fixing inferences, being valid, must share some property with non-meaning-fixing valid ones.4
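The contrast can be put schematically (our own sketch, with hypothetical names; the paper offers no formalism here): a meaning-fixing step takes direct evidence to direct evidence, while a merely valid step takes direct evidence to indirect evidence, i.e. to a method that still has to be applied.

```python
# Direct evidence: a value in hand. Indirect evidence: a zero-argument
# function (a method) that must be run to yield direct evidence.

def meaning_fixing_step(direct_a, direct_b):
    # Preserves truth-recognition: direct evidence in, direct evidence out.
    return ("and", direct_a, direct_b)

def valid_step(direct_premise):
    # Preserves truth only: what we get is a method, not direct evidence.
    return lambda: ("derived-from", direct_premise)

d = meaning_fixing_step("obs-A", "obs-B")   # recognized immediately
m = valid_step("obs-A")                     # not yet recognized
print(callable(m))   # True: only indirect evidence in hand
print(m())           # applying the method recovers direct evidence
```

Direct evidence is then the degenerate case of indirect evidence whose method is "do nothing", which is why meaning-fixing inferences trivially preserve truth as well.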

We did not consider what happens when evidence for the premises of a meaning-fixing inference is of indirect kind. If we take seriously the first quotation of section 6, then one should maintain that direct evidence is that constituted only by meaning-fixing inferences. Hence, if one has non-direct evidence for the premises of a meaning-fixing inference, he should not be said to be in possession of direct evidence for the conclusion. Unfortunately, this is problematic in the case of implication. In fact, as the BHK semantic clause (cf. footnote 2) suggests, most of the time implications are established by making reference to methods for obtaining direct evidence of the consequent, i.e. to indirect rather than direct evidence for the consequent. This prompts a “slight” revision, according to which one requires only the last step of direct evidence to be a meaning-fixing one. This is in a sense to relax the character of direct evidence, by allowing it to be constituted by portions of indirect evidence. We agree with Usberti (1995) that the result has dramatic consequences: in particular, the conceptual priority of direct evidence over indirect evidence becomes untenable, as the two notions become unavoidably intertwined.

8. Possibility, realism and anti-realism

As Dummett remarks, this point is crucial:

The relation of truth to the recognition of truth is the fundamental problem of the theory of meaning, or, what is the same thing, of metaphysics: for the question as to the nature of reality is also the question what is the appropriate notion of truth for the sentences of our language, or again, how we represent reality by means of sentences. What I am affirming here is that the justifiability of deductive inference – the possibility of displaying it as both valid and useful – requires some gap between truth and its recognition; that is, it requires us to travel some distance, however small, along the path to realism, by allowing that a statement may be true when things are such as to make it possible for us to recognize it as true, even though we have not accorded it such recognition. Of course from a realist standpoint, the gap is much wider: the most that can be said, from that standpoint, is that the truth of a statement involves the possibility in principle that it should be, or should have been, recognized as true by a being – not necessarily a human being – appropriately situated and with sufficient perceptual and intellectual powers. (Dummett 1973: 314)

The notion of possibility is what mediates between the notions of direct and indirect evidence, and hence between those of constructions and methods in intuitionism. The possession of indirect evidence (a method) is the possibility of obtaining direct evidence (a construction). According to Dummett, the same relationship holds between truth and its recognition as well: the truth of a sentence is the possibility of recognizing it as true. The difference between the realist and the anti-realist lies in how this possibility is to be conceived. Following Dummett, we can think of the possibility the realist has in mind as unconstrained by human limits: for the realist, a sentence is true if an omniscient entity can recognize it as true. The anti-realist, on the other hand, restricts the notion of possibility, so that a sentence is true if it is recognizable as such by one, by means of an effective procedure. Accordingly, if one is in possession of indirect evidence for a sentence, he can transform it by means of an effective procedure into direct evidence. In the light of this, it should now appear quite natural to say that valid inferences preserve truth also from an anti-realist standpoint, keeping in mind that in saying this we are not reducing validity to truth. Rather, the claim highlights the crucial role that truth must have in accounting for deduction. Given the crucial role deduction plays in a proof-theoretic account of meaning, it seems natural to take conditional (1) as an adequacy condition to be imposed on an anti-realist definition of the notion of truth.

References

Dummett, Michael 1973. The Justification of Deduction. In: Michael Dummett, Truth and Other Enigmas, Cambridge, MA: Harvard University Press, 290-318.
Usberti, Gabriele 1995. Significato e Conoscenza. Milano: Guerini Scientifica.

Giacomo Turbanti Scuola Normale Superiore di Pisa [email protected]

Belief Reports: Defaults, Intentions and Scorekeeping

Abstract: Dynamic approaches to semantics like Discourse Representation Theory or Jaszczolt’s Default Semantics provide more and more effective tools to represent how speakers handle meanings in linguistic practices. These deeper perspectives may give us a lever to lift some of the philosophical perplexities crowding semantics and to catch a glimpse of what hides beneath them. In this paper, I exploit these approaches in relation to the analysis of belief reports. However, it will emerge that, despite their benefits, the theories that support these representational advances may themselves be question-begging from a philosophical point of view. Brandom’s remarks about the normative character of intentional content offer an important contribution to bringing into focus the right path for driving these representational improvements towards genuinely acceptable answers to philosophical questions about semantics.

0. Introduction

Propositional attitudes are a well-worn theme in the philosophy of language. Frege himself had to dedicate several pages to them and to other “indirect contexts”, because they represented a striking exception to his newborn semantics of sense and reference.1 Statements like

(1) Giacomo believes that the democratic candidate will win the elections.

violate the principle of compositionality. His solution to this semantic problem was to consider them as an actual exception: instead of their ordinary reference (a truth value), statements in indirect contexts have an indirect reference, that is their

1 Frege (1892).

ordinary sense. In other words, he construed those contexts as dealing not with what there is in the world, but with how we conceive things in the world, how we take things to be. Ways in which things are conceived may vary from speaker to speaker, and this explains why, in those contexts, coreferential terms are not substitutable salva veritate. Now, puzzles arise because speakers can make referential mistakes and, what is worse, these mistakes cannot be identified by those to whom they are attributed, no matter how much logical introspection they are required to have. These mistakes may generate contradictions in the attitude reports, and yet we would hesitate to represent beliefs that involve referential mistakes as logically contradictory beliefs. So the question Kripke, for example, believed every conscious account of belief reports should take seriously is: where do those contradictions come from?2

1. Dynamic representations

Dynamic semantic approaches in this area seem to be particularly promising because they deal with the identification of referents inside contexts of communication and provide tools to handle anaphoric links among expressions. Discourse Representation Theory (henceforth DRT),3 for example, identifies referents in the context of propositional attitude representations by using the formal tool of “anchors”: external anchors, like

which are functions from the set of discourse referents of a given DRS to the set of objects of a given world, and internal anchors, like

2 Kripke (1979).
3 Kamp (1981).


whose only purpose is to signal that the discourse referent in their scope has to be interpreted as directly linked to some object in the world. Let us now consider propositional attitude ascriptions. First of all, the standard DRT language has to be extended in order to represent them. Let L be a standard DRS-language. Following Kamp, we can extend L to LPA as follows:4

1. Predicate Att is added to L’s vocabulary.
2. Conditions of the form s:Att(a, K, EA) are added to L’s set of DRS conditions, where:
(a) s is a discourse referent for states.
(b) a is a discourse referent for individuals.
(c) K is an ADS (attitude description set), i.e. a set of pairs ⟨MOD, K′⟩, where MOD ∈ {BEL, DES, ...} and K′ is a DRS.5
(d) EA is an external anchor for K, i.e. a set of pairs ⟨x, y⟩, where x is an internally anchored discourse referent of K and y is a discourse referent not occurring in K.

Notice in particular the external anchor inside the attitude description set. Kamp accepts only two different ways of anchoring discourse referents: one with both external and internal anchors, which is supposed to represent de re beliefs, and the other with just an internal anchor, which represents formally de re beliefs, i.e. referential mistakes.

4 Kamp (2003).
5 As Kamp points out, condition (2c) is not complete, because the set of modal indicators should be extended to include all propositional attitudes. However, it is not so obvious that this analysis could be extended without modifications to other attitude operators such as “desire”, “wish”, etc.
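The definition above can be rendered as a rough data structure (our own simplification; the field names are ours, not Kamp's): an attitude condition pairs modal indicators with DRSs, and an external anchor maps internally anchored referents of the ADS onto referents of the principal DRS.

```python
from dataclasses import dataclass, field

@dataclass
class DRS:
    referents: set = field(default_factory=set)
    conditions: list = field(default_factory=list)
    internal_anchors: set = field(default_factory=set)   # e.g. {"y'"}

@dataclass
class AttCondition:
    state: str             # discourse referent for a state (s)
    agent: str             # discourse referent for the believer (a)
    ads: list              # pairs (MOD, DRS) with MOD in {"BEL", "DES", ...}
    external_anchor: dict  # internally anchored referent -> outer referent

# De re reading of "Giacomo believes the democratic candidate will win":
belief = DRS(referents={"y'"}, conditions=["win(y')"], internal_anchors={"y'"})
de_re = AttCondition("s", "x", [("BEL", belief)], external_anchor={"y'": "y"})

# The formally de re reading keeps the internal anchor but drops the
# external one, so y' is linked to no referent of the principal DRS:
formally_de_re = AttCondition("s", "x", [("BEL", belief)], external_anchor={})
print("y'" in de_re.external_anchor)  # True: y' is externally anchored to y
```

The compositionality problem discussed below shows up here as an empty `external_anchor`: the internally anchored y' then matches nothing outside the ADS.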

We can construct a DRS for the de re representation of example (1):6

(2)

Here the referent of the predicate “The democratic candidate” is anchored both internally and externally: the internal anchor [ANCH, y’] specifies the formal condition that Giacomo believes something de re about y’, while the external anchor links the discourse referent y’ of the ADS to y, the corresponding discourse referent of the principal DRS. This means that from both the interpreter’s and Giacomo’s perspective y refers to an actual individual. On the other hand, the DRS for the formally de re representation of (1) should be something like this:

(3)

Here the absence of the external anchor deprives the referents of the ADS of any link with those of the principal DRS.

6 The analysis and representation of time are excluded for reasons of simplicity.

The problem with (3) is that it fails compositionality, because the referent y’ anchored inside the internal anchor of the ADS does not match any discourse referent in the principal DRS. This means that the DRS does not express any proposition at all.7 Thus, DRT seems to share most of the problems compositionality causes to any first-order logic analysis of propositional attitudes: even if anchors allow us to distinguish the different ways in which the reporter and the believer individuate referents, the requirement to identify extensionally the interpretations of discourse referents still remains unavoidable for the DRSs to be meaningful.8

2. Default intentions

An interesting way to overcome this kind of hurdle has recently been shown by Jaszczolt in Default Semantics (henceforth DS). According to Jaszczolt (2005), standard inferential approaches to communication theory are unable to account for the efficiency of linguistic interpretation and fail to develop a cognitively acceptable and unitary model for it. Their intrinsic limit is to postulate a process of interpretation divided into different levels: roughly, a first level where syntactic information has to be interpreted through truth conditions, and a second level of subsequent pragmatic integrations that modify the first one. Jaszczolt describes instead a mono-leveled representation of the content of acts of communication and formalizes it in a structure she calls Merger Representation (henceforth MR), which is supposed to combine different sources of information: a combination of word meaning and sentence structure (WS), pragmatic inferences consciously performed by the interpreter (CPI1), cognitive defaults (CD), socio-cultural defaults (SCD1).9

7 While Kamp accepts the logical possibility of purely formally anchored DRSs, he rejects them in principle as incorrect representations of possible beliefs, because they do not express complete propositions: as Kamp (1990: 56-60) says, “they suffer from failure of presupposition”.
8 Kamp (2003) himself admits that a model semantics for LPA appears to be quite problematic, because of the same difficulties every model semantics faces when it has to deal with intensional objects.

MRs can then be interpreted in terms of truth conditions: in DS the determination of truth values follows the interpretation and the representation of communicative content. The structure of MRs is based upon that of DRSs in DRT, while the analysis of intentionality, which allows Jaszczolt to account for default interpretations, is developed along the lines of Sperber and Wilson (1986). Acts of communication are interpreted as the overt expressions of mental states and thus acquire from these mental states the intentional properties which are realized in communicative, informative and referential intentions. According to Jaszczolt, the intentionality of an act of communication can be stronger or weaker, and this depends on the degree to which the speaker’s mental state is about something in the world (the degree of its aboutness). Defaults are thus the cases with the strongest intentionality. MRs do not need a device like DRT’s anchors to link referents to objects in the world, because objects in the world are directly pointed at by the intentionality of the act of communication they represent. Now, since the act of communication with the highest degree of intentionality is the one with the highest referential intention, the de re reading will be the default interpretation of a speaker’s belief. On the other side of the intentionality scale there will be the de dicto reading.

2.1. Referential mistakes

Referential mistakes occur when the intentionality of an act of communication misfires: the speaker intends to refer to something in the world, but his referring intention does not reach its object and remains “dispersed” between what the speaker intends to refer to and what the interpreter construes the communication act to be about. The resulting reading of the speaker’s belief is called by Jaszczolt (2005: 123) de dicto1. Let us see in detail the structure of the MRs corresponding to these

9 The index on CPI1 and SCD1 distinguishes this kind of information from that of conversational implicatures (in Jaszczolt, CPI2 and SCD2), which is not included in the truth-functional representation of communicative content.

three readings. The structure of MRs is developed upon that of DRSs.10 MRs thus contain referents and conditions. But since they are representations of acts of communication and merge different sources of information, they also contain interpreted determinations of referents and logical forms of conditions.11 Besides this, in the particular case of belief reports, in order to represent beliefs, Jaszczolt equips MRs with a belief operator Bel(x,τ), where x is an individual and τ a cognitive state. Notice that τ is an intensional object that is treated as a compositional element of MRs: this, Jaszczolt says, does not constitute a flaw because (i) DS aims at a formalization of acts of communication which does not require the reduction of intensional contexts to extensional semantic conditions, and (ii) compositionality is preserved as it has to be evaluated at the level of MRs. Let us see then the MR for the de re reading of (1):
(4)

10 Jaszczolt adapts syntactic and semantic formalization from Relational Semantics in DRT. See van Eijck and Kamp (1997).
11 In the case of referents, it is important to notice the difference between the identifying conditions of MRs and the external anchors of DRSs. The latter are a sort of metalinguistic device developed to introduce direct reference theory inside the DRT framework, and impose constraints on the interpretation of discourse referents; the former are necessary conditions for the representation of the intentionality of an act of communication. This difference depends on the fact that the target of DS is a different level of the interpretation process: DS provides a formalization of those aspects of linguistic interpretation that interface syntactic comprehension with semantic and pragmatic information in order to represent communicative content; see Jaszczolt (2005: 96).

The belief operator Bel(x, τ) corresponds to the condition “[[x]CD [believes]CD τ]WS”. Here the problem of substitution inside intensional contexts is overridden by the default determination of the referent of y. This is possible because the MR represents the content of the communication act as already interpreted by a model hearer in a given context. On the other hand, the MR for the de dicto reading of (1) is:
(5)

Here CD information about y is replaced by CPI1 information because of the different degree of referential intention: this leads to the attributive reading of the definite description which identifies y. It is worth noticing that the belief predicate too has to be interpreted through CPI1 in order to give τ the de dicto interpretation. Finally, the case of de dicto1 is represented by the MR:
(6)

Here the belief predicate has its default CD meaning and thus the referring intention is interpreted as originally de re. However, the referent

y is identified through CPI1, which means that the default referential intention has been discarded in the process of interpretation because the interpreter discovered Giacomo’s referential mistake: the belief thus turns out to be de re about someone else, or de dicto1 as Jaszczolt prefers. Unlike DRT, Jaszczolt’s account highlights the shift from the default interpretation of the speaker’s de re intention to refer to something in the world to the interpreted belief which contains the original, semantically mistaken, referent intended by the speaker. In DRT we are unable to appreciate this shift of perspective because, even if we accept purely formally anchored DRSs (DRSs with internal anchors but devoid of external ones), we would still be unable to identify the original (mistaken) intended referent: in fact, conditions in the main DRS are semantically evaluated with respect to the actual world, and they could not identify the believer’s intended referent. This is, very roughly, the view from Jaszczolt’s perspective. But what exactly have we gained by shifting, with DS, compositionality and interpretation “one level higher”?12

2.2. Where do intentions come from?

Referential mistakes no longer give rise to contradictions, because the referent intended by the speaker is already determined before the MR is constructed and no contrast arises between the two different ways of thinking about that object (respectively the speaker’s and the interpreter’s). However, this result does not come for free. It takes for granted an account of referential intentionality. Actually Jaszczolt presents DS as a model for the linguistic practice of utterance processing,13 which is thus supposed to account just for the output of the process of interpretation.
All that matters for the identification of the correct content of the act of communication is the intention of that act, which guides the interpreter in using the different sources of information at his disposal and determines how much the intentionality diverges from the default (the highest degree).

12 Jaszczolt (2005: 82-83).
13 Jaszczolt (2005: 45).

But if intentions must be part of the semantic representation, their contribution to the determination of the content is better sought in the very process of interpretation, rather than in its ready-made output. Otherwise we risk using referential intentions merely as semantical intermediaries, which simply happen to be more reliable than the old objective Fregean senses. Moreover, while we have a representation of referential mistakes, we still miss a clear explanation of them. In the picture provided by DS it is not obvious why de dicto1 should be considered a mistake: it actually instantiates a degree of divergence from the default de re intention which is interpreted through automatic inference just like the de dicto reading. The representation of a de dicto1 belief content is reconstructed a posteriori, so the referential mistake is revealed as a departure from the default intentional stance, but it is not explained, because no representation is given of this departure. Jaszczolt correctly admits that the original mental act of the speaker in this case is de re, which means that the speaker intends to refer to an object in the world. However, in the course of interpretation, this intentionality is “dispersed” between the correct referent and the intended referent. The different degrees of intentionality seem to be identified at the level of interpretation. But then, how can acts of communication be interpreted as intentional if we are to cut or to weaken their link with the original intentionality of mental acts? While de re and de dicto cases match different referential intentions that can be easily traced back to the intentionality of mental acts, the intentional status of de dicto1 seems to me to remain doubtful.
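To fix intuitions about the scale just discussed, the ordering of readings by degree of intentionality can be caricatured as a simple selection rule. This is only an illustrative toy: the numeric degrees and thresholds below are invented for the sketch, since Default Semantics orders the readings qualitatively rather than numerically.

```python
# Toy model of the intentionality scale (illustrative only): the numeric
# degrees and thresholds are invented; Default Semantics itself orders the
# readings qualitatively, with de re as the default at the strongest degree.

def reading(degree_of_aboutness: float) -> str:
    """Map a degree of referential intention onto a belief reading."""
    if degree_of_aboutness >= 0.9:    # hypothetical cut-off for the default
        return "de re"                # strongest intentionality: default
    if degree_of_aboutness >= 0.5:    # hypothetical cut-off
        return "de dicto1"            # dispersed (misfired) de re intention
    return "de dicto"                 # weakest intentionality: attributive

print(reading(1.0))  # de re
print(reading(0.7))  # de dicto1
print(reading(0.2))  # de dicto
```

The sketch makes vivid why de dicto1 sits awkwardly in the middle: it is characterized only by its distance from the default, not by a representation of how the intention misfired.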
One main result of Jaszczolt’s approach to the problem of propositional attitudes, I think, is to show clearly not only that reference is the output of a complex process involving lexical, semantic and pragmatic contributions, but also how the commitment to this articulated structure has to be developed in order to represent the content of speakers’ referring intentions. In this picture the notion of default plays a double key role. On the one hand, it determines the standard of correctness which makes sense of the referential mistake as a mistake. Without such a standard the only way to secure the correctness of the referring purport of a lexical expression would be to presuppose

semantical intermediaries which directly connect expressions with referents. On the other hand, the notion of default points in the direction of the normative character of the inferential determination of content. Talking about defaults requires dealing with the rules those defaults are defaults of.

3. Normative defaults

What we still need is an account of this normative character. This is exactly the kind of account Brandom offers in Making it Explicit. If we want to explain intentionality we should describe what it is for speakers to perform correct speech acts in the linguistic practices they are involved in. Brandom assumes an inferentialist semantics,14 and maintains that assertions of sentences are the basic performances of a linguistic practice in which expressions are used: an assertion is a move in the Sellarsian game of giving and asking for reasons, in the sense that by asserting a content a speaker undertakes a social normative status which can be described in terms of the entitlements he has for such an assertion and the commitments such an assertion binds him to. In this framework, sentences are thus the basic elements of semantic analysis. The semantics of subpropositional expressions can be obtained by exploiting compositionality top down, in order to show how component expressions have meaning only insofar as they contribute to the meaning of complex expressions (this is simply an application of the Fregean Context Principle). Brandom shows that singular terms (as opposed to predicates) thus individuated form equivalence classes, whose elements are intersubstitutable, i.e. they are related by substitution inferences, like, for example:

14 Here it is important to distinguish two different notions of inferentialism. The first one deals with the theory of communication, and it is the idea that linguistic content has to be inferred from the speaker’s communicative intention and contextual information; it is opposed to anti-inferentialism, which instead maintains that literal meaning, if nothing goes wrong, is enough to provide communicated content. The second one deals with a non-truth-functional semantics and it is the idea that inferential structure determines linguistic content. It is Brandom’s analysis of the normative character of these semantic inferential structures that forms a bridge between the two problems, showing how semantics makes explicit features of linguistic practices.


(7) The democratic candidate will win the elections → Barak Obama will win the elections.

Brandom calls the normative value of this kind of inference inside a communicative practice a Simple Material Substitution-Inferential Commitment (SMSIC). The meaning of a singular term is thus determined by the SMSICs that relate the substitution inferences in which the term is essentially involved. Those commitments allow one to evaluate, in terms of correctness, the referential intentions of a speaker who performs those substitution inferences. In Brandom’s account belief reports make explicit the perspectival character of linguistic content: each speaker is characterized by his deontic status as a player in a linguistic game, and this determines the correctness of his performances; but each speaker evaluates other speakers’ performances according to his perspective, that is, according to the commitments he undertakes and to the commitments he attributes to those other speakers. This is the so-called practice of scorekeeping.

3.1. Attitude ascriptions

The crucial point then is that the difference between a de re and a de dicto propositional attitude, in this sense, is not a difference in beliefs themselves, but a difference in the ascription of deontic status through a belief report. Two different commitments are at play in ascriptions of propositional attitudes: a doxastic commitment (whose content is the content of the proposition that is the object of the attitude) and a substitutional commitment about the referent. In a de dicto ascription the scorekeeper attributes to the speaker both the doxastic commitment and the substitutional commitment, while in a de re ascription the scorekeeper attributes only the doxastic commitment.
Thus, when the interpreter ascribes a belief de re to the speaker and does not attribute the substitutional commitment he himself undertakes, he describes the speaker’s performance as incorrect, because he judges the speaker not to be entitled to a SMSIC which is incompatible with his own.
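The way the two styles of ascription distribute the substitutional commitment can be made vivid with a small sketch. This is a schematic reconstruction, not Brandom’s own formalism: SMSICs are modelled, by assumption, as pairs of terms the ascriber treats as intersubstitutable, and `ascribe` is a hypothetical helper.

```python
# Schematic sketch (not Brandom's formalism): the ascriber's SMSICs are
# modelled as pairs of intersubstitutable singular terms. A de dicto
# ascription attributes the substitutional commitment to the speaker and so
# preserves his wording; a de re ascription withholds it, and the ascriber
# exports the term using his own SMSICs.

ascriber_smsics = {("the democratic candidate", "Barak Obama")}

def ascribe(speaker, sentence, term, style):
    if style == "de dicto":
        return f"{speaker} believes that {sentence}"
    for a, b in ascriber_smsics:           # de re: substitute per ascriber
        if a == term and term in sentence:
            rest = sentence.replace(term, "he", 1)
            return f"{speaker} believes of {b} that {rest}"
    return f"{speaker} believes that {sentence}"

s = "the democratic candidate will win the elections"
print(ascribe("Giacomo", s, "the democratic candidate", "de dicto"))
# Giacomo believes that the democratic candidate will win the elections
print(ascribe("Giacomo", s, "the democratic candidate", "de re"))
# Giacomo believes of Barak Obama that he will win the elections
```

The perspectival point is visible in the design: the de re report depends on whose SMSICs are consulted, so a different scorekeeper, with different substitution commitments, would export a different term.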

So far, however, it could seem that the real point is still missing: how can the interpreter reconstruct the correct reference? Speakers keep track of other speakers’ intentional contents by keeping the score of their commitments and are thus aware of the substitution inferences each speaker undertakes (whether he is entitled to them or not). But how can we really tell if the expression “the democratic candidate” refers to the man Barak Obama? Brandom invites us to observe that in an identity statement like

(7′) The democratic candidate is (=) Barak Obama,

what is really equated are neither referents nor tokenings of expressions, but types of expressions. Let us introduce his notation: he uses brackets to designate types, as in ⟨Barak Obama⟩, and subscripted slashes to designate tokenings, as in /Barak Obama/i. Tokenings, like pronouns, recur in anaphoric chains in which they asymmetrically inherit the relevant SMSICs that determine their meaning. These anaphoric commitments thus described cannot be subject to the perspectival character of the scorekeeping practice: even if speakers can be mistaken in identifying the correct anaphoric chain (i.e. the correct type of a tokening: for example, whether /Giacomo/n belongs to one type or to another, each substitutable with different expressions), the anaphoric relation is rigid from the anaphoric initiator to the considered tokening.15 So in (7′) what one does in treating the two expressions as intersubstitutable is to consider both anaphoric chains, represented by tokenings of ⟨the democratic candidate⟩ and by tokenings of ⟨Barak Obama⟩, as originating from the same anaphoric initiator. Thus, anaphoric chains are supposed to accomplish the same tasks as Fregean senses16 and as Kripke’s causal-historical chains.17 What is crucial to realize, however, is that the meaning of anaphoric

15 Brandom (1994: 452).
16 Brandom (1994: 572, 578-583).
17 Brandom (1994: 470).

dependents is not secured by the anaphoric initiator (for example a demonstrative tokening), as in Kripke’s causal chains which fix direct reference by tracing back to an initial baptism; rather, an anaphoric initiator has the meaning it has because it is possible to track such a meaning, as determined by SMSICs, along the anaphoric chain up to its initiator: in this sense “deictic uses presuppose anaphoric ones”.18 Now, with this analysis of intentionality we can go back to DRT and see if we can shed some more light on the contents that DRSs are supposed to represent. In doing this we follow Jaszczolt’s path. From this point of view the main deficit of DRT is its lack of resources for a perspectival evaluation of DRSs: these represent the interpreter’s point of view straightforwardly as the objective one. The external anchors of the ADS link the speaker’s discourse referents to the principal DRS’s discourse referents, which later obtain a semantic interpretation. The absence of the ADS’s external anchor does not qualify the referential intention as mistaken; it rather characterizes the attributed belief as lacking a proper object. If we wanted to make explicit in DRT the kind of referential mistake that puzzled Kripke, we should make explicit the substitution inferences undertaken by the speaker and the interpreter. Let me conclude with an observation about external anchors which has an interesting bearing on the evaluation of Brandom’s account. If we construe DRS representations as the scorekeeper’s perspectival interpretation, then, as far as external anchors are used in ADSs, they simply contribute to tracking anaphoric chains to determine the right interpretation of belief ascriptions. But what about their use in the principal DRSs?

3.2. Tracking referents

In Brandom’s picture we actually lose such an external point of view.
The best we can have is an internal anchor which relates an expression to an anaphoric chain whose origin is, for example, an indexical perceptive experience. This is what Brandom would call an attribution of a strong de re belief. It involves the attribution of a demonstrative

18 Brandom (1994: 464).

doxastic commitment

Giacomo believes that the democratic candidate will win the elections

and the undertaking of an existential commitment that amounts to

the democratic candidate = “Barak Obama”

where, in an expression of the form a, a is a tokening anaphorically dependent on a tokening of the corresponding type, and “Barak Obama” is an expression whose role has been pragmatically determined to be that of the canonical designator of the class of its intersubstitutable expressions. Notice that, given Brandom’s account of deixis, the demonstrative required for the attribution of the belief to be strongly de re is not to be construed as a causal dependence of the meaning of the expression on the object: the object-dependence of strong de re beliefs is made explicit by the existential commitment. In Brandom’s interpretation the existential commitment is simply construed as the commitment to use an expression, “Barak Obama”, as the canonical designator of the class of expressions intersubstitutable with the democratic candidate. Just to give a hint of what would be obtained by forcing the structure of DRSs to cope with these ideas, consider the following:
(8)

The point is that in Brandom’s picture discourse referents themselves would have no role to play: everything is dealt with at the level of assertions. Consequently, neither internal nor external anchors are required, because substitutional commitments and anaphoric commitments are all that is

needed to secure the meaning of the singular terms involved in the belief report. This remark is closely connected with a complaint one may be, by now, willing to raise against Brandom's account of the de dicto/de re distinction in terms of different styles of belief ascriptions. In fact, one could find it hard to completely discard talk of de re beliefs and de dicto beliefs. After all, there seems to be something in the idea that speakers can entertain purely attributive beliefs like “whoever is the democratic candidate, he will be a better president than the republican one”. The underlying intuition is that beliefs of this sort are not about anything in particular, because the speaker does not intend to refer to anything in particular. And one could then object that Brandom really accounts just for de re beliefs, and that his distinction between styles of ascriptions really purports to explain how communicative meaning is pragmatically recovered in spite of referential mistakes. But the crucial point to stress here is that, in Brandom’s account, the aboutness of beliefs is just their taking part in a structure of substitution inferences. In this sense, there is no distinction between beliefs that are about objects in the world and beliefs that are just about their content, because it is only the content, pragmatically determined by the scorekeeping practice, that has to be accounted for. Such a content may acquire a representational character.
Indeed, the possibility of ascribing de re beliefs to speakers is essential for the representational purport of any language, because only by inserting the content of a belief into a structure of substitutional inferences can such content be shared and communicated: in fact, in this sense, purely de dicto ascriptions alone would not permit interpretation (this being the reason for their ‘opaqueness’):19 the idea is that, in order to interpret, for example, the shaman who asserts “the seventh god graces us with his presence”,

19 Quine (1956: 331).

one has to be able to apply substitutional inferences to the content of such an assertion and, eventually, report it de re as “the shaman believes of the sun that it is shining”.20 Thus, I think Brandom would consider dispensing with anchors a good result. But we can still ask ourselves whether the representational level acquired through his account of anaphoric chains completely matches our semantic intuitions about the notion of reference.

20 Brandom (1994: 513-517).

References
Brandom, Robert 1994. Making it Explicit. Reasoning, Representing, and Discursive Commitment. Cambridge: Harvard University Press.
Frege, Gottlob 1892. Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische Kritik 100: 25-50. Translated as On Sense and Reference by Max Black in Translations from the Philosophical Writings of Gottlob Frege, Peter Geach and Max Black (eds. and trans.), Oxford: Blackwell (third edition, 1980).
Jaszczolt, Katarzyna 2005. Default Semantics: Foundations of a Compositional Theory of Acts of Communication. Oxford: Oxford University Press.
Kamp, Hans 1981. A Theory of Truth and Semantic Representation. In: Jeroen Groenendijk, Theo Janssen and Martin Stokhof (eds.), Formal Methods in the Study of Language, part I. Amsterdam: Mathematisch Centrum Tracts, 277-322.
Kamp, Hans 1990. Prolegomena to a structural account of belief and other attitudes. In: C. Anthony Anderson and Joseph Owens (eds.), Propositional Attitudes: The Role of Content in Logic, Language, and Mind. Stanford: CSLI Publications, 27-90.
Kamp, Hans 2003. Temporal relations inside and outside attitudinal contexts. Paper presented at the workshop Where Semantics Meets Pragmatics, LSA Summer School, Michigan State University, July 2003.
Kamp, Hans and Uwe Reyle 1993. From Discourse to Logic. Dordrecht: Kluwer.
Kripke, Saul 1979. A Puzzle about Belief. In: Avishai Margalit (ed.), Meaning and Use. Dordrecht and Boston: Reidel.
van Eijck, Jan and Hans Kamp 1997. Representing Discourse in Context. In: Johan van Benthem and Alice ter Meulen (eds.), Handbook of Logic and Language. Amsterdam: North-Holland, 179-237.

Quine, Willard Van Orman 1956. Quantifiers and Propositional Attitudes. Journal of Philosophy 53, 177-187.
Sperber, Dan and Deirdre Wilson 1986. Relevance. Communication and Cognition. Oxford: Blackwell.

Bartosz Więckowski
Universität Tübingen
[email protected]

On Truth in Time

Abstract: The paper outlines an aboutness-free account of truth in time and develops an associative substitutional semantics for the first-order tense-logical fragment of English. In associative semantics the truth of an atomic sentence is explained in terms of the mutual matching of the semantic values (associates) which are associated with the terms from which the sentence is composed. The associative account is used for the semantical analysis of a selection of temporal constructions which are problematic from the point of view of presentism (i.e., the ontological assumption that only present entities exist) as they prima facie seem to involve reference to or quantification over non-present entities. The associative analyses of the problem cases are in agreement with presentism and do not encounter the difficulties to which a denotational reading gives rise; moreover, they are ontologically parsimonious and both compositional (down to the subatomic level) and comparatively faithful to the surface structure of the problem sentences.

0. Introduction

Denotational model-theoretic semantics is intimately tied to a referential picture of the relation between language and the world according to which language is about the world (see, e.g., Tarski 1983: 401, Dowty et al. 1981: 5). This semantics is philosophically problematic in cases in which one wants to analyse true sentences which contain terms that are, or are taken to be, denotationless. Understanding ‘denotationless’ stricto sensu, the analysis of such sentences calls for an alternative conception of truth, one on which a sentence can be true even though it is not about anything whatsoever. Associative substitutional semantics (Więckowski 2008a, 2008b, 2009) is a semantical framework which intends to make an aboutness-free conception of truth formally precise. In this semantics the truth of an atomic sentence is explained in terms of the mutual matching of the semantic values (associates) which are associated with the terms from

which the sentence is composed. Roughly, an atomic sentence is true in an associative model just in case it is contained in the intersection of the associates of all the terms from which the sentence is composed. In contrast to denotational models, the models of the associative framework represent the level of sense of the object-language rather than its level of reference. Accordingly, the semantics intends to capture the notion of truth with respect to the level of sense. My aim in this paper is to explore to what extent the aboutness-free conception of truth and associative semantics can help to explain the truth of constructions such as

(1) Dinosaurs once existed.
(2) Abraham Lincoln was tall.
(3) Some American philosophers admire ancient Greek philosophers.
(4) There are two different times, one at which John is bent and one at which he is straight.
(5) Mary recalls September 11, 2001.

in a way which is in agreement with presentism. Presentism, as we shall understand it here, is a view on temporal ontology according to which the following two theses hold:

(P1) Only present individuals exist.
(P2) Only the present instant of time exists.1

Due to the first thesis presentists take it that neither past nor future individuals exist and admit quantification only over individuals which exist at the present instant of time. Due to (P2) presentists seek to avoid commitment to the existence of non-present instants of time (or times, for short). Obviously, these ontological constraints pose difficulties for the presentist in providing analyses of truths such as (1)-(5), as they seem

1 There is no established terminology and the exact characterization of ‘presentism’ is one of the issues in the debate. I shall suggest a more precise interpretation of these theses in Subsection 3.4.

to involve presently empty denoting singular terms (e.g., ‘Abraham Lincoln’), presently empty predicates (e.g., ‘... is an ancient Greek philosopher’), phrases which seem to quantify over non-present instants of time (e.g., ‘there are two different times ...’) or dates which refer to such instants (or perhaps intervals of them) like ‘September 11, 2001’. The main proposal of this paper is that truth in time is truth with respect to the level of sense. I suggest that temporal sentences, that is, sentences like (1)-(5) which contain either tense-operators, or denoting singular terms which no longer refer to presently existing individuals, or denoting predicates which no longer have a referential extension, or expressions which prima facie seem to quantify over or to refer to non-present instants of time, are to be evaluated with respect to that level. Our focus will be mainly on temporal sentences which are sensitive to the past. Moreover, we shall largely ignore cases which involve non-denoting terms, i.e., singular terms which never refer and predicates which never receive a referential extension (e.g., ‘Superman’, ‘... is a Kryptonian’). The paper is arranged as follows: Section 1 discusses the semantical difficulties the presentist encounters in analysing temporal sentences. Section 2 draws the intuitive picture of the aboutness-free conception of truth. Section 3 presents an associative semantics for a first-order tense-logical language L. Sections 4 and 5 provide associative analyses of several problem sentences which can be expressed in L. These analyses are both in agreement with (P1) and (P2) and free from the difficulties discussed in Section 1. Finally, Section 6 presents a method for dispensing with quantification over non-present times also in the metalanguage for L.2

2 The focus of the paper is primarily on semantics. For discussions of further issues concerning presentism (e.g., the problem of characterizing presentism, the compatibility of presentism with special relativity or the problem of truthmaking) see, e.g., Crisp (2007), Fine (2005: chs. 4 and 8), Hinchliff (2000), Oaklander (2003), Savitt (2000), and Sider (2001: Ch. 2). Book-length defenses of presentism include Bourne (2006), Ludlow (1999), and Smith (1993).
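Before turning to the problem cases, the associative truth clause from the introduction — an atomic sentence is true just in case it lies in the intersection of the associates of all the terms composing it — can be sketched directly. The associate sets below are toy data invented for the example; Więckowski's actual associative models are of course richer.

```python
# Minimal sketch of the associative truth clause: an atomic sentence is true
# in a model iff it belongs to the associate of every term composing it,
# i.e. to their intersection. The associate sets are invented toy data.

def true_in_model(sentence, terms, associates):
    return all(sentence in associates.get(t, set()) for t in terms)

associates = {
    "Abraham Lincoln": {"Abraham Lincoln is a son", "Abraham Lincoln is male"},
    "... is a son":    {"Abraham Lincoln is a son"},
    "... is male":     {"Abraham Lincoln is male"},
}

print(true_in_model("Abraham Lincoln is a son",
                    ["Abraham Lincoln", "... is a son"], associates))       # True
print(true_in_model("Abraham Lincoln is a bachelor",
                    ["Abraham Lincoln", "... is a bachelor"], associates))  # False
```

Note that no individual need be denoted for the first sentence to come out true: truth is decided entirely at the level of the term-indexed associate sets, which is what makes the account aboutness-free.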

1. Semantical problems for presentism

1.1. Problems with (P1)

The typical presentist is a tense (or A-) theorist of time. Tense theorists single out one instant to be the present instant of time. For them, unlike for the tenseless (or B-) theorists, this instant is ontologically privileged in some sense. The sense in which that instant is ontologically privileged for the presentist is captured by (P2). To avoid a conflict with (P1), i.e., an ontological commitment to past individuals, presentists typically assume that the existential quantifier—more exactly, the tenseless objectual (or referential) existential quantifier—is not ontologically committing when it occurs within the scope of a past-tense operator. Thus the analysis of (1) in terms of (1*) allows them to claim the truth of (1) without admitting past individuals into their ontology (cf. Sider 2006: 77):

(1) Dinosaurs once existed.
(1*) It was the case that: Dinosaurs exist. In symbols: P(∃xDx).

Granting the intelligibility of this assumption, critics object that presentists fall short of explaining, e.g., the truth of constructions which involve singular terms such as (2) or trans-temporal predication such as (3) in a way which is in agreement with (P1). A natural presentist analysis of sentence (2) is (2*) (cf. Sider 1999: 327; the symbolization is mine):

(2) Abraham Lincoln was tall.
(2*) It was the case that: Abraham Lincoln is tall. In symbols: P(∃x(x = a ∧ Lx)).

But the problem with (2*) is that it does not seem to allow reference to Abraham Lincoln, since for the presentist there is no such thing as a past individual. The philosophical difficulty here is that of explaining

how a presentist is to understand (alleged) reference to and talk about non-present individuals. Sentence (3) gives rise to problems with trans-temporal predication:

(3) Some American philosophers admire ancient Greek philosophers.

(3) clearly seems to be true, but, as critics object, the analyses which the presentist might want to offer fail to account for the truth of that sentence (cf. Sider 2001: 25-26):

(3*a) There is at least one American philosopher, and there is at least one ancient Greek philosopher, and the former admires the latter. In symbols: ∃x∃y(Ax ∧ Gy ∧ Dxy).
(3*b) It was the case that: Some American philosophers admire ancient Greek philosophers. In symbols: P(∃x∃y(Ax ∧ Gy ∧ Dxy)).
(3*c) There is at least one American philosopher, and it was the case that: there is at least one ancient Greek philosopher, and the former admires the latter. In symbols: ∃x(Ax ∧ P(∃y(Gy ∧ Dxy))).

(3*a) seems to be false, since there do not presently exist any ancient Greek philosophers. (3*b) seems inadequate as well, since, intuitively, there is no time at which both American and ancient Greek philosophers exist. Finally, (3*c) does not seem to be acceptable either, even though – granting the ontological innocence of the existential quantifier inside the scope of P – it does not incur a commitment to the existence of ancient Greek philosophers. The problem with (3*c) is that the elementary predication symbolized as Dxy occurs inside the scope of the past-tense operator and has to be satisfied at some past time at which ancient Greek philosophers existed, despite the fact that American philosophers did not exist at that time. Finally, there is the intuitive semantical difficulty for the presentist of explaining what exactly the semantical contribution of the non-logical terms

(i.e., singular terms and elementary predicates) to the truth of (1)-(3) consists in.

1.2. Problems with (P2)

To avoid a conflict with (P2), i.e., an ontological commitment to non-present instants of time, presentists typically take tenses to be conceptually prior to quantifiers over instants of time and, accordingly, tense operator-talk to be more basic than temporal quantifier-talk. Indeed, for the typical presentist (in the Priorian tradition) tenses are primitive.3 Theorists of this persuasion typically analyse sentences which involve explicit quantification over (or reference to) non-present instants of time in terms of paraphrases in a tensed language. Sentence (4), for instance, which asserts John's persistence through change of shape, will receive a paraphrase in terms of disjunctions of tensed propositions (cf. Zimmerman 1998: 215; the symbolization is mine):

(4) There are two different times, one at which John is bent and one at which he is straight.
(4*) Either John was bent and would become or had previously been straight, or John was straight and would become or had previously been bent, or John will be bent and will have been or be about to become straight, or John will be straight and will have been or be about to become bent. In symbols: [P(Bj) ∧ (PF(Sj) ∨ PP(Sj))] ∨ [P(Sj) ∧ (PF(Bj) ∨ PP(Bj))] ∨ [F(Bj) ∧ (FP(Sj) ∨ FF(Sj))] ∨ [F(Sj) ∧ (FP(Bj) ∨ FF(Bj))].

The primitiveness assumption leads to at least the following three difficulties: First, in view of the fact that tense-logical languages are, in

[3] This primitiveness assumption corresponds to a position in the philosophy of modality called modalism, which takes the possibility operator to be conceptually prior to the existential quantifier over possible worlds; see, in particular, Forbes (1989: Ch. 4).

effect, sublanguages of the temporal logical language with quantifiers over instants of time, it gives rise to difficulties concerning expressive strength (see, e.g., van Benthem (1983: sect. II.1)). Secondly, the tense-logical paraphrases obviously fall short of capturing the surface structure of the original sentences, since the latter seem to explicitly quantify over non-present instants of time and not to contain tenses. Finally, there is the metaphysical problem of truthmaking, i.e., the problem of explaining, as it is often put, what in reality grounds the truth of sentences (1*) through (4*), given that the tense operators are primitive and no commitment to non-present individuals and times must be incurred (see, e.g., Crisp 2007, Oaklander 2002, Sider 2001: Sect. 2.3).

2. The intuitive picture

By presentist lights, it seems, none of (1)-(5) can be sensibly understood as being about anything at all, as – taking (P1) and (P2) seriously – some of the individuals and times they might be taken to be about no longer exist. To obtain a picture of the aboutness-free perspective, we modify the Fregean triangle singular term–sense–individual (see, e.g., Kaplan 1989: 485) with a further meaning-theoretic item, namely the term's sense-extension:

[Figure 1: Reference and Reflection. The diagram extends the triangle to a rectangle relating four meaning-theoretic items: the denoting singular term expresses a sense and refers to an individual; the sense determines both the individual and the term's sense-extension; the term reflects its sense-extension.]

We take the sense expressed by the denoting singular term to determine two kinds of semantic value: first, the individual to which it refers (or which is denoted by it) and, second, the sense-extension which it reflects. In a temporal setting, we take this rectangle to depict the relation between a denoting singular term and its two kinds of semantic value at the present instant of time. At non-present instants we shall assume that a denoting singular term has only one kind of semantic value: its sense-extension.[4] Intuitively, the sense-extension of a singular term contains all the data which is associated with that term in virtue of the definition of the term (its sense, as we shall understand this notion) and the meaning postulates for the predicates which occur in the definition.[5] The sense-extension of ‘Abraham Lincoln’, for instance, will be the union of the following portions:

1. The defining portion, which contains {Abraham Lincoln is a son of Thomas Lincoln, Abraham Lincoln is a son of Nancy Hanks} as a subset.

2. The consequential portion (consequential with respect to the defining portion and the meaning postulates), which contains {Abraham Lincoln is a son, Abraham Lincoln is male} as a subset. (We call the union of the defining and the consequential portion the characteristic portion.)

3. The conforming portion (conforming with respect to the characteristic portion and the meaning postulates), which contains, e.g., {Abraham Lincoln is a bachelor, Abraham Lincoln is married} as a subset.

In a metaphysical setting, we may think of the defining portion of a (presently empty) denoting name’s sense-extension as capturing the

[4] The qualification ‘Frege-related’ is more adequate than ‘Fregean’ since my understanding of sense differs somewhat from what I take to be Frege’s original conception. I conceive of the sense of a name as a certain kind of nominal definition.

[5] In Subsection 3.4 the four meaning-theoretic items in Fig. 1 will receive a formal interpretation. In the associative framework proper names and definite descriptions are treated largely in the same way (cf. Więckowski (2008b, 2009)).

essence of an individual (once) denoted by the name. Note that the conforming portion need not be consistent with respect to the meaning postulates (which will contain the one for ‘… is a bachelor’). As a consequence the whole sense-extension of a singular term need not be consistent. We take the sense-extension of a singular term to be rigid across instants of time. Intuitively, at non-present instants the referent of a denoting singular term is “deleted” and only its sense-extension remains. (These and the following remarks of this section will be made more precise in Section 3.) The sense-extension of an elementary predicate is a set which contains all the atomic data which are associated with that predicate at a given instant of time. Intuitively, the sense-extension of a presently non-empty elementary predicate mirrors the referential extension of that predicate. Thus, for example, Vladimir Putin will be contained in the referential extension of the predicate ‘... is a Russian’ just in case ‘Vladimir Putin is a Russian’ is contained in that predicate's sense-extension. The sense-extension of the predicate ‘... is alive’ will contain {Vladimir Putin is alive, Anna Politkovskaya is alive} at some instant, but not {Anna Politkovskaya is alive} at some preceding instant as a subset. We take the sense-extension of a predicate to vary across instants of time. At non-present instants the referential extension of a non-empty elementary predicate is deleted and only its sense-extension remains. In effect, the level of sense is the union of the (instantaneous) sense-extensions of all the terms of the language or the language-like system (of, e.g., Fregean concepts) in question. And just as we may view L as an objectively given set of syntactical objects, we may think of the language in question as being given in just the same objective way. 
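The layered structure of a name's sense-extension can be made concrete with a small sketch. The following Python toy model is my own illustration, not part of the paper's formalism: it encodes atomic data as tuples and computes a characteristic portion by closing the defining portion of ‘Abraham Lincoln’ under two hypothetical meaning postulates (being a son of someone implies being a son; being a son implies being male), matching the example portions given above.

```python
# Toy model of a name's sense-extension (illustrative sketch only).
# Atoms are tuples such as ('son_of', 'Abraham Lincoln', 'Thomas Lincoln').

def close_under_postulates(portion, postulates):
    """Close a set of atoms under meaning postulates (rules mapping an atom
    to the atoms it entails)."""
    data = set(portion)
    changed = True
    while changed:
        changed = False
        for atom in list(data):
            for rule in postulates:
                for derived in rule(atom):
                    if derived not in data:
                        data.add(derived)
                        changed = True
    return data

# Hypothetical meaning postulates (assumptions for illustration):
def son_of_implies_son(atom):
    return [('son', atom[1])] if atom[0] == 'son_of' else []

def son_implies_male(atom):
    return [('male', atom[1])] if atom[0] == 'son' else []

# Defining portion of 'Abraham Lincoln' (cf. the example in the text):
defining = {
    ('son_of', 'Abraham Lincoln', 'Thomas Lincoln'),
    ('son_of', 'Abraham Lincoln', 'Nancy Hanks'),
}

# Characteristic portion = defining portion plus its consequences:
characteristic = close_under_postulates(
    defining, [son_of_implies_son, son_implies_male])
consequential = characteristic - defining
```

In this sketch `consequential` comes out as the set containing ('son', 'Abraham Lincoln') and ('male', 'Abraham Lincoln'), mirroring the consequential portion in the text; the conforming portion, which need not be consistent, would be a further layer added on top of this closure.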
Depending on one's intentions or convictions the association of data with a term in its sense-extension can be understood objectively – as in the present setting – or, alternatively, (inter-)subjectively, i.e., as depending in some sense on a cognitive subject (or a group of subjects). What is needed for the evaluation of temporally sensitive constructions is only the complement of the original triangle. So the intuition is that when we claim the truth of a temporal sentence like (2), we reflect the sense-extensions which are associated with the non-logical

terms which occur in that sentence rather than talk about some level of reference conceived of as something like a “reality of past objects”. The data contained in the sense-extensions which are associated with the terms is not itself true; it has to be evaluated for truth. The truth of an atomic sentence at a given instant of time amounts to a mutual matching of the sense-extensions of the terms from which the atom is composed with respect to that instant. For instance, (2) will be true with respect to the present instant of time just in case the sentence ‘Abraham Lincoln is tall’ is contained in the sense-extensions of the proper name ‘Abraham Lincoln’ and the predicate ‘... is tall’ at an earlier instant of time. Obviously, this informal explanation of the truth of (2) is only in agreement with (P1), as it involves reference to a non-present instant of time. A strategy to ensure agreement with (P2) for sentences like (2) will be presented in Section 6. Before that, in Section 5, it will be suggested how sentences like (4) and (5) can be explained in agreement with (P1) and (P2).

3. The formal framework

In contrast to the usual presentist custom, we symbolize temporal sentences in a substitutional tense-logical first-order language with substitutional identity.

3.1. The substitutional language L

We distinguish the substitutional language proper L from its base language L0, of which it is an extension (cf. Kripke 1976). The alphabet of L0 contains nominal substitutional constants a, b, ... (metavariables: α, β, γ, ...), pure n-ary predicates Fn, Gn, ... with n ≥ 1 (metavariables: φn, χn, ...), and the substitutional identity predicate =. (The symbols of the first two categories can appear with subscripts.) We call the constants nominal to indicate that no individuals will be required for their semantical interpretation. We let C be the set of all nominal constants of L0 and P the set of all pure predicates of that language, where = ∉ P. 
The notion of a sentence of the base language is defined in the usual inductive manner, giving us sentences of the form φnα1...αn (pure atomic sentences) and α1 = α2 (substitutional identities; where the constants need not be distinct). We let Atm be the set of pure atomic sentences of L0. Moreover, we define the sets Atm(α) and Atm(φn) by putting Atm(α) = {A ∈ Atm: A contains at least one occurrence of α} and Atm(φn) = {A ∈ Atm: A contains an occurrence of φn}, respectively. The language L extends the alphabet of L0 with nominal substitutional variables x, y, ..., the existential substitutional quantifier symbol Σ, the logical connectives ¬ (negation) and ∧ (conjunction), the past-tense operator P, and with parentheses. We let V be the denumerable set of nominal variables and we let the set of nominal terms of L be the union of C and V. We let o, o1, ... be variables ranging over nominal terms. The notion of a formula of L is defined in the familiar inductive way. Atomic formulae of L have the shape of either φno1...on (pure atomic formulae) or o1 = o2 (substitutional identity formulae), where the terms need not be distinct. The set of L-formulae (metavariables: A, B, ...) comprises atomic formulae and formulae of the forms ¬A, A ∧ B, ΣxA (substitutionally quantified existential formulae), PA (singular past-tense formulae, to be read as ‘it was the case that A’), and formulae which are composed from defined connectives. The latter include ΠxA (substitutionally quantified universal formulae) and HA (universal past-tense formulae), where, in particular, HA abbreviates ¬P¬A. As usual, a sentence is a formula which does not contain free variables. It will have been noted that the formulae of the base language are just the atomic sentences of the extended language.

3.2. Associative substitutional semantics

This semantics combines ideas of both traditional substitutional (or truth-value) semantics (cf. Leblanc (1976)) and denotational (or referential) semantics. 
In particular, unlike the former it analyses the semantics of atomic sentences in a compositional manner rather than simply assigning truth-values to them without sensitivity to the subatomic components, and unlike the latter it does not invoke objectual domains.
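Before the formal definition, the informal "mutual matching" idea from Section 2 can be sketched in code. The following toy model is my own illustration with invented data, not the paper's definition: a name's sense-extension is rigid across instants, a predicate's sense-extension is indexed by instants, and a past-tense atom is true at an instant just in case the atom is matched by both sense-extensions at some earlier instant, as in the informal account of the truth of (2).

```python
# Toy sketch of the informal "mutual matching" idea (not the paper's formal
# definition). Atomic sentences are strings; instants are integers.

name_ext = {  # sense-extensions of names, rigid across instants of time
    'Abraham Lincoln': {'Abraham Lincoln is tall'},
}

pred_ext = {  # time-indexed sense-extensions of predicates
    ('... is tall', 1860): {'Abraham Lincoln is tall'},
    ('... is tall', 2020): set(),
}

def true_atom(atom, name, pred, t):
    """An atom is true at t iff both terms' sense-extensions contain it at t."""
    return (atom in name_ext.get(name, set())
            and atom in pred_ext.get((pred, t), set()))

def true_past(atom, name, pred, t, instants):
    """P(atom) is true at t iff the atom is mutually matched at some u < t."""
    return any(true_atom(atom, name, pred, u) for u in instants if u < t)

# Evaluating (2), 'Abraham Lincoln was tall', at the present instant 2020:
now_true = true_atom('Abraham Lincoln is tall',
                     'Abraham Lincoln', '... is tall', 2020)
past_true = true_past('Abraham Lincoln is tall',
                      'Abraham Lincoln', '... is tall', 2020, [1860, 2020])
```

Here `now_true` comes out False, since the predicate's sense-extension at the present instant no longer contains the atom, while `past_true` comes out True, matching the informal verdict on (2). Note that this sketch still quantifies over instants and so agrees only with (P1), exactly as the text observes.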

An associative temporal model T is a 6-tuple 〈T,